
curl'ing a service fails (times out) unless made from the node hosting the pod

On both lab 2 (curl a service whose pod is on the worker, from the master) and lab 3 (curl a registry on port 5000 running on the worker, from the master), my network calls time out. If I issue the same curl statement from the worker, it works. This indicates that inter-node HTTP traffic is blocked or failing. My two VMs are deployed on GCP in the default VPC, and I have verified that I have a firewall rule enabling all traffic on all ports internally between the machines. I read on a forum thread that AppArmor could also get in the way, so I stopped that service on both nodes. I'm at my wit's end as to why I can't make an HTTP call from master to worker. The hint on page 21 of the lab instructions (chapter 3) is not specific enough for me to figure out the proper way to configure GCP: "if the connection hangs it may be due to a firewall issue; ..ensure your instances are running using VPC setup and all ports are allowed.." Please provide explicit steps. Thanks.
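For reference, the calls look roughly like the following; the ClusterIP and port are placeholders for whatever my cluster reports, not values taken from the labs:

    # on the master: look up the service's ClusterIP and port
    kubectl get svc
    # times out when run on the master, works when run on the worker
    curl http://<cluster-ip>:<port>
    # the GCP rules I believe allow all internal traffic
    gcloud compute firewall-rules list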

Comments

  • EitanSuez Posts: 8
    edited January 2019

    Thinking about this further: since my cluster is up, my worker joined it, and commands from the master such as kubectl get nodes indicate that the two nodes are communicating just fine, I must conclude this issue has nothing to do with the VPC setup in GCP. I suspect it has to do with how kube-proxy and iptables are configured, perhaps.
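    A couple of checks I could run to look into that; this is only a sketch and assumes kube-proxy is in its default iptables mode (the KUBE-SERVICES chain only exists in that mode):

      # confirm a kube-proxy pod is running on every node
      kubectl get pods -n kube-system -o wide | grep kube-proxy
      # on the worker: inspect the NAT rules kube-proxy programs for services
      sudo iptables -t nat -L KUBE-SERVICES -n | head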

  • I'm looking at the k8smaster.sh script: it issues kubeadm init with a --pod-network-cidr of 192.168.0.0/16, but my ClusterIP is in the 10.99 range. I wonder if perhaps the --service-cidr flag should accompany that command?
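    For what it's worth, both ranges can be passed explicitly to kubeadm init. A minimal sketch only to illustrate the flags (the service CIDR shown is kubeadm's default, 10.96.0.0/12, which is why ClusterIPs land in ranges like 10.99.x.x; it is not a value from the lab script):

      # pod IPs come from --pod-network-cidr, Service ClusterIPs from --service-cidr
      sudo kubeadm init \
        --pod-network-cidr=192.168.0.0/16 \
        --service-cidr=10.96.0.0/12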

  • chrispokorni Posts: 2,309

    Hi @EitanSuez,
    The cluster IP is expected to be in a different IP range because a Service is not a Pod, so it will not receive an IP from the range specified for pods.
    The default VPC on GCP may still be blocking some ports - it is a strange behavior that shows up even when you create a firewall rule to open all ports and expect all traffic to go through. The suggestion from prior posts is to create a new VPC with a new firewall rule open to all traffic, and add your VM instances to this custom VPC (a rough gcloud sketch follows at the end of this reply).
    Ubuntu on GCP typically has ufw (the Ubuntu firewall) inactive, but it can't hurt to verify that too, to make sure it is not active and blocking some of the ports.
    Regards,
    -Chris
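    For example, a rough sketch of this with gcloud - the network and rule names here are placeholders, and the 0.0.0.0/0 source range simply mirrors the "open to all traffic" guidance for the labs; narrow it for anything else:

      # create a custom VPC with automatically-created subnets
      gcloud compute networks create k8s-net --subnet-mode=auto
      # allow all TCP/UDP/ICMP traffic into instances on this network
      gcloud compute firewall-rules create k8s-allow-all \
        --network=k8s-net \
        --allow=tcp,udp,icmp \
        --source-ranges=0.0.0.0/0
      # then create (or re-create) the VM instances with --network=k8s-net
      # and on each node, confirm ufw is not interfering
      sudo ufw status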

  • Thanks Chris, I'll give that a go today and report back..

  • Yep, worked like a charm! I can confirm that when I create my own VPC and add my own allow-internal firewall rule to that VPC, the problem I describe above does not manifest itself.
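    For anyone else hitting this: a quick way to confirm the fix is to repeat the same calls from the master; both now respond instead of hanging (the IPs and port are placeholders, as before):

      # from the master - previously timed out, now responds
      curl http://<cluster-ip>:<port>
      curl http://<registry-cluster-ip>:5000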
