Welcome to the Linux Foundation Forum!

Lab 2.3 Unable to Curl

Hello Experts,

I am trying to do Lab 2.3

ubuntu@k8instance1:~/LFD259/SOLUTIONS/s_02$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
basicpod 1/1 Running 0 5m23s 192.168.1.3 k8instance2

When I try to curl it:
curl http://192.168.1.3

I get no output; the connection times out.

Comments

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi,
    Curl timing out from the master node to the worker node is an indication of a networking issue between your nodes. There may be an infrastructure firewall blocking traffic to some ports, or blocking some protocols. Or you may have firewalls running in the VMs.
    If you have a custom VPC network and a firewall rule at the infrastructure level, please make sure all traffic is allowed, and disable all firewalls at the VMs' OS level.
    Regards,
    -Chris
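    As a concrete sketch of the check above, here is one way to see which host-level firewall front-ends are even present on a node before disabling anything (the disable commands in the comments are the usual ones for Ubuntu and RHEL-family systems; adapt to your distro):

```shell
# List common firewall front-ends on this node; if one is active,
# disable it for the labs, e.g.:
#   sudo ufw disable                         # Ubuntu
#   sudo systemctl disable --now firewalld   # RHEL/CentOS
FOUND=0
for fw in ufw firewall-cmd nft iptables; do
  if command -v "$fw" >/dev/null 2>&1; then
    echo "$fw: installed"
    FOUND=$((FOUND+1))
  else
    echo "$fw: not installed"
  fi
done
echo "checked 4 tools, $FOUND installed"
```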

  • Which particular ports do I have to check? Also, when I look at the IP address of my minion/worker node, I don't see it set to 192.168.1.3, which is what I get when I do kubectl get pod -o wide. I am running this in a cloud. All outgoing firewall ports are open.

  • chrispokorni
    chrispokorni Posts: 2,155

    Have a custom VPC network created for the purpose of this course, attach an all open firewall rule - all ports all protocols all sources/destinations, and spin up your VM instances in this VPC.
    192.168.1.3 is not a node IP, it is the IP of your pod, and it is probably running on the worker node.
    Check any possible firewalls in your VMs as well.
    Regards,
    -Chris

  • Hello. I have all protocols and ports open on my VCN/VPC. The pod is running on my worker node. All firewalls are open as well. How do I troubleshoot this?

  • stilwalli
    stilwalli Posts: 9
    edited March 2019

    .

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @stilwalli,
    If the new custom VPC has a new custom firewall rule open to all traffic (all ports, all protocols, all sources/destinations) and your instances have their firewalls disabled/inactive, then a good way to troubleshoot the networking between your nodes is to use netcat (nc) and Wireshark on your nodes to determine where your traffic is blocked.
    Providing a snippet of your outputs may also help.
    Regards,
    -Chris
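    To make the nc suggestion concrete: from the control plane you can probe the pod's port, while tcpdump (or Wireshark) on the worker shows whether the packets ever arrive. The sketch below uses the IP and port from this thread in the comments, and demonstrates the probe pattern against a local python3 web server standing in for the pod, so it can be tried without a cluster:

```shell
# In the lab, from the control-plane node:
#   nc -zv -w 2 192.168.1.3 80     # TCP probe of the pod's port
# and on the worker, watch whether the packets arrive:
#   sudo tcpdump -ni any 'host 192.168.1.3 and tcp port 80'

# Self-contained demo: start a local listener, then probe it with
# bash's /dev/tcp redirection (no netcat required).
python3 -m http.server 9099 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
if timeout 3 bash -c 'echo > /dev/tcp/127.0.0.1/9099'; then
  STATUS=open      # connection succeeded
else
  STATUS=blocked   # timed out or refused
fi
kill "$SRV" 2>/dev/null
echo "port 9099: $STATUS"
```

    A timeout here (rather than an immediate "connection refused") is the classic signature of a firewall silently dropping packets, which matches the curl behavior in the original post.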

  • vasyhin
    vasyhin Posts: 15

    I have the same issue on Azure VMs.
    Azure Network Security Groups are good - all traffic inbound/outbound is allowed.
    I guess it's probably something with the Calico setup.
    I found here the link to the Azure-Vnet CNI plugin, but once it was installed, networking between the nodes became completely broken (Pods couldn't be scheduled on the workers). So I reverted the Azure-Vnet plugin on my nodes.
    Here is what I have on Master node

    sa@ub16:~$ ifconfig
    cali02b0884454b Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
              inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
              RX packets:1461 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1492 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:115291 (115.2 KB)  TX bytes:549569 (549.5 KB)
    
    califb8a5d80f8f Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
              inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
              RX packets:1450 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1488 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:114409 (114.4 KB)  TX bytes:546997 (546.9 KB)
    
    docker0   Link encap:Ethernet  HWaddr 02:42:f1:7e:0e:78
              inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    
    eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a2:3b:d7
              inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::20d:3aff:fea2:3bd7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:30029 errors:0 dropped:0 overruns:0 frame:0
              TX packets:39385 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:7448880 (7.4 MB)  TX bytes:24883847 (24.8 MB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:219221 errors:0 dropped:0 overruns:0 frame:0
              TX packets:219221 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:62199452 (62.1 MB)  TX bytes:62199452 (62.1 MB)
    
    tunl0     Link encap:IPIP Tunnel  HWaddr
              inet addr:192.168.0.1  Mask:255.255.255.255
              UP RUNNING NOARP  MTU:1440  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:300 (300.0 B)
    

    and this is what I have on Worker node

    sa@ub16-02:~$ ifconfig
    cali3fc1b4ac805 Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
              inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
              RX packets:14 errors:0 dropped:0 overruns:0 frame:0
              TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1647 (1.6 KB)  TX bytes:1939 (1.9 KB)
    
    docker0   Link encap:Ethernet  HWaddr 02:42:65:65:7e:18
              inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    
    eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a0:a0:f7
              inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::20d:3aff:fea0:a0f7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:38007 errors:0 dropped:0 overruns:0 frame:0
              TX packets:39018 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:19645066 (19.6 MB)  TX bytes:6811361 (6.8 MB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:6294 errors:0 dropped:0 overruns:0 frame:0
              TX packets:6294 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:453032 (453.0 KB)  TX bytes:453032 (453.0 KB)
    
    tunl0     Link encap:IPIP Tunnel  HWaddr
              inet addr:192.168.1.1  Mask:255.255.255.255
              UP RUNNING NOARP  MTU:1440  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    

    I'm not sure if this is good or not, as my networking knowledge is, sad to say, poor.
    Please let me know what I need to check / configure.
    Any help is appreciated. This is actually blocking my labs (I will try to continue working on the labs on just the master node for now...).

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @vasyhin,
    There is an earlier entry in this forum where @crixo posted a solution to the Calico issue with Kubernetes on Azure.
    Check it out and see if it helps:

    https://forum.linuxfoundation.org/discussion/855882/labs-on-azure#latest

    Regards,
    -Chris

  • If you happen to be on GCP: when I got to the curl step in Exercise 2.3, it did not work in my default VPC with default rules.

    Thankfully, Project Calico has documented what solved it for me.

    Just follow the instructions up to the point of "1.2 Setting up GCE networking", then try the curl command again. Works for me.

  • chrispokorni
    chrispokorni Posts: 2,155

    Because of known issues with the default VPC and default firewall rules, GCE requirements for a custom VPC and firewall rule have been included in Exercise 2.1.

  • vasyhin
    vasyhin Posts: 15

    Eventually I was able to set up the Azure VMs so that node-to-pod networking (across different nodes) works.
    The fix was option #2 from https://docs.projectcalico.org/v3.6/reference/public-cloud/azure#about-calico-on-azure - use Flannel instead of Calico.
    Simply follow those instructions. Technically you just need to run kubectl apply -f canal.yaml, and that's all; networking is good.
    Master node ifconfig

    root@ub16:~# ifconfig
    docker0   Link encap:Ethernet  HWaddr 02:42:44:ae:5c:e7
              inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    
    eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a2:3b:d7
              inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::20d:3aff:fea2:3bd7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:217050 errors:0 dropped:0 overruns:0 frame:0
              TX packets:125007 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:79910881 (79.9 MB)  TX bytes:59916636 (59.9 MB)
    
    flannel.1 Link encap:Ethernet  HWaddr ee:6a:57:01:1c:81
              inet addr:192.168.0.0  Bcast:0.0.0.0  Mask:255.255.255.255
              inet6 addr: fe80::ec6a:57ff:fe01:1c81/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
              RX packets:10 errors:0 dropped:0 overruns:0 frame:0
              TX packets:14 errors:0 dropped:13 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:2238 (2.2 KB)  TX bytes:894 (894.0 B)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:538480 errors:0 dropped:0 overruns:0 frame:0
              TX packets:538480 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:149692520 (149.6 MB)  TX bytes:149692520 (149.6 MB)
    
    tunl0     Link encap:IPIP Tunnel  HWaddr
              inet addr:192.168.0.1  Mask:255.255.255.255
              UP RUNNING NOARP  MTU:1440  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:600 (600.0 B)
    
    

    Master node route -n

    root@ub16:~# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
    10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
    168.63.129.16   10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
    169.254.169.254 10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 *
    192.168.1.0     192.168.1.0     255.255.255.0   UG    0      0        0 flannel.1
    

    Worker node ifconfig

    root@ub16-02:~# ifconfig
    cali3fc1b4ac805 Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
              inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
              RX packets:14 errors:0 dropped:0 overruns:0 frame:0
              TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:2546 (2.5 KB)  TX bytes:2264 (2.2 KB)
    
    cali840fd25a13b Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
              inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
              RX packets:1605 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1662 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:126497 (126.4 KB)  TX bytes:605941 (605.9 KB)
    
    calif0fcaea2af1 Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
              inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
              RX packets:1599 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1678 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:126294 (126.2 KB)  TX bytes:606640 (606.6 KB)
    
    docker0   Link encap:Ethernet  HWaddr 02:42:f7:28:60:10
              inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    
    eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a0:a0:f7
              inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::20d:3aff:fea0:a0f7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:118396 errors:0 dropped:0 overruns:0 frame:0
              TX packets:87652 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:113745169 (113.7 MB)  TX bytes:11960941 (11.9 MB)
    
    flannel.1 Link encap:Ethernet  HWaddr ce:ef:74:d4:d7:06
              inet addr:192.168.1.0  Bcast:0.0.0.0  Mask:255.255.255.255
              inet6 addr: fe80::ccef:74ff:fed4:d706/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
              RX packets:14 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10 errors:0 dropped:13 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:894 (894.0 B)  TX bytes:2238 (2.2 KB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:2233 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2233 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:160422 (160.4 KB)  TX bytes:160422 (160.4 KB)
    
    tunl0     Link encap:IPIP Tunnel  HWaddr
              inet addr:192.168.1.1  Mask:255.255.255.255
              UP RUNNING NOARP  MTU:1440  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    

    Worker node route -n

    root@ub16-02:~# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
    10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
    168.63.129.16   10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
    169.254.169.254 10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    192.168.0.0     192.168.0.0     255.255.255.0   UG    0      0        0 flannel.1
    192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 *
    192.168.1.2     0.0.0.0         255.255.255.255 UH    0      0        0 cali840fd25a13b
    192.168.1.3     0.0.0.0         255.255.255.255 UH    0      0        0 calif0fcaea2af1
    192.168.1.4     0.0.0.0         255.255.255.255 UH    0      0        0 cali3fc1b4ac805
    
    
  • vasyhin
    vasyhin Posts: 15

    @serewicz I believe it makes sense to mention that for Azure, canal.yaml should be used instead of calico.yaml in Lab 2.2. It took me 2 weeks to fully understand what the issue was and how it could be fixed.

  • serewicz
    serewicz Posts: 1,000

    As has been mentioned, we do not test or configure for Azure as a lab system.

  • Ran through this issue on 12-14-2022, following https://training.linuxfoundation.org/cm/LFD259/LabSetup-GCE.mp4 for GCE to create the firewall rule after the VPC was up. Couldn't get it to work the first time and scratched out a pretty good amount of hair... =/

    Decided to trash the VMs and VPC and all those temp-back-and-forth firewall rules at GCE. Started from scratch and got it to work this time.

    So, the trick is that I included the firewall rule option during VPC creation.

    In the Firewall rules section, look for a rule named "allow-custom" and EDIT it to include both the current subnet and 0.0.0.0/0, and make sure it's set to "Allow all".

    Nothing needs to be changed inside the VM; I didn't have to mess with iptables or UFW at all. It just works out of the box.

    I am now a happy camper!

    Have fun and cheers! =)

  • I suffered the same (curl hanging) in 2.3 in AWS.
    It turned out my AWS subnet had a security group which allowed TCP traffic, but no UDP traffic. Thus Kubernetes operations worked fine (including the worker node registering itself to the control plane, pods getting started, etc.), but "application layer" connectivity didn't work. I can only guess Cilium uses UDP under the hood.

    Remedy: allow all traffic in the subnet - both TCP and UDP.
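    For reference, one way to express that remedy with the AWS CLI (a sketch only; the group ID and CIDR below are placeholders for your own values, and protocol -1 means all protocols, so both TCP and UDP are covered):

```shell
# Hypothetical example: allow all inbound traffic from the lab subnet.
# Replace the security group ID and CIDR with your own values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol -1 \
  --cidr 10.0.0.0/16
```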

  • Hi @grzegon,

    The Overview section of Lab 2.1 does recommend "no firewall" while the AWS demo video from the introductory chapter presents a SG configuration suitable for the lab environment.

    Regards,
    -Chris

  • @grzegon said:
    I suffered the same (curl hanging) in 2.3 in AWS.
    It turned out my AWS subnet had a security group which allowed TCP traffic, but no UDP traffic. Thus Kubernetes operations worked fine (including the worker node registering itself to the control plane, pods getting started, etc.), but "application layer" connectivity didn't work. I can only guess Cilium uses UDP under the hood.

    Remedy: allow all traffic in the subnet - both TCP and UDP.

    This guy right here is a lifesaver. I was almost losing my mind over this.
