Lab 2.3 - curl on nginx timeout

andrea.calvario Posts: 22
edited August 2021 in LFD259 Class Forum

Good morning all,
as described in the title, I have an issue with the last part of the lab:
when I try to curl the nginx service through the public IP, I receive a timeout.

in7rud3r@in7rud3r-VMUK8s:~$ uname -a
Linux in7rud3r-VMUK8s 5.8.0-55-generic #62-Ubuntu SMP Tue Jun 1 08:21:18 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
in7rud3r@in7rud3r-VMUK8s:~$ kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
basicservice   NodePort    10.103.240.174   <none>        80:32542/TCP   9m23s
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        54d
in7rud3r@in7rud3r-VMUK8s:~$ curl ifconfig.io
87.17.220.201
in7rud3r@in7rud3r-VMUK8s:~$ curl http://87.17.220.201:32542
curl: (28) Failed to connect to 87.17.220.201 port 32542: Connection timed out

The same problem occurs on both nodes.
I'm using two virtual machines on a local VirtualBox installation.

Do you have any suggestions? Has anyone else run into the same problem?

Thanks in advance.

Andy!

Comments

  • chrispokorni Posts: 2,349

    Hi @andrea.calvario,

    Can you confirm that your pod is running, that the service's selector matches the pod's label, and that the endpoint shows the IP address of the pod?
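
    For example, a quick way to run all three checks (a sketch assuming the lab's basicpod/basicservice names):

    kubectl get pod basicpod --show-labels    # is the pod Running, and what labels does it carry?
    kubectl get svc basicservice -o wide      # does the SELECTOR column match those labels?
    kubectl get endpoints basicservice        # does the service resolve to the pod's IP?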

    Regards,
    -Chris

  • Hi Chris, thanks for your answer.
    I tried recreating the service and the pod, but the issue is still present.

    Let me address your points here:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get svc
    NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    basicservice   NodePort    10.109.102.131   <none>        80:31573/TCP   6s
    kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        54d
    in7rud3r@in7rud3r-VMUK8s:~$ curl ifconfig.io
    87.17.220.201
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pod
    NAME       READY   STATUS    RESTARTS   AGE
    basicpod   2/2     Running   0          117s
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe service
    serviceaccounts  services         
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe services basicservice 
    Name:                     basicservice
    Namespace:                default
    Labels:                   <none>
    Annotations:              <none>
    Selector:                 type=webserver
    Type:                     NodePort
    IP Families:              <none>
    IP:                       10.109.102.131
    IPs:                      10.109.102.131
    Port:                     <unset>  80/TCP
    TargetPort:               80/TCP
    NodePort:                 <unset>  31573/TCP
    Endpoints:                192.168.140.141:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get endpoints
    NAME           ENDPOINTS            AGE
    basicservice   192.168.140.141:80   16h
    kubernetes     192.168.1.50:6443    54d
    in7rud3r@in7rud3r-VMUK8s:~$ cat ./LFD259/SOLUTIONS/s_02/basic.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: basicpod
      labels:
        type: webserver
    spec:
      containers:
      - name: webcont
        image: nginx
        ports:
        - containerPort: 80
      - name: fdlogger
        image: fluent/fluentd
    in7rud3r@in7rud3r-VMUK8s:~$ curl http://87.17.220.201:31573
    curl: (28) Failed to connect to 87.17.220.201 port 31573: Connection timed out
    

    It seems that the service is not labeled (though it has the matching selector), but looking at the lab document I cannot identify any point where we are instructed to label the service. I can see the label on the pod at step 9, confirmed by the pod description below:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe pod basicpod 
    Name:         basicpod
    Namespace:    default
    Priority:     0
    Node:         in7rud3r-vmuk8s-n2/192.168.1.161
    Start Time:   Tue, 10 Aug 2021 18:14:59 +0200
    Labels:       type=webserver
    Annotations:  cni.projectcalico.org/containerID: 153106fffc2f34001e8f36403d0a28d48d418031bba7060b6a441d4e172d2364
                  cni.projectcalico.org/podIP: 192.168.140.141/32
                  cni.projectcalico.org/podIPs: 192.168.140.141/32
    Status:       Running
    IP:           192.168.140.141
    IPs:
      IP:  192.168.140.141
    Containers:
      webcont:
        Container ID:   docker://6e86830dcc7f28c22e804eb703a7f066886d1f88e21cae1c001a92acdb0af250
        Image:          nginx
        Image ID:       docker-pullable://nginx@sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90
        Port:           80/TCP
        Host Port:      0/TCP
        State:          Running
          Started:      Tue, 10 Aug 2021 18:15:03 +0200
        Ready:          True
        Restart Count:  0
        Environment:    <none>
    [...]
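
    (If I understand correctly, a Service is not expected to carry labels of its own; only its spec.selector has to match the pod's labels, so Labels: <none> on the service seems normal. For reference, a manifest that would produce the describe output above would look roughly like this; a sketch reconstructed from the kubectl describe output, not copied from the lab file:)

    apiVersion: v1
    kind: Service
    metadata:
      name: basicservice
    spec:
      type: NodePort
      selector:
        type: webserver    # must match the pod label type=webserver
      ports:
      - port: 80           # service (ClusterIP) port
        targetPort: 80     # container port; the NodePort is auto-assigned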
    

    Am I doing something wrong?

    Thanks in advance, Chris!

    Andy!

  • chrispokorni Posts: 2,349

    Hi @andrea.calvario,

    The configuration seems correct. What happens when you attempt to curl the pod's ephemeral IP (192.168.x.y) from each node - the control-plane and worker? What about running curl on the service's ClusterIP, again from each node?
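
    For example (a sketch reusing the pod IP and ClusterIP from your outputs above; run these on both nodes):

    curl http://192.168.140.141:80    # the pod's ephemeral IP
    curl http://10.109.102.131:80     # the service's ClusterIP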

    Regards,
    -Chris

  • The response was the same from both nodes (timeout).
    Anyway, now I have another problem. I went on with Lab 3, and after restarting the two VMs (as requested in the lab) the cluster stopped working, perhaps due to an IP change on both VMs. I tried to restore it but couldn't; the nodes seem to work, but their status is NotReady. I think I will have to recreate the cluster from scratch, and I'll take the opportunity to rerun the whole process; if the problem persists, I will come back to this post.

    Thanks for now.

    P.S.: Let me just say that Kubernetes is very sensitive to network changes! :D

  • Well,

    I recreated the cluster from scratch, but the issue persists.

    As mentioned before, the timeout occurs on both nodes (cp and worker).

    Calling the ClusterIP on port 80 works:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods
    NAME       READY   STATUS    RESTARTS   AGE
    basicpod   2/2     Running   0          19s
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get svc
    NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    basicservice   NodePort    10.107.53.88   <none>        80:31330/TCP   24s
    kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        31m
    in7rud3r@in7rud3r-VMUK8s:~$ curl ifconfig.io
    87.17.220.201
    in7rud3r@in7rud3r-VMUK8s:~$ curl http://87.17.220.201:31330
    curl: (28) Failed to connect to 87.17.220.201 port 31330: Connection timed out
    in7rud3r@in7rud3r-VMUK8s:~$ curl http://10.107.53.88:80
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    
    

    The same happens on the worker node:

    in7rud3r@in7rud3r-VMUK8s-n2:~$ kubectl get pods,svc
    NAME           READY   STATUS    RESTARTS   AGE
    pod/basicpod   2/2     Running   0          7m39s
    
    NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    service/basicservice   NodePort    10.107.53.88   <none>        80:31330/TCP   7m29s
    service/kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        39m
    in7rud3r@in7rud3r-VMUK8s-n2:~$ curl http://87.17.220.201:31330
    curl: (28) Failed to connect to 87.17.220.201 port 31330: Connection timed out
    in7rud3r@in7rud3r-VMUK8s-n2:~$ curl http://10.107.53.88:80
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    

    Any ideas?

  • chrispokorni Posts: 2,349

    Hi @andrea.calvario,

    From your detailed outputs it seems that your cluster works as expected. What you are experiencing may be related to the networking between your guest VMs and the host.

    How did you set up the networking for your guest VMs? What type of network adapters are configured on the VMs? What IP address is 87.x.x.x? How are your VMs configured to access that IP?
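
    (One way to check the adapter attachment from the host, assuming a hypothetical VM name of "VMUK8s"; the exact name is whatever VirtualBox lists for your VM:)

    VBoxManage showvminfo "VMUK8s" | grep -i nic

    And inside each guest, comparing the interface address with the public address usually makes any NAT hop visible:

    ip -4 addr show
    curl ifconfig.io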

    How did you set up your cluster infrastructure for the previous course, LFS258? Did it work then? Are you following the same process now?

    Regards,
    -Chris

  • Hi Chris,
    I'm back after three rather complicated days, but I'm here now.

    Well, let me answer your questions.

    How did you set up the networking for your guest VMs?
    Both VMs use a "Bridged Adapter" network, as shown in the screenshot, and on my router I reserve an IP address for each of the two VMs based on the MAC address it uses.

    What type of network adapters are configured on the VMs?
    It's a classic Ethernet controller:

    in7rud3r@in7rud3r-VMUK8s:~$ lspci | egrep -i --color 'network|ethernet|wireless|wi-fi'
    00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
    
    in7rud3r@in7rud3r-VMUK8s:~$ sudo lshw -class network
      *-network                 
           description: Ethernet interface
           product: 82540EM Gigabit Ethernet Controller
           vendor: Intel Corporation
           physical id: 3
           bus info: pci@0000:00:03.0
           logical name: enp0s3
           version: 02
           serial: 08:00:27:85:a1:45
           size: 1Gbit/s
           capacity: 1Gbit/s
           width: 32 bits
           clock: 66MHz
           capabilities: pm pcix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=192.168.1.160 latency=64 link=yes mingnt=255 multicast=yes port=twisted pair speed=1Gbit/s
           resources: irq:19 memory:f0200000-f021ffff ioport:d020(size=8)
    
    in7rud3r@in7rud3r-VMUK8s:~$ sudo ethtool enp0s3
    Settings for enp0s3:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: off (auto)
        Supports Wake-on: umbg
        Wake-on: d
            Current message level: 0x00000007 (7)
                                   drv probe link
        Link detected: yes
    

    What IP address is 87.x.x.x? How are your VMs configured to access that IP?
    Honestly, I'm not sure about that. Following the lab instructions, the address comes from curl ifconfig.io, but I did nothing to configure it that way; I get the same output from the host machine where the VirtualBox VMs are running (probably because they use a bridged adapter network).

    How did you set up your cluster infrastructure for the previous course LFS258?
    This is the same cluster I used for the previous course, reconfigured based on the instructions provided in Lab 1 of this course.

    Did it work then? Are you following the same process now?
    Yes; I have had some problems, but I have almost always managed to solve them one way or another.

    Thanks for your support, Chris; I really appreciate it.

    Andy

  • chrispokorni Posts: 2,349

    Hi @andrea.calvario,

    I would recommend setting "Promiscuous Mode" to Allow All traffic, and ensuring that the two VirtualBox VMs' IP addresses do not overlap with the 192.168.0.0/16 range, which is the default Pod network managed by Calico.

    Then I would retrieve the guest VM IP instead of the host IP, with ip a or hostname -I, and use that IP address to test the NodePort. If port forwarding is not configured between the host and guest, curl will fail, as you have seen earlier.
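
    For example (a sketch; 192.168.1.160 and 31330 are the guest IP and NodePort from your earlier outputs, so substitute your current values):

    hostname -I                        # run inside the guest; take the first address
    curl http://192.168.1.160:31330    # guest VM IP and NodePort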

    The VirtualBox VMs and their networking configuration should be the same as before. It is only the cluster bootstrapping process that is slightly different.

    Regards,
    -Chris

  • Thanks Chris,

    After changing the network configuration as suggested ("Promiscuous Mode" set to "Allow All"), nothing changed.

    This is the output of the commands you asked for:

    ┌─[bit00451@bitn0451][~]
    └─▪ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
        link/ether 48:2a:e3:0e:36:5c brd ff:ff:ff:ff:ff:ff
    3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 20:16:b9:29:fb:2b brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.97/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp4s0
           valid_lft 11603sec preferred_lft 11603sec
        inet6 fe80::e53b:1326:31:a84a/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:f5:39:ba:b1 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    ┌─[bit00451@bitn0451][~]
    └─▪hostname -I
    192.168.1.97 172.17.0.1 
    

    Using that IP (without any port forwarding), the test fails with a "connection refused" message (though I don't understand the objective of the test).

    I also tried restarting the two nodes, but nothing changed!

    Andy!

  • chrispokorni Posts: 2,349

    Hi @andrea.calvario,

    Perhaps the following documentation page will provide additional helpful insight on the issue.

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

    Regards,
    -Chris

  • Well, solved. It was probably a problem specific to my setup, due to the fact that I am using virtual machines on a bridged network on my host machine; I imagine that if you follow the recommended configuration in the cloud (Google, Amazon, etc.), you should not have this problem. However, here is what I discovered and how I solved it.
    Being on a bridged network, the external IP address of the VMs coincides with the public address of my router, and without a port mapping configured on the router I could not reach the virtual machines from the outside. It was enough to forward the specific NodePort on the router to the VM's internal network address to solve it.
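
    (In other words, anyone testing from inside the same LAN should be able to curl the node's private address directly, without touching the router at all; a sketch using the node IP and NodePort from my earlier outputs:)

    curl http://192.168.1.160:31330
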
    Thanks anyway, Chris, for the support, and sorry for the long-winded post.

  • sweller Posts: 2

    @andrea.calvario THANK YOU. I logged into my router and, lo and behold, I had the same issue with two Ubuntu 20.04 VMs running on VMware. I made a port-forwarding rule like you suggested and everything worked.
