
Lab 2.3 - curl on nginx timeout

Posts: 22
edited August 2021 in LFD259 Class Forum

Good morning all,
as described in the title, I have an issue with the last part of the lab.
When I try to curl nginx through the public IP, I receive a timeout.

  in7rud3r@in7rud3r-VMUK8s:~$ uname -a
  Linux in7rud3r-VMUK8s 5.8.0-55-generic #62-Ubuntu SMP Tue Jun 1 08:21:18 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  in7rud3r@in7rud3r-VMUK8s:~$ kubectl get svc
  NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
  basicservice   NodePort    10.103.240.174   <none>        80:32542/TCP   9m23s
  kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        54d
  in7rud3r@in7rud3r-VMUK8s:~$ curl ifconfig.io
  87.17.220.201
  in7rud3r@in7rud3r-VMUK8s:~$ curl http://87.17.220.201:32542
  curl: (28) Failed to connect to 87.17.220.201 port 32542: Connection timed out

Same problem on both nodes.
I'm using two virtual machines running locally in VirtualBox.

Do you have any suggestions? Has anyone else run into the same problem?

thanks in advance.

Andy!


Comments

  • Posts: 2,443

    Hi @andrea.calvario,

    Can you confirm that your pod is running, that the selector of the service matches the pod's label, and the endpoint is showing the IP address of the pod?
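
    For example, something along these lines should surface a mismatch quickly (I am assuming the object names from the lab exercise, basicpod and basicservice; adjust them if yours differ):

    kubectl get pod basicpod -o wide --show-labels        # is the pod Running, and what label and IP does it have?
    kubectl describe svc basicservice | grep -i selector  # does the Selector match that label?
    kubectl get endpoints basicservice                    # should show the pod's IP:port, not <none>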

    Regards,
    -Chris

  • Hi Chris, thanks for your answer...
    I tried recreating the service and the pod, but the issue is still there...

    Let me report here your points...

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get svc
    NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    basicservice   NodePort    10.109.102.131   <none>        80:31573/TCP   6s
    kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        54d
    in7rud3r@in7rud3r-VMUK8s:~$ curl ifconfig.io
    87.17.220.201
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pod
    NAME       READY   STATUS    RESTARTS   AGE
    basicpod   2/2     Running   0          117s
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe service
    serviceaccounts  services
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe services basicservice
    Name:                     basicservice
    Namespace:                default
    Labels:                   <none>
    Annotations:              <none>
    Selector:                 type=webserver
    Type:                     NodePort
    IP Families:              <none>
    IP:                       10.109.102.131
    IPs:                      10.109.102.131
    Port:                     <unset>  80/TCP
    TargetPort:               80/TCP
    NodePort:                 <unset>  31573/TCP
    Endpoints:                192.168.140.141:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get endpoints
    NAME           ENDPOINTS            AGE
    basicservice   192.168.140.141:80   16h
    kubernetes     192.168.1.50:6443    54d
    in7rud3r@in7rud3r-VMUK8s:~$ cat ./LFD259/SOLUTIONS/s_02/basic.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: basicpod
      labels:
        type: webserver
    spec:
      containers:
      - name: webcont
        image: nginx
        ports:
        - containerPort: 80
      - name: fdlogger
        image: fluent/fluentd
    in7rud3r@in7rud3r-VMUK8s:~$ curl http://87.17.220.201:31573
    curl: (28) Failed to connect to 87.17.220.201 port 31573: Connection timed out

    It seems that the service is not labeled (it does have the selector, though), but looking at the lab document I cannot find the point where it instructs you to label the service. I can see the label on the pod at step 9, confirmed by the description of the pod as follows:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe pod basicpod
    Name:         basicpod
    Namespace:    default
    Priority:     0
    Node:         in7rud3r-vmuk8s-n2/192.168.1.161
    Start Time:   Tue, 10 Aug 2021 18:14:59 +0200
    Labels:       type=webserver
    Annotations:  cni.projectcalico.org/containerID: 153106fffc2f34001e8f36403d0a28d48d418031bba7060b6a441d4e172d2364
                  cni.projectcalico.org/podIP: 192.168.140.141/32
                  cni.projectcalico.org/podIPs: 192.168.140.141/32
    Status:       Running
    IP:           192.168.140.141
    IPs:
      IP:  192.168.140.141
    Containers:
      webcont:
        Container ID:   docker://6e86830dcc7f28c22e804eb703a7f066886d1f88e21cae1c001a92acdb0af250
        Image:          nginx
        Image ID:       docker-pullable://nginx@sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90
        Port:           80/TCP
        Host Port:      0/TCP
        State:          Running
          Started:      Tue, 10 Aug 2021 18:15:03 +0200
        Ready:          True
        Restart Count:  0
        Environment:    <none>
    [...]

    Am I doing something wrong?

    Thanks in advance Chris!

    Andy!

  • Posts: 2,443

    Hi @andrea.calvario,

    The configuration seems correct. What happens when you attempt to curl the pod's ephemeral IP (192.168.x.y) from each node - the control-plane and worker? What about running curl on the service's ClusterIP, again from each node?
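
    For example, using the addresses from your earlier kubectl output (substitute the current values if you have recreated the objects since):

    # run these on the control-plane node, then repeat them on the worker
    curl http://192.168.140.141:80     # the pod's ephemeral IP, as shown in the Endpoints line
    curl http://10.109.102.131:80      # the service's ClusterIP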

    Regards,
    -Chris

  • The response was the same from both nodes (timeout).
    Anyway, now I have another problem. I went on with lab 3 and, after restarting the two VMs (as requested in the lab), the cluster stopped working, perhaps due to an IP change on both VMs. I tried to restore it but couldn't; the nodes seem to work, but their status is NotReady. I think I will have to recreate the cluster from scratch, and I'll take the opportunity to rerun the whole process; if the problem persists I will come back to this post.

    Thanks for now.

    P.S.: Let me just say that Kubernetes is very sensitive to network changes! :D

  • Well,

    I recreated the cluster from scratch, but the issue persists.

    As said before, the timeout is on both nodes (cp and worker).

    Calling the ClusterIP on port 80 works.

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods
    NAME       READY   STATUS    RESTARTS   AGE
    basicpod   2/2     Running   0          19s
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get svc
    NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    basicservice   NodePort    10.107.53.88   <none>        80:31330/TCP   24s
    kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        31m
    in7rud3r@in7rud3r-VMUK8s:~$ curl ifconfig.io
    87.17.220.201
    in7rud3r@in7rud3r-VMUK8s:~$ curl http://87.17.220.201:31330
    curl: (28) Failed to connect to 87.17.220.201 port 31330: Connection timed out
    in7rud3r@in7rud3r-VMUK8s:~$ curl http://10.107.53.88:80
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

    The same happens on the worker node:

    in7rud3r@in7rud3r-VMUK8s-n2:~$ kubectl get pods,svc
    NAME           READY   STATUS    RESTARTS   AGE
    pod/basicpod   2/2     Running   0          7m39s

    NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    service/basicservice   NodePort    10.107.53.88   <none>        80:31330/TCP   7m29s
    service/kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        39m
    in7rud3r@in7rud3r-VMUK8s-n2:~$ curl http://87.17.220.201:31330
    curl: (28) Failed to connect to 87.17.220.201 port 31330: Connection timed out
    in7rud3r@in7rud3r-VMUK8s-n2:~$ curl http://10.107.53.88:80
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

    Any idea?

  • Posts: 2,443

    Hi @andrea.calvario,

    From your detailed outputs it seems that your cluster works as expected. What you are experiencing may be related to the networking between your guest VMs and the host.

    How did you set up the networking for your guest VMs? What type of network adapters are configured on the VMs? What IP address is 87.x.x.x? How are your VMs configured to access that IP?

    How did you set up your cluster infrastructure for the previous course, LFS258? Did it work then? Are you following the same process now?

    Regards,
    -Chris

  • Hi Chris,
    I'm back after three days of dealing with a not-so-simple situation, but I'm here now.

    Well, let me answer your questions.

    How did you set up the networking for your guest VMs?
    Both VMs use a "Bridged Adapter" network, as shown in the screenshot, and on the router I have reserved an IP address for each of the two VMs based on the MAC address they use.

    What type of network adapters are configured on the VMs?
    It's a standard Ethernet controller:

    in7rud3r@in7rud3r-VMUK8s:~$ lspci | egrep -i --color 'network|ethernet|wireless|wi-fi'
    00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)

    in7rud3r@in7rud3r-VMUK8s:~$ sudo lshw -class network
      *-network
           description: Ethernet interface
           product: 82540EM Gigabit Ethernet Controller
           vendor: Intel Corporation
           physical id: 3
           bus info: pci@0000:00:03.0
           logical name: enp0s3
           version: 02
           serial: 08:00:27:85:a1:45
           size: 1Gbit/s
           capacity: 1Gbit/s
           width: 32 bits
           clock: 66MHz
           capabilities: pm pcix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=192.168.1.160 latency=64 link=yes mingnt=255 multicast=yes port=twisted pair speed=1Gbit/s
           resources: irq:19 memory:f0200000-f021ffff ioport:d020(size=8)

    in7rud3r@in7rud3r-VMUK8s:~$ sudo ethtool enp0s3
    Settings for enp0s3:
            Supported ports: [ TP ]
            Supported link modes:   10baseT/Half 10baseT/Full
                                    100baseT/Half 100baseT/Full
                                    1000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Supported FEC modes: Not reported
            Advertised link modes:  10baseT/Half 10baseT/Full
                                    100baseT/Half 100baseT/Full
                                    1000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Advertised FEC modes: Not reported
            Speed: 1000Mb/s
            Duplex: Full
            Auto-negotiation: on
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: internal
            MDI-X: off (auto)
            Supports Wake-on: umbg
            Wake-on: d
            Current message level: 0x00000007 (7)
                                   drv probe link
            Link detected: yes

    What IP address is 87.x.x.x? How are your VMs configured to access that IP?
    Honestly, I'm not sure about that. Following the lab instructions, that address is simply what curl ifconfig.io returns; I did not configure anything to make it work that way, and I get the same output from the host machine where the VirtualBox VMs are running (probably because they use a bridged adapter network).

    How did you set up your cluster infrastructure for the previous course, LFS258?
    This is the same cluster I used for the previous course, reconfigured based on the instructions provided in lab 1 of this course.

    Did it work then? Are you following the same process now?
    Yes, though I had some problems, which I almost always managed to solve in one way or another.

    Thanks for your support, Chris, I really appreciate it.

    Andy

  • Posts: 2,443

    Hi @andrea.calvario,

    I would recommend setting "Promiscuous Mode" to Allow All traffic, and ensuring that the two VirtualBox VMs' IP addresses do not overlap with the 192.168.0.0/16 range, which is the default Pod network managed by Calico.
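
    If you do end up rebuilding the cluster again, one possible way to avoid that overlap is to pick a different Pod CIDR at init time and make Calico use the same range (the 10.244.0.0/16 value below is only an example, not something from the lab):

    # example only - choose a range that does not collide with your LAN
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # then set CALICO_IPV4POOL_CIDR to the same range in calico.yaml before applying it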

    Then I would retrieve the guest VM IP instead of the host IP, with ip a or hostname -I and use that IP address to test the NodePort. If port forwarding is not configured between the host and guest, the curl would fail as you have seen earlier.
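
    For example, on one of the guest VMs (the 192.168.1.160 address comes from your lshw output and 31330 from your latest service; use whatever values your own VM and service show):

    hostname -I                        # or: ip a  - note the VM's own LAN address
    curl http://192.168.1.160:31330    # test the NodePort against the node's IP, not the public one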

    The VirtualBox VMs and their networking configuration should be the same as before. It is only the cluster bootstrapping process that is slightly different.

    Regards,
    -Chris

  • Thanks Chris,

    After changing the network configuration as suggested ("Promiscuous Mode" set to "Allow All"), nothing changed.

    This is the output of the commands you asked for.

    ┌─[bit00451@bitn0451][~]
    └─▪ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
        link/ether 48:2a:e3:0e:36:5c brd ff:ff:ff:ff:ff:ff
    3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 20:16:b9:29:fb:2b brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.97/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp4s0
           valid_lft 11603sec preferred_lft 11603sec
        inet6 fe80::e53b:1326:31:a84a/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:f5:39:ba:b1 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    ┌─[bit00451@bitn0451][~]
    └─▪hostname -I
    192.168.1.97 172.17.0.1

    Using that IP (without any port forwarding), the test fails with a "connection refused" message (but I don't understand the objective of the test).

    I also tried restarting the two nodes, but nothing changed!

    Andy!

  • Posts: 2,443

    Hi @andrea.calvario,

    Perhaps the following documentation page will provide additional helpful insight on the issue.

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

    Regards,
    -Chris

  • Well, solved. It is probably a problem specific to my setup, since I am using virtual machines on a bridged network on my host machine; I imagine that if you follow the recommended configuration in the cloud (Google, Amazon, etc.), you should not have this problem. In any case, here is what I discovered and how I solved it.
    Being on a bridged network, the "external" IP address seen by the VMs coincides with the public address of my router. Trying to reach the virtual machines from the outside without a port mapping configured on the router, I could not. It was enough to open the specific port on the router and forward it to the VM's internal network address to solve the issue.
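
    Roughly, the difference is this (the addresses and the port are from my setup; yours will differ):

    # from any machine on the same LAN, this works without touching the router
    curl http://192.168.1.160:31330
    # from outside, via the public IP, it only works after adding a router rule
    # that forwards the NodePort to the VM's internal address (192.168.1.160)
    curl http://87.17.220.201:31330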

    Thanks anyway, Chris, for the support, and sorry for the long-winded post.

  • Posts: 2

    @andrea.calvario THANK YOU. I logged into my router and, lo and behold, I had the same issue with two Ubuntu 20.04 VMs running on VMware. I made a port forwarding rule like you suggested and everything worked.

