
Lab 9.2 NodePort is only accessible on the node where the pod is running

Hello,

I'm on exercise 9.2. I created the deployment using nginx-one.yaml and exposed it as a NodePort with:

kubectl -n accounting expose deployment nginx-one --type=NodePort --name=service-lab

But the port is only accessible on the node where the pod is running. The External Traffic Policy is Cluster, and I also tried iptables -P FORWARD ACCEPT, which was the most commonly suggested fix online. My CNI works on both nodes (cp and worker), and coredns is running on the cp node.
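
For reference, this is how I checked the traffic policy (just a read-only query against the service created by the expose command above), and it prints Cluster:

kubectl -n accounting get svc service-lab -o jsonpath='{.spec.externalTrafficPolicy}'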

What can be wrong here? Thanks in advance.


Answers

  • Posts: 2,443

    Hi @elifcan,

Without knowing how your infrastructure is configured, it is quite difficult to answer the question:

    What can be wrong here?

A networking-related issue typically originates from a misconfigured infrastructure. When the infrastructure is configured correctly, however, the iptables rules are rarely manipulated by the user - they are managed by the kube-proxy node agent instead.
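
    As a quick sanity check on each node, you can list the NodePort rules that kube-proxy maintains (this assumes kube-proxy's default iptables proxy mode; the chain will not exist under other proxy modes):

    sudo iptables -t nat -L KUBE-NODEPORTS -n

    If your service's node port shows up on both nodes, the rules are in place and the problem more likely sits in the surrounding infrastructure.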

What type of infrastructure is hosting your cluster: cloud or local hypervisor? What are the sizes of your VMs: CPU, memory, disk? What guest OS are your VMs running? What type of network interface(s) are attached to each VM? If in the cloud, is there a VPC or firewall/Security Group configured? What Kubernetes release and CNI plugin run in your cluster?

    When testing the nodeport functionality, what was the source of the traffic? Did you use the curl command or a browser? Did you test the nodes' private IPs and public IPs?
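
    For example, a minimal test from your workstation and from each node (placeholders only - substitute your node IPs and the service's node port):

    curl http://<node-public-ip>:<node-port>
    curl http://<node-private-ip>:<node-port>

    Comparing the two usually tells whether traffic is being dropped in front of the nodes (firewall/Security Group) or inside the cluster.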

    What are the outputs of the following commands:

    1. kubectl get nodes -o wide
    2. kubectl get pods -A -o wide

    Regards,
    -Chris

  • Posts: 3
    edited May 2024

    Hi Chris,

Thanks for your answer. I'm using two cloud VMs hosted by Hetzner, both running Ubuntu 22.04.4 LTS. The CP node has 2 CPUs, 8 GB RAM, and an 80 GB disk; the worker node has 2 CPUs, 4 GB RAM, and a 40 GB disk. I used the public IPs of these nodes and tried both curl and a browser.

    kubectl describe svc -n accounting service-lab

    Name: service-lab
    Namespace: accounting
    Labels: system=secondary
    Annotations: <none>
    Selector: system=secondary
    Type: NodePort
    IP Family Policy: SingleStack
    IP Families: IPv4
    IP: 10.98.25.200
    IPs: 10.98.25.200
    Port: <unset> 80/TCP
    TargetPort: 80/TCP
    NodePort: <unset> 31485/TCP
    Endpoints: 192.168.1.139:80,192.168.1.140:80
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>

    kubectl get nodes -o wide

    NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    kubecka Ready control-plane 20d v1.29.1 37.27.42.60 <none> Ubuntu 22.04.4 LTS 5.15.0-102-generic containerd://1.6.28
    kubenode1 Ready <none> 13d v1.29.1 37.27.82.153 <none> Ubuntu 22.04.4 LTS 5.15.0-105-generic containerd://1.7.2

    kubectl get pods -A -o wide

    NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    accounting nginx-one-8697dd5b94-mlbh5 1/1 Running 0 20h 192.168.1.140 kubenode1 <none> <none>
    accounting nginx-one-8697dd5b94-t97pv 1/1 Running 0 20h 192.168.1.139 kubenode1 <none> <none>
    default ubuntu 1/1 Running 0 19h 192.168.1.222 kubenode1 <none> <none>
    kube-system cilium-mqrts 1/1 Running 0 20d 37.27.42.60 kubecka <none> <none>
    kube-system cilium-mzgp5 1/1 Running 1 (5d23h ago) 13d 37.27.82.153 kubenode1 <none> <none>
    kube-system cilium-operator-788c4f69bc-6btwc 1/1 Running 1 (5d23h ago) 12d 37.27.82.153 kubenode1 <none> <none>
    kube-system cilium-operator-788c4f69bc-vjddt 1/1 Running 1 (12d ago) 12d 37.27.42.60 kubecka <none> <none>
    kube-system coredns-76f75df574-v58cq 1/1 Running 0 12d 192.168.0.122 kubecka <none> <none>
    kube-system coredns-76f75df574-z5hjx 1/1 Running 0 12d 192.168.0.224 kubecka <none> <none>
    kube-system etcd-kubecka 1/1 Running 0 12d 37.27.42.60 kubecka <none> <none>
    kube-system kube-apiserver-kubecka 1/1 Running 0 12d 37.27.42.60 kubecka <none> <none>
    kube-system kube-controller-manager-kubecka 1/1 Running 0 12d 37.27.42.60 kubecka <none> <none>
    kube-system kube-proxy-jdrsx 1/1 Running 1 (5d23h ago) 12d 37.27.82.153 kubenode1 <none> <none>
    kube-system kube-proxy-khbbq 1/1 Running 0 12d 37.27.42.60 kubecka <none> <none>
    kube-system kube-scheduler-kubecka 1/1 Running 0 12d 37.27.42.60 kubecka <none> <none>

    kubectl -n accounting get svc

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service-lab NodePort 10.98.25.200 <none> 80:31485/TCP 20h

When I run curl public_ip_of_cp_node:31485 it doesn't work, but curl public_ip_of_worker_node:31485 works.

    Best
    Elifcan

  • Posts: 3

    Hi Chris,

Right after sending the above message, I figured out the reason: ufw was enabled on the CP node :)
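
    In case it helps anyone else, this is roughly what I checked and changed on the CP node (30000-32767 is the default NodePort range; adjust it if your cluster uses a different one):

    sudo ufw status
    sudo ufw allow 30000:32767/tcp

    On a throwaway lab cluster, sudo ufw disable works as well.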

    Thanks for your time.
    Best,
    Elifcan

  • Posts: 2,443

    Hi @elifcan,

Right, that is one of the reasons why the recommended guest OS is still Ubuntu 20.04 LTS, as you may have noticed in the lab guide. So far, the use of Ubuntu 22.04 LTS has produced inconsistent behaviors across various cloud and local environments, especially around networking, which is such a vital resource for a healthy Kubernetes cluster.

    Regards,
    -Chris
