Lab 9.2 NodePort is only accessible on the node where the pod is running

Hello,

I'm working on Exercise 9.2. I created the deployment using nginx-one.yaml and exposed it as a NodePort with:

kubectl -n accounting expose deployment nginx-one --type=NodePort --name=service-lab

But the port is only accessible on the node where the pod is running. The External Traffic Policy is Cluster, and I also tried iptables -P FORWARD ACCEPT, which was the most common fix suggested online. My CNI works on both nodes (cp and worker), and CoreDNS is running on the cp node.
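
For reference, both settings can be checked like this (using the service name and namespace from above; the iptables check needs to run on each node):

# Show the service's external traffic policy (expected: Cluster):
kubectl -n accounting get svc service-lab -o jsonpath='{.spec.externalTrafficPolicy}'

# Show the default policy of the FORWARD chain:
sudo iptables -L FORWARD -n | head -n 1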

What can be wrong here? Thanks in advance.

Answers

  • chrispokorni Posts: 2,354

    Hi @elifcan,

    Without knowing how your infrastructure is configured, it is quite difficult to answer the question:

    What can be wrong here?

    A networking-related issue typically originates from a misconfigured infrastructure. When the infrastructure is configured correctly, however, iptables rules are rarely manipulated by the user directly; they are managed by the kube-proxy node agent instead.
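
    As a sanity check, the NodePort rules that kube-proxy programs can be listed directly on a node. This assumes kube-proxy runs in its default iptables mode; in IPVS mode, or with a kube-proxy replacement, the chain below does not exist:

    sudo iptables -t nat -L KUBE-NODEPORTS -n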

    What type of infrastructure is hosting your cluster: cloud or local hypervisor? What are the sizes of your VMs: cpu, mem, disk? What is the guest OS running your VMs? What type of network interface(s) are attached to each VM? If in the cloud, is there a VPC, firewall/Security Group configured? What is the Kubernetes release, what CNI plugin runs in your cluster?

    When testing the NodePort functionality, what was the source of the traffic? Did you use the curl command or a browser? Did you test the nodes' private IPs and public IPs?
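
    For example, a minimal test against each node in turn (the address and port below are placeholders; the private IP is only reachable from inside the VPC):

    curl http://<node-public-ip>:<nodeport>
    curl http://<node-private-ip>:<nodeport>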

    What are the outputs of the following commands:

    kubectl get nodes -o wide
    kubectl get pods -A -o wide
    

    Regards,
    -Chris

  • elifcan Posts: 3
    edited May 1

    Hi Chris,

    Thanks for your answer. I'm using two cloud VMs hosted by Hetzner, both running Ubuntu 22.04.4 LTS. The CP node has 2 CPUs, 8 GB RAM, and an 80 GB disk; the worker node has 2 CPUs, 4 GB RAM, and a 40 GB disk. I used the public IPs of these nodes and tried both curl and a browser.

    kubectl describe svc -n accounting service-lab

    Name:                     service-lab
    Namespace:                accounting
    Labels:                   system=secondary
    Annotations:              <none>
    Selector:                 system=secondary
    Type:                     NodePort
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.98.25.200
    IPs:                      10.98.25.200
    Port:                     <unset>  80/TCP
    TargetPort:               80/TCP
    NodePort:                 <unset>  31485/TCP
    Endpoints:                192.168.1.139:80,192.168.1.140:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>
    

    kubectl get nodes -o wide

    NAME        STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
    kubecka     Ready    control-plane   20d   v1.29.1   37.27.42.60    <none>        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.6.28
    kubenode1   Ready    <none>          13d   v1.29.1   37.27.82.153   <none>        Ubuntu 22.04.4 LTS   5.15.0-105-generic   containerd://1.7.2
    

    kubectl get pods -A -o wide

    NAMESPACE     NAME                               READY   STATUS    RESTARTS        AGE   IP              NODE        NOMINATED NODE   READINESS GATES
    accounting    nginx-one-8697dd5b94-mlbh5         1/1     Running   0               20h   192.168.1.140   kubenode1   <none>           <none>
    accounting    nginx-one-8697dd5b94-t97pv         1/1     Running   0               20h   192.168.1.139   kubenode1   <none>           <none>
    default       ubuntu                             1/1     Running   0               19h   192.168.1.222   kubenode1   <none>           <none>
    kube-system   cilium-mqrts                       1/1     Running   0               20d   37.27.42.60     kubecka     <none>           <none>
    kube-system   cilium-mzgp5                       1/1     Running   1 (5d23h ago)   13d   37.27.82.153    kubenode1   <none>           <none>
    kube-system   cilium-operator-788c4f69bc-6btwc   1/1     Running   1 (5d23h ago)   12d   37.27.82.153    kubenode1   <none>           <none>
    kube-system   cilium-operator-788c4f69bc-vjddt   1/1     Running   1 (12d ago)     12d   37.27.42.60     kubecka     <none>           <none>
    kube-system   coredns-76f75df574-v58cq           1/1     Running   0               12d   192.168.0.122   kubecka     <none>           <none>
    kube-system   coredns-76f75df574-z5hjx           1/1     Running   0               12d   192.168.0.224   kubecka     <none>           <none>
    kube-system   etcd-kubecka                       1/1     Running   0               12d   37.27.42.60     kubecka     <none>           <none>
    kube-system   kube-apiserver-kubecka             1/1     Running   0               12d   37.27.42.60     kubecka     <none>           <none>
    kube-system   kube-controller-manager-kubecka    1/1     Running   0               12d   37.27.42.60     kubecka     <none>           <none>
    kube-system   kube-proxy-jdrsx                   1/1     Running   1 (5d23h ago)   12d   37.27.82.153    kubenode1   <none>           <none>
    kube-system   kube-proxy-khbbq                   1/1     Running   0               12d   37.27.42.60     kubecka     <none>           <none>
    kube-system   kube-scheduler-kubecka             1/1     Running   0               12d   37.27.42.60     kubecka     <none>           <none>
    

    kubectl -n accounting get svc

    NAME          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    service-lab   NodePort   10.98.25.200   <none>        80:31485/TCP   20h
    

    When I run curl public_ip_of_cp_node:31485 it doesn't work, but curl public_ip_of_worker_node:31485 works.
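
    One way to narrow this down is to check whether the port is being filtered or actively refused (a timeout suggests a firewall dropping packets; an immediate "connection refused" means the request reached the node); the host below is a placeholder:

    nc -vz -w 3 public_ip_of_cp_node 31485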

    Best
    Elifcan

  • elifcan Posts: 3

    Hi Chris,

    Right after sending the above message I figured out the reason: ufw was enabled on the CP node :)
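
    In case it helps others, a sketch of the check and the fix (this opens just this service's NodePort; allowing the whole default NodePort range 30000-32767 is an alternative):

    # On the CP node:
    sudo ufw status verbose
    sudo ufw allow 31485/tcp
    # or open the whole default NodePort range:
    sudo ufw allow 30000:32767/tcp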

    Thanks for your time.
    Best,
    Elifcan

  • chrispokorni Posts: 2,354

    Hi @elifcan,

    Right, that is one of the reasons why the recommended guest OS is still Ubuntu 20.04 LTS, as you may have noticed in the lab guide. So far, the use of Ubuntu 22.04 LTS has produced inconsistent behaviors across various cloud and local environments, especially around networking, which is such a vital resource for a healthy Kubernetes cluster.

    Regards,
    -Chris
