Issue deploying simple nginx pod.

abelpatel Posts: 13
edited September 2023 in LFD259 Class Forum

Hi,

I am revisiting Lab 2, just so I can practice creating a Pod and a NodePort.

So I ran the below command to generate the YAML file:

kubectl run nginx --port=80 --image=nginx -o yaml --dry-run=client > abeltest.yaml
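
For reference, the generated abeltest.yaml should look roughly like this (exact fields can vary by kubectl version):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}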

Then ran the below to create the Pod:

kubectl create -f abeltest.yaml

I see the Pod is deployed to my worker node and all appears well, then a few minutes later it goes into a "CrashLoopBackOff" status.

From the events section:

Events:
  Type     Reason          Age                  From               Message
  ----     ------          ----                 ----               -------
  Normal   Scheduled       28m                  default-scheduler  Successfully assigned default/nginx to worker-node01
  Normal   Pulled          28m                  kubelet            Successfully pulled image "nginx" in 838.676532ms
  Normal   Pulled          26m                  kubelet            Successfully pulled image "nginx" in 905.76152ms
  Normal   Created         25m (x3 over 28m)    kubelet            Created container nginx
  Normal   Started         25m (x3 over 28m)    kubelet            Started container nginx
  Normal   Pulled          25m                  kubelet            Successfully pulled image "nginx" in 939.107479ms
  Normal   SandboxChanged  24m (x3 over 26m)    kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         23m (x5 over 28m)    kubelet            Pulling image "nginx"
  Warning  BackOff         8m9s (x50 over 25m)  kubelet            Back-off restarting failed container
  Normal   Killing         3m6s (x8 over 27m)   kubelet            Stopping container nginx
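
From what I understand, the exit reason should also show up in the previous container's logs, with something like:

kubectl logs nginx --previous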

Also when I run k get pods -A -o wide I get the below:

So the calico-node-r2mqq Pod is also having an issue, whereas the calico-node-fsszs Pod is in a Running state.

The events section from calico-node-r2mqq:

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned kube-system/calico-node-r2mqq to worker-node01
Normal Pulled 20m kubelet Container image "docker.io/calico/node:v3.25.0" already present on machine
Normal Started 20m kubelet Started container calico-node
Normal Created 20m kubelet Created container calico-node
Warning Unhealthy 20m (x2 over 20m) kubelet Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Normal Killing 20m kubelet Stopping container calico-node
Normal Created 20m (x2 over 20m) kubelet Created container upgrade-ipam
Normal Started 20m (x2 over 20m) kubelet Started container upgrade-ipam
Normal SandboxChanged 20m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 20m (x2 over 20m) kubelet Container image "docker.io/calico/cni:v3.25.0" already present on machine
Normal Created 20m (x2 over 20m) kubelet Created container install-cni
Normal Started 20m (x2 over 20m) kubelet Started container install-cni
Normal Pulled 20m (x2 over 20m) kubelet Container image "docker.io/calico/cni:v3.25.0" already present on machine
Normal Started 20m (x2 over 20m) kubelet Started container mount-bpffs
Normal Created 20m (x2 over 20m) kubelet Created container mount-bpffs
Normal Pulled 20m (x2 over 20m) kubelet Container image "docker.io/calico/node:v3.25.0" already present on machine
Warning BackOff 27s (x81 over 19m) kubelet Back-off restarting failed container
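
In case it is relevant, I believe the calico-node container's own logs can be pulled with something along these lines:

kubectl -n kube-system logs calico-node-r2mqq -c calico-node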

I am running a clustered set of Vagrant VMs on my local machine. The VMs are running Ubuntu 22.04, and the cluster looks OK:

vagrant@vagrant:~$ kubectl cluster-info
Kubernetes control plane is running at https://10.0.0.10:6443
CoreDNS is running at https://10.0.0.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy


I am a bit lost. It looks like the worker node is under-resourced, but if so, how can I check this? When I run htop on both the control plane and worker nodes, the control plane is using around 800 MB and the worker node is using 260 MB. My host machine has plenty of memory/CPU.
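
I assume the node's conditions and allocatable resources would show up with something like:

kubectl describe node worker-node01
kubectl top node worker-node01     # needs metrics-server, which may not be installed in the lab cluster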

The Control Plane is set up as follows:

CPU: 2
Memory: 4GB

Worker Node:
CPU: 4
Memory: 4GB

Is there anything else I can check?

Comments

  • Hello @abelpatel

    Your worker node is in a NotReady state. We can do a couple of things to fix this:

    1. It is not ready because of the calico-node-r2mqq pod. Delete this pod (calico-node-r2mqq); a new calico pod will be recreated and the worker node will go into the Ready state (see the delete command sketched below).

    2. If not, then restart containerd and kubelet on the worker node:
      systemctl restart containerd
      systemctl restart kubelet

    3. On the cp node, execute the command: kubectl get nodes
      Let me know how it went and we will take it from there...
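    For step 1, the delete would be something along these lines (using the pod name from your output):
      kubectl -n kube-system delete pod calico-node-r2mqq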
  • @fazlur.khan - thanks, the cluster is in better shape now.

    I followed the steps to restart containerd and kubelet services.

    I deleted the problematic calico-node-xx Pod; however, the new one, calico-node-cg27q, still has an issue.

    It's like there is an issue with worker-node01, as the problematic Pods are always on the worker node.

  • Hi @abelpatel,

    Please check your VM network configuration. A single bridged adapter per VM has been the most successful for a local Kubernetes installation. Also, set promiscuous mode to "allow all" so that all ingress traffic is allowed.

    The recommended guest OS for the lab environment is still Ubuntu 20.04 LTS. Some issues have been reported in earlier discussions for environments running Ubuntu 22.04 LTS.
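
    With the VirtualBox provider, bridged networking and promiscuous mode can be set from the host while the VM is powered off, roughly like this (the VM and host interface names here are only examples):
      VBoxManage modifyvm "worker-node01" --nic1 bridged --bridgeadapter1 eth0
      VBoxManage modifyvm "worker-node01" --nicpromisc1 allow-all
    (The same settings are also available in the VirtualBox GUI under each VM's network adapter settings.)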

    Regards,
    -Chris

  • @chrispokorni - thanks for your help. I managed to deploy my cluster on Ubuntu 20.04 and have not experienced any issues so far.

    I've left the network adapter set to "host-only" and promiscuous mode to "allow all".
