Lab 3.2. Worker Node Status "NotReady"

Hi,

I'm running through Lab 3.2. I'm using VirtualBox with Ubuntu 24.04 guest machines.

My Control Plane and Worker Nodes are configured with 2 network adapters. Adapter 1 is the default NAT adapter for internet access, and Adapter 2 is host-only with promiscuous mode set to Allow All.

My hosts can ping each other fine and I have joined the cluster from the worker machine.

When I run kubectl get nodes from the Control Plane machine, it shows my Worker Node with Status "NotReady". If I run kubectl describe node for my worker node machine, it says the CNI plugin is not initialized (please see below).

Ready False Mon, 06 Jan 2025 17:17:21 +0000 Mon, 06 Jan 2025 17:15:51 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
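
For reference, this is roughly how I pulled that condition out (node name is from my setup):

kubectl get nodes
kubectl describe node workernode1 | grep -A 8 Conditions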

Is anybody able to help me with where I'm going wrong?

Many Thanks,
Paul


Comments

  • Posts: 3

    Hi all,

    Please see below for a little more context. Any help would be greatly appreciated.

    kubectl get nodes

    NAME           STATUS     ROLES           AGE   VERSION
    controlplane   Ready      control-plane   25h   v1.30.1
    workernode1    NotReady   <none>          24h   v1.30.1

    kubectl get pods -o wide -A

    NAMESPACE     NAME                                   READY   STATUS                  RESTARTS         AGE   IP              NODE           NOMINATED NODE   READINESS GATES
    kube-system   cilium-96v98                           1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   cilium-envoy-6h8jz                     1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   cilium-envoy-ltp7h                     1/1     Running                 2 (89m ago)      24h   192.168.58.21   workernode1    <none>           <none>
    kube-system   cilium-operator-64767f6566-68vt8       1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   cilium-operator-64767f6566-bc8cc       0/1     CrashLoopBackOff        45 (53s ago)     25h   192.168.58.21   workernode1    <none>           <none>
    kube-system   cilium-rmlp2                           0/1     Init:CrashLoopBackOff   17 (3m42s ago)   24h   192.168.58.21   workernode1    <none>           <none>
    kube-system   coredns-7db6d8ff4d-v7jd6               1/1     Running                 3 (89m ago)      25h   10.0.0.128      controlplane   <none>           <none>
    kube-system   coredns-7db6d8ff4d-xnf5j               1/1     Running                 3 (89m ago)      25h   10.0.0.20       controlplane   <none>           <none>
    kube-system   etcd-controlplane                      1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   kube-apiserver-controlplane            1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   kube-controller-manager-controlplane   1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   kube-proxy-4qndb                       1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>
    kube-system   kube-proxy-7p64d                       1/1     Running                 2 (89m ago)      24h   192.168.58.21   workernode1    <none>           <none>
    kube-system   kube-scheduler-controlplane            1/1     Running                 3 (89m ago)      25h   192.168.58.20   controlplane   <none>           <none>

    journalctl -u kubelet

    Jan 07 13:24:33 workernode1 kubelet[867]: E0107 13:24:33.243442 867 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=fal>
    Jan 07 13:24:33 workernode1 kubelet[867]: I0107 13:24:33.314967 867 scope.go:117] "RemoveContainer" containerID="112920bbfd3a8428c491f74df7332ba6a550a6dd>
    Jan 07 13:24:33 workernode1 kubelet[867]: E0107 13:24:33.315868 867 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\"

    Many Thanks,
    Paul

  • Posts: 2,443

    Hi @blackball,

    Thanks for all the detailed outputs. The first issue I see is the two network interfaces on each VM. This is often reported in the forums as a cause of cluster nodes misbehaving once joined together. My recommendation is to provision each VM with a single bridged network interface, with promiscuous mode enabled to allow all inbound traffic. Also, ensure the virtual disk is fully allocated, not dynamically allocated.
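
    Something along these lines should do it on the VirtualBox host, with the VM powered off. The VM name "workernode1" and the host interface "enp3s0" below are just placeholders, so adjust them for your setup:

    # Switch to a single bridged NIC with promiscuous mode, and drop the second adapter
    VBoxManage modifyvm "workernode1" --nic1 bridged --bridgeadapter1 enp3s0 --nicpromisc1 allow-all --nic2 none

    # A fully allocated (fixed-size) disk can be created like this, e.g. 20 GB
    VBoxManage createmedium disk --filename workernode1.vdi --size 20480 --variant Fixed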

    Since your VMs are assigned IP addresses from the 192.168.0.0/16 network, please ensure that the kubeadm-config.yaml manifest is updated with a different pod network range, perhaps 10.200.0.0/16, AND that the cilium-cni.yaml manifest is updated with the same 10.200.0.0/16 CIDR, prior to initializing the control plane and launching the CNI plugin. The kubeadm-config.yaml manifest available in SOLUTIONS shows 192.168.0.0/16, which would overlap with your VirtualBox VM IP addresses, and the cilium-cni.yaml manifest shows 10.0.0.0/8, which would overlap the Service cluster IP range (10.96.0.0/12). Keeping the three networks - node, pod, and Service - distinct prevents routing issues caused by overlapping IP ranges and makes the cluster easier to reason about.
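
    In practice, the edits would look roughly like this. Treat it as a sketch only - it assumes the CIDRs appear in the lab manifests exactly as quoted above, so double-check both files after editing:

    # Change the pod network in both manifests before running kubeadm init
    # (in kubeadm-config.yaml this is the networking.podSubnet value)
    sed -i 's|192.168.0.0/16|10.200.0.0/16|' kubeadm-config.yaml
    sed -i 's|10.0.0.0/8|10.200.0.0/16|' cilium-cni.yaml

    # Then initialize the control plane with the updated config and apply the CNI manifest
    sudo kubeadm init --config=kubeadm-config.yaml
    kubectl apply -f cilium-cni.yaml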

    Regards,
    -Chris

  • Posts: 3

    Hi Chris,

    Thanks for the detailed response. :smile:

    I've redeployed my VMs using a single bridged network adapter and changed the network ranges to prevent overlaps. Now it's working fine.

    Cheers,
    Paul
