
LAB 3.X Worker node remains in NotReady state


I followed the lab manual step by step, but the worker still remains NotReady. The control plane is prepared and functioning, while the worker is not. I am currently using VirtualBox for testing; the network interfaces are in promiscuous mode (set both in the VirtualBox network configuration and confirmed by the flag shown by ip link).
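To double-check the promiscuous flag from inside the guest, something like the following can be used; the interface name enp0s3 is an assumption and should be replaced with the actual one from ip link:

```shell
# List interfaces and their flags; a promiscuous interface shows
# PROMISC in the bracketed flag list.
ip link show

# Check one specific interface (enp0s3 is a placeholder name):
ip link show enp0s3 | grep -o PROMISC

# If the flag is missing, it can be enabled manually (requires root):
# sudo ip link set enp0s3 promisc on
```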

This is the error shown by the kubelet on the worker node:

E0921 17:06:38.561928 6940 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

For some strange reason, when Cilium is deployed its configuration file /etc/cni/net.d/05-cilium.conflist is only present on the control plane, while on the worker node the directory /etc/cni/net.d/ is empty. If I manually create that file on the worker node, it becomes Ready, but then other errors arise.
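A quick way to compare the CNI state on the two nodes is shown below; the path comes from the error above, and the k8s-app=cilium label is the one the Cilium agent DaemonSet normally carries, so there should be one agent pod per node:

```shell
# On each node: list the CNI configs the kubelet will read.
ls -l /etc/cni/net.d/

# From the control plane: check whether a Cilium agent pod was
# actually scheduled (and is Running) on the worker node.
kubectl -n kube-system get pods -l k8s-app=cilium -o wide
```

If no agent pod is scheduled on the worker, that explains the empty directory, since the agent is what writes the conflist file on each node.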

This is what comes up once the worker is Ready (after manually adding the CNI configuration): the Cilium pods on the worker are not ready and show CrashLoopBackOff.

E0921 17:12:14.231789 9444 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cilium-operator pod=cilium-operator-788c4f69bc-7vwg6_kube-system(d5a3dbed-4191-481a-8d66-07e542e9cc6c)\"" pod="kube-system/cilium-operator-788c4f69bc-7vwg6" podUID="d5a3dbed-4191-481a-8d66-07e542e9cc6c"
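For a CrashLoopBackOff, the container logs usually say why the container keeps exiting; the pod name below is taken from the error message above and will differ in another cluster:

```shell
# Logs from the current attempt and from the previous (crashed) one:
kubectl -n kube-system logs cilium-operator-788c4f69bc-7vwg6
kubectl -n kube-system logs --previous cilium-operator-788c4f69bc-7vwg6

# Events often show probe failures, image pull errors, or OOM kills:
kubectl -n kube-system describe pod cilium-operator-788c4f69bc-7vwg6
```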

I have literally followed every single millimeter of the lab manual; I really don't know what is wrong.

Comments

  • chrispokorni
    Posts: 2,190
    edited September 2023

    Hi @mariano.dangelo,

    How many network interfaces are on each VM? Is promiscuous mode set to "allow all"? What type of network is selected (bridged, NAT, etc.)? What are the IP addresses of the VMs? What are the CPU count, memory, and disk size of each VM? What is the guest OS on the VMs?

    Regards,
    -Chris

  • mariano.dangelo

    Hello @chrispokorni,
    Each VM has 2 network interfaces: one NAT and one host-only. Promiscuous mode is indeed set to "allow all". As I said, the network is a NAT network. The VMs have static IP addresses on the 172.23.19.0/24 network, the IPs being 172.23.19.30 and 172.23.19.40. Each VM has 4 CPUs, 8 GB of RAM, and a 25 GB disk. The guest OS on the VMs is Ubuntu 20.04. I am sure that the control plane's alias/domain name is correct, I can see it f

  • chrispokorni

    Hi @mariano.dangelo,

    For successful cluster bootstrapping on VirtualBox, please use a single bridged adapter on each VM, with promiscuous mode set to "allow all".
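    From the host, the switch to a bridged adapter can also be scripted with VBoxManage; the VM name "worker" and the host interface "eth0" below are placeholders for your actual names:

    ```shell
    # The VM must be powered off first.
    VBoxManage modifyvm "worker" --nic1 bridged --bridgeadapter1 eth0
    VBoxManage modifyvm "worker" --nicpromisc1 allow-all

    # Verify that the settings took effect:
    VBoxManage showvminfo "worker" | grep -i -E "NIC 1|promisc"
    ```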

    Regards,
    -Chris

  • mariano.dangelo

    Hello @chrispokorni,

    Thank you, I'll try. I think this should also be included in the lab manual, as it says nothing about bridged adapters, only NAT networks. Should I disable my host-only adapter as well? Could keeping it harm my cluster?
