
Cilium setup prevents 2nd node from joining the cluster

I'm not sure if this is intended, but I found that executing the steps exactly as written in Kubernetes Fundamentals (LFS258) Lab 3.1, "Install Kubernetes", and then setting up the second node per Lab 3.2, "Grow the Cluster", leaves the second node unable to join the cluster, likely due to the Cilium setup:

error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: Get "https://k8scp:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
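
For anyone comparing notes: this timeout means the worker never got an HTTP response from the API server at all. A quick way to rule out basic connectivity problems before running kubeadm join, assuming the same k8scp alias and default port 6443 as in the error above:

# On the worker: confirm the k8scp alias resolves to the CP's IP
getent hosts k8scp
# /version is readable without authentication on kubeadm clusters,
# so this should print version JSON rather than time out
curl -k https://k8scp:6443/version
# On the CP: the ConfigMap the join preflight is trying to fetch
kubectl -n kube-public get configmap cluster-info -o yaml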

I removed the Cilium part and redeployed both nodes, and the join succeeded:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 503.011763ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

CP:

kubectl get nodes
NAME               STATUS     ROLES           AGE     VERSION
cp                 NotReady   control-plane   6m48s   v1.32.1
ip-192-168-0-112   NotReady   <none>          3m49s   v1.32.1
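
Side note: both nodes reporting NotReady is expected at this point, since no CNI plugin is deployed after dropping Cilium. The Ready condition message should confirm it, e.g.:

kubectl get node cp -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
# typically reports that the container runtime network is not ready
# because the CNI plugin is not initialized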

I was wondering: is this expected, and what would have happened if I had

  • initialized the CP
  • joined the 2nd node to the cluster
  • set up Cilium
  • tried joining a 3rd node to the cluster <-- will this fail?

Comments

  • chrispokorni

    Hi @krzysztofcyran93,

    Please first clarify which course you are enrolled in - LFS258 (according to your notes) or LFD259 (you posted in the LFD259 forum). The cluster setup instructions differ between the two courses.

    Regards,
    -Chris

  • After further investigation, I think this is an IP assignment conflict. I cannot find the precise requirements for the VPC or what the IP addresses of the nodes should be.

    One of the subnets I deploy the nodes into is, for example, 192.168.1.0/24.
    I understand this conflicts with the podSubnet in kubeadm-config.yaml:

    networking:
      podSubnet: 192.168.0.0/16
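
    For reference, a pod subnet taken from a different private range would avoid the overlap with the VPC entirely; a minimal kubeadm-config.yaml sketch (example values, not the lab's):

    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: 1.32.1
    controlPlaneEndpoint: "k8scp:6443"
    networking:
      # must not overlap the subnet the node interfaces live in
      podSubnet: 10.0.0.0/16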
    

    I changed that to 192.168.15.0/24, and in cilium-cni.yaml changed

    cluster-pool-ipv4-cidr: "192.168.0.0/16"

    to

    cluster-pool-ipv4-cidr: "192.168.15.0/24"
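
    To verify the agents actually picked up the new value, it should land in the cilium-config ConfigMap (the standard name in a default install, so an assumption here), and the agents only read it at startup:

    kubectl -n kube-system get configmap cilium-config -o yaml | grep cluster-pool
    # the agents read the ConfigMap at startup, so restart them after editing it
    kubectl -n kube-system rollout restart ds/cilium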
    
    

    The node could then join, but I'm still having issues:

    cilium-vqd7f   0/1     Running   9 (3m9s ago)   41m
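
    To see whether this affects every agent or only the one on the new node, listing the pods by Cilium's standard k8s-app=cilium label (an assumption about the install) shows the restart counts per node:

    kubectl -n kube-system get pods -l k8s-app=cilium -o wide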
    
    

    I checked the errors:

    kubectl -n kube-system logs cilium-vqd7f | grep error
    time="2025-09-09T18:16:00Z" level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=daemon
    

    It doesn't look like the configuration is getting passed to the node:

    kubectl get nodes -o yaml | grep podCIDR
        podCIDR: 192.168.15.0/24
        podCIDRs:
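
    A guess at what is happening here: with podSubnet set to a /24 and the kube-controller-manager's default node CIDR mask also being /24, there is exactly one node-sized block to hand out, so only one node gets a podCIDR and the Cilium agent on the other keeps waiting. Listing the allocation per node would confirm it:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'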
    
    

    I'll get back to it tomorrow
