
Node worker status is now: CIDRNotAvailable

jonpibo Posts: 1
edited February 1 in LFS258 Class Forum

During the cluster installation in "Lab 3.1 - Install Kubernetes" I noticed the following.

There is a section where we are instructed to copy the file "/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml" to
create a configuration file for the cluster. The initial content of the file is shown below.

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.0.0/16

But looking at the "podSubnet" field, that network would include the IP range of my local network, "192.168.1.0/24", so
to prevent any conflict I changed it to 192.168.2.0/24. At that point everything looked good, and I started the cluster
installation with the configuration like this.

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.2.0/24
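
(As a side note, one quick way to check which range the VM itself sits on before picking a podSubnet is with the
iproute2 tools; I am omitting the output here.)

ip -4 addr show
ip route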

I continued with the CNI part, which instructed us to apply the Cilium YAML file that came with the course tarball.
The process was successful, so I continued with my lab thinking everything was good.

student@cp:~$ find $HOME -name cilium-cni.yaml
student@cp:~$ kubectl apply -f /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml

After some time I continued with the labs, joining the worker node, and I noticed the following event, although the
cluster seemed to be operating normally.

student@cp:~$ kubectl get events | grep CIDR
70s Normal CIDRNotAvailable node/worker Node worker status is now: CIDRNotAvailable
student@cp:~$

I remembered that in the previous version of this training there was an additional note regarding the "podSubnet" field
in the file "/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml" that read "# <-- Match the IP range from the CNI config file",
as you can see below.

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.0.0/16 # <-- Match the IP range from the CNI config file

At that point I realized something had been wrong in my cluster configuration and the CNI from the very beginning, because I had not
changed or matched any IP range in the CNI file, and checking the nodes' IP address range assignment I noticed that the worker node
in fact had no IP pool assigned for Pods.

student@cp:~$ kubectl get node cp -o yaml | grep CIDR:
podCIDR: 192.168.2.0/24
student@cp:~$ kubectl get node worker -o yaml | grep CIDR:
student@cp:~$

I think there should be an instruction, as in the previous version of the training, telling us how to set up the "podSubnet" field
correctly before continuing with the cluster installation, to prevent these issues. The intermediate solution, as far as I can tell,
is to look up the "cluster-pool-ipv4-cidr" value in the CNI configuration file and match our "podSubnet" field to it before starting
the cluster installation.

student@cp:~$ grep -E "cidr|mask" /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml
policy-cidr-match-mode: ""
k8s-require-ipv4-pod-cidr: "false"
k8s-require-ipv6-pod-cidr: "false"
cluster-pool-ipv4-cidr: "10.0.0.0/8" <--- This one
cluster-pool-ipv4-mask-size: "24"
vtep-cidr: ""
vtep-mask: ""
- ciliumcidrgroups
- ciliumcidrgroups.cilium.io
student@cp:~$
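
Concretely, that matching could be done like this before running kubeadm init (just a sketch, assuming the file still
holds the default 192.168.0.0/16; editing it by hand works equally well):

# align podSubnet with the default Cilium pool prior to cluster initialization
sed -i 's|podSubnet: 192.168.0.0/16|podSubnet: 10.0.0.0/8|' \
    /home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml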

I call the previous approach an "intermediate" solution for our lab because, checking the parameter "service-cluster-ip-range", which
represents the Service IP address range, it would conflict with the "podSubnet" at some point: the network "10.0.0.0/8" includes the
range "10.96.0.0/12" somewhere in the subnetting process.

student@cp:~$ sudo grep "service-cluster" /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.96.0.0/12
student@cp:~$

Address: 10.0.0.0
Netmask: 255.0.0.0 = 8
Wildcard: 0.255.255.255
=>
Network: 10.0.0.0/8
Broadcast: 10.255.255.255
HostMin: 10.0.0.1
HostMax: 10.255.255.254
Hosts/Net: 16777214
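
A quick way to confirm that overlap (assuming python3 is available on the node):

python3 -c 'import ipaddress; print(ipaddress.ip_network("10.96.0.0/12").subnet_of(ipaddress.ip_network("10.0.0.0/8")))'
# prints True, i.e. the default Service range sits inside 10.0.0.0/8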

Comments

  • chrispokorni Posts: 2,419

    Hi @jonpibo,

    When bootstrapping the Kubernetes cluster it is critical that the three network layers do not overlap: VMs, Service, and Pod CIDR.
    First, considering that the default Kubernetes Service IP range is 10.96.0.0/12, we can keep it as is and make adjustments to the VM IP range, and ultimately the Pod CIDR.
    Second, managing the VM IP range is done at the hypervisor or cloud VPC/subnet level, and it seems you are using the 192.168.1.0/24 range.
    Finally, ensure the Pod CIDR does not overlap with either one of the two earlier ranges. However, this requires two distinct steps. Step one - edit the kubeadm-config.yaml manifest with the desired podSubnet range (let's use 10.200.0.0/16 as an example, as it does not overlap the earlier two ranges); then initialize the cluster with the kubeadm init ... command. Step two - edit the cilium-cni.yaml manifest's cluster-pool-ipv4-cidr value to match the podSubnet value, that is 10.200.0.0/16; then launch the Cilium CNI.
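
    For reference, a rough sketch of those two edits, using the lab's file paths and the example 10.200.0.0/16 range (the exact kubeadm init flags may differ from the lab guide):

    # Step one - set the pod network before initializing the control plane
    sed -i 's|podSubnet: 192.168.0.0/16|podSubnet: 10.200.0.0/16|' \
        /home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml
    sudo kubeadm init --config=/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml --upload-certs

    # Step two - make the Cilium pod pool match, then deploy the CNI
    sed -i 's|cluster-pool-ipv4-cidr: "10.0.0.0/8"|cluster-pool-ipv4-cidr: "10.200.0.0/16"|' \
        /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml
    kubectl apply -f /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml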

    Following these steps should help you to bring up the cluster and reach the ready state with both nodes.

    Regards,
    -Chris
