Node worker status is now: CIDRNotAvailable
During the cluster installation in "Lab 3.1 - Install Kubernetes" I noticed the following.
There is a section where we are instructed to copy the file "/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml" to
create a configuration file for the cluster. The initial content of the file follows.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.0.0/16
But looking at the "podSubnet" field, this network includes the IP range of my local network, "192.168.1.0/24". So,
to prevent any conflict, I changed it to 192.168.2.0/24. At this point everything looked good, and I started the cluster installation
with the configuration like this.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.2.0/24
I continued with the CNI part, which instructed me to apply the Cilium YAML file that came with the course tarball.
The process was a success, so I continued with my lab thinking everything was good.
student@cp:~$ find $HOME -name cilium-cni.yaml
student@cp:~$ kubectl apply -f /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml
Some time later I continued with the labs, joining the worker node, and I noticed the following event, although the cluster seemed
to be operating normally.
student@cp:~$ kubectl get events | grep CIDR
70s Normal CIDRNotAvailable node/worker Node worker status is now: CIDRNotAvailable
student@cp:~$
I remembered that when I did the previous version of this training, there was an additional note regarding "podSubnet"
in the file "/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml" saying "# <-- Match the IP range from the CNI config file",
as you can see below.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.0.0/16 # <-- Match the IP range from the CNI config file
At that point I realized something had been wrong in my cluster configuration and the CNI from the very beginning, because I had not changed or matched any IP range
in the CNI file. Checking the nodes' IP address range assignments, I noticed that the worker node in fact had no IP pool assigned for Pods.
student@cp:~$ kubectl get node cp -o yaml | grep CIDR:
podCIDR: 192.168.2.0/24
student@cp:~$ kubectl get node worker -o yaml | grep CIDR:
student@cp:~$
I think there should be an instruction, as in the previous version of the training, telling us how to set the "podSubnet" field correctly
before continuing with the cluster installation, to prevent these issues. An intermediate solution, I guess, is to look up "cluster-pool-ipv4-cidr"
in the CNI configuration file and match it with our "podSubnet" field before starting the cluster installation.
student@cp:~$ grep -E "cidr|mask" /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml
policy-cidr-match-mode: ""
k8s-require-ipv4-pod-cidr: "false"
k8s-require-ipv6-pod-cidr: "false"
cluster-pool-ipv4-cidr: "10.0.0.0/8" <--- This one
cluster-pool-ipv4-mask-size: "24"
vtep-cidr: ""
vtep-mask: ""
- ciliumcidrgroups
- ciliumcidrgroups.cilium.io
student@cp:~$
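As a sketch of that intermediate fix, the pool CIDR could be rewritten with sed before applying the manifest. The POD_SUBNET value and the CNI path here are assumptions; in the lab the file lives at /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml.

```shell
# Sketch: make cluster-pool-ipv4-cidr match kubeadm's podSubnet before
# applying the CNI manifest. POD_SUBNET and CNI are assumptions; in the
# lab, point CNI at /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml.
POD_SUBNET="192.168.2.0/24"
CNI="${CNI:-cilium-cni.yaml}"

# Tiny stand-in manifest so the sketch is self-contained; skip this in the lab.
[ -f "$CNI" ] || printf '  cluster-pool-ipv4-cidr: "10.0.0.0/8"\n' > "$CNI"

cp "$CNI" "$CNI.bak"   # keep a backup before editing in place
sed -i "s|cluster-pool-ipv4-cidr: \"[^\"]*\"|cluster-pool-ipv4-cidr: \"$POD_SUBNET\"|" "$CNI"
grep cluster-pool-ipv4-cidr "$CNI"
```

After this edit, applying the manifest would hand Cilium the same range kubeadm already advertised as the Pod network.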
I said the previous is an "intermediate" solution for our lab because, checking the parameter "service-cluster-ip-range", which represents
the Service IP address range, it will at some point conflict with the "podSubnet": the network "10.0.0.0/8" includes
the range "10.96.0.0/12" somewhere in the subnetting process.
student@cp:~$ sudo grep "service-cluster" /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.96.0.0/12
student@cp:~$
Address: 10.0.0.0
Netmask: 255.0.0.0 = 8
Wildcard: 0.255.255.255
=>
Network: 10.0.0.0/8
Broadcast: 10.255.255.255
HostMin: 10.0.0.1
HostMax: 10.255.255.254
Hosts/Net: 16777214
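The containment can also be checked mechanically. Here is a minimal sketch in plain shell; the addresses are the ones from this thread, and the helper names are my own. Since /12 is narrower than /8, if the base of 10.96.0.0/12 lies inside 10.0.0.0/8 the whole Service range is contained in the default Cilium pool.

```shell
# Sketch: test whether an address falls inside a CIDR using shell arithmetic.
ip_to_int() {
  echo "$1" | { IFS=. read a b c d; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }
}
in_cidr() {  # usage: in_cidr ADDR NET/PREFIX -> exit 0 if ADDR is inside NET/PREFIX
  addr=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  prefix=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  [ $(( addr & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.96.0.0 10.0.0.0/8    && echo "service range overlaps the default Cilium pool"
in_cidr 10.96.0.0 10.200.0.0/16 || echo "a pool like 10.200.0.0/16 is clear of the service range"
```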
Comments
Hi @jonpibo,
When bootstrapping the Kubernetes cluster it is critical that the three network layers do not overlap: VMs, Service, and Pod CIDR.
First, considering that the default Kubernetes Service IP range is 10.96.0.0/12, we can keep it as is and make adjustments to the VM IP range, and ultimately the Pod CIDR.
Second, managing the VM IP range is done at the hypervisor or cloud VPC/subnet level, and it seems you are using the 192.168.1.0/24 range.
Finally, ensure the Pod CIDR does not overlap with either of the two earlier ranges. However, this requires two distinct steps. Step one - edit the kubeadm-config.yaml manifest with the desired podSubnet range (let's use 10.200.0.0/16 as an example, as it does not overlap the earlier two ranges); then initialize the cluster with the kubeadm init ... command. Step two - edit the cilium-cni.yaml manifest's cluster-pool-ipv4-cidr value to match the podSubnet value, that is 10.200.0.0/16; then launch the Cilium CNI.
Following these steps should help you bring up the cluster and reach the Ready state with both nodes.
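For reference, the step-one edit would look like this in kubeadm-config.yaml (a sketch; the version and endpoint lines are kept as in the lab file):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 10.200.0.0/16   # <-- match cluster-pool-ipv4-cidr in cilium-cni.yaml
```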
Regards,
-Chris