Node worker status is now: CIDRNotAvailable
During the cluster installation in "Lab 3.1 - Install Kubernetes", I noticed the following.
There is a section where we are instructed to copy the file "/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml" to
create a configuration file for the cluster. Here is the initial content of the file:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.0.0/16
But looking at the "podSubnet" field, this network includes the IP range of my local network, "192.168.1.0/24", so
to prevent any conflict I changed it to 192.168.2.0/24. At this point everything looked good, and I started the cluster installation
with the configuration like this:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.2.0/24
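To show why I changed the default, here is a quick sketch using Python's ipaddress module (the 192.168.1.0/24 LAN range is the one from my own setup):

```python
import ipaddress

# The lab's default podSubnet and my local LAN range.
pod_subnet = ipaddress.ip_network("192.168.0.0/16")
lan = ipaddress.ip_network("192.168.1.0/24")

# The default podSubnet contains my LAN range, hence the potential conflict.
print(lan.subnet_of(pod_subnet))  # True

# The replacement range I chose does not touch the LAN range.
print(lan.overlaps(ipaddress.ip_network("192.168.2.0/24")))  # False
```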
I continued with the CNI part, which instructed us to apply the Cilium YAML file that came with the course tarball.
The process was successful, so I continued with my lab thinking everything was good.
student@cp:~$ find $HOME -name cilium-cni.yaml
student@cp:~$ kubectl apply -f /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml
After some time I continued with the labs, joining the worker node, but I noticed the following event, although the cluster seemed
to be operating normally.
student@cp:~$ kubectl get events | grep CIDR
70s Normal CIDRNotAvailable node/worker Node worker status is now: CIDRNotAvailable
student@cp:~$
I remembered that when I did the previous version of this training, there was an additional note regarding the "podSubnet"
in the file "/home/student/LFS258/SOLUTIONS/s_03/kubeadm-config.yaml" that said "# <-- Match the IP range from the CNI config file",
as you can see below:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.1 #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8scp:6443" #<-- Use the alias we put in /etc/hosts not the IP
networking:
  podSubnet: 192.168.0.0/16 # <-- Match the IP range from the CNI config file
At that point I realized something had been wrong in my cluster and CNI configuration from the very beginning, because I had not changed or matched any IP range
in the CNI file, and when checking the nodes' pod IP range assignments I noticed that the worker node in fact had no pod CIDR assigned:
student@cp:~$ kubectl get node cp -o yaml | grep CIDR:
podCIDR: 192.168.2.0/24
student@cp:~$ kubectl get node worker -o yaml | grep CIDR:
student@cp:~$
I think there should be an instruction, as in the previous version of the training, telling us how to correctly set the "podSubnet" field
before continuing with the cluster installation, to prevent these issues. The intermediate solution, I guess, is to look up the
"cluster-pool-ipv4-cidr" value in the CNI configuration file and match our "podSubnet" field to it before starting the cluster installation.
student@cp:~$ grep -E "cidr|mask" /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml
policy-cidr-match-mode: ""
k8s-require-ipv4-pod-cidr: "false"
k8s-require-ipv6-pod-cidr: "false"
cluster-pool-ipv4-cidr: "10.0.0.0/8" <--- This one
cluster-pool-ipv4-mask-size: "24"
vtep-cidr: ""
vtep-mask: ""
- ciliumcidrgroups
- ciliumcidrgroups.cilium.io
student@cp:~$
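To make the mismatch concrete: my "podSubnet" (192.168.2.0/24) and Cilium's default "cluster-pool-ipv4-cidr" (10.0.0.0/8) are completely disjoint, which is consistent with the worker receiving no pod CIDR. A small check (just a sketch with the two values hardcoded):

```python
import ipaddress

pod_subnet = ipaddress.ip_network("192.168.2.0/24")  # from kubeadm-config.yaml
cilium_pool = ipaddress.ip_network("10.0.0.0/8")     # from cilium-cni.yaml

# Disjoint ranges: kubeadm and Cilium disagree about where pod IPs live.
print(pod_subnet.overlaps(cilium_pool))  # False
```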
I called the previous fix an "intermediate" solution for our lab because, checking the parameter "service-cluster-ip-range", which represents
the Service IP address range, it will at some point conflict with the "podSubnet": the network "10.0.0.0/8" includes
the range "10.96.0.0/12" somewhere in the subnetting process.
student@cp:~$ sudo grep "service-cluster" /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.96.0.0/12
student@cp:~$
Address: 10.0.0.0
Netmask: 255.0.0.0 = 8
Wildcard: 0.255.255.255
=>
Network: 10.0.0.0/8
Broadcast: 10.255.255.255
HostMin: 10.0.0.1
HostMax: 10.255.255.254
Hosts/Net: 16777214
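The containment that the ipcalc output hints at can also be verified directly (a sketch with the two ranges from the manifests above):

```python
import ipaddress

service_range = ipaddress.ip_network("10.96.0.0/12")  # kube-apiserver service-cluster-ip-range
cilium_pool = ipaddress.ip_network("10.0.0.0/8")      # Cilium default cluster-pool-ipv4-cidr

# The Service range lies entirely inside the pod pool, so they will collide.
print(service_range.subnet_of(cilium_pool))  # True
```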
Comments
Hi @jonpibo,
When bootstrapping the Kubernetes cluster it is critical that the three network layers do not overlap: VMs, Service, and Pod CIDR.
First, considering that the default Kubernetes Service IP range is 10.96.0.0/12, we can keep it as is and make adjustments to the VM IP range, and ultimately the Pod CIDR.
Second, managing the VM IP range is done at the hypervisor or cloud VPC/subnet level, and it seems you are using the 192.168.1.0/24 range.
Finally, ensure the Pod CIDR does not overlap with either of the two earlier ranges. However, this requires two distinct steps. Step one - edit the kubeadm-config.yaml
manifest with the desired podSubnet range (let's use 10.200.0.0/16 as an example, as it does not overlap the earlier two ranges); then initialize the cluster with the
kubeadm init ... command. Step two - edit the cilium-cni.yaml manifest's cluster-pool-ipv4-cidr value to match the podSubnet value, that is 10.200.0.0/16; then launch
the Cilium CNI. Following these steps should help you bring up the cluster and reach the ready state with both nodes.
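As a quick sanity check of the example range (a sketch using Python's ipaddress module; the VM and Service ranges are the ones mentioned above):

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.200.0.0/16")      # proposed podSubnet
vm_range = ipaddress.ip_network("192.168.1.0/24")     # VM network
service_range = ipaddress.ip_network("10.96.0.0/12")  # default Service range

# The proposed Pod CIDR overlaps neither the VM nor the Service range.
print(pod_cidr.overlaps(vm_range))       # False
print(pod_cidr.overlaps(service_range))  # False
```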
Regards,
-Chris
Hi Chris, thanks a lot for your reply.
Yep, following your advice we can set up the cluster correctly.
-- Jonmar