Lab 3.3. - 3.4. calico/node is not ready
I have been following the LFS258 lab exercises from 3.1 through 3.3.
The controller/master and worker node installation and growing the cluster have been successful, and the worker node reported its status as Ready.
I used Docker as the container engine:
docker version - 19.03.6
kube client & master version - v1.18.1
From controller/master node 'ip a' output I see that I have 'tunl0' interface created.
In chapter/exercise 3.4, when I execute 'kubectl create deployment nginx --image=nginx' I see that the pod is stuck in 'ContainerCreating' status.
'kubectl -n default describe pod nginx-f89759699-frqxj' output shows me:
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "64b6688978e818556f498fcd33b6d17f50a2f2da8ba8740a514801734adc810b" network for pod "nginx-f89759699-frqxj": networkPlugin cni failed to set up pod "nginx-f89759699-frqxj_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
When I check the calico pod on the worker node I see that it is crashlooping - 'kubectl -n kube-system describe pod calico-node-jcdz5' shows me:
"Liveness probe failed: calico/node is not ready: bird/confd is not live: exit status 1"
"Liveness probe failed: calico/node is not ready: Felix is not live: Get "http://localhost:9099/liveness": dial tcp 127.0.0.1:9099: connect: connection refused"
"Container calico-node failed liveness probe, will be restarted"
"Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory"
I have been following instructions step by step without any exceptions this far.
What am I doing wrong?
Comments
-
Hi @zrks,
Calico is highly dependent on the node-to-node networking of your infrastructure. What is your infrastructure and how is it configured? Are you on a local hypervisor or on cloud VMs? What type of firewalls do you have at the infrastructure level and/or at the VM OS level?
If it is just a glitch, you may try to delete the misbehaving pod and allow the controller to re-create it, or run
kubectl delete -f calico.yaml
and then
kubectl apply -f calico.yaml
to re-deploy all Calico related artifacts. If this does not resolve your issue, then you need to investigate the inter-node networking configuration.
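For example, to bounce just the crashlooping pod and let its DaemonSet re-create it, something along these lines should work (the pod name is the one from your post, so substitute your own; k8s-app=calico-node is the label used by the stock calico.yaml manifest):
kubectl -n kube-system delete pod calico-node-jcdz5
kubectl -n kube-system get pods -l k8s-app=calico-node -w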
Regards,
-Chris
-
Hi, I have the same problem.
I'm using 2 VirtualBox machines.
Before joining the node I get this:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-86bddfcff-jgqrl 1/1 Running 0 3m2s
calico-node-f97ml 1/1 Running 0 3m2s
coredns-f9fd979d6-9xj86 1/1 Running 0 6m26s
coredns-f9fd979d6-ccs6l 1/1 Running 0 6m26s
etcd-k8smaster 1/1 Running 0 6m42s
kube-apiserver-k8smaster 1/1 Running 0 6m41s
kube-controller-manager-k8smaster 1/1 Running 0 6m42s
kube-proxy-vtsxq 1/1 Running 0 6m26s
kube-scheduler-k8smaster 1/1 Running 0 6m42s
And all looks right. After joining the node I get this:
vagrant$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-86bddfcff-jgqrl 1/1 Running 0 5m48s
calico-node-74bbx 0/1 Running 0 51s
calico-node-f97ml 1/1 Running 0 5m48s
coredns-f9fd979d6-9xj86 1/1 Running 0 9m12s
coredns-f9fd979d6-ccs6l 1/1 Running 0 9m12s
etcd-k8smaster 1/1 Running 0 9m28s
kube-apiserver-k8smaster 1/1 Running 0 9m27s
kube-controller-manager-k8smaster 1/1 Running 0 9m28s
kube-proxy-hnklj 1/1 Running 0 51s
kube-proxy-vtsxq 1/1 Running 0 9m12s
kube-scheduler-k8smaster 1/1 Running 0 9m28s
I see that the pod calico-node-74bbx is not ready (0/1), and if I look at the events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m36s (x5 over 6m52s) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal Scheduled 6m26s default-scheduler Successfully assigned kube-system/calico-kube-controllers-86bddfcff-jgqrl to k8smaster
Warning FailedCreatePodSandBox 6m25s kubelet, k8smaster Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "70731438fd40ce4cde0f82936903a5372d703554150f20d08e62ac148f965f35" network for pod "calico-kube-controllers-86bddfcff-jgqrl": networkPlugin cni failed to set up pod "calico-kube-controllers-86bddfcff-jgqrl_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 6m20s kubelet, k8smaster Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "45a9225d424a1e7ee246b09ec1b4b8a96a442c560928e23a2402d6aff329650e" network for pod "calico-kube-controllers-86bddfcff-jgqrl": networkPlugin cni failed to set up pod "calico-kube-controllers-86bddfcff-jgqrl_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 6m18s kubelet, k8smaster Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f7a67cb7e42081870a94e469ccd99e93bf8cfb1c92cae1c60b209617013584f5" network for pod "calico-kube-controllers-86bddfcff-jgqrl": networkPlugin cni failed to set up pod "calico-kube-controllers-86bddfcff-jgqrl_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Normal SandboxChanged 6m15s (x4 over 6m24s) kubelet, k8smaster Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 6m15s kubelet, k8smaster Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c1a72c68263c8f6fcadaa32f6fa31b9518224404c1ac447324b4278b854a6ab2" network for pod "calico-kube-controllers-86bddfcff-jgqrl": networkPlugin cni failed to set up pod "calico-kube-controllers-86bddfcff-jgqrl_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Normal Pulling 6m13s kubelet, k8smaster Pulling image "docker.io/calico/kube-controllers:v3.17.2"
Normal Pulled 6m6s kubelet, k8smaster Successfully pulled image "docker.io/calico/kube-controllers:v3.17.2" in 7.401631705s
Normal Created 6m6s kubelet, k8smaster Created container calico-kube-controllers
Normal Started 6m5s kubelet, k8smaster Started container calico-kube-controllers
And it is true:
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
There is no nodename file there.
I have tried to delete and apply calico again and it does not resolve the problem.
I agree that it must be a problem with the network, but I really don't know what to check (all seems OK).
Any suggestion?
Thanks
-
Hi @emiliano.sutil,
Every automation tool added into the infrastructure provisioning and then the Kubernetes cluster bootstrapping processes introduces extra configuration options that may adversely impact the cluster build process. It could be any number of things, from the VM virtual hardware profile (CPU, mem, disk, NIC), guest OS, networking options, IP address management, host and guest firewall rules, permissions, etc.
When provisioning VirtualBox VMs, it is important to follow the VM sizing guide found in the Overview section of Lab 3.1 and the guest OS suggested, and most importantly to ensure that the VM IP addresses assigned by the VirtualBox hypervisor do not overlap with the Pod IP network 192.168.0.0/16 managed by the Calico network plugin. For the VM networking configuration it is recommended to enable promiscuous mode and set it to allow all inbound traffic. Also, disabling all guest OS firewalls is recommended for the labs. In a few similar cases the host OS firewall rules have also adversely impacted the Kubernetes cluster bootstrapping on VirtualBox.
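A quick sanity check along those lines on each VM (plain ip/ufw commands; ufw is assumed here because the labs suggest Ubuntu guests):
ip -4 addr show        # confirm no VM address falls inside the 192.168.0.0/16 Pod network
sudo ufw status        # check the guest OS firewall state
sudo ufw disable       # for the labs only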
I noticed that you deviated from the installation steps and are using k8smaster as the hostname of your master/control plane node. That is not the intent of the lab guide. k8smaster is assumed to be only an alias set in the /etc/hosts files, to help with cluster bootstrapping at first, and later with the High Availability (HA) configuration.
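For example, an /etc/hosts entry along these lines on every node (the IP shown is just the enp0s8 address mentioned in a reply further down; use your own control plane node's IP):
10.128.1.3   k8smaster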
Regards,
-Chris
-
Thanks @chrispokorni for your answer.
I suspect the problem is the overlap of IPs (in fact, it is the overlap ;-) ). I'm going to test that hypothesis.
I'll let you know if this fixes my problem.
Regards.
-
Hello @chrispokorni
I think I have found my problem. I'll try to explain (perhaps this could be useful for someone):
First of all, I'm using VirtualBox with Vagrant.
I have seen that all my machines have 2 interfaces:
enp0s3: 10.0.2.15
enp0s8: 10.128.1.3 (on my first try, this IP overlapped with the Calico settings)
Well, all the machines have the same IP on the interface enp0s3 (10.0.2.15), and when I init the cluster it assumes that IP as the node IP. For example, you can see this in the output:
...
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1]
...
Well, I have run this command to init the cluster:
kubeadm init --apiserver-advertise-address=10.128.1.3 --apiserver-cert-extra-sans=10.128.1.3 --node-name k8smaster --pod-network-cidr=192.168.0.0/16
And now everything works as expected.
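Side note in case it helps anyone: as far as I can tell, kubeadm defaults to the address on the interface that holds the default route, which on a Vagrant box is the NAT interface, hence the 10.0.2.15 pick. Two quick checks with standard tooling:
ip route show default      # shows which interface/IP kubeadm would pick by default
kubectl get nodes -o wide  # the INTERNAL-IP column shows what each node actually registered with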
My only question now is: how can I init the cluster with these parameters in the kubeadm-config.yaml?
Do you see any problem with this?
Thanks.
-
Hi @emiliano.sutil, thanks so much for the solution. I had the exact same problem as you, as I built my VMs using Vagrant and ended up having 2 NICs. I used the same kubeadm init command that you mentioned above and it works for me. Let me play around more with how to add this configuration into the kubeadm config yaml file.
Thanks
Thin
-
I had the same problem with VirtualBox running Ubuntu 20.04. Both of the VMs have 2 network interfaces, a NAT for internet access and an Internal Network for communicating with each other. Since the NAT is the first interface, kubeadm picked that IP and caused the issue. @emiliano.sutil 's solution worked perfectly to solve the problem.
-
@emiliano.sutil Thanks so much for your post. I also used VirtualBox with Vagrant, with 2 interfaces defined as well (enp0s3 @ 10.0.2.15 and enp0s8 @ 192.168.5.11). I tried to run kubeadm init with a predefined config file, but the pod from my worker-1 node was always in "CrashLoopBackOff".
After running the following init command, my worker-1 node successfully joined the control node.
kubeadm init --apiserver-advertise-address=192.168.5.11 --apiserver-cert-extra-sans=192.168.5.11 --node-name master-1 --pod-network-cidr=192.168.0.0/16
I would definitely give the kubeadm config yaml file a try, probably something like the following:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 192.168.0.0/16
localAPIEndpoint:
  advertiseAddress: 192.168.5.11
  bindPort: 443
Thanks
Shao
-
After reading the kubeadm init documentation and the kubeadm configuration reference for v1beta2, comparing with the above working kubeadm init command, and testing, I managed to create a kubeadm-config.yaml that appears to be working:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.128.0.100
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 192.168.0.0/16
where 10.128.0.100 is the IP of the second network interface of my Vagrant VirtualBox VM. In /etc/hosts, k8scp is mapped to that IP.
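If it helps, this is how I run it with that file (--config and --upload-certs are standard kubeadm init flags; the tee output file name is arbitrary):
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out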