Lab 3.1 - Calico readiness issues
Hey folks, I'm running into an issue that is making me want to pull my hair out. AWS EC2 instances, running Ubuntu 20.04 (latest).
Went through the following steps:
- Pulled the non-kube packages from apt
- Turned swap off (swapoff)
- Set the modprobe options
- Made sure containerd had a config file
- Set up a kubernetes.conf file in /etc/sysctl.d/ and ran sysctl --system
- Used apt to install containerd and kubeadm/kubectl/kubelet, then held the versions on the kube packages
- Made sure my hosts file was good
- Ran kubeadm with no problems; the command was as follows (a sketch of the prep steps follows it):
kubeadm init --kubernetes-version 1.23.0 --cri-socket=/var/run/containerd/containerd.sock --pod-network-cidr 192.168.0.0/16 --upload-certs
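For concreteness, the prep portion looked roughly like this (a sketch from memory; the module names, sysctl keys, and version pins are the standard ones I'm assuming the course intends, not copied from the course docs):

# sketch of the prep; exact file contents and version pins per the course docs
sudo swapoff -a                                   # kubeadm refuses to run with swap on
sudo modprobe overlay                             # modules containerd/kube networking need
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system                              # apply the new settings
sudo apt-get install -y containerd kubeadm=1.23.0-00 kubelet=1.23.0-00 kubectl=1.23.0-00
sudo apt-mark hold kubeadm kubelet kubectl        # hold the kube package versions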
I go to apply the Calico config (wget to pull down the manifest, kubectl apply -f calico.yaml to apply it).
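Roughly the following (the manifest URL here is my best guess at the one the course docs point to):

wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

After a few minutes I get this in the calico-node pod's events: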
Normal   Started    2m32s                  kubelet  Started container calico-node
Warning  Unhealthy  2m31s                  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning  Unhealthy  2m29s (x2 over 2m30s)  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
What's interesting is that if I destroy the Calico configuration and run the same command as admin, I get slightly different results:
Warning  Unhealthy  12m (x2 over 12m)  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Warning  Unhealthy  12m                kubelet  Readiness probe failed: calico/node is not ready: felix is not ready: readiness probe reporting 503
The security group is wide open (ALL/ALL for inbound and outbound, on both IPv4 and IPv6). I can post netcat results or similar if that would help demonstrate the SG openness (a quick check is sketched after the interface output below). eth0 for this environment is as follows:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:55:d8:36:be:b1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.72/24 brd 10.0.0.255 scope global dynamic eth0
       valid_lft 1880sec preferred_lft 1880sec
    inet6 fe80::855:d8ff:fe36:beb1/64 scope link
       valid_lft forever preferred_lft forever
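For example, from another node, something like this would show the relevant ports are reachable (10.0.0.72 is this control plane's address; TCP 179 is the BGP port Calico's BIRD uses, 6443 the API server):

nc -zv 10.0.0.72 179     # BGP (Calico/BIRD)
nc -zv 10.0.0.72 6443    # Kubernetes API server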
I've never put together a Kubernetes control plane before, and this is very disheartening. I feel like I'm missing something. I almost want to try rolling this on local VMs, but I feel like that could be a bad time too. Any help here would be amazing; this has to be something simple.
Answers
-
So I tried this in GCP as well (using the tutorial on standing up VMs in GCP; everything is open), and I'm getting the same thing.
I swapped to Flannel and it's working fine (sketch below).
I assume this is some arcane issue beyond my comprehension at this point.
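For reference, the swap was roughly this (the manifest URL is the standard kube-flannel one, which is my assumption rather than from the course docs; I edited the pod network in the yaml to match my kubeadm init):

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# edit the "Network" field in net-conf.json to match --pod-network-cidr, then:
kubectl apply -f kube-flannel.yml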
-
Hi @vivicus,
While setting up VMs on both AWS and GCP, did you happen to follow the two demo videos from the introductory chapter?
What are the outputs of the following commands?
kubectl get nodes -o wide
kubectl get pods -A -o wide
Regards,
-Chris
-
It's a little late for this (I already installed Flannel, so I have no means of showing you a failed kube node and pod structure at this point), but...
So I did follow the two videos, yes. Additionally, I followed Calico's guidelines for removing the src/dst check. Everything would start normally except for the calico nodes. Initially I followed the course documents exactly, but then shifted to what was in the solutions file "containerd-setup.txt" to see if that changed anything. During reconfiguration attempts I would also clear iptables, to ensure nothing weird was carrying over between attempts:
iptables -v -t nat -F && iptables -v -t mangle -F && iptables -v -F && iptables -v -X
I would also clear the things kubeadm noted it didn't clear, again to make sure nothing weird was happening (teardown sketch below). The calico nodes would start but quickly (~20 seconds, probably less) enter a warning state, which
kubectl describe -n kube-system pods <calico node name>
would show. Left to their own devices they would enter a crash loop after some time. I tried terminating and creating fresh nodes on AWS, and creating fresh nodes on GCP: same thing. It was either something I was doing, or something in the latest Calico manifest interacting weirdly. I'm unsure which, but I'd lean towards me. The weird thing to me is that Flannel was literally: download the yaml, edit accordingly, apply, and it was ready to go. No issues.
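For completeness, the between-attempts teardown looked roughly like this (a sketch; the CNI path is the standard one, which I'm assuming is what kubeadm's reset warning refers to):

sudo kubeadm reset -f                 # tear down the control plane
# flush the rules kubeadm reset says it does not clean up
sudo iptables -v -t nat -F && sudo iptables -v -t mangle -F && sudo iptables -v -F && sudo iptables -v -X
sudo rm -rf /etc/cni/net.d            # leftover CNI config
sudo systemctl restart containerd kubelet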
-
Hi @vivicus,
I attempted to reproduce your issue but was unable to do so; Calico worked as expected every time. My students in class were also successful in installing Calico over the past few weeks.
I am wondering what it is in your environment that prevents Calico from running.
Regards,
-Chris
-
@vivicus you showed that you use --pod-network-cidr 192.168.0.0/16, but then demonstrate an interface with inet 10.0.0.72/24. What is that? Also, you said you applied all-all, but for what CIDR block, and for what traffic? Check
kubectl describe ippool
after you install; it should match your pod network CIDR. Try an "All traffic", all-all, 0.0.0.0/0 rule for the SG if you haven't already. And by the way, AWS VPCs can use the 192.168.0.0/16 CIDR for EC2 instances, so you may need to change the pod CIDR to e.g. 10.0.0.0/16.
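For example (assuming a stock Calico install, where the default IPv4 pool is named default-ipv4-ippool):

kubectl describe ippool default-ipv4-ippool   # spec.cidr should match --pod-network-cidr
kubectl get ippools -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidr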