Lab 3.1 - Calico readiness issues

Hey folks, I'm running into an issue that is making me want to pull my hair out. AWS EC2 instances running Ubuntu 20.04 (latest).
Went through the following steps:
- Pulled non-kube apps from apt
- Set swapoff
- Set modprobe options
- Made sure containerd had a config file
- Set up a kubernetes.conf file in /etc/sysctl.d/ and ran sysctl --system
- Used apt to install containerd and kubeadm/kubectl/kubelet, and held the versions on the kube packages
- Made sure my hosts file was good

kubeadm itself ran with no problems (rough prep sketch below).
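For reference, here's roughly what that prep looked like. This is a sketch from memory; the exact package versions, repo setup, and file contents are assumptions, and the lab guide has the canonical steps:

sudo swapoff -a
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl settings for Kubernetes networking (contents assumed from the usual lab setup)
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sudo apt-get install -y containerd
# generate a default containerd config so the file exists
containerd config default | sudo tee /etc/containerd/config.toml
sudo apt-get install -y kubeadm=1.23.0-00 kubelet=1.23.0-00 kubectl=1.23.0-00
sudo apt-mark hold kubeadm kubelet kubectl

The kubeadm init invocation was: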
kubeadm init --kubernetes-version 1.23.0 --cri-socket=/var/run/containerd/containerd.sock --pod-network-cidr 192.168.0.0/16 --upload-certs
I then apply the Calico config: wget to pull down the manifest, then kubectl apply -f calico.yaml.
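This is the shape of it; the manifest URL here is from memory (the one the Calico docs pointed to at the time), so double-check it against the lab guide:

wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

After a few minutes, the calico-node pod shows these events: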
Normal   Started    2m32s                  kubelet  Started container calico-node
Warning  Unhealthy  2m31s                  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning  Unhealthy  2m29s (x2 over 2m30s)  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
What's interesting is that if I destroy the Calico configuration and run the same command as admin, I get slightly different results:
Warning  Unhealthy  12m (x2 over 12m)  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Warning  Unhealthy  12m                kubelet  Readiness probe failed: calico/node is not ready: felix is not ready: readiness probe reporting 503
The security group is wide open (ALL/ALL for inbound and outbound, on IPv4 and IPv6). I can post netcat results or similar if that would help demonstrate SG openness (example check below). eth0 for this environment is as follows:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:55:d8:36:be:b1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.72/24 brd 10.0.0.255 scope global dynamic eth0
       valid_lft 1880sec preferred_lft 1880sec
    inet6 fe80::855:d8ff:fe36:beb1/64 scope link
       valid_lft forever preferred_lft forever
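For example, a spot-check of SG openness between two nodes could look like this (the IPs and test port are illustrative; TCP 179 is the BGP port Calico's BIRD peers over):

# on one node, listen on an arbitrary test port
nc -l 30000
# from another node, confirm the port is reachable
nc -vz 10.0.0.72 30000
# and check the BGP port between nodes once calico-node is running
nc -vz 10.0.0.72 179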
I've never put together a Kubernetes control plane before, and this is very disheartening. I feel like I'm missing something. I almost want to try rolling this on local VMs, but I feel like that could be a bad time too. Any help here would be amazing; this has to be something simple.
Answers
-
So I tried this in GCP as well (using the tutorial on standing up VMs in GCP; everything is open there too), and I'm getting the same thing.
I swapped to Flannel and it's working fine.
I assume this is some arcane issue beyond my comprehension at this point.
-
Hi @vivicus,
While setting up VMs on both AWS and GCP, did you happen to follow the two demo videos from the introductory chapter?
What are the outputs of the following commands?
kubectl get nodes -o wide
kubectl get pods -A -o wide
Regards,
-Chris
-
It's a little late for this (I already installed Flannel, so I have no means of showing you a failed kube node and pod structure at this point), but...
So I did follow the two videos, yes. Additionally, I followed Calico's guidelines for disabling the src/dst check. Everything would start normally except for the Calico nodes. Initially I followed the course documents exactly, but then shifted to what was in the solutions file "containerd-setup.txt" to see if that changed anything. During reconfiguration attempts I would also clear iptables to ensure nothing weird was happening with respect to the reconfig:
iptables -v -t nat -F && iptables -v -t mangle -F && iptables -v -F && iptables -v -X
I would also clean up the things kubeadm reset noted it didn't clear, again to make sure nothing weird was lingering.
Calico nodes would start but quickly (~20 seconds, probably less) enter a warning state, which
kubectl describe -n kube-system pods <calico node name>
would show. Left to their own devices they would enter a crashloop after some time. I tried terminating and creating fresh nodes on AWS, and creating fresh nodes on GCP: same thing. It was either something I was doing, or something in the latest Calico interacting weirdly when applied. I'm unsure which, but I'd lean towards me. The weird thing to me is that Flannel was literally download the YAML, edit accordingly, apply, and it was ready to go (sketch below). No issues.
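For comparison, the Flannel steps were roughly the following. The URL and the edit are from memory, so treat them as assumptions and check the Flannel README:

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# edit the Network value under net-conf.json to match the --pod-network-cidr passed to kubeadm
kubectl apply -f kube-flannel.yml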
-
Hi @vivicus,
I attempted to reproduce your issue but was unable to, since Calico worked as expected every time. My students in class were also successful in installing Calico over the past few weeks.
I am wondering what it is in your environment that prevents Calico from running.
Regards,
-Chris
-
@vivicus you showed that you use --pod-network-cidr 192.168.0.0/16, but then demonstrate an interface with inet 10.0.0.72/24. What is that? Also, you said you applied all/all, but for what CIDR block and for what traffic? Check
kubectl describe ippool
after you install; it should be the same as your pod network CIDR. Try an All traffic, all/all, 0.0.0.0/0 rule for the SG if you didn't already. And by the way, an AWS VPC can use the 192.168.0.0/16 range for EC2 instances, which would collide with the default pod CIDR, so you need to change the pod CIDR to a non-overlapping range (your nodes are on 10.0.0.0/24, so e.g. 10.244.0.0/16 avoids both; sketch below).
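A minimal sketch of that check and the change, assuming the pool name default-ipv4-ippool that the calico.yaml manifest usually creates (verify the name in your cluster):

# confirm the installed pool matches the pod network CIDR given to kubeadm
kubectl get ippools
kubectl describe ippool default-ipv4-ippool | grep -i cidr
# if re-initializing with a non-overlapping pod CIDR:
kubeadm init --kubernetes-version 1.23.0 --cri-socket=/var/run/containerd/containerd.sock --pod-network-cidr 10.244.0.0/16 --upload-certs
# then set CALICO_IPV4POOL_CIDR in calico.yaml to the same range before applying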