LFD259 - New version live on Kubernetes v1.21.1 (5.21.2021)
Hi,
A new course version of LFD259 went live today. In this release, all labs were updated to use Kubernetes v1.21.1 and are now using Podman. The updates also cover most of the upcoming CKAD exam changes.
To ensure you have access to the latest version, please clear your cache.
Regards,
Flavia
The Linux Foundation Training Team
Comments
-
Hi, I just tried to create a new cluster with the provided installation scripts (v1.21.1) on Ubuntu 18 (KVM VMs), but it's not working. When executing the script 'k8sMaster.sh' I get the following error:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
The script continues its execution without checking the result of the previous step:
sudo kubeadm init --config=$(find / -name kubeadm.yaml 2>/dev/null)
sleep 5
echo "Running the steps explained at the end of the init output for you"
mkdir -p $HOME/.kube
...

Does anyone know what's the issue?
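For comparison, a minimal sketch of how that step could fail fast instead of continuing blindly (just an illustration, not the actual course script):

# Abort the whole script on the first failing command.
set -e
sudo kubeadm init --config=$(find / -name kubeadm.yaml 2>/dev/null)

# Or check the exit status of the init step explicitly:
sudo kubeadm init --config=$(find / -name kubeadm.yaml 2>/dev/null) || { echo "kubeadm init failed" >&2; exit 1; }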
Thanks.
KR,
Javier.
1 -
Is there a high-level summary of differences? (Or could there be?)
0 -
Got the same error, running on fresh Ubuntu 18.04.5 server (VirtualBox).
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
...
Any ideas about this? It's blocking me from setting up the cluster. @fcioanca ?
0 -
@lesnyrumcajs I'm sorry, I don't have an answer regarding your configuration. I am also using VirtualBox to set up my cluster, though. Some things I changed in my process when switching from the V2021-01-26 version so far are:
- download the newer version (make sure the url used by wget has V2021-05-21 in it)
- make sure the names of the two nodes match what is expected by the documentation (master and worker)
@leifsegen said:
Is there a high-level summary of differences? (Or could there be?)

Regarding this question, so far I notice that the initialization seems to use cri-o instead of docker. I notice the previous scripts are in there, though. Are they intended to still be compatible with the current version of the course? I'm running into trouble getting the k8sMaster.sh script to run as well:
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
master@master:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2021-05-30 14:16:14 UTC; 5min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 22653 (kubelet)
    Tasks: 14 (limit: 2316)
   CGroup: /system.slice/kubelet.service
           └─22653 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --co

May 30 14:21:21 master kubelet[22653]: E0530 14:21:21.244234   22653 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
# duplicates omitted

master@master:~$ journalctl -xeu kubelet
May 30 14:21:51 master kubelet[22653]: E0530 14:21:51.869400   22653 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
May 30 14:21:51 master kubelet[22653]: E0530 14:21:51.969776   22653 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
# duplicates omitted
May 30 14:21:52 master kubelet[22653]: E0530 14:21:52.396559   22653 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.0.18:6443/apis/coordination.k8s.io/v1/namesp
# etc.

master@master:~$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
# no output
0 -
-
Hello,
The cri-o based nodes take a bit longer to start, but typically come up within the listed 4-minute timeout.
When you look at kubelet, is it running?
What does the output of crictl show?
Do your nodes have the required amount of resources as listed in Exercise 2.1: Overview and Preliminaries?
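For example (the socket path below matches the crictl hint from the kubeadm output above; adjust it if your setup differs):

systemctl status kubelet
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause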
Regards,
0 -
Hello @serewicz, thanks for responding.
When you look at kubelet, is it running?
student@k8s-master:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2021-06-02 20:27:29 UTC; 6min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 19648 (kubelet)
    Tasks: 15 (limit: 4915)
   CGroup: /system.slice/kubelet.service
           └─19648 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --

Jun 02 20:34:06 k8s-master kubelet[19648]: E0602 20:34:06.582356   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:06 k8s-master kubelet[19648]: E0602 20:34:06.696327   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:06 k8s-master kubelet[19648]: E0602 20:34:06.796635   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:06 k8s-master kubelet[19648]: E0602 20:34:06.897810   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:06 k8s-master kubelet[19648]: I0602 20:34:06.968908   19648 kubelet_node_status.go:71] "Attempting to register node" node="master"
Jun 02 20:34:06 k8s-master kubelet[19648]: E0602 20:34:06.969847   19648 kubelet_node_status.go:93] "Unable to register node with API server"
Jun 02 20:34:06 k8s-master kubelet[19648]: E0602 20:34:06.998438   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:07 k8s-master kubelet[19648]: E0602 20:34:07.098588   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:07 k8s-master kubelet[19648]: E0602 20:34:07.199096   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Jun 02 20:34:07 k8s-master kubelet[19648]: E0602 20:34:07.300501   19648 kubelet.go:2291] "Error getting node" err="node \"master\" not found"
Seems something is off here, no?
What does the output of crictl show?
Not sure which subcommand you mean exactly. E.g. sudo crictl info:

student@k8s-master:~$ sudo crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": true,
        "reason": "",
        "message": ""
      }
    ]
  }
}
Do your nodes have the required amount of resources as listed in Exercise 2.1: Overview and Preliminaries?
I assigned 8 GB RAM and 20 GB disk + 3 vCPUs for good measure.
I attached the master.out output from the initial script, plus some output from the journal.
Completely unrelated to the problem: the link in the course pdf seems incorrect (it's https://training.linuxfoundation.org/cm/LFD259/LFD259V2021-05-21SOLUTIONS.tar.xz, but it should be https://training.linuxfoundation.org/cm/LFD259/LFD259_V2021-05-21_SOLUTIONS.tar.xz).
0 -
@lesnyrumcajs The solutions link in the course pdf is actually correct. However, when you copy/paste from the pdf, depending on your software, underscores tend to disappear - this is also mentioned in the Note immediately under the command to download the files.
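If the pasted command fails, typing it out by hand (with the underscores) avoids the problem, e.g.:

wget https://training.linuxfoundation.org/cm/LFD259/LFD259_V2021-05-21_SOLUTIONS.tar.xz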
0 -
Hi @lesnyrumcajs,
These error messages ("node not found", "connection refused") indicate a networking issue with your VirtualBox VMs. Misconfigured VM networking within the VirtualBox hypervisor often leads to these issues during the Kubernetes cluster init process.
How many network interfaces do you have on each VM, and of what type?
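You can list them from inside each guest, or from the host (the VM name below is a placeholder for your own):

# Inside the guest:
ip addr
# From the host:
VBoxManage showvminfo "master" | grep NIC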
Regards,
-Chris
0 -
@fcioanca Thanks, sorry for the noise then.
@chrispokorni It's a brand new VirtualBox VM with the default NAT (and SSH port forwarded to the host) and defaults everywhere, nothing else.
student@k8s-master:~$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe25:2708  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:25:27:08  txqueuelen 1000  (Ethernet)
        RX packets 52549  bytes 78891443 (78.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2874  bytes 186963 (186.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 124  bytes 10160 (10.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 124  bytes 10160 (10.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
0 -
Hi @lesnyrumcajs,
The VBox NAT & defaults are not sufficient for a Kubernetes cluster. If you examine closely the NAT properties in the VBox networking documentation, it prevents guests from talking to each other, a feature that Kubernetes relies on. I would recommend a bridged network instead, with promiscuous mode also enabled and set to allow all traffic.
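As a rough sketch, with the VM powered off, something like the following should do it (the VM name and host adapter are placeholders for your own):

VBoxManage modifyvm "master" --nic1 bridged --bridgeadapter1 eth0 --nicpromisc1 allow-all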
Regards,
-Chris
0 -
@chrispokorni Thanks for the suggestions. I actually tried it originally (because I didn't want to bother with port forwarding). Unfortunately, the error persists, just with my local network IP (172.16.x.x).
0 -
Hi @lesnyrumcajs,
The suggested configuration works for me with VBox VMs. I am wondering whether the host OS has an active firewall that may block traffic.
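On an Ubuntu host you could check with something like:

sudo ufw status
# and confirm nothing is filtering inside the guests either:
sudo iptables -L -n | head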
Regards,
-Chris
0 -
Hello.
I am facing the same issue as mentioned above. I have created two brand new Ubuntu 18.04 LTS Server VMs using the iso "ubuntu-18.04.5-live-server-amd64.iso". One VM is for the cp and another for the worker, with 2 CPUs, 8GB RAM and a 25GB disk each. Also, networking has been configured as bridged with promiscuous mode, obtaining IP addresses from my local dhcp server. The cp obtains the IP 192.168.0.190 while the worker obtains 192.168.0.191.
However, when executing ./k8scp.sh on the cp node, I am stuck at the following phase:
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
Could you please help?
Thanks a lot!
0 -
Hi @ioef,
The course author posted the following:
https://forum.linuxfoundation.org/discussion/859506/k8scp-sh-install-script-issues
Regards,
-Chris
1 -
Hello!
Thank you for the heads up. I have also commented in the indicated thread with a workaround that worked for me.
1 -
The workaround worked fine for me, thanks!
1