LAB 3.6 master node not ready

Hello,
I've been following the steps in containerd-setup.txt.
However, the master node never reaches the Ready state. Notice the CONTAINER-RUNTIME field.
$ kubectl get node k8s-single -o wide
NAME         STATUS     ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
k8s-single   NotReady   control-plane,master   19m   v1.23.1   10.166.0.3    <none>        Ubuntu 20.04.4 LTS   5.15.0-1016-gcp   containerd://Unknown
$ kubectl describe nodes k8s-single
Name:               k8s-single
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-single
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.166.0.3/32
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.100.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 03 Oct 2022 11:30:17 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-single
  AcquireTime:     <unset>
  RenewTime:       Mon, 03 Oct 2022 11:50:50 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 03 Oct 2022 11:31:14 +0000   Mon, 03 Oct 2022 11:31:14 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:30:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:30:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:30:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:45:02 +0000   KubeletNotReady              [container runtime is down, PLEG is not healthy: pleg was last seen active 6m16.582796441s ago; threshold is 3m0s]
Addresses:
  InternalIP:  10.166.0.3
  Hostname:    k8s-single
Capacity:
  cpu:                2
  ephemeral-storage:  20134592Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7621368Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  18556039957
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7518968Ki
  pods:               110
System Info:
  Machine ID:                 bbd0e9ce1b4c1a57630559bf544f53af
  System UUID:                bbd0e9ce-1b4c-1a57-6305-59bf544f53af
  Boot ID:                    a95eb6e2-0c9f-49ce-acc2-60d17c82d9cb
  Kernel Version:             5.15.0-1016-gcp
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://Unknown
  Kubelet Version:            v1.23.1
  Kube-Proxy Version:         v1.23.1
PodCIDR:                      192.168.0.0/24
PodCIDRs:                     192.168.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system  calico-kube-controllers-66966888c4-tm7bk  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  kube-system  calico-node-cqxcn                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
  kube-system  coredns-64897985d-jpnrc                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
  kube-system  coredns-64897985d-n55jc                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
  kube-system  etcd-k8s-single                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
  kube-system  kube-apiserver-k8s-single                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
  kube-system  kube-controller-manager-k8s-single        200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
  kube-system  kube-proxy-79lxk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  kube-system  kube-scheduler-k8s-single                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (55%)  0 (0%)
  memory             240Mi (3%)   340Mi (4%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type     Reason                   Age                  From        Message
  ----     ------                   ----                 ----        -------
  Normal   Starting                 20m                  kube-proxy
  Warning  InvalidDiskCapacity      20m                  kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  20m                  kubelet     Node k8s-single status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    20m                  kubelet     Node k8s-single status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     20m                  kubelet     Node k8s-single status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  20m                  kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 20m                  kubelet     Starting kubelet.
  Normal   NodeReady                19m                  kubelet     Node k8s-single status is now: NodeReady
  Normal   NodeNotReady             5m53s                kubelet     Node k8s-single status is now: NodeNotReady
  Warning  ContainerGCFailed        30s (x6 over 5m30s)  kubelet     rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
  Warning  ImageGCFailed            30s                  kubelet     rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService
$ sudo crictl ps
FATA[0000] listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
$ sudo systemctl status containerd
...
Oct 03 11:53:15 k8s-single containerd[527]: time="2022-10-03T11:53:15.891993595Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: no corresponding runtime configured in `containerd.runtimes` for `containerd` `default_runtime_name = \"runc\""
It looks like a problem with the container runtime configuration.
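If it helps anyone else, a couple of checks that can confirm the CRI plugin is not loading (standard containerd/crictl commands, nothing lab-specific; the socket path below is containerd's default and may differ on other setups):

# Show the configuration containerd is actually running with
sudo containerd config dump | grep -A 4 'io.containerd.grpc.v1.cri'

# After changing /etc/containerd/config.toml, restart containerd and re-test the CRI endpoint
sudo systemctl restart containerd
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info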
Answers
It seems that the problem is in containerd-setup.txt.
We write to the containerd config file twice, and the second write actually overwrites the first config we created. I don't know if this is intentional (?)
First:
# Configure containerd to use the runc engine
cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
#disabled_plugins = ["cri"]
[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF
and a bit later:
# Get containerd running, append or create several files.
cat <<EOF | sudo tee /etc/containerd/config.toml
disabled_plugins = ["restart"]
[plugins.linux]
  shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF
Since the second section's comment says "# Get containerd running, append or create several files", I thought that the -a parameter had been forgotten for the tee command (so the new content would be appended to the existing file instead of overwriting it). However, appending doesn't resolve the problem, while completely omitting the "Get containerd running" part does...
What is the "Get containerd running" part supposed to do? Is it OK to omit it?
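For reference, the containerd warning above complains that default_runtime_name = "runc" has no matching entry under containerd.runtimes, which fits the second write dropping the runc runtime that the first write defined. A single consolidated write along these lines (a sketch on my part, not the official lab file, assuming the lab just needs runc as the default runtime with runsc registered next to it) would keep the settings from both sections:

# Sketch: one version 2 config defining both runc (default) and runsc, written once
cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "runc"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF
sudo systemctl restart containerd

As I understand it, "runc" is already the CRI plugin's default runtime name, so the essential point is simply that a runc entry exists under containerd.runtimes; setting default_runtime_name explicitly just makes that dependency visible.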
Hi @k0dard,
The issue with the containerd-setup.txt file was discussed in an earlier discussion thread.
Regards,
-Chris