kubeadm init error: CRI v1 runtime API is not implemented
Hi, I'm writing this post to help anyone who runs into the same issue, since I could not find the solution in past forum discussions.
I'm following the installation PDF for the class, with exactly the required versions: Ubuntu 20.04 (on a Google Cloud VM), kubeadm 1.24.1, etc.
But when I executed the kubeadm init command, I got the following error:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2023-01-19T15:05:35Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Searching the web, it appears this is an issue with the old containerd package provided by Ubuntu 20.04.
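To confirm you are hitting the same thing, first check where your containerd comes from (a quick sketch; reports later in this thread show the Ubuntu 20.04 distro package in the 1.5.x series, which does not serve the CRI v1 API):

apt-cache policy containerd containerd.io   # shows which repository each containerd package comes from
ctr version                                 # prints the client/server versions of the running containerd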
I solved it with the following steps (as root):
1. Set up the Docker repository as described in https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
2. Remove the old containerd: apt remove containerd
3. Update repository data and install the new containerd: apt update, then apt install containerd.io
4. Remove the installed default config file: rm /etc/containerd/config.toml
5. Restart containerd: systemctl restart containerd
The kubeadm init command worked fine afterwards.
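For convenience, here are steps 2-5 as a single snippet (a sketch that assumes the Docker repository from step 1 is already configured; run as root):

apt remove -y containerd           # step 2: remove the old distro containerd
apt update                         # step 3: refresh repository data...
apt install -y containerd.io       # ...and install containerd from the Docker repo
rm /etc/containerd/config.toml     # step 4: drop the shipped config, which disables the CRI plugin
systemctl restart containerd       # step 5: restart with the built-in defaults (CRI enabled)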
Comments
-
Thanks for this! After three days of searching, you solved my problem in five minutes!
-
Thank you. It worked for me.
On Ubuntu 22.04.1 LTS, I got a warning: [WARNING SystemVerification]: missing optional cgroups: blkio.
Then I had to fix https://forum.linuxfoundation.org/discussion/862809/lab-3-1-kubeadm-init-error-creating-kube-proxy-service-account
-
Thank you @mdevitis, your solution worked perfectly.
-
Thank you sir, you are my HEROOOOOOOOOOO!!
-
Hello, I had the same problem (Ubuntu focal 20.04). The first install, when I started the class at the end of 2022, was OK. I wanted to reinstall on a second system to test my Ansible script, and it ended with the CRI runtime error. Using the containerd.io package from the Docker repo and removing the default config.toml file solved the problem.
I'll verify on jammy (22.04) to check.
-
Same error on 22.04 with the containerd package from the Ubuntu repository.
There is also an error for the blkio cgroups.
Installing the containerd.io package from the Docker repo and removing the default config file solved the problem.
-
Using Ubuntu 22.04 and following the instructions above (i.e. the first/top answer), it worked, with the slight addition of a step: the repo providing containerd.io wasn't added on my system, so I had to add it as well. The instructions here helped: https://superuser.com/questions/1528265/package-containerd-io-is-not-available-when-installing-docker-on-ubuntu-19-10
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
-
Hi @marviflame,
Calico node pods are typically impacted by incorrect network configuration at the infrastructure layer. What type of infrastructure are you using to provision the hosts of the cluster, and what OS?
Regards,
-Chris
-
@chrispokorni I used a DigitalOcean Ubuntu 20.04 LTS Linux server.
-
@chrispokorni I had to use the Weave Net pod network.
-
Hi @marviflame,
Infrastructure other than what is recommended in the lab guide may require additional configuration to accommodate the provider's network stack implementation.
In addition, the introductory chapter of the course includes two video guides for provisioning the lab environment on AWS and GCP infrastructure; however, the networking requirements for any VPC, subnets, and firewall rules should be similar on DO.
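As a rough illustration, on GCP an open-internal-traffic rule for the lab VPC might look like the command below (the network name and CIDR are hypothetical; adjust to your environment, and use the equivalent controls on DO):

gcloud compute firewall-rules create k8s-allow-internal \
  --network k8s-lab-net \
  --allow tcp,udp,icmp \
  --source-ranges 10.2.0.0/16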
Regards,
-Chris
-
Thanks a lot for this quick solution. You saved my day - I am doing self-cert and your to-the-point solution worked perfectly. Keep up the good work!
-
Fresh deployment of Ubuntu 22.04 with containerd installed, and I ran into this. Of all the fixes I tried, this is the one that worked. Thanks for posting.
-
Hi,
I followed the same steps:
Step 1:
apt remove containerd
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'containerd' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 30 not upgraded.
Step 2:
apt update
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:3 https://download.docker.com/linux/ubuntu focal InRelease
Get:4 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [2,380 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu focal-updates/main Translation-en [409 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 c-n-f Metadata [16.3 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [1,607 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu focal-updates/restricted Translation-en [226 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1,024 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu focal-updates/universe Translation-en [237 kB]
Get:14 http://us.archive.ubuntu.com/ubuntu focal-updates/universe amd64 c-n-f Metadata [23.6 kB]
Get:15 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages [1,993 kB]
Get:16 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 c-n-f Metadata [12.2 kB]
Get:17 http://us.archive.ubuntu.com/ubuntu focal-security/universe amd64 Packages [795 kB]
Get:18 http://us.archive.ubuntu.com/ubuntu focal-security/universe Translation-en [154 kB]
Get:19 http://us.archive.ubuntu.com/ubuntu focal-security/universe amd64 c-n-f Metadata [17.0 kB]
Fetched 9,230 kB in 17s (546 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
30 packages can be upgraded. Run 'apt list --upgradable' to see them.
Step 3:
apt install containerd.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
containerd.io is already the newest version (1.6.16-1).
0 upgraded, 0 newly installed, 0 to remove and 30 not upgraded.
Step 4:
rm /etc/containerd/config.toml
rm: cannot remove '/etc/containerd/config.toml': No such file or directory
Step 5:
systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-02-13 23:24:56 UTC; 7s ago
Docs: https://containerd.io
Process: 574824 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 574835 (containerd)
Tasks: 23
Memory: 22.6M
CGroup: /system.slice/containerd.service
└─574835 /usr/local/bin/containerd
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.045913643Z" level=info msg="Start subscribing containerd event"
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046104300Z" level=info msg="Start recovering state"
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046041430Z" level=info msg=serving... address=/run/containerd/containe>
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046219362Z" level=info msg=serving... address=/run/containerd/containe>
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046280356Z" level=info msg="containerd successfully booted in 0.038047>
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046227827Z" level=info msg="Start event monitor"
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046488975Z" level=info msg="Start snapshots syncer"
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046531682Z" level=info msg="Start cni network conf syncer for default"
Feb 13 23:24:56 usf-linux1 containerd[574835]: time="2023-02-13T23:24:56.046552645Z" level=info msg="Start streaming server"
Feb 13 23:24:56 usf-linux1 systemd[1]: Started containerd container runtime.
Final Step:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=/var/run/cri-dockerd.sock
W0213 23:25:41.313871 574999 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-13T23:25:41Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Any help appreciated.
-
Hi @manasfirst,
I would recommend following the installation steps in the sequence presented in the latest release of the lab guide. Step 13 of Lab exercise 3.1 provides the commands that were missed earlier - that is, to create the config.toml file and restart containerd.
In addition, in step 22 the initialization of the control plane is done with the help of a configuration manifest, where the control plane endpoint is the k8scp alias set at step 21 and the Kubernetes version is set to 1.24.1. In doing so, the cluster is prepared for the high availability discussed in Chapter 16, and the cluster upgrade from 1.24 to 1.25 will work as expected in a later chapter.
Deviations from the installation steps will produce inconsistent results, and will make troubleshooting much more difficult.
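For reference, (re)creating a default config.toml and restarting containerd typically looks like the snippet below (a sketch only; follow the exact commands in the lab guide):

containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd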
Regards,
-Chris
-
> I followed the same steps
Well, they do not seem to be exactly the same steps.
Your command output shows that you are starting from a different situation (containerd.io is already installed, the toml file is not present...), and you installed Kubernetes 1.26.1 instead of 1.24.1 as suggested by Lab 3.1. You are also using different arguments in the kubeadm init command. The error is also different: yours mentions the endpoint unix:///var/run/cri-dockerd.sock, while in my case it was unix:///var/run/containerd/containerd.sock.
I think you should restart the installation from scratch, following the steps in the lab.
-
I had the same problem on Ubuntu 22.04 LTS. I think an easier solution is to edit the containerd config, because the default config disables the CRI plugin.
Like this, or with your favorite editor:
sudo nano /etc/containerd/config.toml
and comment out the line that says disabled_plugins = ["cri"].
Gotta restart containerd, but after that the CRI is enabled and there won't be any more complaints about it not working when doing kubeadm init.
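If you prefer a one-liner over opening an editor, something like this should do the same (assuming the stock config.toml, where disabled_plugins = ["cri"] is the only active line):

sudo sed -i 's/^disabled_plugins = \["cri"\]/#disabled_plugins = ["cri"]/' /etc/containerd/config.toml
sudo systemctl restart containerd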
-
@omar9000 if I remember correctly, when I experienced the issue with Ubuntu 20 I described in the initial post, the standard Ubuntu packages did not install any /etc/containerd/config.toml file.
So the situation you describe with Ubuntu 22 seems to be different.
By the way, commenting out the line in the config.toml file or completely deleting the file should have the same effect, because as far as I could see the disabled_plugins line was the only enabled option.
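Either way, before re-running kubeadm you can verify that the CRI endpoint answers (a quick check, assuming crictl is installed):

sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock version
# if the CRI plugin is still disabled, this fails with the same "Unimplemented" error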
-
This was immensely helpful, thank you!
-
In my case, Ubuntu 20 and 22 have no 10-containerd-net.conflist, so I used a basic one, then installed the runsc binary and made a slight change to the containerd config to add the runtime.

Create /etc/cni/net.d/10-containerd-net.conflist:

cat << EOF | sudo tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{ "subnet": "192.168.0.0/16" }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
EOF
sudo swapoff -a
sudo systemctl restart kubelet
Install runsc

The runsc installation is mentioned at the end of the readme file, but without it, containerd won't start. These are the instructions mentioned in the gVisor documentation.

(
  set -e
  ARCH=$(uname -m)
  URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}
  wget ${URL}/runsc ${URL}/runsc.sha512 \
    ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
  sha512sum -c runsc.sha512 \
    -c containerd-shim-runsc-v1.sha512
  rm -f *.sha512
  chmod a+rx runsc containerd-shim-runsc-v1
  sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
)
Update the containerd config to add the runsc runtime:

sudo sed -i 's/shim_debug = false$/shim_debug = true/' /etc/containerd/config.toml
sudo sed -i '/shim_debug = true$/a \\n [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]\n runtime_type = "io.containerd.runsc.v1"\n' /etc/containerd/config.toml
sudo systemctl restart containerd
Now create the cluster
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 | tee /var/log/kubeinit.log
-
Here is what I did. Note that "containerd config default | tee /etc/containerd/config.toml" creates a new config.toml, which I then edited to set SystemdCgroup = true. Also note the crictl config --set commands.
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update && apt-get install containerd.io -y
containerd config default | tee /etc/containerd/config.toml
sed -e 's/SystemdCgroup = false/SystemdCgroup = true/g' -i /etc/containerd/config.toml
crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock --set image-endpoint=unix:///run/containerd/containerd.sock
systemctl restart containerd
-
Worked like a charm for me! Thanks a lot bud!
-
Thank you!!
In my case, I just removed /etc/containerd/config.toml and ran systemctl restart containerd, and it was solved.
Error messages:
# kubeadm init
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2023-06-04T19:05:15+09:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Env:
OS: AlmaLinux 9.2
Kubernetes v1.27.2
containerd: containerd.io 1.6.21 3dce8eb055cbb6872793272b4f20ed16117344f8
Docker version 24.0.2, build cb74dfc
-
Alright. I am on a 1.23 on-prem k8s cluster. This cluster was set up using containerd. The only reason config.toml was created here is that the cluster was failing to initialize using kubeadm when I set it up.
Here is how my toml looks
root@stage-mast1:~# cat /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
version = 2

Here comes the issue.
I am trying to upgrade to 1.24, and I am facing the same error as everyone else here:

[preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.24.15: output: time="2023-06-18T23:18:53Z" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-controller-manager:v1.24.15: output: time="2023-06-18T23:18:54Z" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
This is an HA cluster with 3 masters and 8 workers. All nodes are Ubuntu 20 LTS.
The containerd version is as below, and containerd is the CRI here.
>
root@stage-mast1:~# ctr version
Client:
Version: 1.5.9-0ubuntu1~20.04.6
Revision:
Go version: go1.13.8
Server:
Version: 1.5.9-0ubuntu1~20.04.5
Revision:
UUID: 580cee63-8afd-4437-9447-e44b0501167a
WARNING: version mismatch

Should I try the solution here?
-
Hi @Makrand,
The training material has been updated several times since Kubernetes 1.23 and 1.24. With that in mind I would recommend inspecting the official Kubernetes documentation for any specific instructions to upgrade the cluster from 1.23 to 1.24. For some earlier versions the upgrade instructions are very specific, and may not be supported by the more recent upgrade instructions.
https://v1-24.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
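At a high level, a kubeadm minor-version upgrade follows the sequence sketched below (a sketch only; the exact package versions and any 1.23-to-1.24 specifics are in the linked documentation):

# on the first control plane node
sudo apt-get update && sudo apt-get install -y kubeadm=1.24.15-00   # patch version matching the images in the error above; adjust as needed
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.24.15
# then, node by node: drain, upgrade kubelet and kubectl, restart the kubelet, uncordon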
Regards,
-Chris
-
Thank you so much @mdevitis!
-
Feb 18 10:49:05 lfs-cp-1 containerd[4335]: time="2024-02-18T10:49:05.968032466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 18 10:49:05 lfs-cp-1 containerd[4335]: time="2024-02-18T10:49:05.968353080Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
In my case, removing the systemd_cgroup line from /etc/containerd/config.toml solved the issue.
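For reference, that edit as a one-liner (assuming the systemd_cgroup setting sits on its own line):

sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml   # drop the offending line
sudo systemctl restart containerd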