Welcome to the Linux Foundation Forum!

LFS258_V2021-09-20 - Cannot initialize cluster with kubeadm 1.21.1 and crio 1.21.3

Experiencing an issue initializing a cluster with kubeadm and CRI-O

Trying to provision the exact system from the lab ( s_03 )

LFS258_V2021-09-20_SOLUTIONS.tar.xz, LFS258-labs_V2021-09-20.pdf

2 vCPU, 8 GB, Ubuntu 18.04.6 LTS
running on vSphere, 1 interface, no swap


kubeadm 1.21.1-00
kubectl 1.21.1-00
kubelet 1.21.1-00
kubernetes-cni 0.8.7-00
cri-o 1.21.3~0
cri-o-runc 1.0.1~0

Configured the system and CRI-O (enabled and started) according to the latest PDF for Ubuntu 18.04, and verified it.

cgroup driver is systemd

Updated /etc/hosts with the control-plane alias: te-olmo-k8m0101 k8scp

Using the kubeadm config:


kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out

kubelet fails to start
[kubelet-check] Initial timeout of 40s passed.

log: Error getting node err="node \"k8scp\" not found"

Tried adding the crio.conf from the lab tar as /etc/crio/crio.conf; I could not find anything in the PDF about this file, I basically just stumbled on it in the tar.


kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out

okt 26 23:21:49 te-olmo-k8m0101 kubelet[25616]: E1026 23:21:49.526385 25616 kubelet.go:2291] "Error getting node" err="node \"k8scp\" not found"

I have also been reading up on CRI-O's documentation: supposedly I also have to add /etc/cni/net.d/<some-crio-bridge.conf>, but I am still working through the CRI-O docs, as the lab is totally unclear on this.

Is there another version set that is supposed to work?

Can we expect any questions about CRI-O, or are we expected to be able to configure it ourselves?

I spent 5 hours last night with no success. The documentation/lab seems unclear; quite frustrating.


  • olmorupert
    edited October 27

    Also added:


    KUBELET_EXTRA_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock"

  • Attached kubelet logs and kubeadm.yaml.

  • serewicz


    I think you may need to examine the lab more closely. For example, you said there was no mention of the kubeadm-crio.yaml file, but it is specifically mentioned in step 14 and step 15. Also, the error about not finding k8scp means you did not edit /etc/hosts properly.

    You may need to edit the kubeadm-crio.yaml file to a matching version, such as 1.21.1. Otherwise, I have just run the exact steps from the lab and it worked. Here is my command history, copied and pasted. There is even an error where I did not edit the file, to illustrate that the lab worked as written.

    # history
    1 apt-get update && apt-get upgrade -y
    2 apt-get install -y vim
    3 modprobe overlay
    4 modprobe br_netfilter
    5 vim /etc/sysctl.d/99-kubernetes-cri.conf
    6 sysctl --system
    7 export OS=xUbuntu_18.04
    8 export VER=1.21
    9 echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/ /" | tee -a /etc/apt/sources.list.d/cri-0.list
    10 curl -L http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/Release.key | apt-key add -
    11 echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | tee -a /etc/apt/sources.list.d/libcontainers.list
    12 curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
    13 apt-get update
    14 apt-get install -y cri-o cri-o-runc
    15 systemctl daemon-reload
    16 systemctl enable crio
    17 systemctl start crio
    18 systemctl status crio
    19 vim /etc/apt/sources.list.d/kubernetes.list
    20 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    21 apt-get update
    22 apt-get install -y kubeadm=1.21.1-00 kubelet=1.21.1-00 kubectl=1.21.1-00
    23 apt-mark hold kubelet kubeadm kubectl
    24 wget https://docs.projectcalico.org/manifests/calico.yaml
    25 hostname -i
    26 vim /etc/hosts
    27 find /home -name kubeadm-crio.yaml
    28 cp /home/student/LFS458/SOLUTIONS/s_03/kubeadm-crio.yaml .
    29 kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out #<<-copy paste error
    30 kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out
    31 vim kubeadm-crio.yaml
    32 kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out
    33 history

    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Alternatively, if you are the root user, you can run:
    export KUBECONFIG=/etc/kubernetes/admin.conf
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:d0d0db476a0cfedf3aed709c23146630d73596c01996fd8879b5e68a08cfd9ee


  • Hi @serewicz, thanks for replying.

    I'm sure I've executed all those steps. I am going to restart again completely from scratch and follow your described procedure.

    The only thing different should be my hostname, and respectively, the /etc/hosts file:

    $ cat /etc/hosts
    localhost
    te-olmo-k8m0101.my.domain te-olmo-k8m0101 k8scp
    te-olmo-k8m0102.my.domain te-olmo-k8m0102
    te-olmo-k8m0103.my.domain te-olmo-k8m0103
    te-olmo-k8w0101.my.domain te-olmo-k8w0101
    te-olmo-k8w0102.my.domain te-olmo-k8w0102
    te-olmo-hap0101.my.domain te-olmo-hap0101
    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    which should be fine...


    $ cat /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    $ sudo sysctl -a | grep "bridge-nf-call\|ip_forward"
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.ipv4.ip_forward_use_pmtu = 0
    /etc/modules-load.d# cat 001-kubernetes.conf
    /root# lsmod | grep 'overlay\|br_netfilter'
    br_netfilter           24576  0
    bridge                155648  1 br_netfilter

    $ ping k8scp
    PING te-olmo-k8m0101 ( 56(84) bytes of data.
    64 bytes from te-olmo-k8m0101 ( icmp_seq=1 ttl=64 time=0.032 ms
    64 bytes from te-olmo-k8m0101 ( icmp_seq=2 ttl=64 time=0.049 ms

    Regarding the not-mentioned config file, I meant the crio.conf file, not the kubeadm-crio.yaml file:

    s_03  ❯  ls -l
    total 36
    -rw-r--r-- 1 ruperto ruperto   121 Nov  2  2020 99-kubernetes-cri.conf
    -rw-r--r-- 1 ruperto ruperto 10200 Aug 23 14:32 crio.conf
    -rw-r--r-- 1 ruperto ruperto   958 Nov  2  2020 first.yaml
    -rw-r--r-- 1 ruperto ruperto   163 Sep 20 15:36 kubeadm-config.yaml
    -rw-r--r-- 1 ruperto ruperto  1699 Aug 23 14:32 kubeadm-crio.yaml
    -rw-r--r-- 2 ruperto ruperto   206 Oct 23  2020 low-resource-range.yaml
    -rw-r--r-- 1 ruperto ruperto  2469 Aug 23 14:32 second.yaml

    However, placing or not placing the file (/etc/crio/crio.conf) did not make a difference.

    I'm going to try again and get back on this.

    thanks for the reply.

  • Following your procedure, at the step "systemctl start crio" I get:

    nov 01 22:34:44 te-olmo-k8m0101 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.498563086Z" level=info msg="Starting CRI-O, version: 1.21.3, git: ff0b7feb8e12509076b4b0e338b6334ce466b293(clean)"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.499271441Z" level=info msg="Node configuration value for hugetlb cgroup is true"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.499451353Z" level=info msg="Node configuration value for pid cgroup is true"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.499652779Z" level=error msg="Node configuration validation for memoryswap cgroup failed: node not configured with memory swap"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.499828792Z" level=info msg="Node configuration value for memoryswap cgroup is false"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.505779558Z" level=info msg="Node configuration value for systemd CollectMode is true"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.517452800Z" level=info msg="Node configuration value for systemd AllowedCPUs is false"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.609768863Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
    nov 01 22:34:44 te-olmo-k8m0101 crio[1975]: time="2021-11-01 22:34:44.610176582Z" level=fatal msg="Validating runtime config: runtime validation: \"runc\" not found in $PATH: exec: \"runc\": executable file not found in $PATH"
    nov 01 22:34:44 te-olmo-k8m0101 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
    nov 01 22:34:44 te-olmo-k8m0101 systemd[1]: crio.service: Failed with result 'exit-code'.
    nov 01 22:34:44 te-olmo-k8m0101 systemd[1]: Failed to start Container Runtime Interface for OCI (CRI-O).

    Adding this made CRI-O happier:

    /etc/crio/crio.conf.d# cat 10-runc.conf
      default_runtime = "runc"

    Also installed conntrack (CRI-O complained about it):

    nov 01 22:53:12 te-olmo-k8m0101 crio[2540]: W1101 22:53:12.999043    2540 hostport_manager.go:71] The binary conntrack is not installed, this can cause failures in network connection cleanup.

    However, following the exact steps above, I again got the same results.

    After finding this issue: https://github.com/cri-o/cri-o/issues/3631

    I also updated:

    #mountopt = "nodev,metacopy=on"
    mountopt = "nodev"

    After this:

    # kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out


  • However...

    You are correct, and I had screwed up an installation earlier.

    I removed containers-common and deleted /etc/crio*, /etc/containers, and /etc/cni,

    and tried again, and it worked.

