
LAB 3.6 master node not ready

Hello,
I've been following the steps in containerd-setup.txt.
However, the master node never reaches the Ready state. Notice the CONTAINER-RUNTIME field.

$ kubectl get node k8s-single -o wide
NAME         STATUS     ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
k8s-single   NotReady   control-plane,master   19m   v1.23.1   10.166.0.3    <none>        Ubuntu 20.04.4 LTS   5.15.0-1016-gcp   containerd://Unknown
$ kubectl describe nodes k8s-single 
Name:               k8s-single
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-single
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.166.0.3/32
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.100.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 03 Oct 2022 11:30:17 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-single
  AcquireTime:     <unset>
  RenewTime:       Mon, 03 Oct 2022 11:50:50 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 03 Oct 2022 11:31:14 +0000   Mon, 03 Oct 2022 11:31:14 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:30:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:30:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:30:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Mon, 03 Oct 2022 11:50:50 +0000   Mon, 03 Oct 2022 11:45:02 +0000   KubeletNotReady              [container runtime is down, PLEG is not healthy: pleg was last seen active 6m16.582796441s ago; threshold is 3m0s]
Addresses:
  InternalIP:  10.166.0.3
  Hostname:    k8s-single
Capacity:
  cpu:                2
  ephemeral-storage:  20134592Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7621368Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  18556039957
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7518968Ki
  pods:               110
System Info:
  Machine ID:                 bbd0e9ce1b4c1a57630559bf544f53af
  System UUID:                bbd0e9ce-1b4c-1a57-6305-59bf544f53af
  Boot ID:                    a95eb6e2-0c9f-49ce-acc2-60d17c82d9cb
  Kernel Version:             5.15.0-1016-gcp
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://Unknown
  Kubelet Version:            v1.23.1
  Kube-Proxy Version:         v1.23.1
PodCIDR:                      192.168.0.0/24
PodCIDRs:                     192.168.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-kube-controllers-66966888c4-tm7bk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  kube-system                 calico-node-cqxcn                           250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
  kube-system                 coredns-64897985d-jpnrc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
  kube-system                 coredns-64897985d-n55jc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
  kube-system                 etcd-k8s-single                             100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
  kube-system                 kube-apiserver-k8s-single                   250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
  kube-system                 kube-controller-manager-k8s-single          200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
  kube-system                 kube-proxy-79lxk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  kube-system                 kube-scheduler-k8s-single                   100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (55%)  0 (0%)
  memory             240Mi (3%)   340Mi (4%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type     Reason                   Age                  From        Message
  ----     ------                   ----                 ----        -------
  Normal   Starting                 20m                  kube-proxy  
  Warning  InvalidDiskCapacity      20m                  kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  20m                  kubelet     Node k8s-single status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    20m                  kubelet     Node k8s-single status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     20m                  kubelet     Node k8s-single status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  20m                  kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 20m                  kubelet     Starting kubelet.
  Normal   NodeReady                19m                  kubelet     Node k8s-single status is now: NodeReady
  Normal   NodeNotReady             5m53s                kubelet     Node k8s-single status is now: NodeNotReady
  Warning  ContainerGCFailed        30s (x6 over 5m30s)  kubelet     rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
  Warning  ImageGCFailed            30s                  kubelet     rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService
$ sudo crictl ps
FATA[0000] listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
$ sudo systemctl status containerd
...
Oct 03 11:53:15 k8s-single containerd[527]: time="2022-10-03T11:53:15.891993595Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: no corresponding runtime configured in `containerd.runtimes` for `containerd` `default_runtime_name = \"runc\""

It seems to be an error with the container runtime.
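
For reference, a generic way to get containerd's CRI plugin back (independent of the lab file) would be to regenerate the stock config and restart the runtime; crictl should then respond instead of returning Unimplemented:

$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo systemctl restart containerd
$ sudo crictl info     # prints the CRI runtime status once the plugin is loaded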

Answers

  • k0dard (Posts: 115)

    It seems that the problem is in containerd-setup.txt.
    We write the containerd config file twice, and the second write actually overwrites the first config we created. I don't know if this is intentional (?)

    First:

    # Configure containerd to use the runc engine
    cat <<EOF | sudo tee /etc/containerd/config.toml
    version = 2
    #disabled_plugins = ["cri"]
    [plugins."io.containerd.runtime.v1.linux"]
      shim_debug = true
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
    

    and a bit later:

    # Get containerd running, append or create several files.
    cat <<EOF | sudo tee /etc/containerd/config.toml
    disabled_plugins = ["restart"]
    [plugins.linux]
      shim_debug = true
    [plugins.cri.containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
    

    Since the second section's comment says "# Get containerd running, append or create several files", I thought the -a flag had been forgotten on the tee command (so that the new content would be appended to the existing file instead of overwriting it).

    However, appending doesn't resolve the problem, while completely omitting the "Get containerd running" part does...

    What is the "Get containerd running" part supposed to do? Is it OK to omit it? (A merged config is sketched at the end of this post.)
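
    For what it's worth, below is a sketch of a single merged config that keeps the version 2 key names from the first block (the runsc entry assumes gVisor's containerd-shim-runsc-v1 shim is installed; I left out disabled_plugins = ["restart"] since I'm not sure it's still needed). The second block uses the old version 1 key names (plugins.linux, plugins.cri...), so on its own it leaves no runtimes.runc entry for the CRI plugin's default_runtime_name = "runc" to point at, which seems to be exactly what the "no corresponding runtime configured in containerd.runtimes" warning is complaining about:

    cat <<EOF | sudo tee /etc/containerd/config.toml
    # Merged config: runc as the default CRI runtime, runsc (gVisor) as an extra runtime
    version = 2
    [plugins."io.containerd.runtime.v1.linux"]
      shim_debug = true
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
    sudo systemctl restart containerd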

  • chrispokorni (Posts: 2,315)

    Hi @k0dard,

    The issue with the containerd-setup.txt file was discussed in an earlier thread.

    Regards,
    -Chris
