Welcome to the Linux Foundation Forum!

LFS258 Lab 3.1: Error in kubeadm init

Hi, I have followed the steps in the lab document as shown below. I am accessing the GCP node.
sudo -i
root@master:~# apt-get update && apt-get upgrade -y

root@master:~# vim /etc/apt/sources.list.d/kubernetes.list

root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

root@master:~# apt-get update

root@master:~# apt-get install -y kubeadm kubelet kubectl

root@master:~# apt-mark hold kubelet kubeadm kubectl

root@master:~# wget https://docs.projectcalico.org/manifests/calico.yaml

root@master:~# less calico.yaml
root@master:~# hostname -i

root@master:~# ip addr show

root@master:~# vim /etc/hosts
root@master:~# vim kubeadm-config.yaml
root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
```
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.2.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.2.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.2.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

```

I have tried following the steps twice, including creating a fresh VM instance, but I am still unable to proceed with the lab. Your help will be appreciated; I am doing this course alongside my day job and need to finish it within a week, and I am stuck on this lab. Thanks in advance.

Answers

  • The kubeadm-config.yaml looks like below:

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.22.2
    controlPlaneEndpoint: "k8scp:6443"
    networking:
      podSubnet: 192.168.0.0/16
    
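    For the `k8scp` alias in `controlPlaneEndpoint` to resolve, the `/etc/hosts` edit earlier in the lab typically adds a line like the one below (10.2.0.4 is the private IP shown in the certificate output above; use your own node's IP):

        10.2.0.4 k8scp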
  • I have tried as per the suggestion and I see the below:

    root@master:~# systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
      Drop-In: /etc/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: activating (auto-restart) (Result: exit-code) since Tue 2021-10-19 10:10:57 UTC; 4s ago
         Docs: https://kubernetes.io/docs/home/
      Process: 11667 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
     Main PID: 11667 (code=exited, status=1/FAILURE)
    
  • Also, I see some error to do with systemd, and I am not sure how to fix it. Any help is appreciated.
    root@master:~# journalctl -xeu kubelet
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741740 12812 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741789 12812 state_mem.go:36] "Initialized new in-memory state store"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741847 12812 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741871 12812 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741891 12812 client.go:97] "Start docker client with request timeout" timeout="2m0s"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.749542 12812 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.749574 12812 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.749723 12812 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.753433 12812 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.753510 12812 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.753630 12812 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
    Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.759805 12812 docker_service.go:264] "Docker Info" dockerInfo=&{ID:JFIF:WH6U:QDXY:KLI2:DKXA:4JZV:47OG:MJKH:2YMZ:JVIO:3XDU:LEWC Containers:0 ContainersRunning:0 ContainersPaused:0 Con
    Oct 19 10:12:29 master kubelet[12812]: E1019 10:12:29.759858 12812 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroup
    Oct 19 10:12:29 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    Oct 19 10:12:29 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
    Oct 19 10:12:39 master systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
    Oct 19 10:12:39 master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 99.
    -- Subject: Automatic restarting of a unit has been scheduled
    -- Defined-By: systemd

    -- Support: http://www.ubuntu.com/support

    -- Automatic restarting of the unit kubelet.service has been scheduled, as the result for
    -- the configured Restart= setting for the unit.
    Oct 19 10:12:39 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
    -- Subject: Unit kubelet.service has finished shutting down
    -- Defined-By: systemd

    -- Support: http://www.ubuntu.com/support

    -- Unit kubelet.service has finished shutting down.
    Oct 19 10:12:39 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
    -- Subject: Unit kubelet.service has finished start-up
    -- Defined-By: systemd

    -- Support: http://www.ubuntu.com/support

    -- Unit kubelet.service has finished starting up.

    -- The start-up result is RESULT.
    Oct 19 10:12:39 master kubelet[12939]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
    Oct 19 10:12:39 master kubelet[12939]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.925622 12939 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.926024 12939 server.go:868] "Client rotation is on, will bootstrap in background"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.928122 12939 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.930909 12939 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.990609 12939 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.990862 12939 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.990961 12939 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: Containe
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991039 12939 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991054 12939 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991088 12939 state_mem.go:36] "Initialized new in-memory state store"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991146 12939 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991177 12939 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991197 12939 client.go:97] "Start docker client with request timeout" timeout="2m0s"
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.997342 12939 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.997371 12939 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
    Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.997515 12939 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
    Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.001540 12939 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
    Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.001689 12939 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
    Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.001740 12939 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
    Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.008022 12939 docker_service.go:264] "Docker Info" dockerInfo=&{ID:JFIF:WH6U:QDXY:KLI2:DKXA:4JZV:47OG:MJKH:2YMZ:JVIO:3XDU:LEWC Containers:0 ContainersRunning:0 ContainersPaused:0 Con
    Oct 19 10:12:40 master kubelet[12939]: E1019 10:12:40.008069 12939 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroup
    Oct 19 10:12:40 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    Oct 19 10:12:40 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
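The "Failed to run kubelet" line above pinpoints the issue: the kubelet is using the `systemd` cgroup driver while Docker is using `cgroupfs`. A common fix (a sketch, assuming Docker is the container runtime, as in this lab) is to switch Docker to the systemd driver via `/etc/docker/daemon.json` and restart the services:

```shell
# Point Docker at the systemd cgroup driver so it matches the kubelet.
# These settings mirror the commonly recommended daemon.json for kubeadm setups.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
```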

  • @supirman It looks like I have a similar problem, but I am not sure how to fix it. My docker info looks like below:

    root@master:~# docker info
    Client:
    Context: default
    Debug Mode: false

    Server:
    Containers: 0
    Running: 0
    Paused: 0
    Stopped: 0
    Images: 7
    Server Version: 20.10.7
    Storage Driver: overlay2
    Backing Filesystem: extfs
    Supports d_type: true
    Native Overlay Diff: true
    userxattr: false
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Cgroup Version: 1
    Plugins:
    Volume: local
    Network: bridge host ipvlan macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
    Swarm: inactive
    Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version:
    runc version:
    init version:
    Security Options:
    apparmor
    seccomp
    Profile: default
    Kernel Version: 5.4.0-1053-gcp
    Operating System: Ubuntu 18.04.6 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 2
    Total Memory: 7.772GiB
    Name: master
    ID: JFIF:WH6U:QDXY:KLI2:DKXA:4JZV:47OG:MJKH:2YMZ:JVIO:3XDU:LEWC
    Docker Root Dir: /var/lib/docker
    Debug Mode: false
    Registry: https://index.docker.io/v1/
    Labels:
    Experimental: false
    Insecure Registries:
    127.0.0.0/8
    Live Restore Enabled: false

    WARNING: No swap limit support
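The "Cgroup Driver: cgroupfs" line in this output is the mismatch: the kubelet defaults to `systemd`. Both sides can be checked directly (the kubelet config path is the one kubeadm writes during init):

```shell
# Docker's cgroup driver; prints "cgroupfs" or "systemd"
docker info --format '{{.CgroupDriver}}'

# The kubelet's cgroup driver, from the config kubeadm generated
grep cgroupDriver /var/lib/kubelet/config.yaml
```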

  • @supirman Thank you for your help. I was able to change the cgroup driver, and the kubelet is in a running status now.

  • chrispokorni
    chrispokorni Posts: 2,346
    edited October 2021

    Hi @swapnil07,

    It seems your kubeadm-config.yaml does not include the intended Kubernetes version 1.21.1. When making such changes to provided installation scripts and to other configuration resources, do expect errors and possible crashes. The installation process may change from one Kubernetes release to the next, and the installation scripts are aligned with a specific version.

    Regards,
    -Chris
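One way to act on this advice is to pin the exact package versions the lab materials expect, rather than installing the latest (the `1.21.1-00` version string here is an assumption; substitute the version named in your lab PDF):

```shell
# Install the specific version the lab was written against, then hold it
# so a later apt-get upgrade does not move it.
apt-get install -y kubeadm=1.21.1-00 kubelet=1.21.1-00 kubectl=1.21.1-00
apt-mark hold kubeadm kubelet kubectl
```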

  • crankyed
    crankyed Posts: 4

    The 2022-03-11 PDF and files from the tar provide a JSON file to resolve this issue.

    I found I had to restart Docker after creating the daemon.json file, but before initializing kubeadm.
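Putting that ordering together (a sketch; the daemon.json referred to above sets Docker's cgroup driver): if `kubeadm init` has already failed once, reset the node first, then restart Docker so it picks up the new configuration, and only then re-run init:

```shell
# Clean up the failed first attempt, reload Docker, then initialize again.
sudo kubeadm reset -f
sudo systemctl restart docker
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
```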
