LFS258 Lab 3.1: Error in kubeadm init
Hi, I have followed the steps mentioned in the lab document, as highlighted below. I am accessing the GCP node.
sudo -i
root@master:~# apt-get update && apt-get upgrade -y
root@master:~# vim /etc/apt/sources.list.d/kubernetes.list
root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@master:~# apt-get update
root@master:~# apt-get install -y kubeadm kubelet kubectl
root@master:~# apt-mark hold kubelet kubeadm kubectl
root@master:~# wget https://docs.projectcalico.org/manifests/calico.yaml
root@master:~# less calico.yaml
root@master:~# hostname -i
root@master:~# ip addr show
root@master:~# vim /etc/hosts
root@master:~# vim kubeadm-config.yaml
root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
```
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.2.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.2.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.2.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
```
I have tried following these steps twice, including creating a fresh VM instance, and I am still not able to proceed with the lab. Your help will be appreciated, as I am doing this course alongside my day job and need to finish it within a week, and I am stuck on this lab. Thanks in advance.
Best Answer
Please do as the suggestions say:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

In my case, Docker was running with a different cgroup driver than the kubelet. Docker needs to run with the systemd driver: https://stackoverflow.com/questions/43794169/docker-change-cgroup-driver-to-systemd

Your issue may be different.
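For reference, the commonly used way to switch Docker to the systemd cgroup driver (a sketch based on the Stack Overflow link above, not the lab's own files) is to create or edit `/etc/docker/daemon.json`:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After writing the file, restart Docker and the kubelet (`sudo systemctl restart docker && sudo systemctl restart kubelet`) so both use the same driver.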
Answers
The kubeadm-config.yaml looks like below:

```
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.22.2
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 192.168.0.0/16
```
I have tried as per the suggestion, and I see the below:

```
root@master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2021-10-19 10:10:57 UTC; 4s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 11667 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 11667 (code=exited, status=1/FAILURE)
```
Also, I see some errors to do with systemd, and I am not sure how to fix them. Any help is appreciated.
root@master:~# journalctl -xeu kubelet
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741740 12812 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741789 12812 state_mem.go:36] "Initialized new in-memory state store"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741847 12812 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741871 12812 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.741891 12812 client.go:97] "Start docker client with request timeout" timeout="2m0s"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.749542 12812 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.749574 12812 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.749723 12812 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.753433 12812 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.753510 12812 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.753630 12812 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Oct 19 10:12:29 master kubelet[12812]: I1019 10:12:29.759805 12812 docker_service.go:264] "Docker Info" dockerInfo=&{ID:JFIF:WH6U:QDXY:KLI2:DKXA:4JZV:47OG:MJKH:2YMZ:JVIO:3XDU:LEWC Containers:0 ContainersRunning:0 ContainersPaused:0 Con
Oct 19 10:12:29 master kubelet[12812]: E1019 10:12:29.759858 12812 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroup
Oct 19 10:12:29 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 19 10:12:29 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 19 10:12:39 master systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Oct 19 10:12:39 master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 99.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- Automatic restarting of the unit kubelet.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Oct 19 10:12:39 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- Unit kubelet.service has finished shutting down.
Oct 19 10:12:39 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- Unit kubelet.service has finished starting up.
-- The start-up result is RESULT.
Oct 19 10:12:39 master kubelet[12939]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Oct 19 10:12:39 master kubelet[12939]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.925622 12939 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.926024 12939 server.go:868] "Client rotation is on, will bootstrap in background"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.928122 12939 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.930909 12939 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.990609 12939 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.990862 12939 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.990961 12939 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: Containe
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991039 12939 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991054 12939 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991088 12939 state_mem.go:36] "Initialized new in-memory state store"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991146 12939 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991177 12939 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.991197 12939 client.go:97] "Start docker client with request timeout" timeout="2m0s"
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.997342 12939 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.997371 12939 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
Oct 19 10:12:39 master kubelet[12939]: I1019 10:12:39.997515 12939 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.001540 12939 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.001689 12939 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.001740 12939 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
Oct 19 10:12:40 master kubelet[12939]: I1019 10:12:40.008022 12939 docker_service.go:264] "Docker Info" dockerInfo=&{ID:JFIF:WH6U:QDXY:KLI2:DKXA:4JZV:47OG:MJKH:2YMZ:JVIO:3XDU:LEWC Containers:0 ContainersRunning:0 ContainersPaused:0 Con
Oct 19 10:12:40 master kubelet[12939]: E1019 10:12:40.008069 12939 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroup
Oct 19 10:12:40 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 19 10:12:40 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
@supirman, it looks like I have a similar problem, but I am not sure how to fix it. My docker info looks like below:
root@master:~# docker info
Client:
Context: default
Debug Mode: false

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 7
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
**Cgroup Driver: cgroupfs**
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version:
init version:
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-1053-gcp
Operating System: Ubuntu 18.04.6 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.772GiB
Name: master
ID: JFIF:WH6U:QDXY:KLI2:DKXA:4JZV:47OG:MJKH:2YMZ:JVIO:3XDU:LEWC
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
Hi @swapnil07,

It seems your kubeadm-config.yaml does not include the intended Kubernetes version, 1.21.1. When making such changes to the provided installation scripts and to other configuration resources, do expect errors and possible crashes. The installation process may change from one Kubernetes release to the next, and the installation scripts are aligned with a specific version.

Regards,
-Chris
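To make the point concrete, a config pinned to the version the lab expects would look something like the sketch below (the `kubernetesVersion` value must match the kubeadm/kubelet packages you actually installed; the rest mirrors the config posted earlier in this thread):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.1   # match this to your installed kubeadm version
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 192.168.0.0/16
```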
The 2022-03-11 PDF and the files from the tar provide a JSON file to resolve this issue.
I found I had to restart docker after creating the daemon.json file, but before initializing kubeadm.
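The ordering described above can be sketched as follows (a sketch of commands already quoted in this thread, not the lab's exact script; `kubeadm reset` is only needed if a previous `kubeadm init` attempt failed):

```shell
# Apply the daemon.json change, then restart Docker BEFORE initializing
sudo systemctl daemon-reload
sudo systemctl restart docker

# Clean up any failed prior attempt
sudo kubeadm reset -f

# Re-run the initialization
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
```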