Can you help me debug the master initialization?
I get the error "timed out waiting for the condition"
when I try to initialize the master:
root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
W0103 15:13:11.353300 8986 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8smaster] and IPs [10.96.0.1 10.2.0.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.2.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.2.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0103 15:13:33.458024 8986 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0103 15:13:33.459355 8986 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
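For reference, the diagnostics suggested at the end of the output, collected in one place:

systemctl status kubelet
journalctl -xeu kubelet
docker ps -a | grep kube | grep -v pause
docker logs CONTAINERID    # CONTAINERID is a placeholder for any failing container found above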
Answers
-
Hello,
That the errors reference localhost and not the k8smaster alias indicates an issue with the location of the kubeadm-config.yaml file, the syntax of that file, and/or the edit to /etc/hosts.
As it seems you have deviated from the labs as written, I would encourage you to slow down and to read, and follow, each step carefully. After you get a cluster working you can revisit, experiment, and diverge from the labs.
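For comparison, the lab's expected setup looks roughly like the following. This is a sketch, not your exact files: the 10.2.0.2 address is taken from the certificate lines of your own output, and the k8smaster alias and podSubnet are the values the exercise guide uses.

# /etc/hosts needs a line mapping the master's private IP to the alias
echo "10.2.0.2 k8smaster" >> /etc/hosts

# kubeadm-config.yaml, in the directory where kubeadm init is run
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16
EOF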
Regards,
-
I've been following the steps in the lab very carefully, without deviating in any way. It's not clear to me what went wrong. I would really appreciate some help to move forward. I am currently blocked and cannot continue with the course.
-
Hello,
As I look through your previous questions and examples, it is clear you are not following the exercise guide.
Let's start with some groundwork. We can go step by step to get you back on track. What are you using for your nodes: GCE, AWS, VirtualBox?
-
GCE
-
I've just fixed a typo in the file /etc/hosts
and reran kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
Now I get this:

root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
W0103 20:06:49.751110 19618 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
-
Are you using cri-o or docker as the container engine? Either way you have missed a step, as init cannot find the docker or cri-o engine. The init process has a search order which looks first for docker, then cri-o, then containerd. The warning says the docker service is not enabled. Did you run the exercise step to install docker?
Now that I know you are using GCE: did you create a VPC which allows all traffic to all ports? Did you run steps 2 and 3 of the exercise?
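If you need to double-check the VPC, here is a sketch of a wide-open firewall rule using the gcloud CLI (the network name lfclass is only an example; substitute your own):

gcloud compute firewall-rules create lfclass-allow-all \
    --network=lfclass \
    --direction=INGRESS \
    --source-ranges=0.0.0.0/0 \
    --allow=all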
-
Docker is correctly installed:
root@master:~# docker -v
Docker version 19.03.6, build 369ce74a3c
-
What is the output of sudo systemctl status docker?
Can you write your command history to a file and share it? history > history.out
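As an aside, docker -v only reports the client version. docker info shows whether the daemon itself is reachable, and it also reports the active cgroup driver, which is what the IsDockerSystemdCheck warning in your init output is about:

docker info | grep -i "cgroup driver"    # expect "cgroupfs" given the warning above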
-
The VPC I created should allow all traffic to all ports:
[screenshot of the VPC firewall rules]
-
Great, the VPC looks good. What size nodes are you using, and what version of Ubuntu?
-
n1-standard-2 (2 vCPUs, 7.5 GB memory) with Ubuntu 18.04 LTS
-
This is the output of sudo systemctl status docker:

root@master:~# sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
   Active: active (running) since Sun 2021-01-03 15:13:12 UTC; 6h ago
     Docs: https://docs.docker.com
 Main PID: 9004 (dockerd)
    Tasks: 17
   CGroup: /system.slice/docker.service
           └─9004 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root@master:~#
-
That looks right as well. Could you show the commands and output up to the point of kubeadm init, please?
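One aside while we look: the IsDockerSystemdCheck warning in your init output is generally non-fatal for this lab, but if you ever want to switch Docker to the recommended systemd cgroup driver, the usual change (from the upstream Docker/Kubernetes docs, not a lab step) is:

# assumes /etc/docker/daemon.json does not already exist with other settings
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker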
-
Here is the most recent command history:
root@master:~# history
  1  apt-get update && apt-get upgrade -y
  2  apt-get install -y vim
  3  vim
  4  apt-get install -y docker.io
  5  pwd
  6  ls
  7  vim /etc/apt/source.list.d/kubernetes.list
  8  vim /etc/apt/sources.list.d/kubernetes.list
  9  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 10  pwd
 11  ls
 12  apt-get update
 13  apt-get install -y kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
 14  apt-mark hold kubelet kubeadm kubectl
 15  wget https://docs.projectcalico.org/manifests/calico.yaml
 16  less
 17  less calico.yaml
 18  ip addr show
 19  vim /etc/hosts
 20  vim kubeadm-config.yaml
 21  kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
 22  less calico.yaml
 23  kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
 24  vi kubeadm-init.out
 25  docker ps -a | grep kube | grep -v pause
 26  systemctl status kubelet
 27  journalctl -xeu kubelet
 28  vim /etc/hosts
 29  apt-get update
 30  less calico.yaml
 31  ip addr show
 32  vim /etc/hosts
 33  vim kubeadm-config.yaml
 34  kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
 35  wget https://docs.projectcalico.org/manifests/calico.yaml
 36  less calico.yaml
 37  kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
 38  docker ps
 39  docker -v
 40  sudo systemctl status docker
 41  history > history.out
 42  history
root@master:~#
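For completeness, the repository file edited in steps 7-8 of that history should contain the single line the lab specifies (shown here as a sketch):

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list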
-
I fixed the problem by running kubeadm reset.
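For anyone who lands here with the same preflight errors: once a first kubeadm init has partially run, the node keeps its generated manifests, certificates, and etcd data, which is why every port and file check fails on the second attempt. A minimal recovery sequence (a sketch, assuming a single fresh master as in this lab) is:

kubeadm reset    # tears down the partial control plane; answer 'y' when prompted
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out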