
Lab 3.1. Install Kubernetes

I am following the instructions from the PDF file for Lab 3.1, Install Kubernetes, but I am not able to initialize the control plane (cp) node. I am setting it up on AWS, Ubuntu 20.04, as instructed. Any ideas what could be wrong?
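
For reference, my kubeadm-config.yaml follows the sample in the lab PDF; as best I can reconstruct it, it should look roughly like this (kubernetesVersion matches the v1.28.1 reported in the output below; the podSubnet value is the one my copy of the lab uses, so treat the exact values as assumptions):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.1
controlPlaneEndpoint: "k8scp:6443"   # k8scp is an /etc/hosts alias for this node; 6443 is the default API server port
networking:
  podSubnet: 192.168.0.0/16          # must match the pod CIDR the CNI plugin expects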

root@ip-172-31-17-193:/etc# kubeadm init --config=kubeadm-config.yaml --upload-certs \
| tee kubeadm-init.out

[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0307 13:32:12.195579 5623 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-17-193 k8scp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.17.193]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-17-193 localhost] and IPs [172.31.17.193 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-17-193 localhost] and IPs [172.31.17.193 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0307 13:32:25.767237 5623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0307 13:32:25.976809 5623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0307 13:32:26.160894 5623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0307 13:32:26.263318 5623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'

====================================================
root@ip-172-31-17-193:/etc# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead) since Thu 2024-03-07 13:53:08 UTC; 4min 56s ago
Docs: https://kubernetes.io/docs/
Process: 6598 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=0/SUCCESS)
Main PID: 6598 (code=exited, status=0/SUCCESS)

Mar 07 13:53:05 ip-172-31-17-193 kubelet[6598]: W0307 13:53:05.536164 6598 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://k8scp:6433/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceV>
Mar 07 13:53:05 ip-172-31-17-193 kubelet[6598]: E0307 13:53:05.536250 6598 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://k8scp:6433/apis/storage.k8s.io/v1>
Mar 07 13:53:06 ip-172-31-17-193 kubelet[6598]: E0307 13:53:06.838235 6598 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://k8scp:6433/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-193?tim>
Mar 07 13:53:07 ip-172-31-17-193 kubelet[6598]: I0307 13:53:07.128810 6598 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-193"
Mar 07 13:53:07 ip-172-31-17-193 kubelet[6598]: E0307 13:53:07.129356 6598 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://k8scp:6433/api/v1/nodes\": dial tcp 172.31.17.193:6433: connect: connection refused" n>
Mar 07 13:53:07 ip-172-31-17-193 kubelet[6598]: E0307 13:53:07.621535 6598 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing>
Mar 07 13:53:08 ip-172-31-17-193 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Mar 07 13:53:08 ip-172-31-17-193 kubelet[6598]: I0307 13:53:08.595339 6598 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 07 13:53:08 ip-172-31-17-193 systemd[1]: kubelet.service: Succeeded.
Mar 07 13:53:08 ip-172-31-17-193 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
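
One thing I notice in the journal lines above: the kubelet keeps dialing https://k8scp:6433, while the API server listens on 6443 by default. Could a one-digit typo in controlPlaneEndpoint (6433 instead of 6443) in my kubeadm-config.yaml explain this? A rough sketch of what I plan to check next (kubeadm reset -f wipes the half-initialized state so init can be retried; the grep targets are just where I would expect the port to be set):

# confirm where the wrong port comes from
grep controlPlaneEndpoint kubeadm-config.yaml
grep k8scp /etc/hosts

# if the port was mistyped, change it to 6443, then reset and retry
kubeadm reset -f
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out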

Is the lab PDF up to date? It is kind of frustrating to pay all this money, yet find that the instructions for the initial configuration are obsolete, or written so unclearly that I cannot get even the basic config right. Is there a better (more or less foolproof) step-by-step guide for the initial configuration?
