
Control plane deployment fails due to kubelet boot up timeout

tastyminerals Posts: 4
edited December 2023 in LFD259 Class Forum

Hi, I am trying to follow the first lab assignment, specifically lab 2.2, where we are supposed to set up the environment and create the two nodes, a "master" and a "worker". I cannot get through the "master" node setup.
Running the course "k8scp.sh" script fails on Ubuntu 20.04.

[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

Checking the kubelet service status:

kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Fri 2023-12-15 09:39:26 CET; 6min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 7456 (kubelet)
      Tasks: 19 (limit: 9513)
     Memory: 59.1M
        CPU: 8.799s
     CGroup: /system.slice/kubelet.service
             └─7456 [rosetta] /usr/bin/kubelet /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock

The kubelet logs contain rpc error: code = Unknown desc = failed to generate sanbdox container spec options: failed to generate seccomp spec opts: seccomp is not supported and "Unable to register node with API server" err="Post \"https://198.19.249.189:6443/api/v1/nodes\": dial tcp 198.19.249.189:6443: connect: connection refused" node="kfd-master"; the latter is probably caused by the former.

I checked whether the kernel supports seccomp. It does, so why the error?
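(One way to check this from the shell, assuming the kernel config is installed under /boot as on a stock Ubuntu kernel:)

# Build-time support for seccomp and seccomp filters
grep -E 'CONFIG_SECCOMP(_FILTER)?=' /boot/config-$(uname -r)
# Runtime seccomp mode of the current process (0 = off, 1 = strict, 2 = filter)
grep Seccomp /proc/self/status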

Checking the journal logs with journalctl -xeu kubelet:

7456 kubelet_node_status.go:70] "Attempting to register node" node="kfd-master"
7456 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://198.19.249.189:6443/api/v1/nodes\": dial tcp 198.19.249.189:6443: connect: connection refused" node="kfd-master"
7456 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to generate sanbdox container spec options: failed to generate seccomp spec opts: seccomp is not supported"
7456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to generate sanbdox container spec options: failed to generate seccomp spec opts: seccomp is not supported" pod="kube-system/kube-controller-manager-kfd-master"

Again, seccomp is not supported?
I installed the seccomp packages and compiled and ran the following C program:

#include <stdio.h>
#include <seccomp.h>  /* libseccomp header, provides seccomp_api_get() */

int main() {
  /* seccomp_api_get() reports the seccomp API level detected on the
     running kernel; a positive value means seccomp is usable. */
  if (seccomp_api_get() > 0) {
    printf("Seccomp is supported.\n");
  } else {
    printf("Seccomp is not supported.\n");
  }
  return 0;
}
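
It was compiled against libseccomp, roughly like this (on Ubuntu the header comes from the libseccomp-dev package):

gcc check_seccomp.c -o check_seccomp -lseccomp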

The program outputs

./check_seccomp
Seccomp is supported.

What can be the issue here?

I should mention, though, that I am running this on a local machine, in a Linux virtual machine on macOS. The guest OS is Ubuntu 20.04 and the Linux kernel is 6.5.13. The virtualization software I use can create several Linux machines, and it provides the shared network for both machines, so setting up the master and worker nodes should be possible (in theory).
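
(If it helps with troubleshooting: the connection refused error looks consistent with the kube-apiserver static pod never starting, rather than a networking problem between the VMs. One way to double-check that nothing is listening on the API port, and to inspect what containerd actually started, assuming nc and crictl are available:)

# Is anything listening on the kube-apiserver port?
nc -vz 198.19.249.189 6443
# List all containers containerd knows about; kube-apiserver should appear here if it ever started
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a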

Answers

  • chrispokorni

    Hi @tastyminerals,

    For networking purposes I would recommend configuring the hypervisor with a DHCP server on a private subnet (10.0.0.0/16 or 172.16.0.0/12, for example).

    For each VM, a single bridged network interface on the private subnet of the DHCP server would suffice; also enable all inbound (ingress) traffic to your VMs (this should be a setting on your hypervisor). Using public IP addresses or multiple network interfaces per VM requires additional configuration options when bootstrapping Kubernetes that are beyond the scope of the course. In addition, with public IP addresses you may need to account for external firewall rules (from an ISP, enterprise, etc.) blocking protocols used by Kubernetes and its plugins.
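
    For example, once a VM is up, you can confirm it has a single interface with a private address on that subnet, and which source IP the kubelet would advertise, with something like the following (the peer IP is just an example):

    ip -4 addr show
    ip route get 10.0.0.20   # replace with the other node's private IP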

    Watching the two video guides from the introductory chapter may help. Although they target the AWS and GCP clouds, the networking requirements are similar for on-prem VMs.

    Regards,
    -Chris

  • oleksandrsettlemint

    @tastyminerals I have the same issue. Were you able to figure out the reason and fix it?

  • tastyminerals
    tastyminerals Posts: 4
    edited February 19

    @oleksandrsettlemint I skipped k8scp.sh and created a control plane + worker node environment via minikube: minikube start --nodes 2 -p my-cluster-name. You only need the script for setting up the cluster nodes anyway. It works fine, and you don't get the headache of their "custom" scripts that only work on AWS machines.
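
    (Roughly what that looks like; the profile name is just an example, and the second node typically gets an -m02 suffix:)

    minikube start --nodes 2 -p my-cluster-name
    kubectl get nodes   # should show my-cluster-name and my-cluster-name-m02 as Ready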
