
LFS258 Kubernetes Fundamentals - Labs Updated to v1.20.1 (2.5.2021)

Hi,

A new course version of LFS258 went live today. Lab exercises have been updated to Kubernetes v1.20.1. We are planning another course update in the coming weeks with lecture changes.

To ensure you have access to the latest updates, please clear your cache before accessing the course.

https://forum.linuxfoundation.org/discussion/858378/lfs258-labs-updated-to-v1-20-1-2-5-2021

Regards,
-Chris

Comments

  • varooran (Posts: 9)

    I am having an issue installing Kubernetes on the master node. It fails exactly at step 14 (kubeadm init...).

    I have pasted the output:
    root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubedam-init.out
    W0304 22:50:52.795280 14976 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.19.0
    [preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.2.0.8]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.2.0.8 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.2.0.8 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher

    Unfortunately, an error has occurred:
        timed out waiting for the condition
    
    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    
    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
    
    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    
    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
    

    Can you please advise what's going on?

  • chrispokorni (Posts: 2,372)

    Hi @varooran,

    This early in the lab, issues are usually caused by a misconfigured kubeadm-config.yaml or /etc/hosts file, or by guest VM configuration problems: inadequately sized VMs, cloud or hypervisor firewalls blocking traffic to required ports, guest OS firewalls, or host OS firewalls, to list a few possibilities.
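    For reference, a kubeadm-config.yaml of the shape the lab uses would look roughly like this. The values below are taken from the log output pasted above (v1.19.0, the k8smaster alias); your version and subnet may differ depending on your lab edition:

    ```yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.19.0                 # must match the installed kubeadm packages
    controlPlaneEndpoint: "k8smaster:6443"    # alias that /etc/hosts must resolve
    networking:
      podSubnet: 192.168.0.0/16              # must match the pod network add-on's CIDR
    ```

    The matching /etc/hosts entry would then be something like "10.2.0.8 k8smaster master", using the node's private IP shown in the log.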

    I would start by revisiting the two files mentioned, then ensure that the VMs have enough resources per the sizing guide in the Overview section of the lab, that guest OS firewalls are disabled, and that the cloud/hypervisor firewall allows all ingress traffic, from all sources, to all ports and all protocols.
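    A minimal sketch of the /etc/hosts check, using the alias and IP from the log above. The helper name check_hosts_alias and the sample file are just for illustration; in real use you would point it at /etc/hosts:

    ```shell
    # Verify that a control-plane alias maps to the expected IP in an
    # /etc/hosts-style file. If this mapping is wrong, kubeadm init can
    # time out waiting for the API server, as in the log above.
    check_hosts_alias() {
      # $1 = hosts file, $2 = alias, $3 = expected IP
      # (dots in the IP are matched loosely; fine for a sanity check)
      grep -Eq "^${3}[[:space:]]+(.*[[:space:]])?${2}([[:space:]]|$)" "$1"
    }

    # Demonstrate against a throwaway sample file:
    cat > /tmp/hosts.sample <<'EOF'
    127.0.0.1 localhost
    10.2.0.8 k8smaster master
    EOF

    if check_hosts_alias /tmp/hosts.sample k8smaster 10.2.0.8; then
      echo "alias ok"
    else
      echo "alias missing or pointing at the wrong IP"
    fi
    ```

    When the alias checks out, 'systemctl status kubelet' and 'journalctl -xeu kubelet' (as the kubeadm output itself suggests) are the next places to look.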

    Regards,
    -Chris

  • varooran (Posts: 9)

    @chrispokorni, many thanks for your time. It is working now; most likely the instance that I created didn't have enough RAM.
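    For anyone hitting the same timeout: a quick way to confirm the VM meets the sizing guide before re-running kubeadm init. kubeadm's preflight checks expect at least 2 CPUs and roughly 1700 MB of RAM on a control-plane node:

    ```shell
    # Report total memory (MB) and CPU count on a Linux guest.
    mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
    cpus=$(nproc)
    echo "memory: ${mem_mb} MB, cpus: ${cpus}"
    ```

    If either number is below the guide's minimum, resize the instance before retrying.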
