
kubeadm init hangs in GCP

RaveendraGuntupalli
edited July 2018 in LFS258 Class Forum

I am stuck at step 11 in LAB 3.1. I get a timeout error message, and it seems my kubelet is not running, but I am following the steps in LAB 3.1 exactly. Thanks for the help.

https://lms.quickstart.com/custom/858487/LAB_3.1.pdf

Step 11: kubeadm init --pod-network-cidr 10.244.0.0/16

#################################

GCP VM Instance

ubuntu-1604-xenial-v20180724

################################

root@kmst:~# kubeadm init --pod-network-cidr 10.244.0.0/16

[init] Using Kubernetes version: v1.9.9
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kmst kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

################################

GCP VM Instance logs:


Jul 30 08:02:36 kmst kubelet[10727]: W0730 08:02:36.633131 10727 status_manager.go:459] Failed to get status for pod "kube-apiserver-kmst_kube-system(240e2d8ec75db9f607a9a31f5374f462)": Get https://10.138.0.2:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kmst: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.063159 10727 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.138.0.2:6443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.228891 10727 kubelet.go:1607] Failed creating a mirror pod for "etcd-kmst_kube-system(408851a572c13f8177557fdb9151111c)": Post https://10.138.0.2:6443/api/v1/namespaces/kube-system/pods: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.432891 10727 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.138.0.2:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkmst&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.435314 10727 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.138.0.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkmst&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.595262 10727 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "kmst": Get https://10.138.0.2:6443/api/v1/nodes/kmst: net/http: TLS handshake timeout

Comments

  • chrispokorni Posts: 2,349

    Hi, 

    Similar errors are seen when there isn't enough vCPU available on the master instance. To fix the vCPU issue, make sure the instance type has at least 2 vCPUs. Also allow all traffic to the instance (full access, plus HTTP/HTTPS), and check that the firewall on the node is inactive/disabled:


    sudo ufw status
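
    If traffic is being blocked at the VPC level rather than on the node, a wide-open GCP firewall rule covers it. This is only a minimal sketch for a lab setup; the rule name is a placeholder and it assumes the instances are on the default network:

    # lab-only: opens all protocols from any source to instances on the default network
    gcloud compute firewall-rules create allow-all-lab \
        --network default --direction INGRESS \
        --allow all --source-ranges 0.0.0.0/0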

    Regards, 

    -Chris

  • serewicz Posts: 1,000
    edited July 2018

    Hello,

    Inside the error string I notice it mentions that crictl was not installed. I've seen this when I accidentally skipped the step to install Docker. If there is no container engine, the pods won't start. Please double-check that Docker was installed on both nodes, and make sure there were no errors in the previous steps.
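
    A quick way to double-check on the Ubuntu lab image (this assumes the docker.io package from the Ubuntu repos, as in the lab; adjust if you installed Docker another way):

    sudo apt-get update && sudo apt-get install -y docker.io   # installs the engine if the step was skipped
    sudo systemctl status docker                               # should show active (running)
    sudo docker info | head -5                                 # confirms the daemon is answering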

    Regards,

  • Thanks Chris. The issue was resolved with a new master instance with the given spec.

  • avmentzer Posts: 8

    Same here:

    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

    then:

    Unfortunately, an error has occurred:
    timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    
    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
    
    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    
    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
    

    couldn't initialize a Kubernetes cluster

    Output of journalctl -xeu kubelet:

    19445 kubelet_node_status.go:71] "Attempting to register node" node="master"
    2197 19445 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://k8master:6
    0904 19445 kubelet.go:2291] "Error getting node" err="node \"master\" not found"

    The firewall is off. I'm using the recommended GCP setup.
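
    One thing worth checking when the kubelet cannot reach the API server by name (the "https://k8master:6..." in the log above) is whether the control-plane alias actually resolves. A quick check, assuming the k8smaster alias the lab adds to /etc/hosts:

    grep k8smaster /etc/hosts    # should show the master's private IP next to the alias
    getent hosts k8smaster       # confirms the name resolves on this node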

  • avmentzer Posts: 8

    EDIT:

    SOLVED (it was a typo: I had left out the "s" and written "k8master" instead of "k8smaster")

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.21.0
    controlPlaneEndpoint: "k8smaster:6443"
    networking:
      podSubnet: 192.168.0.0/16
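
    For reference, the corrected config is then passed to kubeadm init. A sketch, assuming the ClusterConfiguration above is saved as kubeadm-config.yaml:

    sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out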
