
kubeadm init hangs in GCP

edited July 30 in LFS258 Class Forum

I am stuck at step 11 in LAB 3.1. I get a timeout error message. It seems my kubelet is not running, but I am following the steps given in LAB 3.1 exactly. Thanks for the help.
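For reference, this is how I checked the kubelet state (generic systemd commands, not part of the lab steps):

```shell
# Report whether the kubelet service is running ("active" or "inactive")
systemctl is-active kubelet

# The most recent kubelet log lines often show why static pods fail to start
sudo journalctl -u kubelet --no-pager -n 20
```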

https://lms.quickstart.com/custom/858487/LAB_3.1.pdf

Step 11: kubeadm init --pod-network-cidr 10.244.0.0/16

#################################

GCP VM Instance

ubuntu-1604-xenial-v20180724

################################

[email protected]:~# kubeadm init --pod-network-cidr 10.244.0.0/16

[init] Using Kubernetes version: v1.9.9
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kmst kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

################################

GCP VM Instance logs:


Jul 30 08:02:36 kmst kubelet[10727]: W0730 08:02:36.633131 10727 status_manager.go:459] Failed to get status for pod "kube-apiserver-kmst_kube-system(240e2d8ec75db9f607a9a31f5374f462)": Get https://10.138.0.2:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kmst: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.063159 10727 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.138.0.2:6443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.228891 10727 kubelet.go:1607] Failed creating a mirror pod for "etcd-kmst_kube-system(408851a572c13f8177557fdb9151111c)": Post https://10.138.0.2:6443/api/v1/namespaces/kube-system/pods: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.432891 10727 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.138.0.2:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkmst&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.435314 10727 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.138.0.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkmst&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 30 08:02:39 kmst kubelet[10727]: E0730 08:02:39.595262 10727 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "kmst": Get https://10.138.0.2:6443/api/v1/nodes/kmst: net/http: TLS handshake timeout

Comments

  • chrispokorni Posts: 86

    Hi, 

    Similar errors are seen when there aren't enough vCPUs available on the master instance. To fix the vCPU issue, ensure that the instance type has at least 2 vCPUs. Also, allow all traffic to the instance (including HTTP/HTTPS), and check that the firewall is inactive/disabled:


    sudo ufw status
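    A quick way to verify both conditions on the instance (a sketch assuming the stock Ubuntu tools nproc and ufw):

```shell
# Count vCPUs; kubeadm expects at least 2 on the master instance
cpus=$(nproc)
echo "vCPUs: $cpus"
[ "$cpus" -ge 2 ] || echo "WARNING: fewer than 2 vCPUs - the control plane may time out"

# If 'sudo ufw status' reports "Status: active", disable it for the lab:
#   sudo ufw disable
```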

    Regards, 

    -Chris

  • serewicz Posts: 393
    edited July 30

    Hello,

    Inside the error string I notice it mentions that crictl was not found. I've seen this when I accidentally skipped the step to install Docker. If there is no container engine, the pods won't start. Please double-check that Docker was installed on both nodes, and also make sure there were no errors in the previous steps.
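    A quick sanity check along those lines (generic commands, not lab steps), to be run on both nodes:

```shell
# Verify that Docker (the container engine) is actually installed;
# without it the kubelet cannot start the control-plane static pods
if command -v docker >/dev/null 2>&1; then
  docker version
else
  echo "docker not found - rerun the Docker install step before kubeadm init"
fi
```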

    Regards,

  • Thanks Chris. Issue resolved with a new master instance with the given spec.
