Lab 2.2 - Port 6443 stops responding intermittently on the control plane

scottengle Posts: 4
edited August 2022 in LFD259 Class Forum

I'm running my VMs on GCE using Ubuntu 20.04. I'm experiencing intermittent issues where my kubectl commands are refused. Sometimes they start working again within 5 or 10 minutes; otherwise I can reset the control plane, re-init it, and then rejoin the worker node to the cluster.

Does anyone have advice on what configuration I can try adjusting to stop this from happening? It's quite disruptive when going through the exercises.

student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   3m38s   v1.24.1
worker   Ready    <none>          2m11s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   6m58s   v1.24.1
worker   Ready    <none>          5m31s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   7m2s    v1.24.1
worker   Ready    <none>          5m35s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   7m8s    v1.24.1
worker   Ready    <none>          5m41s   v1.24.1
student@cp:~$ kubectl get nodes
The connection to the server 10.2.0.3:6443 was refused - did you specify the right host or port?
student@cp:~$ kubectl get nodes
The connection to the server 10.2.0.3:6443 was refused - did you specify the right host or port?
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          8m58s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          9m9s    v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          9m12s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          9m15s   v1.24.1
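
When the refusals happen, a quick way to check whether the kube-apiserver container itself is crash-looping on the control plane is the following (a sketch, assuming the lab's kubeadm/containerd setup where crictl is already configured):

# List all containers, including exited ones, and look for apiserver restarts
sudo crictl ps -a | grep kube-apiserver

# Check kubelet logs around the time of a refusal for OOM kills or probe failures
sudo journalctl -u kubelet --since "15 min ago" | tail -n 50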

Comments

  • scottengle Posts: 4
    edited August 2022

    I have also noticed that the entire cluster is incredibly slow.

    student@cp:~/LFD259/SOLUTIONS/s_02$ kubectl get pod
    NAME       READY   STATUS              RESTARTS   AGE
    basicpod   0/1     ContainerCreating   0          12m
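
    In case it helps anyone debugging the same thing, the pod's events usually say why the sandbox or image is stuck; for example:

    # The Events section at the bottom of the output names the failing step
    kubectl describe pod basicpod

    # Recent cluster-wide events, oldest first
    kubectl get events --sort-by=.metadata.creationTimestamp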

  • einnod Posts: 2

    I am facing the same issue. It's not that it never works (sometimes it does), but 95% of the time it is down. I have been wasting time Googling the issue; everywhere they talk about the swap file, but I do not even have swap space on my EC2 instance.
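
    To rule swap out completely, these checks (run on each node) should come back empty and all zeros; the swapoff is only needed if something does show up:

    # No output means no active swap devices
    swapon --show

    # The Swap: line should be all zeros
    free -h

    # Disable any swap found (kubelet requires swap off by default)
    sudo swapoff -a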

  • einnod Posts: 2

    Okay, so I terminated my EC2 instances multiple times and recreated them; that did not help. This time I chose Ubuntu 20 LTS instead of 22, and it seems to work fine now.

  • scottengle Posts: 4

    It's probably not a bad idea to try recreating the instances. I'll give that a shot.

  • scottengle Posts: 4
    edited August 2022

    Recreating the VM instances has resolved both issues. Thanks for the nudge.

  • chrispokorni Posts: 2,372
    edited August 2022

    Hi @scottengle,

    This is strange behavior indeed. If it is due to GCP issues, there isn't much we can do about it. However, at times it may be related to how the cluster node VMs have been provisioned: the VPC, VPC firewall, OS distribution and version, VM CPU, memory, disk space, etc.
    From your description, the OS version should not be the issue, as 20.04 LTS is the version recommended by the lab guide.
    In order to eliminate any possible networking issues, did you happen to follow the demo video from the introductory chapter on how to provision the GCE instances? It offers important tips on GCP GCE provisioning and on VPC and firewall configuration.
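
    If you want to rule out the VPC firewall quickly, a single rule that allows all traffic between the lab VMs is enough for the course setup. A sketch with gcloud, where the network name and source range are placeholders to replace with your own VPC values:

    # Allow all protocols and ports between instances in the lab VPC
    gcloud compute firewall-rules create lfd259-allow-all \
      --network=lfd259 --direction=INGRESS --action=ALLOW \
      --rules=all --source-ranges=10.2.0.0/16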

    If you experience slowness in your cluster again, try to run (when possible) one of the following equivalent commands and provide its output in the forum to help troubleshoot the issue:

    kubectl get pods --all-namespaces -o wide
    OR
    kubectl get po -A -owide

    Regards,
    -Chris

  • chrispokorni Posts: 2,372

    Hi @einnod,

    The lab exercises have not yet been fully tested on 22.04 LTS, and there may be dependencies that need to be resolved prior to migrating the labs.

    Regards,
    -Chris

  • amayorga Posts: 6
    edited August 2022

    Hi @chrispokorni, I am joining this thread as well.

    I'm also facing problems when creating the first pod example (basicpod/nginx).

     kubectl get pod
    NAME       READY   STATUS              RESTARTS   AGE
    basicpod   0/1     ContainerCreating   0          113s
    
    kubectl describe pod basicpod
    
    Warning  FailedCreatePodSandBox  43s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0080880d559fb50ea77e6ea23dcb0369f1382f38aed54f13482afc7d2af46609/log.json: no such file or directory): fork/exec /usr/local/bin/runc: exec format error: unknown
      Warning  FailedCreatePodSandBox  4s (x3 over 28s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc9a5468aca6d6c0c3bc9bd162354d4adaeb81ba0ef95581c1da837031bc770/log.json: no such file or directory): fork/exec /usr/local/bin/runc: exec format error: unknown
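
    The "exec format error" on fork/exec usually means a binary built for a different CPU architecture than the machine it runs on, so it may be worth comparing the two (the runc path is taken from the error above):

    # VM architecture: x86_64 vs aarch64 (arm64)
    uname -m

    # Architecture the runc binary was compiled for; it must match the above
    file /usr/local/bin/runc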
    

    I have also tried with the image: arm64v8/nginx.

    Could it be some kind of incompatibility between the latest version of nginx and Ubuntu 20.04?

    Or could it be related to problems I've had launching k8scp.sh and containerd?

    https://forum.linuxfoundation.org/discussion/comment/35215#Comment_35215

    To install it manually, I used:

    sudo apt install containerd

    My node and container status:

    ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get pod -n kube-system namespace
    Error from server (NotFound): pods "namespace" not found
    ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get pod -n kube-system
    NAME                                       READY   STATUS              RESTARTS      AGE
    calico-kube-controllers-5b97f5d8cf-sfwfb   1/1     Running             2 (34m ago)   27h
    calico-node-5h77g                          0/1     Init:0/3            0             3h17m
    calico-node-9vz4r                          1/1     Running             2 (34m ago)   27h
    coredns-6d4b75cb6d-b5tf6                   1/1     Running             2 (34m ago)   27h
    coredns-6d4b75cb6d-wknrz                   1/1     Running             2 (34m ago)   27h
    etcd-ip-172-31-47-37                       1/1     Running             2 (34m ago)   27h
    kube-apiserver-ip-172-31-47-37             1/1     Running             2 (34m ago)   27h
    kube-controller-manager-ip-172-31-47-37    1/1     Running             2 (34m ago)   27h
    kube-proxy-8wpqj                           1/1     Running             2 (34m ago)   27h
    kube-proxy-dk9p6                           0/1     ContainerCreating   0             3h17m
    kube-scheduler-ip-172-31-47-37             1/1     Running             2 (34m ago)   27h
    ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get node
    NAME               STATUS   ROLES           AGE     VERSION
    ip-172-31-41-155   Ready    <none>          3h18m   v1.24.1
    ip-172-31-47-37    Ready    control-plane   27h     v1.24.1
    

    BR
    Alberto

  • chrispokorni Posts: 2,372

    Hi @amayorga,

    Installing the containerd package instead of containerd.io installs an older version of containerd.
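
    If you want to move to containerd.io, one common route is Docker's apt repository; a sketch for Ubuntu 20.04, following Docker's published repo setup (adjust as needed):

    # Replace the Ubuntu containerd package with containerd.io from Docker's repo
    sudo apt-get remove containerd
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
      sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt-get update && sudo apt-get install containerd.io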

    As previously requested, please provide more detailed outputs:

    kubectl get pods -A -o wide
    and
    kubectl get nodes -o wide

    Regards,
    -Chris

  • amayorga Posts: 6

    Hi @chrispokorni,

    kubectl get pods -A -o wide
    NAMESPACE     NAME                                       READY   STATUS              RESTARTS       AGE    IP               NODE               NOMINATED NODE   READINESS GATES
    default       basicpod                                   0/1     ContainerCreating   0              7h2m   <none>           ip-172-31-41-155   <none>           <none>
    kube-system   calico-kube-controllers-5b97f5d8cf-sfwfb   1/1     Running             3 (118s ago)   33h    192.168.196.10   ip-172-31-47-37    <none>           <none>
    kube-system   calico-node-5h77g                          0/1     Init:0/3            0              10h    172.31.41.155    ip-172-31-41-155   <none>           <none>
    kube-system   calico-node-9vz4r                          1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   coredns-6d4b75cb6d-b5tf6                   1/1     Running             3 (118s ago)   33h    192.168.196.12   ip-172-31-47-37    <none>           <none>
    kube-system   coredns-6d4b75cb6d-wknrz                   1/1     Running             3 (118s ago)   33h    192.168.196.11   ip-172-31-47-37    <none>           <none>
    kube-system   etcd-ip-172-31-47-37                       1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-apiserver-ip-172-31-47-37             1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-controller-manager-ip-172-31-47-37    1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-proxy-8wpqj                           1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-proxy-dk9p6                           0/1     ContainerCreating   0              10h    172.31.41.155    ip-172-31-41-155   <none>           <none>
    kube-system   kube-scheduler-ip-172-31-47-37             1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    
    kubectl get nodes -o wide
    NAME               STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
    ip-172-31-41-155   Ready    <none>          10h   v1.24.1   172.31.41.155   <none>        Ubuntu 20.04.5 LTS   5.15.0-1017-aws   containerd://1.5.9
    ip-172-31-47-37    Ready    control-plane   33h   v1.24.1   172.31.47.37    <none>        Ubuntu 20.04.5 LTS   5.15.0-1017-aws   containerd://1.5.9
    

    Thanks for helping.

    BR
    Alberto

  • chrispokorni Posts: 2,372

    Hi @amayorga,

    This is the output that shows us the state of the cluster. Since the workloads on the worker node (ip-172-31-41-155) are not running (kube-proxy, calico-node, and basicpod), I still suspect a networking issue between your EC2 instances.
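
    In the meantime, a quick reachability check from the worker toward the control plane can confirm whether the required ports are open (IPs taken from your output above; TCP 179 is Calico's BGP port):

    # Probe the API server port and the Calico BGP port with a 3-second timeout
    nc -zvw3 172.31.47.37 6443
    nc -zvw3 172.31.47.37 179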

    Please provide a screenshot of the Security Group rules configuration for the SG shielding the two EC2 instances.

    Regards,
    -Chris
