
Lab 2.2 - Port 6443 stops responding intermittently on the control plane

Posts: 4
edited August 2022 in LFD259 Class Forum

I'm running my VMs on GCE using Ubuntu 20.04. I'm experiencing intermittent issues where my kubectl commands are refused. Sometimes they start working again within 5 or 10 minutes, but I can also reset the control plane, re-init it, and then rejoin the worker node to the cluster.

Does anyone have advice on what configuration I can try adjusting to stop this issue from happening? It's quite disruptive when going through the exercises.

student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   3m38s   v1.24.1
worker   Ready    <none>          2m11s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   6m58s   v1.24.1
worker   Ready    <none>          5m31s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   7m2s    v1.24.1
worker   Ready    <none>          5m35s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   7m8s    v1.24.1
worker   Ready    <none>          5m41s   v1.24.1
student@cp:~$ kubectl get nodes
The connection to the server 10.2.0.3:6443 was refused - did you specify the right host or port?
student@cp:~$ kubectl get nodes
The connection to the server 10.2.0.3:6443 was refused - did you specify the right host or port?
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          8m58s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          9m9s    v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          9m12s   v1.24.1
student@cp:~$ kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
cp       Ready    control-plane   10m     v1.24.1
worker   Ready    <none>          9m15s   v1.24.1
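
For reference, when the API server stays down, the reset / re-init / rejoin workaround I mentioned is roughly the following. This is only a sketch: the control-plane address is the one shown in the error above, the pod network CIDR matches the lab's Calico setup, and the token and hash are placeholders printed by kubeadm init.

# On the control plane: tear the cluster down and re-initialize it
sudo kubeadm reset -f
sudo kubeadm init --control-plane-endpoint "10.2.0.3:6443" --pod-network-cidr 192.168.0.0/16
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the worker: reset, then rejoin with the values printed by kubeadm init
sudo kubeadm reset -f
sudo kubeadm join 10.2.0.3:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>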

Comments

  • Posts: 4
    edited August 2022

    I have also noticed that the entire cluster is incredibly slow.

    student@cp:~/LFD259/SOLUTIONS/s_02$ kubectl get pod
    NAME       READY   STATUS              RESTARTS   AGE
    basicpod   0/1     ContainerCreating   0          12m
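
    The pod's events should show what it is actually waiting on; the usual next step would be something like this (standard kubectl commands, nothing lab-specific):

    kubectl describe pod basicpod
    kubectl get events --sort-by=.metadata.creationTimestamp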

  • Posts: 2

    I am facing the same issue. It is not that it never works (sometimes it does), but 95% of the time it is down. I have been wasting time Googling the issue; everywhere they talk about the swap file, but I do not even have swap space on my EC2 instance.
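
    In case it helps anyone rule swap out quickly, these standard Linux commands confirm whether any swap is configured:

    swapon --show   # prints nothing if no swap is configured
    free -h         # the Swap line should show 0B total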

  • Posts: 2

    Okay, so I terminated my EC2 instances multiple times and recreated them; that did not help. This time I chose Ubuntu 20 LTS instead of 22, and it seems to work fine now.

  • It's probably not a bad idea to try recreating the instances. I'll give that a shot.

  • Posts: 4
    edited August 2022

    Recreating the VM instances has resolved both issues. Thanks for the nudge.

  • Posts: 2,451
    edited August 2022

    Hi @scottengle,

    This is strange behavior indeed. If it is due to GCP issues, there isn't much we can do about it. However, at times it may be related to how the cluster node VMs have been provisioned: the VPC, the VPC firewall, the OS distribution and version, VM CPU, memory and disk space, etc.
    From your description, the OS should not be the issue, as 20.04 LTS is the version recommended by the lab guide.
    In order to eliminate any possible networking issues, did you happen to follow the demo video from the introductory chapter on how to provision the GCE instances? It offers important tips on GCP GCE provisioning, and on VPC and firewall configuration.
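
    If the VPC firewall does need to be adjusted manually, a permissive rule for traffic between the lab nodes can be created along these lines; the rule name, network name, and source range below are placeholders to adapt to your own VPC:

    gcloud compute firewall-rules create allow-lab-internal \
      --network <your-vpc-network> --direction INGRESS \
      --action ALLOW --rules all --source-ranges 10.2.0.0/16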

    If you experience slowness in your cluster again, try to run (when possible) the following command and provide its output in the forum to help troubleshoot the issue:

    kubectl get pods --all-namespaces -o wide
    OR
    kubectl get po -A -owide

    Regards,
    -Chris

  • Posts: 2,451

    Hi @einnod,

    The lab exercises have not yet been fully tested on 22.04 LTS, and there may be dependencies that need to be resolved prior to migrating the labs.

    Regards,
    -Chris

  • Posts: 6
    edited August 2022

    Hi @chrispokorni, I'm joining this thread as well.

    I'm also facing problems when creating the first pod example (basicpod/nginx).

    kubectl get pod
    NAME       READY   STATUS              RESTARTS   AGE
    basicpod   0/1     ContainerCreating   0          113s

    kubectl describe pod basicpod
    Warning  FailedCreatePodSandBox  43s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0080880d559fb50ea77e6ea23dcb0369f1382f38aed54f13482afc7d2af46609/log.json: no such file or directory): fork/exec /usr/local/bin/runc: exec format error: unknown
    Warning  FailedCreatePodSandBox  4s (x3 over 28s)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc9a5468aca6d6c0c3bc9bd162354d4adaeb81ba0ef95581c1da837031bc770/log.json: no such file or directory): fork/exec /usr/local/bin/runc: exec format error: unknown
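
    Since the error ends in "exec format error" for /usr/local/bin/runc, something like the following should show whether that runc binary actually matches the node's CPU architecture (the path is taken from the error above):

    file /usr/local/bin/runc   # e.g. "ELF 64-bit LSB executable, x86-64" vs "ARM aarch64"
    uname -m                   # the node's architecture: x86_64 or aarch64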

    I have also tried with the image: arm64v8/nginx.

    Could it be some kind of incompatibility between the latest version of nginx and Ubuntu 20.04?

    Or could it be related to problems I've had launching k8scp.sh and containerd?

    https://forum.linuxfoundation.org/discussion/comment/35215#Comment_35215

    To install it manually, I used:

    sudo apt install containerd

    My node and container status:

    ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get pod -n kube-system namespace
    Error from server (NotFound): pods "namespace" not found
    ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get pod -n kube-system
    NAME                                       READY   STATUS              RESTARTS      AGE
    calico-kube-controllers-5b97f5d8cf-sfwfb   1/1     Running             2 (34m ago)   27h
    calico-node-5h77g                          0/1     Init:0/3            0             3h17m
    calico-node-9vz4r                          1/1     Running             2 (34m ago)   27h
    coredns-6d4b75cb6d-b5tf6                   1/1     Running             2 (34m ago)   27h
    coredns-6d4b75cb6d-wknrz                   1/1     Running             2 (34m ago)   27h
    etcd-ip-172-31-47-37                       1/1     Running             2 (34m ago)   27h
    kube-apiserver-ip-172-31-47-37             1/1     Running             2 (34m ago)   27h
    kube-controller-manager-ip-172-31-47-37    1/1     Running             2 (34m ago)   27h
    kube-proxy-8wpqj                           1/1     Running             2 (34m ago)   27h
    kube-proxy-dk9p6                           0/1     ContainerCreating   0             3h17m
    kube-scheduler-ip-172-31-47-37             1/1     Running             2 (34m ago)   27h
    ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get node
    NAME               STATUS   ROLES           AGE     VERSION
    ip-172-31-41-155   Ready    <none>          3h18m   v1.24.1
    ip-172-31-47-37    Ready    control-plane   27h     v1.24.1

    BR
    Alberto

  • Posts: 2,451

    Hi @amayorga,

    Installing containerd instead of containerd.io installs an earlier package version of containerd.
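
    For example, the difference can be seen by comparing the candidate versions of the two packages; note that containerd.io is published in the Docker apt repository, so the second entry only resolves if that repository has been added:

    apt-cache policy containerd containerd.io
    containerd --version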

    As previously requested, please provide more detailed outputs:

    kubectl get pods -A -o wide
    and
    kubectl get nodes -o wide

    Regards,
    -Chris

  • Hi @chrispokorni,

    kubectl get pods -A -o wide
    NAMESPACE     NAME                                       READY   STATUS              RESTARTS       AGE    IP               NODE               NOMINATED NODE   READINESS GATES
    default       basicpod                                   0/1     ContainerCreating   0              7h2m   <none>           ip-172-31-41-155   <none>           <none>
    kube-system   calico-kube-controllers-5b97f5d8cf-sfwfb   1/1     Running             3 (118s ago)   33h    192.168.196.10   ip-172-31-47-37    <none>           <none>
    kube-system   calico-node-5h77g                          0/1     Init:0/3            0              10h    172.31.41.155    ip-172-31-41-155   <none>           <none>
    kube-system   calico-node-9vz4r                          1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   coredns-6d4b75cb6d-b5tf6                   1/1     Running             3 (118s ago)   33h    192.168.196.12   ip-172-31-47-37    <none>           <none>
    kube-system   coredns-6d4b75cb6d-wknrz                   1/1     Running             3 (118s ago)   33h    192.168.196.11   ip-172-31-47-37    <none>           <none>
    kube-system   etcd-ip-172-31-47-37                       1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-apiserver-ip-172-31-47-37             1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-controller-manager-ip-172-31-47-37    1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-proxy-8wpqj                           1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
    kube-system   kube-proxy-dk9p6                           0/1     ContainerCreating   0              10h    172.31.41.155    ip-172-31-41-155   <none>           <none>
    kube-system   kube-scheduler-ip-172-31-47-37             1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>

    kubectl get nodes -o wide
    NAME               STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
    ip-172-31-41-155   Ready    <none>          10h   v1.24.1   172.31.41.155   <none>        Ubuntu 20.04.5 LTS   5.15.0-1017-aws   containerd://1.5.9
    ip-172-31-47-37    Ready    control-plane   33h   v1.24.1   172.31.47.37    <none>        Ubuntu 20.04.5 LTS   5.15.0-1017-aws   containerd://1.5.9

    Thanks for helping.

    BR
    Alberto

  • Posts: 2,451

    Hi @amayorga,

    This is the output that shows us the state of the cluster. Since the workloads on the worker node (ip-172-31-41-155) are not running (kube-proxy, calico-node, and basicpod), I still suspect a networking issue between your EC2 instances.

    Please provide a screenshot of the Security Group rules configuration for the SG shielding the two EC2 instances.
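
    If a screenshot is inconvenient, the same information can be pulled with the AWS CLI; the group ID below is a placeholder for the Security Group attached to both instances:

    aws ec2 describe-security-groups --group-ids <sg-id>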

    Regards,
    -Chris
