Lab 2.2 - Port 6443 stops responding intermittently on the control plane
I'm running my VMs on GCE using Ubuntu 20.04. I'm experiencing intermittent issues where my kubectl commands are refused. Sometimes they start working again within 5 or 10 minutes; alternatively, I can reset the control plane, re-init it, and rejoin the worker node to the cluster.
Does anyone have advice on what configuration I could adjust to stop this from happening? It's quite disruptive when going through the exercises.
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 3m38s v1.24.1
worker Ready <none> 2m11s v1.24.1
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 6m58s v1.24.1
worker Ready <none> 5m31s v1.24.1
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 7m2s v1.24.1
worker Ready <none> 5m35s v1.24.1
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 7m8s v1.24.1
worker Ready <none> 5m41s v1.24.1
student@cp:~$ kubectl get nodes
The connection to the server 10.2.0.3:6443 was refused - did you specify the right host or port?
student@cp:~$ kubectl get nodes
The connection to the server 10.2.0.3:6443 was refused - did you specify the right host or port?
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 10m v1.24.1
worker Ready <none> 8m58s v1.24.1
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 10m v1.24.1
worker Ready <none> 9m9s v1.24.1
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 10m v1.24.1
worker Ready <none> 9m12s v1.24.1
student@cp:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cp Ready control-plane 10m v1.24.1
worker Ready <none> 9m15s v1.24.1
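If it helps narrow this down, one way to capture exactly when the port drops is to run a small probe loop alongside the exercises. This is only a sketch (the host and port are the ones from the errors above; `probe_apiserver` is a hypothetical helper):

```shell
# Hypothetical helper: log a timestamped line showing whether the
# API server port accepts TCP connections right now.
probe_apiserver() {
  local host="${1:-10.2.0.3}" port="${2:-6443}"
  # /dev/tcp is a bash feature; the exec succeeds only if the TCP
  # connection is accepted, without sending any data.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "$(date '+%F %T') ${host}:${port} open"
  else
    echo "$(date '+%F %T') ${host}:${port} refused or timed out"
  fi
}

# Example: run in the background while working through the lab, then
# correlate the "refused" windows with kubelet/apiserver logs:
#   while true; do probe_apiserver; sleep 5; done >> apiserver-probe.log
```

Correlating the "refused" windows with `sudo journalctl -u kubelet` timestamps should show whether the kube-apiserver container is being restarted.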
Comments
-
I have also noticed that the entire cluster is incredibly slow.
student@cp:~/LFD259/SOLUTIONS/s_02$ kubectl get pod
NAME READY STATUS RESTARTS AGE
basicpod 0/1 ContainerCreating 0 12m
-
I am facing the same issue. It's not that it never works; sometimes it does, but 95% of the time it is down. I have wasted time Googling the issue; everywhere they talk about swap files, but I don't even have swap space on my EC2 instance.
-
Okay, so I terminated my EC2 instances multiple times and recreated them; that did not help. This time I chose Ubuntu 20.04 LTS instead of 22.04, and it seems to work fine now.
-
It's probably not a bad idea to try recreating the instances. I'll give that a shot.
-
Recreating the VM instances has resolved both issues. Thanks for the nudge.
-
Hi @scottengle,
This is strange behavior indeed. If it is due to GCP issues, there isn't much we can do about it. However, at times it may be related to how the cluster node VMs have been provisioned: the VPC, the VPC firewall, the OS distribution and version, VM CPU, memory, disk space, etc.
From your description, the OS should not be the issue, as Ubuntu 20.04 LTS is the version recommended by the lab guide.
In order to eliminate any possible networking issues, did you happen to follow the demo video from the introductory chapter on how to provision the GCE instances? It offers important tips on GCP GCE provisioning and on VPC and firewall configuration.
If you experience slowness in your cluster again, try to run (when possible) the following command and provide its output in the forum to help troubleshoot the issue:
kubectl get pods --all-namespaces -o wide
or:
kubectl get po -A -o wide
Regards,
-Chris
-
Hi @einnod,
The lab exercises have not yet been fully tested on 22.04 LTS, and there may be dependencies that need to be resolved prior to migrating the labs.
Regards,
-Chris
-
Hi @chrispokorni,
I'm also joining this thread, as I'm facing problems when creating the first pod example (basicpod/nginx).
kubectl get pod
NAME       READY   STATUS              RESTARTS   AGE
basicpod   0/1     ContainerCreating   0          113s
kubectl describe pod basicpod
Warning  FailedCreatePodSandBox  43s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0080880d559fb50ea77e6ea23dcb0369f1382f38aed54f13482afc7d2af46609/log.json: no such file or directory): fork/exec /usr/local/bin/runc: exec format error: unknown
Warning  FailedCreatePodSandBox  4s (x3 over 28s)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc9a5468aca6d6c0c3bc9bd162354d4adaeb81ba0ef95581c1da837031bc770/log.json: no such file or directory): fork/exec /usr/local/bin/runc: exec format error: unknown
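The `fork/exec /usr/local/bin/runc: exec format error` part of the event usually means the kernel could not execute the runc binary at all, which typically indicates an architecture mismatch (for example, an arm64 binary on an amd64 EC2 instance, or vice versa) rather than an nginx problem. A quick check, as a sketch (the runc path is taken from the error message; `check_binary_arch` is a hypothetical helper):

```shell
# Hypothetical helper: compare the host CPU architecture with the
# architecture the binary was built for.
check_binary_arch() {
  local bin="${1:-/usr/local/bin/runc}"  # path from the error message above
  echo "host architecture: $(uname -m)"
  if [ -e "$bin" ]; then
    if command -v file >/dev/null 2>&1; then
      echo "binary: $(file -b "$bin")"
    else
      echo "binary: (install 'file' to inspect ${bin})"
    fi
  else
    echo "binary: ${bin} not found"
  fi
}
```

If the two do not match (e.g. an `x86_64` host but an `ARM aarch64` binary), reinstalling the runtime packages built for the host's architecture should clear the sandbox errors.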
I have also tried with the image arm64v8/nginx.
Could it be some kind of incompatibility between the latest version of nginx and Ubuntu 20.04?
Or could it be related to the problems I've had launching k8scp.sh and containerd?
https://forum.linuxfoundation.org/discussion/comment/35215#Comment_35215
To install containerd manually I used:
sudo apt install containerd
My node and container status:
ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get pod -n kube-system namespace
Error from server (NotFound): pods "namespace" not found
ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get pod -n kube-system
NAME                                       READY   STATUS              RESTARTS      AGE
calico-kube-controllers-5b97f5d8cf-sfwfb   1/1     Running             2 (34m ago)   27h
calico-node-5h77g                          0/1     Init:0/3            0             3h17m
calico-node-9vz4r                          1/1     Running             2 (34m ago)   27h
coredns-6d4b75cb6d-b5tf6                   1/1     Running             2 (34m ago)   27h
coredns-6d4b75cb6d-wknrz                   1/1     Running             2 (34m ago)   27h
etcd-ip-172-31-47-37                       1/1     Running             2 (34m ago)   27h
kube-apiserver-ip-172-31-47-37             1/1     Running             2 (34m ago)   27h
kube-controller-manager-ip-172-31-47-37    1/1     Running             2 (34m ago)   27h
kube-proxy-8wpqj                           1/1     Running             2 (34m ago)   27h
kube-proxy-dk9p6                           0/1     ContainerCreating   0             3h17m
kube-scheduler-ip-172-31-47-37             1/1     Running             2 (34m ago)   27h
ubuntu@ip-172-31-47-37:/usr/local/bin$ kubectl get node
NAME               STATUS   ROLES           AGE     VERSION
ip-172-31-41-155   Ready    <none>          3h18m   v1.24.1
ip-172-31-47-37    Ready    control-plane   27h     v1.24.1
BR
Alberto
-
Hi @amayorga,
Installing containerd instead of containerd.io installs an earlier package version of containerd.
As previously requested, please provide more detailed outputs:
kubectl get pods -A -o wide
and
kubectl get nodes -o wide
Regards,
-Chris
-
Hi @chrispokorni,
kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS              RESTARTS       AGE    IP               NODE               NOMINATED NODE   READINESS GATES
default       basicpod                                   0/1     ContainerCreating   0              7h2m   <none>           ip-172-31-41-155   <none>           <none>
kube-system   calico-kube-controllers-5b97f5d8cf-sfwfb   1/1     Running             3 (118s ago)   33h    192.168.196.10   ip-172-31-47-37    <none>           <none>
kube-system   calico-node-5h77g                          0/1     Init:0/3            0              10h    172.31.41.155    ip-172-31-41-155   <none>           <none>
kube-system   calico-node-9vz4r                          1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
kube-system   coredns-6d4b75cb6d-b5tf6                   1/1     Running             3 (118s ago)   33h    192.168.196.12   ip-172-31-47-37    <none>           <none>
kube-system   coredns-6d4b75cb6d-wknrz                   1/1     Running             3 (118s ago)   33h    192.168.196.11   ip-172-31-47-37    <none>           <none>
kube-system   etcd-ip-172-31-47-37                       1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
kube-system   kube-apiserver-ip-172-31-47-37             1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
kube-system   kube-controller-manager-ip-172-31-47-37    1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
kube-system   kube-proxy-8wpqj                           1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
kube-system   kube-proxy-dk9p6                           0/1     ContainerCreating   0              10h    172.31.41.155    ip-172-31-41-155   <none>           <none>
kube-system   kube-scheduler-ip-172-31-47-37             1/1     Running             3 (118s ago)   33h    172.31.47.37     ip-172-31-47-37    <none>           <none>
kubectl get nodes -o wide
NAME               STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-41-155   Ready    <none>          10h   v1.24.1   172.31.41.155   <none>        Ubuntu 20.04.5 LTS   5.15.0-1017-aws   containerd://1.5.9
ip-172-31-47-37    Ready    control-plane   33h   v1.24.1   172.31.47.37    <none>        Ubuntu 20.04.5 LTS   5.15.0-1017-aws   containerd://1.5.9
Thanks for helping.
BR
Alberto
-
Hi @amayorga,
This is the output that shows us the state of the cluster. Since the workloads on the worker node (ip-172-31-41-155) are not running (kube-proxy, calico-node, and basicpod), I still suspect a networking issue between your EC2 instances.
Please provide a screenshot of the Security Group rules configuration for the SG shielding the two EC2 instances.
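While gathering that, a quick connectivity probe from the worker toward the control plane's private IP can show whether the Security Group is dropping intra-cluster traffic. A sketch, assuming the default kubeadm control-plane ports plus Calico's BGP port (`check_cp_ports` is a hypothetical helper):

```shell
# Hypothetical helper: probe the default kubeadm + Calico ports on the
# control plane: 6443 (kube-apiserver), 2379/2380 (etcd), 10250 (kubelet),
# 10257 (kube-controller-manager), 10259 (kube-scheduler), 179 (Calico BGP).
check_cp_ports() {
  local cp="$1" port
  if [ -z "$cp" ]; then
    echo "usage: check_cp_ports <control-plane-private-ip>" >&2
    return 1
  fi
  for port in 6443 2379 2380 10250 10257 10259 179; do
    # The exec succeeds only if the TCP connection is accepted.
    if timeout 2 bash -c "exec 3<>/dev/tcp/${cp}/${port}" 2>/dev/null; then
      echo "${cp}:${port} reachable"
    else
      echo "${cp}:${port} blocked, closed, or timed out"
    fi
  done
}
```

With a Security Group that allows all traffic between the two instances, these should all report reachable when probed from the worker.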
Regards,
-Chris