Any fix to allow kubectl exec on pods running on the worker node?
Hi There,
Somehow, I cannot run kubectl exec on pods assigned to the worker node; I get "unable to upgrade connection..." when running the following command:
vagrant@cp:~/app1$ for name in try1-7f7c6fdc54-dqgjj try1-7f7c6fdc54-f8k5m \
try1-7f7c6fdc54-jxq57
do kubectl exec $name -- touch /tmp/healthy
done
error: unable to upgrade connection: pod does not exist
error: unable to upgrade connection: pod does not exist
error: unable to upgrade connection: pod does not exist
vagrant@cp:~/app1$
vagrant@cp:~/app1$ kubectl get po -o wide
NAME                       READY   STATUS    RESTARTS       AGE   IP                NODE     NOMINATED NODE   READINESS GATES
nginx-6488f757bc-rp4wv     1/1     Running   3 (110m ago)   24h   192.168.171.93    worker
registry-d4cf9fd7d-9c52d   1/1     Running   3 (110m ago)   24h   192.168.171.92    worker
try1-7f7c6fdc54-9zh2w      1/1     Running   0              12m   192.168.242.66    cp
try1-7f7c6fdc54-dqgjj      0/1     Running   0              12m   192.168.171.108   worker
try1-7f7c6fdc54-f8k5m      0/1     Running   0              12m   192.168.171.106   worker
try1-7f7c6fdc54-h4594      1/1     Running   0              12m   192.168.242.65    cp
try1-7f7c6fdc54-jxq57      0/1     Running   0              12m   192.168.171.107   worker
try1-7f7c6fdc54-vpbsj      1/1     Running   0              12m   192.168.242.67    cp
Thanks in advance.
Shao
Best Answers
-
Hi @caishaoping,
This could be the result of improper network configuration of the VMs by the hypervisor. For specifics, I would recommend watching the demo videos from the introductory chapter, where the infrastructure provisioning is presented on AWS and GCP together with the key VM instance networking configuration options. A local hypervisor should expose similar options for VM networking. Once configured, cross-node exec should work, together with other commands that may fail at this time.
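As a quick check (a sketch, not part of the lab; 172.16.0.101 below is a placeholder for the worker's address), verify that each node registered a distinct INTERNAL-IP and that the control plane can reach the worker's kubelet, since exec/logs traffic flows from the API server to the kubelet on port 10250:
# Each node should report a distinct INTERNAL-IP
kubectl get nodes -o wide
# exec/logs/port-forward need the API server to reach the kubelet port;
# 172.16.0.101 is a placeholder for the worker's actual address
# (requires netcat to be installed)
nc -vz 172.16.0.101 10250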
Regards,
-Chris
-
Hi @caishaoping,
You can easily remove the worker node from the cluster with the
kubectl delete node node-name
command, build a new worker VM with a new IP address, and then recreate the join command with
kubeadm token create --print-join-command
and run the newly generated join command on the new VM.
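Something like the following sequence (a sketch; the <...> values are placeholders printed by kubeadm):
# On the control plane: remove the stale node object
kubectl delete node worker
# On the control plane: print a fresh join command
kubeadm token create --print-join-command
# On the new worker VM: run the printed command; it has this shape:
#   sudo kubeadm join <cp-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>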
Regards,
-Chris
-
Hi @caishaoping,
The eth interface with an IP on the same subnet as the other node should be used (but the IP addresses of your nodes should be distinct). The tunl interface is created by the network plugin to enable cross-node routing.
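One way to confirm which interface and source address a node would use to reach its peer (a sketch; 172.16.0.101 stands in for the other node's LAN address):
# Shows the outgoing interface and source IP the kernel picks for the peer;
# that source IP is the candidate for the kubelet's node IP
ip route get 172.16.0.101
# Compare with the per-interface IPv4 addresses
ip -4 addr show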
Regards,
-Chris
Answers
-
Thanks @chrispokorni
Per Stack Overflow, I checked my nodes and found that both nodes use the same internal IP. Let me find out how to update the nodes' internal IPs. I hope I don't have to kubeadm join again.
vagrant@cp:~/app1$ k get nodes -o wide
NAME     STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp       Ready    control-plane   5d21h   v1.24.1   10.0.2.15                   Ubuntu 20.04.5 LTS   5.4.0-125-generic   containerd://1.6.8
worker   Ready                    5d21h   v1.24.1   10.0.2.15                   Ubuntu 20.04.5 LTS   5.4.0-125-generic   containerd://1.5.9
-
Here again, somehow, I did pretty much the following and got it working, though I still need time to digest it:
- modified /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, adding KUBELET_EXTRA_ARGS=--node-ip %NEW_IP_ADDRESS% (see the sketch below)
- sudo systemctl daemon-reload, since a config file was changed
- sudo systemctl restart kubelet.service
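A minimal sketch of that change, assuming the drop-in uses a systemd Environment= line (172.16.0.101 is a placeholder for the node's own LAN address):
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
# added line; 172.16.0.101 is a placeholder -- use the node's real address
Environment="KUBELET_EXTRA_ARGS=--node-ip=172.16.0.101"
followed by the daemon-reload and kubelet restart from the list above.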
After the change, the worker node's IP changed:
vagrant@cp:~$ k get node -o wide
NAME     STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp       Ready    control-plane   5d22h   v1.24.1   10.0.2.15                      Ubuntu 20.04.5 LTS   5.4.0-125-generic   containerd://1.6.8
worker   Ready                    5d22h   v1.24.1   192.168.171.64                 Ubuntu 20.04.5 LTS   5.4.0-125-generic   containerd://1.5.9
vagrant@cp:~$
Now, I can exec a pod running on the worker node:
vagrant@cp:~$ k exec -it busybox -- sh
pwd
/
ls -al
vagrant@cp:~$ k get po -o wide
NAME                       READY   STATUS    RESTARTS      AGE    IP                NODE     NOMINATED NODE   READINESS GATES
busybox                    1/1     Running   1 (51m ago)   111m   192.168.171.121   worker
nginx-6488f757bc-rp4wv     1/1     Running   5 (51m ago)   41h    192.168.171.120   worker
registry-d4cf9fd7d-9c52d   1/1     Running   5 (51m ago)   41h    192.168.171.122   worker
-
Hi @caishaoping,
Yes, it seems the worker now displays a new IP address; however, it is an IP that overlaps the pod CIDR and will eventually cause routing issues within your cluster.
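A quick way to see the overlap (a sketch; the kubeadm-config ConfigMap is standard on kubeadm-built clusters):
# Per-node pod CIDR allocations; a node's INTERNAL-IP should not fall
# inside any of these ranges
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# Cluster-wide pod subnet recorded by kubeadm
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podsubnet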
Regards,
-Chris
-
Sorry, but I think I still need to ask one more question to be clear on assigning a node's internal IP address. I am using VirtualBox VMs; when I assign an internal IP to the cp or worker node, should I only use one of the IPs associated with a LAN interface like eth0 or eth1, and avoid the IPs associated with tunl0@NONE or others?
I am asking because the liveness or readiness probe check is not succeeding for goproxy, regardless of which node the pod is running on.
try1-79f8557b4d-2w2sp   1/2   Running   0   2m51s   192.168.171.66   worker
try1-79f8557b4d-449sp   1/2   Running   0   2m50s   192.168.171.68   worker
try1-79f8557b4d-fx676   1/2   Running   0   2m51s   192.168.242.91   cp
try1-79f8557b4d-gb5pt   1/2   Running   0   2m50s   192.168.171.67   worker
try1-79f8557b4d-jck92   1/2   Running   0   2m50s   192.168.242.89   cp
try1-79f8557b4d-k4788   1/2   Running   0   2m51s   192.168.242.90   cp
Simply put, which one would be a good candidate for the node's internal IP?
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:a2:6b:fd brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 83551sec preferred_lft 83551sec
inet6 fe80::a00:27ff:fea2:6bfd/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:13:16:91 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.100/24 brd 172.16.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe13:1691/64 scope link
valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.242.64/32 scope global tunl0
valid_lft forever preferred_lft forever
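To see why the probes fail, something like this can help (a sketch; the pod name is taken from the listing above):
# Probe definitions and recent failure events for one failing pod
kubectl describe pod try1-79f8557b4d-2w2sp
# Or just recent warnings cluster-wide (probe failures show up here)
kubectl get events --field-selector type=Warning
-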
Thanks a lot @chrispokorni
Though there is still a lot to learn in the K8s networking part, for the purposes of this lab I have now got it done, and I understood more from doing the lab, including all the trial and error.
Appreciate it!
Shao