3.2 error when testing that the repo works after the reboot
At steps 10 and 11 of section 3.2, after I reboot and connect over SSH again, I cannot repeat the instruction in step 11 that says to run curl $repo/v2/_catalog. I get this error:
curl: (3) URL using bad/illegal format or missing URL
Also, when I run kubectl get nodes I get this error:
The connection to the server XXXXXXX was refused - did you specify the right host or port?
Answers
Not sure how to solve this.
Hi @quachout,
The repo variable may not have been persisted properly prior to the reboot step, which would cause your curl commands to fail. Try setting it again, on both nodes, before proceeding:
student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@cp:~$ export repo=10.97.40.62:5000
student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@worker:~$ export repo=10.97.40.62:5000
In addition, I would also check the /etc/containerd/config.toml file and remove any possible duplicate entries at its tail, and make sure the /etc/containers/registries.conf.d/registry.conf file exists and includes only 3 lines. Check on both nodes. If any corrections are needed, restart the containerd service.
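For reference, the three lines of that registry.conf file typically look something like the following; the registry address is the ClusterIP:port used earlier in the lab, so treat the exact values as an assumption and compare against local-repo-setup.sh:
[[registry]]
location = "10.97.40.62:5000"
insecure = true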
Regards,
-Chris
@chrispokorni hello, I ran those lines but I don't think it changed anything. I think I'm having trouble connecting to the Kubernetes API server running on the control plane node. Ever since the reboot I haven't been able to connect to it. I think that may be the problem, because when I run kubectl get nodes I get this error:
E0406 23:05:36.310771 6781 memcache.go:238] couldn't get current server API group list: Get "https://172.31.10.72:6443/api?timeout=32s": dial tcp 172.31.10.72:6443: connect: connection refused
The connection to the server 172.31.10.72:6443 was refused - did you specify the right host or port?
How do I reconnect?
Hi @quachout,
First, I would ensure that all recommended settings are in place as presented in the demo videos from the introductory chapter (for AWS EC2 and/or GCP GCE) - VPC, firewall/SG, VM size...
Second, I would check the /etc/containerd/config.toml file and remove any possible duplicate "[plugins...]" and "endpoint = [...]" entries at its tail, and make sure the /etc/containers/registries.conf.d/registry.conf file exists and includes only 3 lines. Check on both nodes. If any corrections are needed, make sure to restart the containerd service. Before continuing with the labs, the containerd service needs to be active and running.
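For reference, the entries appended to the tail of config.toml by the lab setup typically look something like the following, appearing only once; the plugin path and registry address here are assumptions, so compare with local-repo-setup.sh:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.97.40.62:5000"]
  endpoint = ["http://10.97.40.62:5000"]
If you edit either file, restart the runtime afterwards on that node:
student@cp:~$ sudo systemctl restart containerd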
Regards,
-Chris
@chrispokorni I restarted from the beginning with new nodes, and when I got to the reboot section in 3.2, curl $repo/v2/_catalog first gave me this error:
curl: (3) URL using bad/illegal format or missing URL
I then used the commands you suggested:
student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@cp:~$ export repo=10.97.40.62:5000
student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@worker:~$ export repo=10.97.40.62:5000
and now get this error when I run curl $repo/v2/_catalog:
curl: (28) Failed to connect to 10.97.40.62 port 5000: Connection timed out
I also checked for duplicates in config.toml -- no duplicates there. I also made sure there were only 3 lines in the registry.conf file. This time kubectl get nodes properly returns my control plane and worker. Not sure what else to do. Thanks in advance :)
Hi @quachout,
What are the outputs of the following commands?
kubectl get nodes -o wide
kubectl get pods -A -o wide
Regards,
-Chris
@chrispokorni When I run kubectl get nodes -o wide I get:
NAME               STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-10-110   NotReady   <none>          2d20h   v1.26.1   172.31.10.110   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://Unknown
ip-172-31-14-173   Ready      control-plane   2d20h   v1.26.1   172.31.14.173   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://1.6.20
When I run kubectl get pods -A -o wide I get:
default       nginx-99f889bcd-7gj7k                      1/1   Terminating   0               2d19h   192.168.23.70    ip-172-31-10-110   <none>   <none>
default       nginx-99f889bcd-zpdsk                      1/1   Running       1 (2m59s ago)   2d19h   192.168.17.205   ip-172-31-14-173   <none>   <none>
default       registry-5b4c5fffb9-5m8f5                  1/1   Terminating   0               2d19h   192.168.23.71    ip-172-31-10-110   <none>   <none>
default       registry-5b4c5fffb9-728gq                  1/1   Running       1 (2m59s ago)   2d19h   192.168.17.206   ip-172-31-14-173   <none>   <none>
kube-system   calico-kube-controllers-57b57c56f-v9x4r    1/1   Running       2 (2m59s ago)   2d20h   192.168.17.202   ip-172-31-14-173   <none>   <none>
kube-system   calico-node-722wl                          0/1   Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>   <none>
kube-system   calico-node-brlhl                          1/1   Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>   <none>
kube-system   coredns-787d4945fb-c5nht                   1/1   Running       2 (2m59s ago)   2d20h   192.168.17.204   ip-172-31-14-173   <none>   <none>
kube-system   coredns-787d4945fb-l746s                   1/1   Running       2 (2m59s ago)   2d20h   192.168.17.203   ip-172-31-14-173   <none>   <none>
kube-system   etcd-ip-172-31-14-173                      1/1   Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>   <none>
kube-system   kube-apiserver-ip-172-31-14-173            1/1   Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>   <none>
kube-system   kube-controller-manager-ip-172-31-14-173   1/1   Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>   <none>
kube-system   kube-proxy-pkwvl                           1/1   Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>   <none>
kube-system   kube-proxy-vkcp2                           1/1   Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>   <none>
kube-system   kube-scheduler-ip-172-31-14-173            1/1   Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>   <none>
Hi @quachout,
The output indicates that the container runtime on the worker node is still not operational, which is why the worker node status is NotReady.
Per my previous comments, we are assuming that all recommended settings are in place as presented in the demo video from the introductory chapter for AWS EC2 - VPC settings, SG rule, VM size, recommended guest OS...
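As a quick check on the worker node, generic service diagnostics (not lab steps) should show whether the runtime is running and why it failed:
student@worker:~$ sudo systemctl status containerd
student@worker:~$ sudo journalctl -u containerd --no-pager | tail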
To fix the container runtime, perform the following on the worker node only (a command sketch follows this list):
1. Make sure the containerd service is stopped.
2. Remove the 2 config files: /etc/containerd/config.toml and /etc/containers/registries.conf.d/registry.conf.
3. Reinstall the containerd.io package (check k8sWorker.sh for syntax).
4. Reinitialize the /etc/containerd/config.toml file with default settings and set the systemd cgroup (check k8sWorker.sh for syntax).
5. Edit the /etc/containerd/config.toml file by appending the required entries (check local-repo-setup.sh for syntax).
6. Create the /etc/containers/registries.conf.d/registry.conf file (check local-repo-setup.sh for syntax).
7. Restart (or start) the containerd service (check k8sWorker.sh for syntax).
Before continuing with the labs, the containerd service needs to be active and running on the worker node.
Regards,
-Chris
@chrispokorni where are the steps in the lab to "Reinstall the containerd.io package (check k8sWorker.sh for syntax)"? Just want to make sure I'm doing this correctly. Thanks!
Hi @quachout,
Those steps are not specifically stated in the labs, because they are automated by the scripts I recommended for syntax checking. Since your errors are only related to the containerd runtime, I recommend that you extract and run only the commands from those scripts that install the runtime, initialize it with defaults, and edit/update its configuration files.
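For example, one hypothetical way to locate the relevant lines, assuming the scripts are in your current directory (adjust the path to wherever you extracted the course files):
student@worker:~$ grep -n -i containerd k8sWorker.sh local-repo-setup.sh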
It is a mystery, however, that the runtime install and/or configuration was unsuccessful on one node (the worker) while the other node seems to be completely fine, since the scripts perform the same steps on both systems.
Regards,
-Chris