3.2: Error when testing that the repo works after the reboot
At steps 10 and 11 of section 3.2, after I reboot and connect over SSH again, I cannot repeat the instruction in step 11 that says to curl $repo/v2/_catalog. I get this error:

curl: (3) URL using bad/illegal format or missing URL

Also, when I run kubectl get nodes I get this error:

The connection to the server XXXXXXX was refused - did you specify the right host or port?
Answers
Not sure how to solve this.
Hi @quachout,
The repo variable may not have been persisted prior to the reboot step, which would cause your curl commands to fail. Try setting it again, on both nodes, before proceeding:
student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@cp:~$ export repo=10.97.40.62:5000
student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@worker:~$ export repo=10.97.40.62:5000
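To confirm the variable is set in the current shell and persisted for future logins, a quick check (using the same registry address as above):

student@cp:~$ echo $repo                 # should print 10.97.40.62:5000
student@cp:~$ grep repo $HOME/.bashrc    # should show a single export line
student@cp:~$ curl $repo/v2/_catalog     # should return the repository catalog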
In addition, I would check the /etc/containerd/config.toml file and remove any possible duplicate entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check on both nodes. If any corrections are needed, restart the containerd service.
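A minimal way to run those checks and the restart (plain inspection commands; the expected registry entries themselves come from the lab's local-repo-setup.sh script):

student@cp:~$ sudo tail /etc/containerd/config.toml    # look for duplicated entries at the end
student@cp:~$ sudo cat /etc/containers/registries.conf.d/registry.conf    # should be exactly 3 lines
student@cp:~$ sudo systemctl restart containerd
student@cp:~$ sudo systemctl status containerd --no-pager    # verify it is active (running)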
Regards,
-Chris
@chrispokorni Hello, I ran those lines but I don't think they did anything. I think I'm having trouble connecting to the Kubernetes API server running on the control plane node; ever since the reboot I haven't been able to reach it. I suspect that is the problem, because I get this error when I run kubectl get nodes:

E0406 23:05:36.310771 6781 memcache.go:238] couldn't get current server API group list: Get "https://172.31.10.72:6443/api?timeout=32s": dial tcp 172.31.10.72:6443: connect: connection refused
The connection to the server 172.31.10.72:6443 was refused - did you specify the right host or port?

How do I reconnect?
Hi @quachout,
First, I would ensure that all recommended settings are in place as presented in the demo videos from the introductory chapter (for AWS EC2 and/or GCP GCE) - VPC, firewall/SG, VM size...
Second, I would check the /etc/containerd/config.toml file and remove any possible duplicate "[plugins...]" and "endpoint = [...]" entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check on both nodes. If any corrections are needed, make sure to restart the containerd service.

Before continuing with the labs, the containerd service needs to be active and running.
Regards,
-Chris
@chrispokorni I restarted from the beginning with new nodes, and when I got to the reboot section in 3.2, curl $repo/v2_catalog first gave me this error:

curl: (3) URL using bad/illegal format or missing URL

I then used the commands you suggested:
student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@cp:~$ export repo=10.97.40.62:5000
student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@worker:~$ export repo=10.97.40.62:5000
and now I get this error when I run curl $repo/v2_catalog:

curl: (28) Failed to connect to 10.97.40.62 port 5000: Connection timed out
I also checked for duplicates in config.toml -- no duplicates there. I also made sure there were only 3 lines in the registry.conf file. This time kubectl get nodes properly returns my control plane and worker.

Not sure what else to do. Thanks in advance :)
Hi @quachout,
What are the outputs of the following commands?
kubectl get nodes -o wide
kubectl get pods -A -o wide
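It may also help to confirm that the registry Service and its endpoints are still in place (a quick check, assuming the registry Deployment from lab 3.2 was created in the default namespace):

student@cp:~$ kubectl get svc,ep registry    # the ClusterIP should match the value of $repo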
Regards,
-Chris
@chrispokorni When I run kubectl get nodes -o wide I get:

NAME               STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-10-110   NotReady   <none>          2d20h   v1.26.1   172.31.10.110   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://Unknown
ip-172-31-14-173   Ready      control-plane   2d20h   v1.26.1   172.31.14.173   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://1.6.20
When I run kubectl get pods -A -o wide I get:

NAMESPACE     NAME                                        READY   STATUS        RESTARTS        AGE     IP               NODE               NOMINATED NODE   READINESS GATES
default       nginx-99f889bcd-7gj7k                       1/1     Terminating   0               2d19h   192.168.23.70    ip-172-31-10-110   <none>           <none>
default       nginx-99f889bcd-zpdsk                       1/1     Running       1 (2m59s ago)   2d19h   192.168.17.205   ip-172-31-14-173   <none>           <none>
default       registry-5b4c5fffb9-5m8f5                   1/1     Terminating   0               2d19h   192.168.23.71    ip-172-31-10-110   <none>           <none>
default       registry-5b4c5fffb9-728gq                   1/1     Running       1 (2m59s ago)   2d19h   192.168.17.206   ip-172-31-14-173   <none>           <none>
kube-system   calico-kube-controllers-57b57c56f-v9x4r     1/1     Running       2 (2m59s ago)   2d20h   192.168.17.202   ip-172-31-14-173   <none>           <none>
kube-system   calico-node-722wl                           0/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   calico-node-brlhl                           1/1     Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
kube-system   coredns-787d4945fb-c5nht                    1/1     Running       2 (2m59s ago)   2d20h   192.168.17.204   ip-172-31-14-173   <none>           <none>
kube-system   coredns-787d4945fb-l746s                    1/1     Running       2 (2m59s ago)   2d20h   192.168.17.203   ip-172-31-14-173   <none>           <none>
kube-system   etcd-ip-172-31-14-173                       1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-apiserver-ip-172-31-14-173             1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-controller-manager-ip-172-31-14-173    1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-proxy-pkwvl                            1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-proxy-vkcp2                            1/1     Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
kube-system   kube-scheduler-ip-172-31-14-173             1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
Hi @quachout,
The output indicates that the container runtime on the worker node is still not operational, which is why the worker node's status is NotReady.
Per my previous comments, we are assuming that all recommended settings are in place as presented in the demo video from the introductory chapter for AWS EC2 - VPC settings, SG rule, VM size, recommended guest OS...
To fix the container runtime, perform the following on the worker node only (a consolidated command sketch follows this list):

1. Make sure the containerd service is stopped.
2. Remove the 2 config files: /etc/containerd/config.toml and /etc/containers/registries.conf.d/registry.conf.
3. Reinstall the containerd.io package (check k8sWorker.sh for syntax).
4. Reinitialize the /etc/containerd/config.toml file with default settings and set the systemd cgroup (check k8sWorker.sh for syntax).
5. Edit the /etc/containerd/config.toml file by appending the required entries (check local-repo-setup.sh for syntax).
6. Create the /etc/containers/registries.conf.d/registry.conf file (check local-repo-setup.sh for syntax).
7. Restart (or start) the containerd service (check k8sWorker.sh for syntax).

Before continuing with the labs, the containerd service needs to be active and running on the worker node.
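A minimal sketch of those steps, assuming an Ubuntu worker where k8sWorker.sh already configured the apt repository providing containerd.io; the two appended stanzas below are illustrative guesses at what local-repo-setup.sh writes (including the 10.97.40.62:5000 address), so copy the exact lines from that script rather than these:

student@worker:~$ sudo systemctl stop containerd
student@worker:~$ sudo rm /etc/containerd/config.toml /etc/containers/registries.conf.d/registry.conf
student@worker:~$ sudo apt-get install --reinstall -y containerd.io
student@worker:~$ containerd config default | sudo tee /etc/containerd/config.toml
student@worker:~$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Append the registry entries per local-repo-setup.sh (illustrative values):
student@worker:~$ cat << EOF | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.97.40.62:5000"]
  endpoint = ["http://10.97.40.62:5000"]
EOF
# Recreate the 3-line registry.conf (illustrative values):
student@worker:~$ cat << EOF | sudo tee /etc/containers/registries.conf.d/registry.conf
[[registry]]
location = "10.97.40.62:5000"
insecure = true
EOF
student@worker:~$ sudo systemctl restart containerd
student@worker:~$ sudo systemctl status containerd --no-pager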
Regards,
-Chris
@chrispokorni Where are the steps in the lab to "Reinstall the containerd.io package (check k8sWorker.sh for syntax)"? I just want to make sure I'm doing this correctly. Thanks!
Hi @quachout,
Those steps are not specifically stated in the labs because they are automated through the scripts I recommended for syntax checking. Since your errors are related only to the containerd runtime, I recommend you extract and run just the commands from those scripts that install the runtime, initialize it with defaults, and edit/update its configuration files.
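To locate the relevant commands inside the scripts, a simple search should do (assuming the scripts sit in your home directory):

student@worker:~$ grep -n -i 'containerd' k8sWorker.sh local-repo-setup.sh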
It is a mystery, however, why the runtime install and/or config was unsuccessful on one node (the worker) while the other node seems completely fine, when the scripts perform the same steps on both systems.
Regards,
-Chris