3.2 error when testing that the repo works after the reboot
At steps 10 and 11 in section 3.2, after I reboot and connect over SSH again, I cannot repeat the instruction in step 11 that says to run curl $repo/v2/_catalog. I get this error:
curl: (3) URL using bad/illegal format or missing URL
Also, when I run kubectl get nodes I get this error:
The connection to the server XXXXXXX was refused - did you specify the right host or port?
Answers

Not sure how to solve this.
Hi @quachout,
The repo variable may not have been persisted prior to the reboot step, which would cause your curl commands to fail. Try setting it again, on both nodes, before proceeding:
student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@cp:~$ export repo=10.97.40.62:5000
student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@worker:~$ export repo=10.97.40.62:5000
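To confirm the variable is set in the current shell and survives a fresh login, a quick check (assuming the registry Service is reachable at that ClusterIP) would be:
student@cp:~$ source $HOME/.bashrc
student@cp:~$ echo $repo
10.97.40.62:5000
student@cp:~$ curl $repo/v2/_catalog
A successful call returns a short JSON list of repositories rather than the format error.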
In addition, I would check the /etc/containerd/config.toml file and remove any possible duplicate entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check on both nodes. If any corrections are needed, restart the containerd service.
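For reference, a sketch of what that 3-line file typically looks like in this lab (assuming 10.97.40.62:5000 is your registry's ClusterIP and port; check local-repo-setup.sh for the exact contents):
[[registry]]
location = "10.97.40.62:5000"
insecure = true
After any edits, restart the runtime on the affected node:
student@cp:~$ sudo systemctl restart containerd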
Regards,
-Chris
@chrispokorni hello, I ran those lines, but I don't think it did anything. I think I'm having trouble connecting to the Kubernetes API server running on the control plane node; ever since the reboot I haven't been able to connect again. I think that may be the problem, because when I run kubectl get nodes I get this error:
E0406 23:05:36.310771 6781 memcache.go:238] couldn't get current server API group list: Get "https://172.31.10.72:6443/api?timeout=32s": dial tcp 172.31.10.72:6443: connect: connection refused
The connection to the server 172.31.10.72:6443 was refused - did you specify the right host or port?
How do I reconnect?
Hi @quachout,
First, I would ensure that all recommended settings are in place as presented in the demo videos from the introductory chapter (for AWS EC2 and/or GCP GCE): VPC, firewall/SG rules, VM size...
Second, I would check the /etc/containerd/config.toml file and remove any possible duplicate "[plugins...]" and "endpoint = [...]" entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check on both nodes. If any corrections are needed, make sure to restart the containerd service.
Before continuing with the labs, the containerd service needs to be active and running.
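A quick way to verify that on each node (assuming the systemd-based lab VMs):
student@cp:~$ sudo systemctl is-active containerd
active
student@worker:~$ sudo systemctl is-active containerd
active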
Regards,
-Chris
@chrispokorni I restarted from the beginning with new nodes, and when I got to the reboot section in 3.2, curl $repo/v2/_catalog first gave me this error:
curl: (3) URL using bad/illegal format or missing URL
I then used these commands that you suggested
student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@cp:~$ export repo=10.97.40.62:5000
student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
student@worker:~$ export repo=10.97.40.62:5000
and now, when I run curl $repo/v2/_catalog, I get this error:
curl: (28) Failed to connect to 10.97.40.62 port 5000: Connection timed out
I also checked for duplicates in config.toml -- no duplicates there. I also made sure there were only 3 lines in the registry.conf file.
This time kubectl get nodes properly returns my control plane and worker.
Not sure what else to do. Thanks in advance :)
Hi @quachout,
What are the outputs of the following commands?
kubectl get nodes -o wide
kubectl get pods -A -o wide
Regards,
-Chris
@chrispokorni When I run kubectl get nodes -o wide I get:
NAME               STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-10-110   NotReady   <none>          2d20h   v1.26.1   172.31.10.110   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://Unknown
ip-172-31-14-173   Ready      control-plane   2d20h   v1.26.1   172.31.14.173   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://1.6.20
When I run kubectl get pods -A -o wide I get:
NAMESPACE     NAME                                        READY   STATUS        RESTARTS        AGE     IP               NODE               NOMINATED NODE   READINESS GATES
default       nginx-99f889bcd-7gj7k                       1/1     Terminating   0               2d19h   192.168.23.70    ip-172-31-10-110   <none>           <none>
default       nginx-99f889bcd-zpdsk                       1/1     Running       1 (2m59s ago)   2d19h   192.168.17.205   ip-172-31-14-173   <none>           <none>
default       registry-5b4c5fffb9-5m8f5                   1/1     Terminating   0               2d19h   192.168.23.71    ip-172-31-10-110   <none>           <none>
default       registry-5b4c5fffb9-728gq                   1/1     Running       1 (2m59s ago)   2d19h   192.168.17.206   ip-172-31-14-173   <none>           <none>
kube-system   calico-kube-controllers-57b57c56f-v9x4r     1/1     Running       2 (2m59s ago)   2d20h   192.168.17.202   ip-172-31-14-173   <none>           <none>
kube-system   calico-node-722wl                           0/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   calico-node-brlhl                           1/1     Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
kube-system   coredns-787d4945fb-c5nht                    1/1     Running       2 (2m59s ago)   2d20h   192.168.17.204   ip-172-31-14-173   <none>           <none>
kube-system   coredns-787d4945fb-l746s                    1/1     Running       2 (2m59s ago)   2d20h   192.168.17.203   ip-172-31-14-173   <none>           <none>
kube-system   etcd-ip-172-31-14-173                       1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-apiserver-ip-172-31-14-173             1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-controller-manager-ip-172-31-14-173    1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-proxy-pkwvl                            1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-proxy-vkcp2                            1/1     Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
kube-system   kube-scheduler-ip-172-31-14-173             1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
Hi @quachout,
The output indicates that the container runtime on the worker node is still not operational, which is why the worker node's status is NotReady.
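You can confirm this on the worker and see why the runtime is failing (a generic systemd diagnostic, not a lab step):
student@worker:~$ sudo systemctl status containerd
student@worker:~$ sudo journalctl -u containerd --no-pager | tail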
Per my previous comments, we are assuming that all recommended settings are in place as presented in the demo video from the introductory chapter for AWS EC2 - VPC settings, SG rule, VM size, recommended guest OS...
To fix the container runtime, perform the following on the worker node only (a command sketch follows this list):
1. Make sure the containerd service is stopped.
2. Remove the 2 config files: /etc/containerd/config.toml and /etc/containers/registries.conf.d/registry.conf.
3. Reinstall the containerd.io package (check k8sWorker.sh for syntax).
4. Reinitialize the /etc/containerd/config.toml file with default settings and set the systemd cgroup (check k8sWorker.sh for syntax).
5. Edit the /etc/containerd/config.toml file by appending the required entries (check local-repo-setup.sh for syntax).
6. Create the /etc/containers/registries.conf.d/registry.conf file (check local-repo-setup.sh for syntax).
7. Restart (or start) the containerd service (check k8sWorker.sh for syntax).
Before continuing with the labs, the containerd service needs to be active and running on the worker node.
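A minimal sketch of that sequence, assuming the apt-based install and sed edit used by typical copies of k8sWorker.sh (verify every line against your k8sWorker.sh and local-repo-setup.sh before running):
student@worker:~$ sudo systemctl stop containerd
student@worker:~$ sudo rm /etc/containerd/config.toml /etc/containers/registries.conf.d/registry.conf
student@worker:~$ sudo apt-get install --reinstall containerd.io
student@worker:~$ containerd config default | sudo tee /etc/containerd/config.toml
student@worker:~$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
student@worker:~$ # append the registry entries and recreate registry.conf per local-repo-setup.sh, then:
student@worker:~$ sudo systemctl restart containerd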
Regards,
-Chris
@chrispokorni where are the steps in the lab to "Reinstall the containerd.io package (check k8sWorker.sh for syntax)"? Just want to make sure I'm doing this correctly. Thanks!
Hi @quachout,
Those steps are not specifically stated in the labs, because they are automated through the scripts I recommended for syntax checking. Since your errors are only related to the containerd runtime, I recommend extracting and running only the commands from those scripts that install the runtime, initialize it with defaults, and edit/update its configuration files.
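If it helps to locate those commands, something like this will show the relevant lines (assuming both scripts are in your current directory):
student@worker:~$ grep -n -i 'containerd' k8sWorker.sh local-repo-setup.sh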
It is a mystery, however, that the runtime install and/or config was unsuccessful on one node (the worker) while the other node seems completely fine, when the scripts perform the same steps on both systems.
Regards,
-Chris