3.2 error when testing that the repo works after the reboot
At steps 10 and 11 of section 3.2, after I reboot and connect over SSH again, I cannot repeat the instruction in step 11 that says to run curl $repo/v2/_catalog. I get this error:
curl: (3) URL using bad/illegal format or missing URL
Also, when I run kubectl get nodes I get this error:
The connection to the server XXXXXXX was refused - did you specify the right host or port?
Not sure how to solve this.
Answers
Hi @quachout,
The repo variable may not have been persisted properly prior to the reboot step, which would cause your curl commands to fail. Try setting it again, on both nodes, before proceeding:
[email protected]:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
[email protected]:~$ export repo=10.97.40.62:5000
[email protected]:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
[email protected]:~$ export repo=10.97.40.62:5000
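After re-exporting the variable, a quick sanity check along these lines should work (the curl command is the same one from the lab step, using the registry address above):
echo $repo
curl $repo/v2/_catalog
The first command should print 10.97.40.62:5000, and the second should return a small JSON list of repositories rather than the "bad/illegal format" error.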
In addition, I would check the /etc/containerd/config.toml file and remove any duplicate entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check both nodes. If any corrections are needed, restart the containerd service.
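For reference, a minimal sketch of those checks and the restart (assuming sudo access on each node; the exact expected file contents come from the lab's local-repo-setup.sh script):
sudo tail -n 20 /etc/containerd/config.toml
sudo cat /etc/containers/registries.conf.d/registry.conf
sudo systemctl restart containerd
sudo systemctl status containerd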
Regards,
-Chris
@chrispokorni hello, I ran those lines but I don't think it did anything. I think I'm having trouble connecting to the Kubernetes API server running on the control plane node. Ever since the reboot I haven't been able to connect to it. I think that may be the problem because when I run kubectl get nodes I get this error:
E0406 23:05:36.310771 6781 memcache.go:238] couldn't get current server API group list: Get "https://172.31.10.72:6443/api?timeout=32s": dial tcp 172.31.10.72:6443: connect: connection refused
The connection to the server 172.31.10.72:6443 was refused - did you specify the right host or port?
How do I reconnect?
Hi @quachout,
First, I would ensure that all recommended settings are in place as presented in the demo videos from the introductory chapter (for AWS EC2 and/or GCP GCE) - VPC, firewall/SG, VM size...
Second, I would check the /etc/containerd/config.toml file and remove any duplicate "[plugins...]" and "endpoint = [...]" entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check both nodes. If any corrections are needed, make sure to restart the containerd service.
Before continuing with the labs, the containerd service needs to be active and running.
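One way to confirm the runtime is healthy on each node (standard systemd commands, nothing lab-specific):
sudo systemctl is-active containerd
sudo systemctl status containerd --no-pager
sudo journalctl -u containerd --since "10 minutes ago" --no-pager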
Regards,
-Chris
@chrispokorni I restarted from the beginning with new nodes, and when I got to the reboot section in 3.2, curl $repo/v2_catalog first gave me this error:
curl: (3) URL using bad/illegal format or missing URL
I then used these commands that you suggested:
[email protected]:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
[email protected]:~$ export repo=10.97.40.62:5000
[email protected]:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
[email protected]:~$ export repo=10.97.40.62:5000
and now get this error when I run curl $repo/v2_catalog:
curl: (28) Failed to connect to 10.97.40.62 port 5000: Connection timed out
I also checked for duplicates in config.toml -- no duplicates there. I also made sure there were only 3 lines in the registry.conf file. This time kubectl get nodes properly returns my control plane and worker. Not sure what else to do. Thanks in advance :)
Hi @quachout,
What are the outputs of the following commands?
kubectl get nodes -o wide
kubectl get pods -A -o wide
Regards,
-Chris
@chrispokorni When I run kubectl get nodes -o wide I get:
NAME               STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-10-110   NotReady   <none>          2d20h   v1.26.1   172.31.10.110   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://Unknown
ip-172-31-14-173   Ready      control-plane   2d20h   v1.26.1   172.31.14.173   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://1.6.20
When I run kubectl get pods -A -o wide I get:
NAMESPACE     NAME                                        READY   STATUS        RESTARTS        AGE     IP               NODE               NOMINATED NODE   READINESS GATES
default       nginx-99f889bcd-7gj7k                       1/1     Terminating   0               2d19h   192.168.23.70    ip-172-31-10-110   <none>           <none>
default       nginx-99f889bcd-zpdsk                       1/1     Running       1 (2m59s ago)   2d19h   192.168.17.205   ip-172-31-14-173   <none>           <none>
default       registry-5b4c5fffb9-5m8f5                   1/1     Terminating   0               2d19h   192.168.23.71    ip-172-31-10-110   <none>           <none>
default       registry-5b4c5fffb9-728gq                   1/1     Running       1 (2m59s ago)   2d19h   192.168.17.206   ip-172-31-14-173   <none>           <none>
kube-system   calico-kube-controllers-57b57c56f-v9x4r     1/1     Running       2 (2m59s ago)   2d20h   192.168.17.202   ip-172-31-14-173   <none>           <none>
kube-system   calico-node-722wl                           0/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   calico-node-brlhl                           1/1     Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
kube-system   coredns-787d4945fb-c5nht                    1/1     Running       2 (2m59s ago)   2d20h   192.168.17.204   ip-172-31-14-173   <none>           <none>
kube-system   coredns-787d4945fb-l746s                    1/1     Running       2 (2m59s ago)   2d20h   192.168.17.203   ip-172-31-14-173   <none>           <none>
kube-system   etcd-ip-172-31-14-173                       1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-apiserver-ip-172-31-14-173             1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-controller-manager-ip-172-31-14-173    1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-proxy-pkwvl                            1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
kube-system   kube-proxy-vkcp2                            1/1     Running       0               2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
kube-system   kube-scheduler-ip-172-31-14-173             1/1     Running       2 (2m59s ago)   2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
Hi @quachout,
The output indicates that the container runtime on the worker node is still not operational, therefore the worker node status is NotReady.
Per my previous comments, we are assuming that all recommended settings are in place as presented in the demo video from the introductory chapter for AWS EC2 - VPC settings, SG rule, VM size, recommended guest OS...
To fix the container runtime, perform the following on the worker node only (a rough command sketch follows this list):
1. Make sure the containerd service is stopped
2. Remove the 2 config files: /etc/containerd/config.toml and /etc/containers/registries.conf.d/registry.conf
3. Reinstall the containerd.io package (check k8sWorker.sh for syntax)
4. Reinitialize the /etc/containerd/config.toml file with default settings and set the systemd cgroup (check k8sWorker.sh for syntax)
5. Edit the /etc/containerd/config.toml file by appending the required entries (check local-repo-setup.sh for syntax)
6. Create the /etc/containers/registries.conf.d/registry.conf file (check local-repo-setup.sh for syntax)
7. Restart (or start) the containerd service (check k8sWorker.sh for syntax)
Before continuing with the labs, the containerd service needs to be active and running on the worker node.
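A rough sketch of what those steps could look like on an Ubuntu worker node; the authoritative syntax is in k8sWorker.sh and local-repo-setup.sh, so treat this as an outline rather than the exact lab commands:
sudo systemctl stop containerd
sudo rm -f /etc/containerd/config.toml /etc/containers/registries.conf.d/registry.conf
sudo apt-get install --reinstall -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# append the registry entries to /etc/containerd/config.toml exactly as local-repo-setup.sh does
# recreate /etc/containers/registries.conf.d/registry.conf exactly as local-repo-setup.sh does
sudo systemctl restart containerd
sudo systemctl status containerd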
Regards,
-Chris
@chrispokorni where are the steps in the lab to "Reinstall the containerd.io package (check k8sWorker.sh for syntax)"? Just want to make sure I'm doing this correctly. Thanks!
Hi @quachout,
Those steps are not spelled out in the labs because they are automated by the scripts I recommended for syntax checking. Since your errors are related only to the containerd runtime, I recommend you extract and run just those commands from the scripts - the ones that install the runtime, initialize it with defaults, and edit/update the configuration files.
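If it helps, one way to locate the relevant lines (assuming the two scripts are in your current directory; adjust the paths to wherever you extracted the course solutions):
grep -n -iE 'containerd|registr' k8sWorker.sh local-repo-setup.sh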
It is a mystery, however, why the runtime install and/or configuration failed on one node (the worker) while the other node seems completely fine, given that the scripts perform the same steps on both systems.
Regards,
-Chris