Welcome to the Linux Foundation Forum!

3.2 error when testing that the repo works after the reboot

quachout Posts: 15
edited April 2023 in LFD259 Class Forum

At steps 10 and 11 in section 3.2, after I reboot and reconnect over SSH, I cannot repeat step 11, which says to run curl $repo/v2/_catalog. I get this error: curl: (3) URL using bad/illegal format or missing URL

Also, when I run kubectl get nodes, I get the error: The connection to the server XXXXXXX was refused - did you specify the right host or port?

Answers

  • quachout Posts: 15
    edited April 2023

    Not sure how to solve this.

  • chrispokorni Posts: 2,190

    Hi @quachout,

    The repo variable may not have been persisted properly before the reboot step, which would cause your curl commands to fail.

    Try setting it again, on both nodes, before proceeding:

    student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
    student@cp:~$ export repo=10.97.40.62:5000

    student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
    student@worker:~$ export repo=10.97.40.62:5000
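    To confirm the variable is now set and persisted, and that the registry responds, a quick check on either node (using the example registry address above; your address may differ):

    ```shell
    # Confirm the export line was persisted for future logins
    grep 'export repo=' $HOME/.bashrc

    # Confirm the variable is set in the current shell
    echo "$repo"

    # Query the registry catalog again; a working registry returns JSON,
    # e.g. {"repositories":[]}
    curl $repo/v2/_catalog
    ```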

    In addition, I would check the /etc/containerd/config.toml file and remove any duplicate entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check both nodes. If any corrections are needed, restart the containerd service.
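    For reference, those checks can be run as follows (file paths are the ones from the lab; sudo assumed):

    ```shell
    # Look for duplicated entries at the end of the containerd config
    sudo tail -n 20 /etc/containerd/config.toml

    # The registry.conf file should exist and contain only 3 lines
    sudo wc -l /etc/containers/registries.conf.d/registry.conf
    sudo cat /etc/containers/registries.conf.d/registry.conf

    # After any corrections, restart the runtime and confirm it is active
    sudo systemctl restart containerd
    sudo systemctl is-active containerd
    ```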

    Regards,
    -Chris

  • quachout Posts: 15
    edited April 2023

    @chrispokorni hello, I ran those lines but I don't think they did anything. I think I'm having trouble connecting to the Kubernetes API server running on the control plane node; ever since the reboot I haven't been able to connect. When I run kubectl get nodes I get this error:

    E0406 23:05:36.310771    6781 memcache.go:238] couldn't get current server API group list: Get "https://172.31.10.72:6443/api?timeout=32s": dial tcp 172.31.10.72:6443: connect: connection refused
    The connection to the server 172.31.10.72:6443 was refused - did you specify the right host or port?

    How do I reconnect?

  • chrispokorni Posts: 2,190

    Hi @quachout,

    First, I would ensure that all recommended settings are in place as presented in the demo videos from the introductory chapter (for AWS EC2 and/or GCP GCE) - VPC, firewall/SG, VM size...

    Second, I would check the /etc/containerd/config.toml file and remove any duplicate "[plugins...]" and "endpoint = [...]" entries at its tail, and make sure the file /etc/containers/registries.conf.d/registry.conf exists and includes only 3 lines. Check both nodes. If any corrections are needed, make sure to restart the containerd service.

    Before continuing with the labs, the containerd service needs to be active and running.
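    A quick way to verify this after a reboot (assuming a systemd-based node and the default API server port 6443):

    ```shell
    # Both services must report "active" before kubectl will work
    sudo systemctl is-active containerd
    sudo systemctl is-active kubelet

    # On the control plane, the API server should be listening on 6443;
    # "connection refused" from kubectl usually means it is not up yet
    ss -tln | grep 6443 || echo "API server not listening yet"
    ```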

    Regards,
    -Chris

  • quachout Posts: 15

    @chrispokorni I restarted from the beginning with new nodes, and when I got to the reboot section in 3.2, curl $repo/v2/_catalog first gave me this error: curl: (3) URL using bad/illegal format or missing URL

    I then used these commands that you suggested
    student@cp:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
    student@cp:~$ export repo=10.97.40.62:5000
    student@worker:~$ echo "export repo=10.97.40.62:5000" >> $HOME/.bashrc
    student@worker:~$ export repo=10.97.40.62:5000
    and now get this error when I run curl $repo/v2/_catalog:
    curl: (28) Failed to connect to 10.97.40.62 port 5000: Connection timed out

    I also checked for duplicates in config.toml -- no duplicates there. I also made sure there were only 3 lines in the registry.conf file.

    This time kubectl get nodes properly returns my control plane and worker.

    Not sure what else to do. Thanks in advance:)

  • chrispokorni Posts: 2,190

    Hi @quachout,

    What are the outputs of the following commands?

    kubectl get nodes -o wide

    kubectl get pods -A -o wide

    Regards,
    -Chris

  • quachout Posts: 15

    @chrispokorni When I run kubectl get nodes -o wide I get:

    NAME               STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
    ip-172-31-10-110   NotReady   <none>          2d20h   v1.26.1   172.31.10.110   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://Unknown
    ip-172-31-14-173   Ready      control-plane   2d20h   v1.26.1   172.31.14.173   <none>        Ubuntu 20.04.6 LTS   5.15.0-1033-aws   containerd://1.6.20

    When I run kubectl get pods -A -o wide I get:

    NAMESPACE     NAME                                       READY   STATUS        RESTARTS      AGE     IP               NODE               NOMINATED NODE   READINESS GATES
    default       nginx-99f889bcd-7gj7k                      1/1     Terminating   0             2d19h   192.168.23.70    ip-172-31-10-110   <none>           <none>
    default       nginx-99f889bcd-zpdsk                      1/1     Running       1 (2m59s ago) 2d19h   192.168.17.205   ip-172-31-14-173   <none>           <none>
    default       registry-5b4c5fffb9-5m8f5                  1/1     Terminating   0             2d19h   192.168.23.71    ip-172-31-10-110   <none>           <none>
    default       registry-5b4c5fffb9-728gq                  1/1     Running       1 (2m59s ago) 2d19h   192.168.17.206   ip-172-31-14-173   <none>           <none>
    kube-system   calico-kube-controllers-57b57c56f-v9x4r    1/1     Running       2 (2m59s ago) 2d20h   192.168.17.202   ip-172-31-14-173   <none>           <none>
    kube-system   calico-node-722wl                          0/1     Running       2 (2m59s ago) 2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
    kube-system   calico-node-brlhl                          1/1     Running       0             2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
    kube-system   coredns-787d4945fb-c5nht                   1/1     Running       2 (2m59s ago) 2d20h   192.168.17.204   ip-172-31-14-173   <none>           <none>
    kube-system   coredns-787d4945fb-l746s                   1/1     Running       2 (2m59s ago) 2d20h   192.168.17.203   ip-172-31-14-173   <none>           <none>
    kube-system   etcd-ip-172-31-14-173                      1/1     Running       2 (2m59s ago) 2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
    kube-system   kube-apiserver-ip-172-31-14-173            1/1     Running       2 (2m59s ago) 2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
    kube-system   kube-controller-manager-ip-172-31-14-173   1/1     Running       2 (2m59s ago) 2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
    kube-system   kube-proxy-pkwvl                           1/1     Running       2 (2m59s ago) 2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>
    kube-system   kube-proxy-vkcp2                           1/1     Running       0             2d20h   172.31.10.110    ip-172-31-10-110   <none>           <none>
    kube-system   kube-scheduler-ip-172-31-14-173            1/1     Running       2 (2m59s ago) 2d20h   172.31.14.173    ip-172-31-14-173   <none>           <none>

  • chrispokorni Posts: 2,190

    Hi @quachout,

    The output indicates that the container runtime on the worker node is still not operational, which is why the worker node status is NotReady.

    Per my previous comments, we are assuming that all recommended settings are in place as presented in the demo video from the introductory chapter for AWS EC2 - VPC settings, SG rule, VM size, recommended guest OS...

    To fix the container runtime, perform the following on the worker node only:
    1. Make sure the containerd service is stopped.
    2. Remove the 2 config files: /etc/containerd/config.toml and /etc/containers/registries.conf.d/registry.conf.
    3. Reinstall the containerd.io package (check k8sWorker.sh for syntax).
    4. Reinitialize the /etc/containerd/config.toml file with default settings and set the systemd cgroup (check k8sWorker.sh for syntax).
    5. Edit the /etc/containerd/config.toml file by appending the required entries (check local-repo-setup.sh for syntax).
    6. Create the /etc/containers/registries.conf.d/registry.conf file (check local-repo-setup.sh for syntax).
    7. Restart (or start) the containerd service (check k8sWorker.sh for syntax).
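    A rough sketch of those steps as shell commands, assuming an Ubuntu worker with apt and the default config locations; the exact registry entries for steps 5 and 6 must come from your copy of local-repo-setup.sh, so they are only noted as comments here:

    ```shell
    # 1. Stop the runtime
    sudo systemctl stop containerd

    # 2. Remove the two config files
    sudo rm -f /etc/containerd/config.toml \
               /etc/containers/registries.conf.d/registry.conf

    # 3. Reinstall the containerd.io package (syntax per k8sWorker.sh)
    sudo apt-get install --reinstall -y containerd.io

    # 4. Regenerate the default config and enable the systemd cgroup driver
    sudo containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

    # 5-6. Re-append the local registry entries to config.toml and recreate
    #      registry.conf exactly as local-repo-setup.sh does

    # 7. Restart and verify
    sudo systemctl restart containerd
    sudo systemctl is-active containerd
    ```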

    Before continuing with the labs, the containerd service needs to be active and running on the worker node.

    Regards,
    -Chris

  • quachout Posts: 15

    @chrispokorni Where are the steps in the lab to "Reinstall the containerd.io package (check k8sWorker.sh for syntax)"? Just want to make sure I'm doing this correctly. Thanks!

  • chrispokorni Posts: 2,190

    Hi @quachout,

    Those steps are not specifically stated in the labs because they are automated through the scripts I recommended for syntax checking. Since your errors relate only to the containerd runtime, I recommend you extract and run just those commands from the scripts that install the runtime, initialize it with defaults, and edit/update its configuration files.

    It is a mystery, however, that the runtime install and/or config failed on one node (the worker) while the other node seems completely fine, since the scripts perform the same steps on both systems.

    Regards,
    -Chris
