
LFS258 Lab 3.1 - Something overwriting edits to /etc/hosts file, preventing worker join to cp node

Hello Mentors! Thank you for the great course. Could you please help me with an issue?

I managed to complete all of Lab 3.1 (Exercises 3.1-3.5) successfully using three nodes on DigitalOcean, but I am running into a problem trying to replicate the same lab/procedure on a MacBook Pro (M1 Max/Apple Silicon/aarch64/arm64), using a cp node and one or two workers created with Canonical's Multipass v1.11.1. Multipass lets me launch three Ubuntu 22.04.1 nodes quickly right on the Mac, each with a unique IP, and you can open a shell in Terminal to connect to each one (using 'multipass shell '), as you would expect. I am running macOS Ventura 13.3.1.

Everything works fine and as expected (i.e., containerd, the kubelets, and all kube-system pods are running and ready with no apparent errors) up until just after I try to join the first worker node to the cp. At that point, the usual brief message appears on the worker that it has joined the cluster. Shortly after that, the cp briefly reports that node1 (the worker) has joined, and both the cp and node1 flash "ready," but a moment later the shell connections to the cp and worker freeze up and are lost (i.e., after a few seconds, each shell reverts to the usual Terminal prompt).
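One variable worth ruling out here (an assumption on my part, not something confirmed above): Multipass launches instances with 1 CPU and 1 GB of RAM by default, which is below kubeadm's preflight minimums of 2 CPUs and roughly 2 GB of RAM for a cp node, and resource starvation can look exactly like this (shells freezing right after the join while Multipass still reports the nodes as running). Launching the instances with explicit resources would rule that out:

```shell
# Node names are examples; '--memory' replaced the older '--mem' flag
# around Multipass 1.11 -- check 'multipass launch -h' for your version.
multipass launch 22.04 --name cp --cpus 2 --memory 4G --disk 20G
multipass launch 22.04 --name worker1 --cpus 2 --memory 4G --disk 20G
multipass info cp    # confirm the resources actually allocated
```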

After the connections are lost, Multipass reports the nodes are still running, but I have been unable to recover the connections without stopping both nodes first and then starting them up again, one at a time. During this process, I noticed that after I reestablish each shell connection, something has overwritten my edits to the /etc/hosts file on each node (cp and worker). So it appears the worker nodes cannot find the cp and vice versa. Also, the alias I created on each machine ("alias k='kubectl'") has been removed/forgotten. I noticed that if I reenter the control plane IP in the /etc/hosts file on the worker, it typically rejoins the cluster for a short time before things freeze up again.
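On the alias point specifically: an alias set interactively lives only in that shell session, so it is "forgotten" whenever the connection is lost or the node restarts. Persisting it in ~/.bashrc on each node survives both reconnects and reboots (a minimal sketch):

```shell
# Append the alias once (the grep guard avoids duplicate lines on re-runs):
grep -qxF "alias k='kubectl'" ~/.bashrc || echo "alias k='kubectl'" >> ~/.bashrc
# Load it into the current session as well:
source ~/.bashrc
```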

So it appears some part of the system might be restarting (maybe the kube-apiserver or kubelet, but probably not the nodes themselves), causing key edits to be lost and, I suspect, name-resolution failures. Is there a way to fix this?

I tried commenting out the line '- update_etc_hosts' in the '/etc/cloud/cloud.cfg' file, but that did not fix the issue (see this thread). This other thread on Stack Overflow suggested some reasons why edits to /etc/hosts may not persist (perhaps they are being overwritten by systemd-resolved.service), but I can't figure it out, so I'm asking the experts!
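Since commenting out the module alone did not help, the next thing I would try is cloud-init's documented `manage_etc_hosts` setting, which tells it to leave /etc/hosts alone on every boot (the IP and hostname below are examples, not values from this setup -- use the cp's actual address from 'multipass list'):

```shell
# On each node: stop cloud-init from rewriting /etc/hosts at boot.
# If the key already exists in cloud.cfg, flip it; otherwise append it.
if grep -q '^manage_etc_hosts:' /etc/cloud/cloud.cfg; then
  sudo sed -i 's/^manage_etc_hosts:.*/manage_etc_hosts: false/' /etc/cloud/cloud.cfg
else
  echo 'manage_etc_hosts: false' | sudo tee -a /etc/cloud/cloud.cfg
fi
# Then re-add the cluster entry (example IP and alias):
echo '192.168.64.2 k8scp' | sudo tee -a /etc/hosts
```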

Additional context: I've taken the usual steps to set up the cluster--swap has been disabled ('swapoff -a'), all firewall tables have been flushed ('iptables -F'), SELinux has been set to permissive, etc. So to reiterate: basically the same approach works fine to create a 3-node kubeadm cluster when I use my Mac with three nodes on DigitalOcean, but not when I try to create a similar cluster using Multipass on the Mac M1.
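One note on those steps: 'swapoff -a' does not survive a reboot, so if the nodes are being stopped and restarted, swap could silently come back. Commenting out any swap entries in /etc/fstab makes the setting persistent (Ubuntu cloud images usually ship without swap entries, so this may be a no-op):

```shell
# Comment out uncommented fstab lines that contain the word 'swap';
# the ^[^#] guard keeps the command idempotent across re-runs.
sudo sed -i '/^[^#].*\bswap\b/s/^/#/' /etc/fstab
swapon --show   # empty output means swap is off
```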

What can I do to fix this and make the kubeadm cluster function normally using Multipass on Apple Silicon? Thanks!

Comments

  • Having reflected on this issue a little more--and having experienced it again while using Multipass--it could be that stopping a node with Multipass and restarting it causes /etc/hosts to revert to its original settings, and any aliases you have set to be forgotten.

    But about the original issue (i.e., that a worker node can be connected to a cp node using the LFS258 Lab 3.1 procedure on Multipass, but then the system breaks down): I would still like to know if there is a way to resolve it (to run a multi-node kubeadm cluster on Multipass on Apple Silicon) if anyone knows. Thanks!

  • chrispokorni

    Hi @andrew.nichols,

    The lab material has not yet migrated to Ubuntu 22.04 LTS. It is still on Ubuntu 20.04 LTS to mirror the CKA exam environment, per:

    https://docs.linuxfoundation.org/tc-docs/certification/lf-handbook2/exam-preparation-checklist#platform-selection

    The Kubernetes node disconnects may be due to container image incompatibilities with the ARM architecture. Since most Kubernetes control plane agents run as containers, the CNI network plugin, DNS servers, and any other plugins all need to be compatible with the guest system's architecture.
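    One way to check this (a sketch; the socket path is the common containerd default and the image tag is only an example, neither is from this thread):

    ```shell
    # Confirm the guest architecture first:
    uname -m   # expect 'aarch64' in a Multipass VM on Apple Silicon
    # List the images containerd pulled for the cluster:
    sudo crictl -r unix:///run/containerd/containerd.sock images
    # Check whether a given image publishes an arm64 variant
    # (requires a machine with Docker; tag is an example):
    docker manifest inspect registry.k8s.io/kube-apiserver:v1.26.3 | grep architecture
    ```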

    Regards,
    -Chris
