Welcome to the Linux Foundation Forum!

Cluster on a Raspberry Pi 5

hi all,
luigi here, currently based in the south of Italy.
i'm completing the course on a Raspberry Pi 5, running all the labs in a VM started with QEMU.

well, inside the VM there was no way to start the cluster with k8s 1.25 and the suggested version of Weave.

for all the other RPi enthusiasts, here are the commands I used to bring up the cluster; they are currently working for me:

Init the cluster

sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d /var/lib/cni /var/lib/kubelet /var/lib/etcd
sudo systemctl restart containerd
sudo systemctl restart kubelet

IP=$(ip -4 route get 1.1.1.1 | awk '{print $7; exit}')
sudo kubeadm init \
  --cri-socket unix:///run/containerd/containerd.sock \
  --apiserver-advertise-address "$IP"
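A note on the IP detection line above: `awk '{print $7; exit}'` grabs the seventh field of the route lookup, which is the address after `src` when the route has a `via` hop (as on a NATed QEMU VM). On a directly connected route there is no `via`, so the field positions shift; a position-independent lookup of the field after `src` is safer. A quick sanity check with a made-up sample line:

```shell
# Sample line as returned by `ip -4 route get 1.1.1.1` on a NATed QEMU VM
# (addresses are made up for illustration):
sample='1.1.1.1 via 10.0.2.2 dev enp0s1 src 10.0.2.15 uid 1000'

# $7 happens to be the address after "src" when a "via" hop is present:
echo "$sample" | awk '{print $7; exit}'

# Position-independent variant: print whatever field follows "src".
echo "$sample" | awk '{for (i=1; i<NF; i++) if ($i == "src") {print $(i+1); exit}}'
```

Both print `10.0.2.15` for this sample, but only the second form survives a route line without `via`.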

About Weave (and CoreDNS, which was not starting)

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

Patch the DaemonSet to use arm64 images

kubectl -n kube-system set image ds/weave-net \
  weave=weaveworks/weave-kube-arm64:2.8.1 \
  weave-npc=weaveworks/weave-npc-arm64:2.8.1

Patch the initContainer image (this is what copies CNI binaries)

kubectl -n kube-system patch ds weave-net --type='json' -p='[
  {"op":"replace","path":"/spec/template/spec/initContainers/0/image","value":"weaveworks/weave-kube-arm64:2.8.1"}
]'
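Before deleting anything it's worth double-checking that all three images really switched to the arm64 variants; a jsonpath query like this (a sketch, the exact output formatting depends on your kubectl version) should print tags ending in `-arm64:2.8.1`:

```shell
# Print the images of the init container and the two main containers
# of the weave-net DaemonSet; all three should now be the arm64 builds.
kubectl -n kube-system get ds weave-net \
  -o jsonpath='{.spec.template.spec.initContainers[*].image}{"\n"}{.spec.template.spec.containers[*].image}{"\n"}'
```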

Remove the wrong host plugin + restart Weave so it re-copies the correct one

sudo rm -f /opt/cni/bin/weave-plugin-latest /opt/cni/bin/weave-net
kubectl -n kube-system delete pod -l name=weave-net
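Before moving on to CoreDNS, it helps to wait until the recreated Weave pod is actually Ready rather than just scheduled; something like:

```shell
# Block until the weave-net DaemonSet reports all its pods ready,
# giving up after two minutes instead of hanging forever.
kubectl -n kube-system rollout status ds/weave-net --timeout=120s
```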

Restart CoreDNS pods

kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pods -o wide
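Once CoreDNS is back up, a quick smoke test from a throwaway pod confirms that pod networking and cluster DNS really work end to end (the pod name and image here are just an example):

```shell
# Resolve the kubernetes service from inside the cluster, then clean up.
kubectl run dns-test --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
```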

Checking the status of the cluster and pods

rescue@ubuntu-vm:~$ kubectl -n kube-system get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP          NODE        NOMINATED NODE   READINESS GATES
coredns-5dd5756b68-9j9kq            1/1     Running   0          2m14s   10.32.0.5   ubuntu-vm   <none>           <none>
coredns-5dd5756b68-b8vbb            1/1     Running   0          2m14s   10.32.0.4   ubuntu-vm   <none>           <none>
etcd-ubuntu-vm                      1/1     Running   37         68m     10.0.2.15   ubuntu-vm   <none>           <none>
kube-apiserver-ubuntu-vm            1/1     Running   33         68m     10.0.2.15   ubuntu-vm   <none>           <none>
kube-controller-manager-ubuntu-vm   1/1     Running   38         68m     10.0.2.15   ubuntu-vm   <none>           <none>
kube-proxy-8bhs5                    1/1     Running   0          68m     10.0.2.15   ubuntu-vm   <none>           <none>
kube-scheduler-ubuntu-vm            1/1     Running   38         68m     10.0.2.15   ubuntu-vm   <none>           <none>
weave-net-gx8wx                     2/2     Running   0          2m25s   10.0.2.15   ubuntu-vm   <none>           <none>
rescue@ubuntu-vm:~$ kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
ubuntu-vm   Ready    control-plane   69m   v1.28.15
rescue@ubuntu-vm:~$ kubectl cluster-info
Kubernetes control plane is running at https://10.0.2.15:6443
CoreDNS is running at https://10.0.2.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Cheers...
