
Lab 3.2: unable to join the cluster from worker node

Hi, I am on step 7 of Lab 3.2. I can't join the cluster from the worker node. Help please!

root@worker:~# kubeadm join --token ur2pat.3anbura0oc0gnuf6 k8smaster:6443 --discovery-token-ca-cert-hash sha256:c412a01030a81aa11acd083def04161d010334a7d6d432e7b4e1f26bde3486d5

W1022 13:29:54.004779 1788 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: configmaps "cluster-info" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-public"
To see the stack trace of this error execute with --v=5 or higher
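For readers hitting the same error: the `system:anonymous ... "cluster-info" is forbidden` message usually means the API server did not accept the bootstrap token the worker presented, most often because the token has expired (kubeadm tokens are valid for 24 hours by default) or was mistyped. One way to rule that out is to generate a fresh token and a complete join command on the control-plane node (a sketch; `k8smaster` is the control-plane hostname from this lab):

```shell
# On the control-plane (master) node: create a new bootstrap token and
# print the full join command, including the current CA cert hash.
kubeadm token create --print-join-command

# Copy the printed command and run it on the worker node, appending
# --v=5 for more detail if it still fails.
```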

Comments

  • chrispokorni
    Posts: 2,155

    Hi @sergeizak,

    What are you using for VMs - local VMs, or cloud VM instances?

    What are your VM specs and the OS?

    Did you install docker or CRI-O?

    What was the age of the token at the time the join command was issued?

    Regards,
    -Chris

  • Hi Chris,
    I am using Google VMs

    Specs are n1-standard-2 (2 vCPUs, 7.5 GB memory), Ubuntu 18.04

    Docker

    Freshly generated token

    It takes about 5 minutes for the error to show up

    Thanks
    Sergei
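The token's age matters because kubeadm bootstrap tokens expire after 24 hours by default, and expired tokens are cleaned up from the cluster. You can check what tokens exist and how long they have left on the control-plane node (a sketch, assuming a kubeadm-built cluster):

```shell
# On the control-plane node: list bootstrap tokens with their TTLs
# and expiration timestamps.
kubeadm token list

# If the token the worker is using no longer appears in this list,
# the join will be rejected as "system:anonymous".
```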

  • I ended up recreating the master and it's working now

  • I have the same issue, can someone please help with the possible solution?

  • Nevermind, it got fixed!

  • shasha
    Posts: 11

    I have the same issue. Why don't those who solve a problem write up the solution for others????

  • Requesting those here who have fixed this to please share the solution. I already recreated the master and redid the steps, but it does not resolve the problem for me. Thanks in advance.

  • xavierzip
    Posts: 2
    edited February 2022

    It happened to me when I initialized my master with Docker using the cgroupfs driver and kubelet failed to start. I fixed the Docker configuration by updating /etc/docker/daemon.json to use systemd first, then recreated the master with kubeadm reset followed by kubeadm init. After that the issue was fixed.

    To fix the above-mentioned issue alone, you can try to update the access policy via

    # kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
    

    but after I did that I ran into another issue: "cluster-info" was missing. To fix that one, I had to re-create my master node.
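  • The cgroup driver fix described above is the one the Kubernetes docs recommend; a minimal /etc/docker/daemon.json that switches Docker to the systemd driver looks like this (a sketch; back up any existing daemon.json first, and run as root on each node):

    ```shell
    # Point Docker at the systemd cgroup driver.
    cat <<'EOF' > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF

    # Restart Docker and kubelet so the new driver takes effect.
    systemctl restart docker
    systemctl restart kubelet

    # Verify the active driver:
    docker info --format '{{.CgroupDriver}}'   # should print "systemd"
    ```

    Doing this before kubeadm init avoids the IsDockerSystemdCheck warning shown in the original post.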
