
Unable to join worker node to the master node using kubeadm join

Hello,

After setting up my master node, I moved on to my worker node and received the following error:

kubeadm join --token czbtxb.0313qpgr4oztklb2 k8sscp:6443 --discovery-token-ca-cert-hash sha256:99e1b325fe4cf3745b5b6986c80294190632a2db9c713d483fa37772ccc36cba
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://k8sscp:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp: lookup k8sscp on 127.0.0.53:53: server misbehaving
To see the stack trace of this error execute with --v=5 or higher
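
For what it's worth, the lookup failure can be reproduced outside of kubeadm; assuming the name is supposed to resolve locally (via /etc/hosts or DNS), a quick check is:

getent hosts k8sscp
# prints the resolved address on success; prints nothing and exits non-zero if the name cannot be resolved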

Any suggestions on how to resolve this?

Comments

  • chrispokorni Posts: 2,296

    Hi @rdancy,

    What are the current contents of the /etc/hosts files on the control plane and worker nodes, respectively?

    Regards,
    -Chris

  • rdancy Posts: 14

    Hi Chris,

    Here's the hosts file from the control plane

    cat /etc/hosts
    127.0.0.1 localhost
    10.128.0.3 k8scp

    # The following lines are desirable for IPv6 capable hosts

    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts
    169.254.169.254 metadata.google.internal metadata

    And here's the one from the worker node

    cat /etc/hosts
    10.128.0.3 k8scp
    127.0.0.1 localhost

    # The following lines are desirable for IPv6 capable hosts

    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts
    169.254.169.254 metadata.google.internal metadata

  • chrispokorni Posts: 2,296

    Hi @rdancy,

    Between the hosts entries and the join command, can you spot the discrepancy?
    Fix both hosts files and retry the join.

    I am also assuming that 10.128.0.3 is the actual private IP of your control plane node/VM, not just a pure copy/paste from the lab guide.
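
    A quick way to compare, assuming the alias is meant to be identical on both sides, is to print the relevant hosts line on each node and check it character by character against the name used in the join command:

    grep k8scp /etc/hosts
    # run on each node; the alias printed here must match the host name in kubeadm join exactly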

    Regards,
    -Chris

  • rdancy Posts: 14

    Yes, 10.128.0.3 is the actual IP of the control plane.

  • rdancy Posts: 14

    I fixed the k8scp typo and reran kubeadm join. I also added the CP IP to /etc/hosts on the worker node as well as on the CP node. The join still failed:

    root@instance-20240419-224432:~# kubeadm join --token czbtxb.0313qpgr4oztklb2 k8scp:6443 --discovery-token-ca-cert-hash sha256:99e1b325fe4cf3745b5b6986c80294190632a2db9c713d483fa37772ccc36cba --v=5
    I0423 15:02:44.195803 67428 join.go:412] [preflight] found NodeName empty; using OS hostname as NodeName
    I0423 15:02:44.195921 67428 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
    [preflight] Running pre-flight checks
    I0423 15:02:44.195963 67428 preflight.go:93] [preflight] Running general checks
    I0423 15:02:44.195989 67428 checks.go:280] validating the existence of file /etc/kubernetes/kubelet.conf
    I0423 15:02:44.195996 67428 checks.go:280] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
    I0423 15:02:44.196008 67428 checks.go:104] validating the container runtime
    I0423 15:02:44.211741 67428 checks.go:639] validating whether swap is enabled or not
    I0423 15:02:44.211791 67428 checks.go:370] validating the presence of executable crictl

  • chrispokorni Posts: 2,296

    Hi @rdancy,

    All of the messages displayed above are informational (I) messages; none of them shows a failure, error, or warning. What error messages or warnings are displayed? Those are the ones that help us determine how to move forward.

    On the CP node, as root, please run the following command. Copy its output and paste it in your next comment/response, without editing or making any corrections to it:

    kubeadm token create --print-join-command
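
    If you first want to check whether the original token is still valid, existing tokens and their expiry times can be listed on the CP node (tokens created by kubeadm init expire after 24 hours by default):

    kubeadm token list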

    Regards,
    -Chris

  • rdancy Posts: 14

    I ran the command you suggested and here's the output

    kubeadm token create --print-join-command

    kubeadm join 10.128.0.3:6443 --token hg9j5c.dxflofz4gpyt7xxh --discovery-token-ca-cert-hash sha256:99e1b325fe4cf3745b5b6986c80294190632a2db9c713d483fa37772ccc36cba
    root@instance-20240418-192801:~#

    I didn't see a specific error from the worker node, just repeated output like the lines below:

    I0423 15:02:44.268337 67428 join.go:529] [preflight] Discovering cluster-info
    I0423 15:02:44.268351 67428 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "k8scp:6443"
    I0423 15:02:44.368446 67428 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "czbtxb", will try again
    I0423 15:02:49.594409 67428 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "czbtxb", will try again
    I0423 15:02:55.201878 67428 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "czbtxb", will try again
    I0423 15:03:00.506067 67428 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "czbtxb", will try again
    I0423 15:03:05.731090 67428 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "czbtxb", will try again

  • chrispokorni Posts: 2,296
    edited April 23

    Hi @rdancy,

    This part of the output is much more meaningful. It seems the control plane no longer recognizes the czbtxb token ID because it expired in the meantime.
    Now that you have generated a new token, use the new command to attempt the join. On the worker node, as root, run the following two commands:

    kubeadm reset
    # confirm with "y" or "yes" when prompted

    kubeadm join 10.128.0.3:6443 --token hg9j5c.dxflofz4gpyt7xxh --discovery-token-ca-cert-hash sha256:99e1b325fe4cf3745b5b6986c80294190632a2db9c713d483fa37772ccc36cba

    If you see messages about the token ID again, create a new token with the command I provided earlier.
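
    The JWS messages can also be cross-checked on the CP node: each valid bootstrap token adds a jws-kubeconfig-<token-id> entry to the cluster-info ConfigMap that the worker polls during discovery:

    kubectl -n kube-public get configmap cluster-info -o yaml
    # look for a data key named jws-kubeconfig-<your-token-id>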

    The k8scp alias is not needed in LFD259 because the cluster is not initialized on the alias (we are not using the kubeadm-config.yaml file that was used in another course).
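
    If you did want to keep using an alias, a minimal way to make it resolvable, assuming 10.128.0.3 is your CP's private IP, would be an identical hosts entry on both nodes:

    echo "10.128.0.3 k8scp" >> /etc/hosts
    # run as root on each node that needs the alias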

    Regards,
    -Chris

  • rdancy Posts: 14

    Looks like it's working now

    On the worker node

    root@instance-20240419-224432:~# kubeadm join 10.128.0.3:6443 --token hg9j5c.dxflofz4gpyt7xxh --discovery-token-ca-cert-hash sha256:99e1b325fe4cf3745b5b6986c80294190632a2db9c713d483fa37772ccc36cba
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:

    • Certificate signing request was sent to apiserver and a response was received.
    • The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    root@instance-20240419-224432:~#

    I verified that the node appears on the control plane

    kubectl get nodes
    NAME                       STATUS   ROLES           AGE     VERSION
    instance-20240418-192801   Ready    control-plane   3d23h   v1.28.9
    instance-20240419-224432   Ready    <none>          51s     v1.28.1
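
    The <none> under ROLES for the worker is normal right after a join. If you want it to display a role, an optional label can be added from the CP node, for example:

    kubectl label node instance-20240419-224432 node-role.kubernetes.io/worker=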

    Thanks for your help, Chris! You can close this case.
