
Unable to connect to the server: x509 (Lab 2.2)

Hi everyone,

Yesterday I reset kubeadm because I could not join the worker node to the master. The join failed with this error:
"a Node with name and status "Ready" already exists in the cluster" (at the moment I don't have a screenshot of this).

Anyway, after resetting it, kubectl throws me this error:

What is the problem?
This is the first time I have seen this error.

Thank you,
Regards

Comments

  • serewicz Posts: 593

    As your font is in red, and the image grainy, I cannot make out any of the messages.

    Did you reset every node? Be aware that kubeadm reset is not widely used and can have some hiccups. Please try rebuilding a fresh cluster and see if the problem persists.
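    Rebuilding from scratch means tearing down every node first. A minimal sketch, assuming the lab's kubeadm-based setup (run on each node; the iptables flush is optional cleanup, since kubeadm reset does not remove those rules):

```shell
# Run on EVERY node (master and worker) before rebuilding:
sudo kubeadm reset -f          # tears down /etc/kubernetes and cluster state
sudo rm -rf $HOME/.kube        # drop stale kubectl credentials
# kubeadm reset leaves iptables rules from kube-proxy/the CNI in place:
sudo iptables -F
sudo iptables -t nat -F
```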

    Regards,

  • chrispokorni Posts: 509

    Hi @MariangelaPetraglia,

    If you reset your master node ckad-1, keep in mind that all the cluster credentials are reset in the process as well. As a result, if you do not update your kubectl credentials, you will not be able to interact with the "new" cluster. It seems you ran k8sMaster.sh again, and there is a chance that it did not update the /home/mary/.kube/config file.

    In order to fix your cluster access, you need to run the following two commands:

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    (at the prompt, confirm the overwrite action)

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    These commands are in the k8sMaster.sh script as well.
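    Once the file is copied and owned by your user, a quick sanity check (a sketch; the exact output depends on your cluster state):

```shell
# Confirm kubectl now points at the rebuilt cluster:
kubectl config view --minify   # shows the API server address and current context
kubectl get nodes              # master should appear (NotReady until the CNI is deployed)
```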

    Regards,
    -Chris

  • Hi @chrispokorni , @serewicz

    Now I tried again:

    • on the master node I reset kubeadm and executed the k8sMaster.sh script; it was successful.
      The screenshot below is the master's final output.

    • on the worker node I reset kubeadm, executed the k8sSecond.sh script, and tried to join the master node.
      The join command gives me this error (as mentioned previously), shown in this screenshot.

  • chrispokorni Posts: 509
    edited June 22

    Hi @MariangelaPetraglia,

    It seems you are not following the lab exercise instructions closely. Your output clearly shows the reason for your error: you are running the kubeadm join command on the master/primary node. The kubeadm join command is expected to be executed on the worker/secondary node.

    Also, if the k8sMaster.sh script ran with its defaults, then it configured the Pod network to 192.168.0.0/16. From your output, your master node IP address is 192.168.1.107, which overlaps the Pod IP network. Perhaps you missed these configuration tips from an earlier discussion:

    With VirtualBox, the promiscuous mode - allow all may have to be enabled for bridged networking. Also, I see you are using the default DHCP subnet of VirtualBox, which overlaps with the Pod subnet managed by Calico in your Kubernetes cluster. This will cause DNS and routing issues sooner or later, so I'd recommend changing your VirtualBox VM subnet to prevent overlapping with the 192.168.0.0/16 Pod subnet.
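    A quick way to spot the overlap is to compare the node's IP address with the Pod CIDR the control plane was initialized with. A sketch; 10.244.0.0/16 below is just one common non-overlapping choice, not the lab default:

```shell
# Node IP — anything inside 192.168.0.0/16 collides with Calico's default Pod network:
ip -4 addr show
# Pod CIDR recorded in the control plane configuration:
kubectl cluster-info dump | grep -m 1 cluster-cidr
# One fix is to initialize with a non-overlapping Pod network, e.g.:
#   sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# and set Calico's CALICO_IPV4POOL_CIDR to the same range; alternatively,
# move the VirtualBox VMs to a different host subnet.
```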

    Regards,
    -Chris

  • Hi,
    I'm so sorry :s
    I had that error because when I cloned the VM I didn't change the host names; that's why it looked as if I was running the command on the master.
    Now the join command executes successfully and the cluster is created with two nodes.
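    For anyone hitting the same issue with cloned VMs: each node needs a unique hostname before kubeadm join, since Kubernetes uses the hostname as the default node name. A sketch, assuming Ubuntu with systemd; the name worker-1 is hypothetical:

```shell
# On the cloned worker, before joining:
sudo hostnamectl set-hostname worker-1
# Make the new name resolvable locally:
echo "127.0.1.1 worker-1" | sudo tee -a /etc/hosts
# A duplicate hostname is what produces
# "a Node with name ... already exists in the cluster" on join.
```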

    Although the procedure succeeds, on the worker node I get the same error as in this topic, i.e. "Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")", when I run the kubectl get nodes command.

    How do I solve this problem on the worker nodes?

    Thank you so much

  • serewiczserewicz Posts: 593

    Hello,

    Typically you would not run kubectl commands on the worker nodes. In any case, the ~/.kube/config file determines both where to send the API calls and which keys and certificates to use. I would guess that the file on your worker no longer matches the file on the master, especially since you cloned everything but have not refreshed the file since the clone.
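    One way to check for this drift is to copy the master's kubeconfig to the worker and diff the two. A sketch; the host name master-node and remote user mary are hypothetical:

```shell
# On the worker: fetch the master's current kubeconfig and compare it to the local one.
scp mary@master-node:.kube/config /tmp/master-kubeconfig
diff /tmp/master-kubeconfig $HOME/.kube/config || echo "configs differ"
# If they differ, replace the stale worker copy:
cp /tmp/master-kubeconfig $HOME/.kube/config
```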

    Regards,

  • Hello,

    I have now resolved the problem; the ~/.kube/config files were different.
    Thank you so much for your patience :) .

    Regards

  • serewicz Posts: 593

    Great! :smile:
