Welcome to the Linux Foundation Forum!

kubectl get node error

Hello,

I'm trying to attach my node to the master and get the following error:

Kubectl on host returns error: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }

It looks like it's having an issue with .kube/config

.kube/config is fine on master.

Any ideas?

Thanks!

Reg

Comments

  • serewicz
    serewicz Posts: 1,000

    Hello,

    Well, let's see if we can figure it out. To start, please paste the command and the output where you received the error.

When you say the config file is fine on the master, do you mean that kubectl runs there without issues? The lab does not have you copy the file over to the worker node. If you did copy it, did you put it in the correct place with the correct permissions, without editing the file in any way?

    What does the output of kubectl get node show?
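If it helps, one quick way to check whether the kubeconfig itself parses is to ask kubectl to read and print it (this sketch assumes the default file location shown in your error):

```shell
# Ask kubectl to parse and print the kubeconfig; if the file is
# malformed, this fails with the same "couldn't get version/kind" error.
kubectl config view --kubeconfig "$HOME/.kube/config"

# Sanity check on the first line: a valid kubeconfig is YAML and starts
# with "apiVersion: v1" (despite the "json parse error" wording).
head -n 1 "$HOME/.kube/config"
```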

    Regards,

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @rdancy,

    Once the worker node successfully joined the master/cluster, you should see that confirmation message at the end of the output produced by the join phase.
    Then, the kubectl get node command is expected to be run on (or against) the master node.

    Regards,
    -Chris

  • rdancy
    rdancy Posts: 8

    Here's the command I ran:
    kubectl get nodes
    error: error loading config file "/home/reg_dancy01/.kube/config": couldn't get version/kind; json parse error:
    json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\"";
    Kind string "json:\"kind,omitempty\"" }

Yes, it runs without an issue on the master.

I was following the lab, the part where it says to grow your cluster.

  • chrispokorni
    chrispokorni Posts: 2,155
    edited August 2020

    The lab does not guide you to configure kubectl with the expected .kube/config file on the worker node, and, as a result, kubectl is not expected to work from the worker node.

    However, if you want to configure it yourself, then separately you would need to setup the .kube/config file on the worker node, by copying it over from the master node, and ensuring it has the same owner/permissions as found on the master node.
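For reference, a sketch of that copy, assuming a hostname alias "master" and the default non-root setup (adjust names and paths to your own environment):

```shell
# On the worker node, as the regular (non-root) user.
# "master" is a placeholder for your master node's hostname or IP.
mkdir -p "$HOME/.kube"
scp master:~/.kube/config "$HOME/.kube/config"

# Match the owner and permissions used on the master.
chown "$(id -u):$(id -g)" "$HOME/.kube/config"
chmod 600 "$HOME/.kube/config"

# kubectl should now be able to reach the cluster from the worker.
kubectl get nodes
```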

    Regards,
    -Chris

  • rdancy
    rdancy Posts: 8

Hi,

    I also ran the join command before running the get nodes command:

root@instance-1:~# kubeadm join 10.128.0.3:6443 --token W0817 13:16:19.690275 --discovery-token-ca-cert-hash sha256:0d9acb893d4ff0c34ebfd8c02644796a3924ffeca20799ca018553a7e411ce26
accepts at most 1 arg(s), received 2
To see the stack trace of this error execute with --v=5 or higher
root@instance-1:~#

  • chrispokorni
    chrispokorni Posts: 2,155

    The value of your token does not look right. It seems to be the prefix of a Warning message, with today's date and time.

A real token has a different structure (two dot-separated lowercase alphanumeric strings, e.g. abcdef.0123456789abcdef).

Try re-creating the token, run kubeadm reset on the worker node, then run the kubeadm join ... command again on the worker, with the value of the new token instead.
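A sketch of that sequence (the endpoint matches the earlier post; the token and hash placeholders stand in for whatever the master actually prints):

```shell
# On the master: create a fresh token and print the complete join command.
kubeadm token create --print-join-command

# On the worker: undo the failed join attempt
# (note: kubeadm reset, not kubectl), then re-run the join command
# exactly as printed by the master above.
kubeadm reset
kubeadm join 10.128.0.3:6443 --token <new-token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```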

    Regards,
    -Chris

  • serewicz
    serewicz Posts: 1,000

    Hello,

From the join command you used, I notice that it does not use the alias k8smaster as the lab describes. If you skipped those steps, others may have been missed as well. Please start with two new VMs and complete each step to configure the master and then join the worker.
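For context, the lab's alias is just an /etc/hosts entry on both nodes, so the join command can reference k8smaster instead of a raw IP. A sketch, reusing the IP from the earlier post (yours may differ):

```shell
# On both master and worker: map the alias to the master's private IP.
echo "10.128.0.3 k8smaster" | sudo tee -a /etc/hosts

# The join command then uses the alias instead of the IP:
# kubeadm join k8smaster:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```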

    Regards,

  • rdancy
    rdancy Posts: 8

    Hello

    I took your advice and restarted with two new VMs. I have re-configured the master and joined the worker successfully.

    Thanks for the help

    Reg

  • serewicz
    serewicz Posts: 1,000

Great! Glad it's working. :)
