
3.3 Finish cluster setup is not giving the conclusion of cluster setup

Hi Team,

I've finished exercise 3.2 (grow cluster) and understood the purpose of all the commands that I ran.

But when I ran the commands for exercise 3.3 (Finish cluster setup), I could not find a resolution for step 8 of exercise 3.2, i.e. the 'kubectl get nodes' command on the worker node gives an error because the worker does not have the cluster keys or authentication credentials in its local .kube/config file.

It would be great if you could explain the queries below.

Query 1) What is the resolution for the command below, and how do I know that I've completed the cluster setup between the master and worker nodes successfully?
student@worker:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Query 2) Why are we deploying a simple application (exercise 3.4) on the master node (control plane)? After going through the Kubernetes architecture, my impression is that all the deployments and pod creation happen on the worker nodes. Can you please share your suggestions here and correct me if I understood it wrongly?

Query 3) What tasks/operations can we do on the master node only?

Query 4) What tasks/operations can we do on worker nodes only?

Query 5) What tasks/operations can we do on both master and worker nodes?

Thanks
Anil Kumar

Comments

  • chrispokorni Posts: 2,346

    Hi @anil.bmam,

    The failed kubectl command shows the dependency between the CLI tool and the ~/.kube/config file. When the file is not found, any kubectl command fails.
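
    If you want kubectl to work from the worker as well, the typical fix is to copy the kubeconfig over. A minimal sketch, assuming the control-plane node is reachable as k8scp and the student account can scp between the nodes (adjust names and paths to your setup):

    student@worker:~$ mkdir -p $HOME/.kube
    student@worker:~$ scp student@k8scp:.kube/config $HOME/.kube/config
    student@worker:~$ kubectl get nodes

    Seeing both nodes reported as Ready in that output is also a simple way to confirm the setup between the control plane and the worker succeeded.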

    When deploying an application to the cluster, our deployment request is captured by the API server running on the control-plane node. Then the control-plane agents decide which node of the cluster will run the application and delegate all initialization tasks to the node agents of the selected node.
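
    You can observe this from the control-plane node; a generic illustration (the cp in the prompt is just an assumed hostname, and nginx is only an example image):

    student@cp:~$ kubectl create deployment nginx --image=nginx
    student@cp:~$ kubectl get pods -o wide

    The NODE column in the second command's output shows which worker the scheduler actually selected, even though the request was issued on the control-plane node.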

    Regards,
    -Chris

  • rtodich Posts: 6

    OK, made it to this section. I was able to join the node to the cluster, but the kubectl describe node k8scp command displays the error below...

    The kubectl get node command works just fine. What am I missing here?

  • chrispokorni Posts: 2,346

    Hi @rtodich,

    You can try the node's hostname instead of the control-plane alias with the kubectl describe command.
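
    For example, assuming the control-plane's hostname turned out to be master:

    student@master:~$ kubectl get nodes
    student@master:~$ kubectl describe node master

    The NAME column of the first command shows the registered node names; use one of those with describe instead of the k8scp alias.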

    Regards,
    -Chris

  • rtodich Posts: 6

    @chrispokorni said:
    Hi @rtodich,

    You can try the node's hostname instead of the control-plane alias with the kubectl describe command.

    Regards,
    -Chris

    That worked. Curious why the alias does not. Will circle back to that later.

    Thanks Chris!

  • The aliases mixed me up at first; then I realized I had to put the names and IPs in the /etc/hosts files on both servers. Short-circuiting DNS that way, the hostnames work (see the example below).
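
    For example, entries along these lines in /etc/hosts on both nodes - the IP addresses here are only placeholders for your own internal IPs:

    10.2.0.2   k8scp
    10.2.0.3   worker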

  • Following these instructions, the control-plane node takes its hostname, so if you also followed the GCP instructions the node will be called 'master', not 'k8scp' - and 'master' is what my 'kubectl get nodes' shows. 'kubectl describe node master' works, but 'kubectl describe node k8scp' does not, because k8scp is not the node name - yet the PDF shows the node name as k8scp.

    This is not a DNS error either: 'ping k8scp' resolves once the alias has been added to /etc/hosts, but that does not appear to affect the 'kubectl describe node' command. Also, the worker node still joined with k8scp:6443 in the kubeadm join command. So where does the 'kubectl describe node' output showing k8scp come from in the PDF example?

  • chrispokorni Posts: 2,346

    Hi @mrosebury,

    It is possible it came from a typo.

    Otherwise, the k8scp alias helps with networking and traffic routing, and it is registered with the identity of the control plane when certificates are issued. This will be very helpful in Chapter 16 when converting the cluster's control plane into an HA control plane.
    The alias has no effect on kubectl commands, as kubectl sees the node name - which by default is inherited from the hostname, unless a desired node name is specified in the kubeadm config file.
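
    For reference, a rough sketch of a kubeadm config that sets an explicit node name - the file name and values here are only illustrative, not the lab's actual configuration:

    # kubeadm-config.yaml (illustrative)
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    nodeRegistration:
      name: k8scp
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: "k8scp:6443"

    student@cp:~$ sudo kubeadm init --config kubeadm-config.yaml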

    Regards,
    -Chris

  • mrosebury Posts: 5
    edited May 2022

    I think this typo may impact lab 16.2, when the IP of the ha-proxy server is used in the master's /etc/hosts file. In Lab 16.1 Step 5, 'kubectl get nodes' gives the error 'http: server gave HTTP response to HTTPS client'.
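
    If anyone else hits this, two generic checks that can help narrow it down (standard commands, not lab steps):

    student@cp:~$ kubectl config view | grep server
    student@cp:~$ grep k8scp /etc/hosts

    The first shows which endpoint kubectl is actually talking to, and the second shows which IP the k8scp alias currently resolves to.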
