Welcome to the Linux Foundation Forum!

Lab 16.2 Join Control Plane Nodes step 2

I'm stuck at step 2 of the chapter "Join Control Plane Nodes": I can't create a token on my master node (the "cp" node). All the steps up to this point have worked.

Error message:
student@master:~$ sudo kubeadm token create --v=5
I0124 19:16:49.817339 28387 token.go:123] [token] validating mixed arguments
I0124 19:16:49.817413 28387 token.go:132] [token] getting Clientsets from kubeconfig file
I0124 19:16:49.817457 28387 cmdutil.go:81] Using kubeconfig file: /home/student/.kube/config
I0124 19:16:49.821938 28387 token.go:247] [token] loading configurations
I0124 19:16:49.824444 28387 interface.go:431] Looking for default routes with IPv4 addresses
I0124 19:16:49.824475 28387 interface.go:436] Default route transits interface "ens4"
I0124 19:16:49.824649 28387 interface.go:208] Interface ens4 is up
I0124 19:16:49.824730 28387 interface.go:256] Interface "ens4" has 2 addresses :[ fe80::4001:aff:fe02:15/64].
I0124 19:16:49.824780 28387 interface.go:223] Checking addr
I0124 19:16:49.824801 28387 interface.go:230] IP found
I0124 19:16:49.824821 28387 interface.go:262] Found valid IPv4 address for interface "ens4".
I0124 19:16:49.824839 28387 interface.go:442] Found active IP
I0124 19:16:49.824887 28387 kubelet.go:203] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0124 19:16:49.831301 28387 token.go:254] [token] creating token
I0124 19:16:49.834186 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 1 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:50.836217 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 2 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:51.838076 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 3 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:52.839762 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 4 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:53.841190 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 5 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:54.843844 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 6 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:55.845497 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 7 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:56.847277 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 8 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:57.849205 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 9 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
I0124 19:16:58.850850 28387 with_retry.go:171] Got a Retry-After 1s response for attempt 10 to https://k8scp:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-qkva0e?timeout=10s
timed out waiting for the condition


  • Provide some more logs, such as:
    kubectl get nodes
    systemctl status kubelet
    systemctl status docker
    systemctl status kubeadm

  • celtium
    edited January 2022

    additional logs:

  • Looks like your kubectl is having trouble. Please also send the output of
    sudo cat /etc/hosts
    and the IP of the master node where you were trying to run kubectl get nodes.

  • The alias k8scp is set on the master node (and on all nodes of the cluster) to the IP address of the "ha-proxy" node, as required by the instructions in step 16.1 of the chapter "Join Control Plane Nodes".
    The external IP of the master node is:

  • Have you configured ha-proxy on the student@master node or on a different node?

  • I have configured the ha-proxy on a different node.

  • And are you using Local VMs / AWS EC2 / GCP VMs?

  • I am using the GCP VMs.

  • alihasanahmedk
    edited January 2022

    I think the mistake you are making is that the IP in /etc/hosts should be your haproxy VM's internal IP, not its external IP. Most likely the haproxy VM's internal IP is on the 10.2.0.x network. The external IP is used to reach the VM from your local machine's browser and over SSH.
    Run ifconfig on the haproxy VM to find its internal IP, then put that internal IP in the control-plane (cp) node's /etc/hosts and run kubectl get nodes. The command should now succeed. You can then access haproxy from a browser at External_IP_of_haproxy:9999/stats. If you are still facing issues, let me know and we can set up an online meeting to resolve it.
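    For reference, the resulting /etc/hosts entry described above would look something like this on every node (the 10.2.0.5 address is illustrative; substitute your haproxy VM's actual internal IP):

    ```
    # /etc/hosts (same entry on cp, worker, and haproxy nodes)
    # 10.2.0.5 is illustrative; use your haproxy VM's internal IP
    10.2.0.5   k8scp
    ```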

  • celtium
    edited February 2022

    I added the internal IP address of the ha-proxy node to /etc/hosts on all nodes (e.g. on the master node, see the first screenshot below), as required in step 1 of the lab. But creating a token in step 3 on the master failed; the result is the error log in my first post above.

    The earlier step, "use a local browser to navigate to the public IP of your proxy server", worked fine for me.

  • Hi @celtium,

    In the section "Deploy a Load Balancer", how did you configure the haproxy.cfg file in step 2?

    Did haproxy.service restart successfully in step 3?

    After setting the k8scp alias on the haproxy node's /etc/hosts file, did kubectl get nodes produce the expected output in step 6?


  • Show me your haproxy.cfg, and please share the internal IPs from the ifconfig output on all nodes.

  • etofran810

    I have the same problem

  • etofran810

    File from lab 3 on the master node:
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.23.1
    controlPlaneEndpoint: "masterkub:6443"

  • etofran810

    On the ha-proxy node:
    ip addr | grep inet | grep 10
    inet scope global dynamic ens4

    File haproxy.cfg:
    backend k8sServers
    balance roundrobin
    server masterkub check
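    For comparison, the proxy configuration in this lab takes roughly the following shape. This is a minimal sketch, not the lab's exact file: the node name and 10.2.0.x address are illustrative, the global/defaults sections are omitted, and note that each backend server line needs an address:port, which appears to be missing in the fragment above.

    ```
    frontend proxy
        bind *:6443
        mode tcp
        default_backend k8sServers

    backend k8sServers
        mode tcp
        balance roundrobin
        server cp1 10.2.0.2:6443 check

    # Stats page, reached in the thread above at External_IP:9999/stats
    listen stats
        bind *:9999
        mode http
        stats enable
        stats uri /stats
    ```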

  • etofran810

    When I executed the join, it generated this error:

    unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address
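    For context, this error generally means the cluster was initialized without a stable controlPlaneEndpoint. When the endpoint is set, a control-plane join has this general shape (the token, hash, and certificate-key values are placeholders, obtained from your own `kubeadm token create --print-join-command` and `sudo kubeadm init phase upload-certs --upload-certs` output):

    ```
    sudo kubeadm join k8scp:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane \
        --certificate-key <certificate-key>
    ```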

  • etofran810

    attached file with info

  • etofran810

    I removed --control-plane and the node joined:

    masterkub Ready control-plane,master 46d v1.23.1
    secondcp Ready 45m v1.23.1
    workerkub Ready 46d v1.23.1

  • chrispokorni

    Hi @etofran810,

    The control plane should be advertised via the k8scp alias, not the control plane node's hostname. The k8scp alias, not the hostname, should be used to bootstrap the cluster, just as presented in the lab guide. This allows the flexibility to re-assign the alias to another node, as the exercise intends, once the ha-proxy node has been added to the cluster's topology.
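    In other words, the kubeadm configuration used to initialize the cluster would carry the alias rather than the hostname. A sketch, assuming the same API version and Kubernetes version as the file posted earlier in the thread:

    ```yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.23.1
    controlPlaneEndpoint: "k8scp:6443"
    ```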


  • etofran810

    OK, I am going to modify the alias to k8scp.

