Lab 16.2: unable to join second master node

Posts: 3
edited May 2020 in LFS258 Class Forum

Hi,

I'm stuck in Lab 16.2: I'm not able to join the second master node to the cluster.
I've set up haproxy and changed /etc/hosts so that k8s-master points to the haproxy instance.
I've tested that with a curl request, curl -k https://k8s-master:6443, and I get a 403 in JSON format, so connectivity via haproxy is not an issue.
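
For reference, a hedged sketch of that kind of connectivity check (hostnames as used in this thread; the /version path is just an illustrative endpoint):

  # Which IP does the alias resolve to on this node? It should be the haproxy VM.
  getent hosts k8s-master

  # Talk to the API server through the proxy; -k skips certificate verification for a quick test.
  curl -k https://k8s-master:6443/version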

I try to join the second node as a master like this:

  student@k8s-master-2:~$ sudo kubeadm join k8s-master:6443 --token pzrft4.7bhbcyqpjzeoemle --discovery-token-ca-cert-hash sha256:4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876 --control-plane --certificate-key a59d1d7f298e152fd590d891c5b1d7a8749e55101aa2a8c23bcb893bfe863781
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Reading configuration from the cluster...
  [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  error execution phase preflight:
  One or more conditions for hosting a new control plane instance is not satisfied.

  unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address

  Please ensure that:
  * The cluster has a stable controlPlaneEndpoint address.
  * The certificates that must be shared among control plane instances are provided.

  To see the stack trace of this error execute with --v=5 or higher

What does that mean? Why is the controlPlaneEndpoint not stable?
Any help is welcome. Thanks for having a look!
Cheers
Peter

Comments

  • Posts: 2,451

    Hi Peter,

    In the lab exercises, beginning with Lab 3, the master alias is set to k8smaster. From your notes, it seems that you are using a different alias, k8s-master. Is this used consistently in your cluster: in all the /etc/hosts files, the kubeadm init command parameters, and the kubeadm join command? Is your haproxy.cfg correct? Has the IP address changed on the proxy instance or the first master instance?

    Regards,
    -Chris
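
    A hedged sketch of how the consistency asked about above could be checked (the haproxy.cfg path assumes a default package install; substitute whatever alias is actually in use):

      # On every node: is the alias spelled the same everywhere and mapped to the proxy's IP?
      grep -n k8s-master /etc/hosts

      # On the proxy VM: do the frontend/backend lines forward port 6443 to the control plane node(s)?
      grep -nE 'bind|server' /etc/haproxy/haproxy.cfg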

  • Posts: 3
    edited May 2020

    Hi Chris,

    thank you for your reply and your help. I was busy and couldn't reply earlier.
    Regarding your questions:

    • Is this used consistently in your cluster, in all the /etc/hosts files, kubeadm init command parameters, and kubeadm join command?
      Yes, I used it consistently while working through the labs.
      k8s-master has the gcloud internal IP 10.156.0.5.
      k8s-master-2 has the gcloud internal IP 10.156.0.12.
      k8s-proxy has the gcloud internal IP 10.156.0.10.
      In all /etc/hosts files, k8s-master is mapped to 10.156.0.10.

    • kubeadm init
      I don't have my kubeadm-config.yaml from Lab 3 anymore, so I cannot answer your question about kubeadm init (see the sketch after this list for a way to read the endpoint back out of the cluster).
      All I know is that joining worker nodes wasn't a problem.
      The kubeadm join command I use to add the new node to the existing cluster also shows that connectivity is not an issue.
      It's too long to paste here, but I've attached the output.

    It's possible to curl the API through the k8s-proxy:

      student@k8s-master-2:~$ curl k8s-master:6443 -v
      * Rebuilt URL to: k8s-master:6443/
      * Trying 10.156.0.10...
      * Connected to k8s-master (10.156.0.10) port 6443 (#0)

      GET / HTTP/1.1
    • haproxy config
      haproxy.cfg seems correct to me; I also see traffic in the stats increasing while curling the API.

    • IP change
      The internal IPs from the VMs didn't change.
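
    Since the original kubeadm-config.yaml is gone, the endpoint the cluster was initialized with can still be read back from the kubeadm-config ConfigMap that the preflight output mentions. A sketch, run from a node with working kubectl access:

      # Show the ClusterConfiguration stored at init time; controlPlaneEndpoint is the field
      # the --control-plane join checks, and an empty value matches the "stable
      # controlPlaneEndpoint" complaint in the preflight error.
      kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i controlPlaneEndpoint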

    Any ideas? Thanks again!
    Peter

  • Posts: 2,451

    Hi,

    The text file you attached shows kubeadm run as the student user. It should be run by root instead.

    Regards,
    -Chris

  • Posts: 3
    edited June 2020

    Hi Chris,

    thank you for your reply and suggestions. I was busy again and couldn't continue here until now.
    My cluster VMs were powered off, but after powering them back on the cluster is running again.

    The instructions tell me to use the student user, but with sudo, so basically you're right.
    I tried to join the second master again, this time as root. On the running master I generated a new token, the CA cert hash, and a certificate key for joining the new master node (a sketch of those commands follows the error output below). It fails again with the same error:

      root@k8s-master-2:~# kubeadm join k8s-master:6443 --token 1uwx2s.fs12j9vumu59utl9 --discovery-token-ca-cert-hash sha256:4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876 --control-plane --certificate-key 2a5cca83e6a6d4022652ab372dc137d46d23adb39a70614ec957ea99f21ce8ab
      [preflight] Running pre-flight checks
      [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
      [preflight] Reading configuration from the cluster...
      [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
      error execution phase preflight:
      One or more conditions for hosting a new control plane instance is not satisfied.

      unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address

      Please ensure that:
      * The cluster has a stable controlPlaneEndpoint address.
      * The certificates that must be shared among control plane instances are provided.

      To see the stack trace of this error execute with --v=5 or higher
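
    For reference, a sketch of the kind of commands used on the first master to generate those join credentials (flag names as of kubeadm v1.17; treat them as an assumption for other versions):

      # New bootstrap token
      sudo kubeadm token create

      # Discovery CA cert hash (the sha256:... value)
      openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'

      # Re-upload the control plane certificates and print the matching certificate key
      sudo kubeadm init phase upload-certs --upload-certs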

    Now what's interesting is that I tried to join the node as a "worker node" instead, by omitting the --control-plane argument.
    And that worked!

      root@k8s-master-2:~# kubeadm join k8s-master:6443 --token 1uwx2s.fs12j9vumu59utl9 --discovery-token-ca-cert-hash sha256:4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876 --certificate-key 2a5cca83e6a6d4022652ab372dc137d46d23adb39a70614ec957ea99f21ce8ab
      W0620 08:40:36.959233 20255 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
      [preflight] Running pre-flight checks
      [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
      [preflight] Reading configuration from the cluster...
      [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
      [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Starting the kubelet
      [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

      This node has joined the cluster:
      * Certificate signing request was sent to apiserver and a response was received.
      * The Kubelet was informed of the new secure connection details.

      Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    Now I have another worker node in the cluster:

      student@k8s-master:~$ k get nodes
      NAME           STATUS   ROLES    AGE    VERSION
      k8s-master     Ready    master   147d   v1.17.2
      k8s-master-2   Ready    <none>   18m    v1.17.2
      k8s-worker     Ready    <none>   147d   v1.17.2

    I scaled an existing deployment and the new pods are also running on the new node.
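
    A sketch of that kind of check, with a hypothetical deployment name:

      # 'nginx' is a placeholder; substitute a deployment that already exists in the cluster
      kubectl scale deployment nginx --replicas=6
      kubectl get pods -o wide | grep k8s-master-2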

    I don't know what to try next, but I'll try again with a new blank VM.

    Thanks again for the feedback!
    Best
    Peter

  • Posts: 2,451
    edited June 2020

    Hi Peter,

    By taking a second look at your naming convention, there may be a conflict between k8s-master as the node name of your first master node and the same k8s-master as the overall cluster master alias. DNS finds the same name/alias associated with two separate IP addresses, 10.156.0.5 and 10.156.0.10, which would explain why kubeadm complains about an unstable controlPlaneEndpoint.

    Try changing the alias from k8s-master to something else, in all the /etc/hosts files.
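
    As a hedged sketch, the /etc/hosts on every node might then look something like this (IPs taken from this thread; the k8smaster spelling follows the lab's Lab 3 convention):

      # /etc/hosts excerpt (sketch) -- the cluster alias no longer collides with a node hostname
      10.156.0.5    k8s-master             # first control plane node (its real hostname)
      10.156.0.12   k8s-master-2           # second control plane node
      10.156.0.10   k8s-proxy  k8smaster   # proxy VM; k8smaster is the cluster-wide alias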

    Also, I notice you are still running Kubernetes 1.17. The course has been updated to 1.18.1, and the SOLUTIONS tarball has also been updated to reflect the latest course version.

    Regards,
    -Chris

  • Turning a single control plane cluster created without --control-plane-endpoint into a highly available cluster is not supported by kubeadm.

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

  • You are correct @eduardofraga, that is why in Lab Exercise 3.1 we are setting that up to be an alias instead of the hostname of any of the master nodes.

    Regards,
    -Chris
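
    For reference, a hedged sketch of how the first control plane node is typically initialized so that the endpoint is the stable alias rather than a node hostname (the k8smaster:6443 value follows the lab convention mentioned above; exact flags may differ between kubeadm versions):

      # Run once on the first control plane node; assumes the k8smaster alias already
      # resolves to the load balancer on every node. --upload-certs prints the
      # certificate key later reused by 'kubeadm join ... --control-plane'.
      sudo kubeadm init --control-plane-endpoint "k8smaster:6443" --upload-certs

    Setting the controlPlaneEndpoint field in a kubeadm ClusterConfiguration file achieves the same thing.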
