
Lab 16.2: unable to join second master node

KalleWirsch
KalleWirsch Posts: 3
edited May 2020 in LFS258 Class Forum

Hi,

I'm stuck in Lab 16.2: I'm not able to join the second master node to the cluster.
I've set up haproxy and changed /etc/hosts so that k8s-master points to haproxy.
I've tested that with a curl request, curl -k https://k8s-master:6443, and I get a 403 in JSON format, so connectivity via haproxy is not the issue.
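
For reference, haproxy.cfg on the proxy VM looks roughly like this (a sketch of my setup, plain TCP passthrough to the API server; the names and backend IP are from my environment):

frontend k8s-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 10.156.0.5:6443 check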

I'm trying to join the second node as a master like this:

student@k8s-master-2:~$ sudo kubeadm join k8s-master:6443 --token pzrft4.7bhbcyqpjzeoemle --discovery-token-ca-cert-hash sha256:4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876 --control-plane --certificate-key a59d1d7f298e152fd590d891c5b1d7a8749e55101aa2a8c23bcb893bfe863781
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher

What does that mean? Why is the controlPlaneEndpoint not stable?
Any help is welcome. Thanks for having a look!
Cheers
Peter

Comments

  • chrispokorni
    chrispokorni Posts: 2,372

    Hi Peter,

    In the lab exercises, beginning with Lab 3, the master alias is set to k8smaster. From your notes, it seems that you are using a different alias k8s-master. Is this used consistently in your cluster, in all the /etc/hosts files, kubeadm init command parameters, and kubeadm join command? Is your haproxy.cfg correct? Has the IP address changed on the proxy instance or the first master instance?
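
    One quick check, run on the first master, is to see what endpoint the cluster itself recorded at init time. If controlPlaneEndpoint is missing from the output, the cluster was initialized without a stable endpoint, which is exactly what the join error complains about:

    kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint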

    Regards,
    -Chris

  • KalleWirsch
    KalleWirsch Posts: 3
    edited May 2020

    Hi Chris,

    thank you for your reply and your help. I was busy and couldn't reply earlier.
    Regarding your questions:

    • Is this used consistently in your cluster, in all the /etc/hosts files, kubeadm init command parameters, and kubeadm join command?
      Yes, I used it consistently while working through the labs.
      k8s-master has the gcloud internal IP 10.156.0.5.
      k8s-master-2 has the gcloud internal IP 10.156.0.12.
      k8s-proxy has the gcloud internal IP 10.156.0.10.
      In all /etc/hosts files, k8s-master is mapped to 10.156.0.10.

    • kubeadm init
      I don't have my kubeadm-config.yaml from Lab 3 anymore, so I cannot answer your question regarding kubeadm init.
      All I know is that joining worker nodes wasn't a problem.
      The kubeadm join command I use to add the new node to the existing cluster also shows that connectivity is not an issue;
      it's too long to paste here, but I've attached the output.

    It's possible to curl the API through the k8s-proxy:

    student@k8s-master-2:~$ curl k8s-master:6443 -v
    * Rebuilt URL to: k8s-master:6443/
    *   Trying 10.156.0.10...
    * Connected to k8s-master (10.156.0.10) port 6443 (#0)
    > GET / HTTP/1.1

    • haproxy config
      haproxy.cfg seems correct to me; I also see traffic increasing in the stats while curling the API.

    • IP change
      The internal IPs of the VMs didn't change.

    Any ideas? Thanks again!
    Peter

  • chrispokorni
    chrispokorni Posts: 2,372

    Hi,

    The text file you attached shows kubeadm run as the student user. It should be run by root instead.

    Regards,
    -Chris

  • KalleWirsch
    KalleWirsch Posts: 3
    edited June 2020

    Hi Chris,

    thank you for your reply and suggestions. I was busy again and couldn't continue here until now.
    My cluster VMs were powered off, but after powering them back on the cluster is running again.

    So the instructions tell me to use the student user, but with sudo, so basically you're right.
    I tried to join the second master again, this time as root. I generated a new token, the CA cert hash, and a new certificate key on the running master to join the new master node.
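
    For reference, this is roughly what I ran on the running master to generate those values (a sketch from memory, using the standard commands; the outputs shown are the values used in the join below):

    student@k8s-master:~$ sudo kubeadm token create
    1uwx2s.fs12j9vumu59utl9
    student@k8s-master:~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876
    student@k8s-master:~$ sudo kubeadm init phase upload-certs --upload-certs
    [upload-certs] Using certificate key:
    2a5cca83e6a6d4022652ab372dc137d46d23adb39a70614ec957ea99f21ce8ab

    The join as root then fails again with the same error: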

    root@k8s-master-2:~# kubeadm join k8s-master:6443 --token 1uwx2s.fs12j9vumu59utl9 --discovery-token-ca-cert-hash sha256:4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876 --control-plane --certificate-key 2a5cca83e6a6d4022652ab372dc137d46d23adb39a70614ec957ea99f21ce8ab
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    error execution phase preflight:
    One or more conditions for hosting a new control plane instance is not satisfied.
    
    unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address
    
    Please ensure that:  
    * The cluster has a stable controlPlaneEndpoint address.
    * The certificates that must be shared among control plane instances are provided.
    
    
    To see the stack trace of this error execute with --v=5 or higher
    

    Now what's interesting is that I tried to join the node as a "worker node" instead, by omitting the --control-plane argument.
    And that worked!

    root@k8s-master-2:~# kubeadm join k8s-master:6443 --token 1uwx2s.fs12j9vumu59utl9 --discovery-token-ca-cert-hash sha256:4df75f261e68bf47c0117bde46899745956883a592f7f996843698f5d1053876  --certificate-key 2a5cca83e6a6d4022652ab372dc137d46d23adb39a70614ec957ea99f21ce8ab
    W0620 08:40:36.959233   20255 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    Now I have another worker node in the cluster:

    student@k8s-master:~$ k get nodes
    NAME           STATUS   ROLES    AGE    VERSION
    k8s-master     Ready    master   147d   v1.17.2
    k8s-master-2   Ready    <none>   18m    v1.17.2
    k8s-worker     Ready    <none>   147d   v1.17.2
    

    I scaled an existing deployment, and the new pods are also running on the new node.
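
    Something like this (nginx is just a stand-in for my deployment's name):

    student@k8s-master:~$ kubectl scale deployment nginx --replicas=6
    student@k8s-master:~$ kubectl get pods -o wide | grep k8s-master-2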

    I don't know what to try next, but I'll try again with a fresh, blank VM.

    Thanks again for the feedback!
    Best
    Peter

  • chrispokorni
    chrispokorni Posts: 2,372
    edited June 2020

    Hi Peter,

    Taking a second look at your naming convention, there may be a conflict between k8s-master as the node name of your 1st master node and the same k8s-master as the overall cluster master alias. DNS finds the same name/alias associated with 2 separate IP addresses, 10.156.0.5 and 10.156.0.10, which would explain why kubeadm complains about an unstable controlPlaneEndpoint.

    Try changing the alias from k8s-master to something else, in all the /etc/hosts files.
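
    For example, in every node's /etc/hosts, something along these lines (the alias points at the proxy IP and must not collide with any node's hostname):

    10.156.0.10 k8smaster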

    Also, I notice you are still running Kubernetes 1.17. The course has been updated to 1.18.1, and the SOLUTIONS tarball has also been updated to reflect the latest course version.

    Regards,
    -Chris

  • eduardofraga

    Turning a single control plane cluster created without --control-plane-endpoint into a highly available cluster is not supported by kubeadm.

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

  • chrispokorni

    You are correct @eduardofraga, that is why in Lab Exercise 3.1 we set that up to be an alias instead of the hostname of any of the master nodes.
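
    For reference, a Lab 3.1-style kubeadm-config.yaml along these lines records the alias as the cluster's stable endpoint (values approximate, per the current course version):

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.18.1
    controlPlaneEndpoint: "k8smaster:6443"
    networking:
      podSubnet: 192.168.0.0/16

    student@master:~$ sudo kubeadm init --config=kubeadm-config.yaml --upload-certs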

    Regards,
    -Chris
