Welcome to the Linux Foundation Forum!

LFD259: Lab 2.2 - Failing to join the minion node to the cluster

Hi,
I have finished running the k8sMaster.sh and k8sSecond.sh installation scripts.
Now I am trying to run the following command as per the document:
$ sudo kubeadm join --token <token> 172.20.10.4:6443 --discovery-token-ca-cert-hash sha256:<hash>

I am getting the errors below:
cgoka@work:~/kubernetes_LFD259/LFD259/SOLUTIONS/s_02$ sudo kubeadm join --token tkoi0v.vxsnpod7d0mwdpyj \

172.20.10.4:6443 --discovery-token-ca-cert-hash sha256:a0849670c01f8f66c9dc4be8acf7773fd2f33f6be1a54e85db35681bc159b2e2

    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Note: I am running on Ubuntu 18.04

Please let me know how to resolve this issue.

Comments

  • chrispokorni (Posts: 2,349)

    Hi,
    I moved this discussion to the LFD259 forum since it was created in another class' forum.

    From your error, it seems that kubeadm was already issued on that node. Assuming you ran k8sMaster.sh on the master node (with kubeadm init), then k8sSecond.sh on minion/worker node (with kubeadm join), your nodes should have joined the cluster. Issuing kubeadm join a second time on the worker node will display such errors.

    Try issuing

    sudo kubeadm reset

    on worker and master nodes.

    Then re-issue on the master node

    sudo kubeadm init ...

    and on the worker node

    sudo kubeadm join ...

    Another possible issue is the mismatch between your Ubuntu 18 and the k8sMaster.sh and k8sSecond.sh installation scripts, which are customized for Ubuntu 16. The very first sentence in Exercise 2.1 mentions Ubuntu 16. Did you see any errors during the installation process?

    Pay close attention to exercises as they are compiled and tested for a particular set of versions. Deviating from the instructions may cause inconsistent configurations and outputs.

    Regards,
    -Chris

  • I migrated my system to Ubuntu 16.04.
    When I start the master with the command below:
    $ bash k8sMaster.sh | tee ~/master.out

    I am getting the errors below:

    2019-08-01 16:33:46 (917 KB/s) - ‘calico.yaml’ saved [15051/15051]

    unable to recognize "rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
    unable to recognize "calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
    (the same "connection refused" error is repeated for every resource in the two files)

    You should see this node in the output below.
    It can take up to a minute for the node to show Ready status.

    The connection to the server localhost:8080 was refused - did you specify the right host or port?

  • serewicz (Posts: 1,000)

    Hello,

    I see you wrote that you migrated from 18. Am I correct that the Ubuntu 16.04 instance is a fresh install? Please include the command you used (copy and paste would be great) so we can see why you are getting those errors. Since it says localhost:8080, I think there may have been a typo, or the proper ~/.kube/config file was not copied over after the kubeadm init command was run.
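    For background: the "localhost:8080" errors mean kubectl has no kubeconfig yet, so it falls back to the default local address. After a successful kubeadm init, the lab has you copy the admin config into your home directory. A minimal sketch of that copy, using stand-in /tmp paths so it can run anywhere (the real lab paths are /etc/kubernetes/admin.conf and $HOME/.kube/config):

```shell
# Stand-in paths for demonstration; in the lab, admin.conf lives in /etc/kubernetes
mkdir -p /tmp/demo-home/.kube
printf 'apiVersion: v1\nkind: Config\n' > /tmp/demo-admin.conf  # stand-in admin.conf

# The copy itself; the real commands are run as the regular (non-root) user:
#   mkdir -p $HOME/.kube
#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
cp /tmp/demo-admin.conf /tmp/demo-home/.kube/config
```

    With the file in place, kubectl reads $HOME/.kube/config and stops trying localhost:8080.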

    Regards,

  • chrispokorni (Posts: 2,349)

    Edit your k8sMaster.sh and look at the line:

    wget https://tinyurl.com/y8lvqc9g  -O calico.yaml

    As it is now, it has a slight typo - there are 2 blank spaces right before "-O", and one seems to be treated as a new line.
    Edit that line and make sure there is a single blank space before "-O", then it should look similar to:

    wget https://tinyurl.com/y8lvqc9g -O calico.yaml

    This should fix your issue. If you are still seeing timeouts after this, then you may have a firewall enabled, which is blocking traffic to some ports.
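    A quick way to check both (a sketch, assuming a standard Ubuntu install with the iproute2 and ufw tools): list the listening TCP ports to see whether the control plane came up, and confirm no host firewall is active:

```shell
# Show listening TCP sockets; 6443 is the API server, 10250 the kubelet.
# On a healthy master, both should appear after kubeadm init completes.
ss -ltn | awk 'NR==1 || /:(6443|10250)/'

# If ufw is installed, it should report "Status: inactive" for these labs
if command -v ufw >/dev/null 2>&1; then sudo -n ufw status || true; fi
```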

    Regards,
    -Chris

  • I am using AWS Ubuntu Server 16.04 LTS (HVM), SSD Volume Type instance.

    I am running the following command
    $ bash k8sMaster.sh | tee ~/master.out
    I have never run the kubeadm init command myself.

  • I removed the extra blank space before "-O calico.yaml", but the issue still persists.

  • But I am seeing these files in the current directory:

    ubuntu@ip-172-31-36-216:~/LFD259/ckad-1$ ls -ltr
    total 24
    -rwxrwxr-x 1 ubuntu ubuntu 2139 Aug 1 17:06 k8sMaster.sh
    -rw-rw-r-- 1 ubuntu ubuntu 1660 Aug 1 17:07 rbac-kdd.yaml
    -rw-rw-r-- 1 ubuntu ubuntu 15051 Aug 1 17:07 calico.yaml

  • chrispokorni (Posts: 2,349)
    edited August 2019

    On AWS you need to make sure your EC2 instances are in a Security Group (SG) that is open to all traffic: all ports, all protocols, from all sources.
    Also verify that Ubuntu 16 on AWS does not have any firewalls enabled/active by default.

    kubeadm init is run as part of the k8sMaster.sh script. The master.out file should have recorded all of its output; please provide that as well, if you don't mind.

  • CHANDRASHEKHARGOKA
    edited August 2019

    SG opened for all traffic, FYR:

    [screenshot of the Security Group inbound rules]

    And the firewall is also inactive:

    ubuntu@ip-172-31-36-216:~/LFD259/ckad-1$ sudo ufw status
    Status: inactive

  • chrispokorni (Posts: 2,349)

    The detailed output is appreciated.
    Not sure which of the 2 SGs you are using, but one seems to limit the sources to itself.
    I'd also check the VPC setup: the Internet Gateway (IGW), and possibly the Subnet, Route Table (RT), and Network ACL (NACL).

  • I am not familiar enough with AWS internals to check these things. If that is the case, I will go back to my personal laptop, which runs Ubuntu 18.04; there I didn't see this many errors.
    In Ubuntu 18.04:
    The k8sMaster.sh file was executed successfully with the command below.
    $ bash k8sMaster.sh | tee ~/master.out
    Then I opened a new terminal to create the worker and executed
    $ bash k8sSecond.sh (as per the document)
    Then I tried to join like below:

    $ sudo kubeadm join --token <token> 172.20.10.4:6443 --discovery-token-ca-cert-hash sha256:<hash>
    After that I got the error below:

    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

    But in the very first comment in this thread, you suggested that I execute sudo kubeadm init ... and sudo kubeadm join ...
    I don't understand why these commands need to be executed; I have never seen the sudo kubeadm init command in the document.

  • chrispokorni (Posts: 2,349)

    If you look closely at k8sMaster.sh, you will find the kubeadm init command.
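    A quick grep will show it. A sketch (the file created here is a hypothetical stand-in; run the grep against your real k8sMaster.sh, and note that the --pod-network-cidr value shown is the Calico default, an assumption on my part):

```shell
# Create a tiny stand-in for k8sMaster.sh, just to demonstrate the search
printf '#!/bin/bash\n# ...setup steps...\nsudo kubeadm init --pod-network-cidr 192.168.0.0/16\n' > /tmp/k8sMaster-demo.sh

# Locate the init command and its line number (run this on the real script)
grep -n 'kubeadm init' /tmp/k8sMaster-demo.sh
# -> 3:sudo kubeadm init --pod-network-cidr 192.168.0.0/16
```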

    How many EC2 instances did you start on AWS?

  • Only one instance

  • chrispokorni (Posts: 2,349)

    That is your problem right there.

    Please read the lab instructions carefully, as they guide you to create a 2-node cluster. Specific steps in the lab exercise are executed on your 1st node, and specific steps on your 2nd node. Read the instructions of each step closely, along with the commands you need to run. The hostname shown in the command prompt indicates the node you should be on.

    Good luck!
    -Chris
