
Lab 3.2 kubeadm join request refused

hmchen47 Posts: 3
edited March 2018 in LFS258 Class Forum

Hello all,

I am following the instructions to do Lab 3.2, Grow the Cluster. I am using my own laptop to set up the environment. Here are the specs of my environment.

Host OS: Windows 10

VM Hypervisor: VirtualBox 

Guest OS: Ubuntu 17.10

I set up two VMs. The master node is the one I regularly use for my projects. The second one is dedicated to this project.

At step 5 of the lab instructions I get stuck and cannot get the join to go through.

Here is the info I captured for this lab.



Master node:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host 

       valid_lft forever preferred_lft forever

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:c1:e5:84 brd ff:ff:ff:ff:ff:ff

    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3

       valid_lft 69118sec preferred_lft 69118sec

    inet6 fe80::a9ce:981a:d8f:529c/64 scope link 

       valid_lft forever preferred_lft forever

3: br-2a405b0dbcdc: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 

    link/ether 02:42:b3:b9:b1:a1 brd ff:ff:ff:ff:ff:ff

    inet 172.19.0.1/16 scope global br-2a405b0dbcdc

       valid_lft forever preferred_lft forever

4: br-4bb1837b90b1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 

    link/ether 02:42:1f:16:7c:e3 brd ff:ff:ff:ff:ff:ff

    inet 172.18.0.1/16 scope global br-4bb1837b90b1

       valid_lft forever preferred_lft forever

5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 

    link/ether 02:42:b2:41:94:1b brd ff:ff:ff:ff:ff:ff

    inet 172.17.0.1/16 scope global docker0

       valid_lft forever preferred_lft forever

6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 

    link/ether f2:79:48:46:d8:60 brd ff:ff:ff:ff:ff:ff

    inet 10.244.0.0/32 scope global flannel.1

       valid_lft forever preferred_lft forever

    inet6 fe80::f079:48ff:fe46:d860/64 scope link 

       valid_lft forever preferred_lft forever

7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000

    link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff

    inet 10.244.0.1/24 scope global cni0

       valid_lft forever preferred_lft forever

    inet6 fe80::cc83:4fff:feda:d3a1/64 scope link 

       valid_lft forever preferred_lft forever

8: veth…@if…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default

    link/ether b2:69:f7:04:6a:20 brd ff:ff:ff:ff:ff:ff link-netnsid 0

    inet6 fe80::b069:f7ff:fe04:6a20/64 scope link 

       valid_lft forever preferred_lft forever



$ sudo kubeadm token list

[sudo] password for hmchen: 

TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS

44d148.2a750f5fe1f90105   23h       2018-03-28T12:09:11-07:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

ba3edb.1b9344bf6ce19d91   29m       2018-03-27T13:17:26-07:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token



$ openssl x509  -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa \

                -pubin -outform der 2>/dev/null | openssl dgst \

                -sha256 -hex | sed 's/^.* //'

34e555e3f502b0b0aa0786ea841fa8b62daa66569863d62adc34f8e72dc3da41

 

Second node:

$ ping 10.0.2.15 ==> master node reachable



# kubeadm join --ignore-preflight-errors=cri   --token 44d148.2a750f5fe1f90105 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:34e555e3f502b0b0aa0786ea841fa8b62daa66569863d62adc34f8e72dc3da41

[preflight] Running pre-flight checks.

[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'

[WARNING FileExisting-crictl]: crictl not found in system path

[discovery] Trying to connect to API Server "10.0.2.15:6443"

[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"

[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]

[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]

 

I have tried other interfaces, but none of them work, not even with ping.
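
As a sanity check (these are rough commands of my own, not from the lab guide), I will also confirm on the master whether anything is listening on port 6443 at all, using the 10.0.2.15 address from the output above:

$ sudo ss -tlnp | grep 6443                # kube-apiserver should show a LISTEN entry on :6443
$ curl -k https://10.0.2.15:6443/healthz   # any HTTP response (even 403) proves the port is reachable; "connection refused" means it is not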

Comments

  • chrispokorni Posts: 178
    edited March 2018

    Hi, while I have not yet tried doing the labs on local VMs, I can only think of a networking issue between the two nodes:

    1 - Is your master listening and accepting traffic on port 6443? The reason I ask is that you mentioned you have also used this node for other projects.

    2 - Can you successfully run telnet or nc from the second node against the master IP and port 6443?
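
    Something along these lines, run from the second node, should answer question 2 quickly (10.0.2.15 is taken from your output; adjust if needed):

    $ nc -zv 10.0.2.15 6443      # "succeeded" means the port is open and reachable
    $ telnet 10.0.2.15 6443      # alternatively, "Connected to 10.0.2.15" means the same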

    Good luck!

    -Chris

  • serewicz Posts: 456
    edited March 2018

    Hello,

    In addition to the errors Chris mentioned, I noticed it said:


    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'

    Did you already install Docker on both nodes? Perhaps return to the lab guide and ensure all necessary steps have been completed on both nodes. The master node must be up and in a Ready state. Then try to join. If you continue to have issues, use netcat (nc <server> 6443) to see if the port is reachable. As Chris mentioned, it may be a firewall either on the nodes or on the host.
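
    Roughly, the checks would look like this (<master-ip> is a placeholder for your master's address):

    $ kubectl get nodes                      # on the master: the node should report Ready
    $ sudo systemctl enable --now docker     # on both nodes: enable and start the Docker service
    $ nc -v <master-ip> 6443                 # from the second node: prints "succeeded" if the port is reachable (Ctrl-C to exit)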

    Regards,

     

  • hmchen47 Posts: 3

    serewicz

    I solved the issue mentioned above by running the suggested command 'systemctl enable docker.service' on a later run. Currently I am working on the port issue, as suggested.
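
    For the record, roughly what I ran on the second node (the second line is just a sanity check):

    $ sudo systemctl enable docker.service       # clears the Service-Docker preflight warning
    $ systemctl is-enabled docker.service        # should now print "enabled"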

    Regards,

    Fred

  • serewicz Posts: 456
    edited March 2018

    Hi Fred,

    Another tool to consider is Wireshark: https://www.wireshark.org/docs/wsug_html_chunked/ChUseMainWindowSection.html

    By capturing all the packets on the master node, you can filter for inbound traffic from the second node. If the packets are being accepted, I would think it has something to do with the state of the master or the TLS/SSL key in use.
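
    For example, a tcpdump capture on the master along these lines (interface name taken from the earlier ip addr output; <second-node-ip> is a placeholder) would narrow things down to the join traffic:

    $ sudo tcpdump -i enp0s3 'tcp port 6443 and host <second-node-ip>'

    The equivalent Wireshark display filter would be tcp.port == 6443 && ip.addr == <second-node-ip>.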

    Regards,

  • hmchen47 Posts: 3
    edited March 2018

    Chris, serewicz,

    My issue is most likely a VM network setting and routing problem. I installed the VMs with NAT networking. I will try bridging or other options. Once I have a result, I will post the outcome here.
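
    For reference, a quick way to compare the two VMs (my assumption being that with the default VirtualBox NAT adapter each VM sits on its own isolated 10.0.2.0/24 network, which would explain the refused connections):

    $ ip addr show enp0s3      # with default NAT, both VMs may even report the same 10.0.2.15
    $ ip route                 # compare the default routes on the two VMs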

    Regards,

    Fred

  • chrispokorni Posts: 178
    edited March 2018

    Hi Fred, although it is a good learning experience, it is unfortunate that you have to go through all this trouble to set up your environment in order to complete the labs. There was another post in this forum about a similar issue (Ubuntu VMs on a local Windows workstation), which did not get resolved. The successful alternative, however, was to complete these labs in the cloud, either on GCP or AWS. They offer one free year, and I would think that Azure has a similar offer. I have completed most labs by now on GCP, and I only ran into a minor firewall issue, which I resolved with a few simple commands (sketched below).
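
    For anyone curious, the GCP firewall fix is along these lines (the rule name, network, and port are illustrative, not the exact commands I ran):

    $ gcloud compute firewall-rules create k8s-allow-6443 --network default --allow tcp:6443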

    Good luck!

    -Chris

  • chrispokorni Posts: 178
    edited April 2018

    Hi Fred, @hmchen47

    I was able to run through Lab 3 by setting up a bridged networking adapter on the master and minion VMs, and I posted some of the steps and my findings in the topic below; a rough command-line sketch follows the link:

    https://www.linux.com/forums/lfs258-class-forum/lab-3233-master-node-network-problem
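
    In short, with the VMs powered off, the first adapter can be switched to bridged either in the VirtualBox GUI or roughly like this from the host (the VM names and the host adapter name are examples):

    $ VBoxManage modifyvm "master" --nic1 bridged --bridgeadapter1 "<host-network-adapter>"
    $ VBoxManage modifyvm "worker" --nic1 bridged --bridgeadapter1 "<host-network-adapter>"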

    Regards, 

    -Chris
