
Kubelet service issue: unable to proceed with the course

Good morning all, and thanks in advance to anyone who can help me.
I'm taking this course, working with two virtualized Ubuntu VMs in VirtualBox on my machine (my host is an Ubuntu distribution too). Usually I freeze/hibernate the VMs and come back to them whenever I can dedicate time to the course. I'm on Lab 6.1, and this time I can't run the lab, because the commands no longer respond.

I executed the command "kubectl get secrets --all-namespaces", but everything I try to run returns the same message: "The connection to the server k8smaster:6443 was refused - did you specify the right host or port?".
Pinging "k8smaster" works fine and the $KUBECONFIG environment variable is set. What is strange is that the kubelet service appears to be inactive; I try to restart it, but it won't come back up.
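
A quick way to separate a name-resolution problem from the API server actually being down is to probe the port directly:

nc -zv k8smaster 6443
# "Connection refused" means the name resolves but nothing is listening on 6443,
# which points at the control plane itself rather than at /etc/hosts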

systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Thu 2021-04-01 11:13:45 CEST;>
       Docs: https://kubernetes.io/docs/home/
    Process: 19429 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $K>
   Main PID: 19429 (code=exited, status=255/EXCEPTION)

apr 01 11:13:45 in7rud3r-VMUK8s systemd[1]: kubelet.service: Main process exited, code=exited,>
apr 01 11:13:45 in7rud3r-VMUK8s systemd[1]: kubelet.service: Failed with result 'exit-code'.
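
The output above is truncated at the terminal width; the untruncated kubelet log, including the real exit reason, can be read from the systemd journal:

journalctl -u kubelet --no-pager -n 50
# prints the last 50 log lines of the kubelet unit, without truncation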

Can someone help me get past this without reinstalling the cluster, please?

Any suggestion is really appreciated!

Thanks in advance again.

Answers

  • chrispokorni

    Hi @andrea.calvario,

    Assuming you are running kubectl commands from the control-plane node, the error may be caused by the API server not running, or by a misconfigured /etc/hosts file. To force a kubelet restart, and hopefully a control-plane restart, I would recommend rebooting the VMs instead of hibernating them.

    If the IP addresses of your VMs change in the meantime, the /etc/hosts files may need to be updated.
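
    For example (k8smaster is the alias used in the labs), comparing the VM's current address against the alias entry shows whether the file has gone stale:

    hostname -I                 # the VM's current IP address(es)
    grep k8smaster /etc/hosts   # the address the alias still points to
    # if the two differ, update /etc/hosts on every node and reboot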

    Regards,
    -Chris

  • Thanks for your support Chris, unfortunately I couldn't resolve the problem.
    As you suggested, I rebooted the VM and made sure the IP stayed the same:

    ifconfig
    docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
    inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
    ether 02:42:16:b0:bc:85 txqueuelen 0 (Ethernet)
    RX packets 0 bytes 0 (0.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 0 bytes 0 (0.0 B)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 192.168.56.101 netmask 255.255.255.0 broadcast 192.168.56.255
    inet6 fe80::3f84:fc47:13ad:39b6 prefixlen 64 scopeid 0x20<link>
    ether 08:00:27:85:a1:45 txqueuelen 1000 (Ethernet)
    RX packets 570 bytes 45717 (45.7 KB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 408 bytes 61231 (61.2 KB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
    inet6 fe80::ec48:42d9:9120:c0f0 prefixlen 64 scopeid 0x20<link>
    ether 08:00:27:55:0d:19 txqueuelen 1000 (Ethernet)
    RX packets 320 bytes 33779 (33.7 KB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 704 bytes 73413 (73.4 KB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 1000 (Local Loopback)
    RX packets 1015 bytes 94040 (94.0 KB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 1015 bytes 94040 (94.0 KB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    the /etc/hosts seems to be configured right:

    cat /etc/hosts
    127.0.0.1 localhost
    127.0.1.1 in7rud3r-VMUK8s
    192.168.56.101 k8smaster

    # The following lines are desirable for IPv6 capable hosts
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    Anyway, the service still won't start again.

    service kubelet status
    ● kubelet.service - kubelet: The Kubernetes Node Agent
         Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/kubelet.service.d
                 └─10-kubeadm.conf
         Active: activating (auto-restart) (Result: exit-code) since Thu 2021-04-01 18:17:00 CEST;>
           Docs: https://kubernetes.io/docs/home/
        Process: 11334 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $K>
       Main PID: 11334 (code=exited, status=255/EXCEPTION)

    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: goroutine 127 [runnable]:
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: k8s.io/kubernetes/vendor/k8s.io/client-go/tool>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: /workspace/src/k8s.io/kubernetes/_outp>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: created by k8s.io/kubernetes/vendor/k8s.io/cli>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: /workspace/src/k8s.io/kubernetes/_outp>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: goroutine 128 [runnable]:
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: k8s.io/kubernetes/vendor/k8s.io/client-go/tool>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: /workspace/src/k8s.io/kubernetes/_outp>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: created by k8s.io/kubernetes/vendor/k8s.io/cli>
    apr 01 18:17:00 in7rud3r-VMUK8s kubelet[11334]: /workspace/src/k8s.io/kubernetes/_outp>

    Thanks again for your support, Chris.

  • chrispokorni

    Hi @andrea.calvario,

    Based on the latest output, it seems that your VM IP addresses may overlap the Pod network managed by Calico, which by default is 192.168.0.0/16. This causes routing issues in your cluster, impacting most intra-cluster communication.
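
    The overlap can be confirmed in the calico.yaml used during the lab, assuming the file is still on the control-plane node:

    grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
    # shows the Pod pool, by default 192.168.0.0/16,
    # which contains the VM address 192.168.56.101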

    Regards,

    -Chris

  • Is there a solution? And how could it have happened?

  • chrispokorni

    Hi @andrea.calvario,

    The solution is simply to avoid the IP address overlap between the VMs and the Pod network. One approach is to provision your VMs so that they are not assigned IP addresses from the 192.168.0.0/16 subnet through DHCP (static private IPs can be used instead). The other is to provision a new cluster, modifying the calico.yaml and kubeadm-config.yaml files to use a different private IP network for Pods, one that does not overlap the VM IP addresses, as sketched below.
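
    As a minimal sketch of the second approach (the 10.200.0.0/16 range is only an illustrative choice; any private network the VMs do not use works), both files must agree on the new Pod subnet before kubeadm init is run:

    # kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    controlPlaneEndpoint: "k8smaster:6443"
    networking:
      podSubnet: 10.200.0.0/16

    # calico.yaml, in the calico-node container env, set the matching pool:
    # - name: CALICO_IPV4POOL_CIDR
    #   value: "10.200.0.0/16"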

    Regards,
    -Chris

  • Thanks Chris, I finally solved it by turning off the swap and restarting the kubelet service, which at last started to work. Hope I can now go ahead with my course, thanks for your support!
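
    For anyone who hits the same thing, the usual steps are (the kubelet refuses to start while swap is enabled, which matches the status=255 exits above):

    sudo swapoff -a                  # disable swap for the current boot
    # also comment out any swap entry in /etc/fstab, or it returns after a reboot
    sudo systemctl restart kubelet
    systemctl status kubelet         # should now report "active (running)"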
