Welcome to the Linux Foundation Forum!

kubectl get nodes : lookup k8scp on 127.0.0.53:53: server misbehaving

Hi,

After a few days I noticed the following behavior with kubectl; before that it was fine.
I noticed that my nodes' disks fill up after a few days, but increasing the partition size doesn't fix the "server misbehaving" error.

First question: what could cause the lack of space after a few days? Too many logs, or something else?

Second question: why is my cluster showing this behavior?
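For the first question, a sketch of commands for checking what is actually filling the disk on a node (assumes an Ubuntu node with systemd-journald and containerd; adjust paths for your setup):

```shell
# Overall usage per filesystem
df -h

# Largest directories under /var (container images and logs usually live here)
sudo du -xh --max-depth=2 /var | sort -rh | head -20

# Space used by the systemd journal
journalctl --disk-usage

# Reclaim space from old journal entries (e.g. keep only 200 MB)
sudo journalctl --vacuum-size=200M

# Remove unused container images pulled by containerd
sudo crictl rmi --prune
```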

Thanks for helping!

kubectl get nodes
E1203 10:54:05.642934 1524 memcache.go:265] couldn't get current server API group list: Get "https://k8scp:6443/api?timeout=32s": dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving
E1203 10:54:05.644613 1524 memcache.go:265] couldn't get current server API group list: Get "https://k8scp:6443/api?timeout=32s": dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving
E1203 10:54:05.645928 1524 memcache.go:265] couldn't get current server API group list: Get "https://k8scp:6443/api?timeout=32s": dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving
E1203 10:54:05.647162 1524 memcache.go:265] couldn't get current server API group list: Get "https://k8scp:6443/api?timeout=32s": dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving
E1203 10:54:05.648485 1524 memcache.go:265] couldn't get current server API group list: Get "https://k8scp:6443/api?timeout=32s": dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving
Unable to connect to the server: dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving

Broadcast message from ubuntu@cp (somewhere) (Tue Dec 3 10:54:39 2024):

Answers

  • chrispokorni

    Hi @aristideboisseau,

The lookup error could mean that the hosts file is not correctly populated, sometimes combined with firewalls blocking certain protocols or ports required by Kubernetes. This is typical of cloud VPC firewalls/security groups, and of local hypervisors as well.
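To illustrate the hosts-file check, a sketch for verifying that the k8scp alias resolves on the node (the IP address below is a placeholder; substitute your control-plane node's IP):

```shell
# Does the alias resolve at all? systemd-resolved (127.0.0.53) reports
# "server misbehaving" when the name is in neither /etc/hosts nor DNS.
getent hosts k8scp

# Check that /etc/hosts still contains the control-plane entry
grep k8scp /etc/hosts

# If missing, re-add it (10.0.0.10 is a placeholder - use your CP node's IP)
echo "10.0.0.10 k8scp" | sudo tee -a /etc/hosts

# Verify the API server is reachable again
curl -k https://k8scp:6443/version
```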

Size issues are common in local hypervisor setups when the VMs' virtual disks are improperly sized, especially when dynamically allocated (I typically give them a 20-30 GB virtual disk, fixed-size/fully pre-allocated).
With dynamic allocation, kubelet only "sees" and registers the initially allocated disk. Once that registered storage space fills up, kubelet panics. Increasing the storage space through the hypervisor alone (whether manually or dynamically) after the cluster is fully bootstrapped takes effect on kubelet only after the guest OS file system is also grown to use the new space - otherwise the storage increase is never registered by kubelet.
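As a sketch of that last step (growing the guest file system after the virtual disk was enlarged in the hypervisor; the device names and ext4 file system are assumptions, so check yours with lsblk first):

```shell
# Identify the disk and partition layout
lsblk

# Grow partition 1 of /dev/sda to fill the enlarged virtual disk
# (growpart is in the cloud-guest-utils package on Ubuntu)
sudo growpart /dev/sda 1

# Grow the ext4 file system to fill the partition
sudo resize2fs /dev/sda1

# Confirm the new size, then restart kubelet so it registers the space
df -h /
sudo systemctl restart kubelet
```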

    Regards,
    -Chris
