Failed to garbage collect required amount of images. Attempted to free 634456473 bytes, but only found 0 bytes eligible to free.

All my newly spun-up pods are stuck in the Pending state.
Describing the pods and nodes shows that disk space is full for the kube-system namespace, but I do not know how to get into these pods.
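For reference, the checks above boil down to commands like the ones below (pod, namespace, and node names are placeholders):

# List all Pending pods across namespaces
kubectl get pods -A --field-selector=status.phase=Pending
# Inspect events and scheduling failures for a specific pod
kubectl describe pod <pod-name> -n <namespace>
# Inspect node conditions (e.g. DiskPressure) and taints
kubectl describe node <node-name>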

Here is the df -h output

Filesystem Size Used Avail Use% Mounted on
tmpfs 193M 2.6M 190M 2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 9.8G 8.0G 1.3G 86% /
tmpfs 961M 0 961M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 1.8G 192M 1.5G 12% /boot
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/2281dfaccd6994c44ee1e1f463d8d2325896d9275ebb6a239fde2799cf41f64d/shm
shm 64M 4.0K 64M 1% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/9672c12f00d336a40d8332122033785a69143954c6387b54fbc04eda9243110c/shm
tmpfs 192M 12K 192M 1% /run/user/1000
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/d39692e7944f70089fb92049f58635d1917f4205640694f138507fb4d76555cd/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/1ed96b60fe54c9489fc228132931b5d235a14b1f6cd4df7e3a24bdf1ad199202/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/2d4eb3fed8665703196a44277121632013dcf2044cdb1a1a850ec9b44ef664df/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/cd6d087cd6b0a246ba91c36c27bce64596ae7ea46bc3a7b9277ce146c266cbc7/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/cc2af31bcf5f20f310c393770619f5254338f159823842917a58f75e534f01ea/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/ab57ec73233ae8c5abb1372e654ef6469660c5a53a7bdfde1a36281f19557ad1/shm

My VM is configured with a 20GB disk.

I saw this after running describe nodes:

Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Failed to garbage collect required amount of images. Attempted to free 649836953 bytes, but only found 0 bytes eligible to free.

And I saw this in the description of one of the pending pods:
0/2 nodes are available: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: }, 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
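Which node carries which taint can be confirmed on the nodes directly (node name is a placeholder):

# Show the nodes and their status
kubectl get nodes
# Show a node's taints and its DiskPressure condition
kubectl describe node <node-name> | grep -iE 'taints|diskpressure'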

This is blocking me from proceeding with any further labs, and I cannot find a useful solution for it on Google.

Answers

  • Posts: 2,473

    Hi @goldone,

    Can you provide more specific details about your infrastructure? Are you on cloud VMs or local VMs? What cloud provider or hypervisor are you using? What are the sizes of your VMs?
    There seems to be some discrepancy between the LV size and the size declared for the VM disk.
    An improperly sized virtual disk adversely impacts the cluster's bootstrapping. Also, I highly recommend fully pre-provisioned v-disks (no dynamic provisioning).
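    A quick way to confirm where the space went, as a sketch assuming the default Ubuntu LVM layout (where the root LV often uses only part of the volume group):

    # Physical disks and partition layout
    lsblk
    # Volume group totals and free extents
    sudo vgs
    # Logical volume sizes
    sudo lvs
    # Filesystem usage on the root LV
    df -h /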

    Regards,
    -Chris

  • Posts: 2

    Hi Chris

    I am running VMs on VMware, and you are right: I am using dynamic provisioning with the disk set to 20GB, but it is actually only using 8GB. So I need to change the settings so the VM commits the full 20GB, right?

  • Posts: 2,473

    Hi @goldone,

    You are correct; a fully pre-allocated disk should prevent future kubelet panics. Remember that kubelet does not see outside of its environment - meaning it has no knowledge of how the hypervisor manages disk space. Kubelet only sees the currently allocated amount, and when that is not sufficient, it panics.
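    Once the virtual disk is fully allocated (or grown), the logical volume and filesystem typically need to be extended as well. A minimal sketch, assuming the ext4 root LV from your df output and free space available in the volume group:

    # Grow the root LV into the remaining free space in the volume group
    sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
    # Grow the ext4 filesystem to match the new LV size
    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
    # Optionally reclaim space right away by removing unused container images
    sudo crictl rmi --prune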

    Regards,
    -Chris
