Deleted pods stuck in "Terminating" on LAB K8s v1.30 on Ubuntu 24.04 [workaround]

I"m sharing this in case anyone else stumbles on the same issue.
New lab built on-prem on 3 Virtualbox machines running K8s v1.30 on top of Ubuntu 24.04.

Going through Lab 3.4 I noticed that every pod I deleted with kubectl remained stuck in "Terminating" status (some for more than an hour).
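
For a quick way to spot the zombies, plain kubectl is enough (the pod name is just the placeholder used below):

kubectl get pods -A | grep Terminating    # list every pod stuck in Terminating
kubectl get pod xxxx -o jsonpath='{.metadata.deletionTimestamp}'    # non-empty means deletion was requested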

Running kubectl describe pod xxxx on one of the "zombies", I saw this in the events:

  Normal   Killing        44s (x5 over 3m41s)   kubelet            Stopping container nginx
  Warning  FailedKillPod  14s (x5 over 3m11s)   kubelet            error killing pod: [failed to "KillContainer" for "nginx" with KillContainerError: "rpc error: code = Unknown desc = failed to kill container \"6bfa5cb54802b7548e2b60a5ba251c68ed05e322bd896a1d204728f7716b1c48\": unknown error after kill: runc did not terminate successfully: exit status 1: unable to signal init: permission denied\n: unknown", failed to "KillPodSandbox" for "98506bfe-1b02-4ac6-a608-78427fbc7e00" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to stop container \"6bfa5cb54802b7548e2b60a5ba251c68ed05e322bd896a1d204728f7716b1c48\": failed to kill container \"6bfa5cb54802b7548e2b60a5ba251c68ed05e322bd896a1d204728f7716b1c48\": unknown error after kill: runc did not terminate successfully: exit status 1: unable to signal init: permission denied\n: unknown"]
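
The "unable to signal init: permission denied" from runc was the real hint. To check whether the same thing is biting you, see if an AppArmor profile for runc is loaded on the node (run as root; apparmor_status ships with Ubuntu's apparmor package):

apparmor_status | grep runc    # a 'runc' line here means the profile is loaded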

A quick search online led me to this issue related to AppArmor: https://github.com/containrrr/watchtower/issues/1891.

The workaround at the end of that discussion (disable the AppArmor profile for runc, as per the Ubuntu docs) fixed the issue for me. Sharing it here in case someone else is stuck like I was:

mkdir /etc/apparmor.d/disable                          # only if the 'disable' directory is missing
ln -s /etc/apparmor.d/runc /etc/apparmor.d/disable/    # mark the profile as disabled across reboots
apparmor_parser -R /etc/apparmor.d/runc                # unload the profile from the kernel now
systemctl restart containerd
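
To confirm the profile is gone and deletion behaves again, something like this should do ('apparmor-test' is just a throwaway pod name):

apparmor_status | grep runc                # should now print nothing
kubectl run apparmor-test --image=nginx
kubectl delete pod apparmor-test           # should return instead of hanging in Terminating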

CAVEAT: This disables AppArmor confinement for runc, so it's not necessarily the recommended solution on a prod cluster; for my lab I just needed a way to move forward.
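
If you ever want to undo the workaround, reversing the steps (per the same Ubuntu doc) should work:

rm /etc/apparmor.d/disable/runc            # drop the 'disabled' marker
apparmor_parser -a /etc/apparmor.d/runc    # load the profile again
systemctl restart containerd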

If anyone has a better/more elegant solution, please share.

Comments

  • Thank you for sharing this.

    I had four different cluster setups, using GCP, AWS, Azure, and Linode. I followed the lab guide exactly as written and everything worked without a hitch: pods deleted cleanly and all the other steps went smoothly.

    Perhaps the challenges you're facing are specific to setups on VirtualBox? Just a thought.

  • root@cp:~# kubectl scale deployment nginx --replicas=3
    deployment.apps/nginx scaled

    root@cp:~# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-bf5d5cf98-79z8r   1/1     Running   0          26s
    nginx-bf5d5cf98-qmcfm   1/1     Running   0          91s
    nginx-bf5d5cf98-z7n6q   1/1     Running   0          26s

    root@cp:~# kubectl delete pod nginx-bf5d5cf98-79z8r
    pod "nginx-bf5d5cf98-79z8r" deleted

    root@cp:~# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-bf5d5cf98-qmcfm   1/1     Running   0          101s
    nginx-bf5d5cf98-wff7j   1/1     Running   0          3s
    nginx-bf5d5cf98-z7n6q   1/1     Running   0          36s

    root@cp:~# kubectl delete deployments nginx
    deployment.apps "nginx" deleted

    root@cp:~# kubectl get pods,deploy
    No resources found in default namespace.

  • admusin Posts: 7

    It may be, but I'm not convinced VirtualBox plays such a big role inside the guest OS itself.

    These VMs had previously been used successfully for another K8s lab (for a different course) based on Ubuntu 22.04 and K8s v1.29.1. It all worked without a hitch.

    Then, once I read the specs for this lab (LFS258), I upgraded all of them to Ubuntu 24.04 and completely wiped all K8s artifacts from them, to prepare a clean slate.
    So, as far as I can tell, the only differences are:

    • Ubuntu 24.04 instead of 22.04
    • K8s v1.30.1 instead of v1.29.1 as the starting point.

    Besides, the error I hit matches similar issues other users have been reporting from Ubuntu 23.10 onwards, and once I disabled the AppArmor profile for runc the issue went away, so that profile seems to be the root of the problem.

    Maybe the cloud instances of Ubuntu 24.04 you're using are better tuned for K8s out of the box?
    If you could share the output of sudo apparmor_status from one of those machines, we might be able to tell.
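
    For that comparison, filtering the (long) output for the runtime profile should be enough, e.g.:

    sudo apparmor_status | grep runc    # empty output would mean no runc profile is loaded there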

  • chrispokorni Posts: 2,340

    Hi @admusin and @fazlur.khan,

    While I have not yet tested the content of this recent release of the training material (Kubernetes v1.31.1) on Ubuntu 24.04 LTS, I have noticed that not all images of the same OS release/version are equal: the official image from the source (Canonical, in this case) may differ slightly from the ones maintained and made available by the cloud service providers.

    Regards,
    -Chris
