
gVisor pod fails to create pod sandbox

marcozu Posts: 1
edited January 17 in LFS260 Class Forum

Hello, I've followed container-setup.txt to set up the environment:

student@cp:~/s03$ cat /etc/crictl.yaml
runtime-endpoint: "unix:///run/containerd/containerd.sock"
image-endpoint: "unix:///run/containerd/containerd.sock"
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false
student@cp:~/s03$ cat /etc/containerd/config.toml
version = 2
shim_debug = true
runtime_type = "io.containerd.runc.v2"
runtime_type = "io.containerd.runsc.v1"

I then used the YAML files to create the RuntimeClass and a gVisor pod, but the pod is stuck in "ContainerCreating":
Warning FailedCreatePodSandBox 43s (x71 over 15m) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "runsc" is configured

I wonder whether my environment is set up correctly or whether this is a bug:
student@cp:~/s03$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
Active: active (running) since Mon 2024-01-15 12:38:45 UTC; 1 day 23h ago
Docs: https://kubernetes.io/docs/home/
Main PID: 4998 (kubelet)
Tasks: 12 (limit: 9511)
Memory: 57.3M
CGroup: /system.slice/kubelet.service
└─4998 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubele>

Jan 17 11:34:43 cp kubelet[4998]: W0117 11:34:43.150934 4998 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.create>
Jan 17 11:34:43 cp kubelet[4998]: "Addr": "/var/run/containerd/containerd.sock",
Jan 17 11:34:43 cp kubelet[4998]: "ServerName": "/var/run/containerd/containerd.sock",
Jan 17 11:34:43 cp kubelet[4998]: "Attributes": null,
Jan 17 11:34:43 cp kubelet[4998]: "BalancerAttributes": null,
Jan 17 11:34:43 cp kubelet[4998]: "Type": 0,
Jan 17 11:34:43 cp kubelet[4998]: "Metadata": null
Jan 17 11:34:43 cp kubelet[4998]: }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/containerd/cont>
Jan 17 11:34:43 cp kubelet[4998]: E0117 11:34:43.167792 4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc >
Jan 17 11:34:43 cp kubelet[4998]: E0117 11:34:43.167898 4998 eviction_manager.go:258] "Eviction manager: failed to get summary stat>
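
The "no runtime for \"runsc\" is configured" error means containerd's CRI plugin has no runsc entry registered. One way to check what containerd actually loaded (a sketch, assuming crictl and jq are installed and containerd is running) is:

```shell
# Restart containerd so any config.toml edits take effect
sudo systemctl restart containerd
# List the runtimes the CRI plugin registered; "runsc" should appear
sudo crictl info | jq '.config.containerd.runtimes | keys'
```

If runsc is missing from that list, the kubelet will report exactly this FailedCreatePodSandBox error.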


  • I also have the same problem :/

  • I found a solution, although I'm using a preexisting cluster, so these instructions may not be valid for everyone:

    1. Install gVisor following the "Install latest release" instructions on each worker node:
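
    At the time of writing, the "Install latest release" steps on gvisor.dev look roughly like this (a sketch only; check the current gVisor docs, since the release URL and file names may change):

    ```shell
    (
      set -e
      ARCH=$(uname -m)
      URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}
      # Download runsc, the containerd shim, and their checksums
      wget ${URL}/runsc ${URL}/runsc.sha512 \
        ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
      sha512sum -c runsc.sha512 -c containerd-shim-runsc-v1.sha512
      rm -f *.sha512
      chmod a+rx runsc containerd-shim-runsc-v1
      sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
    )
    ```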


    2. Configure containerd following the instructions under "Configure containerd", including restarting the containerd service:
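
    The "Configure containerd" section of the gVisor docs amounts to something like the following (a sketch; the plugin table names assume containerd config version 2):

    ```shell
    cat <<EOF | sudo tee /etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.runtime.v1.linux"]
      shim_debug = true
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
    sudo systemctl restart containerd
    ```

    Note that bare `runtime_type` lines only take effect inside their `[plugins...]` table headers.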


    3. Then create the RuntimeClass with this command:

    cat <<EOF | kubectl apply -f -
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF

    4. And create a pod that uses that RuntimeClass:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
    EOF

    In my case, the problem was that I had forgotten to add the following lines to /etc/containerd/config.toml and then restart the containerd service:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
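
    After restarting containerd, the pod should leave "ContainerCreating". One way to confirm it is really running under gVisor (per the gVisor docs) is to read the pod's kernel log, which comes from gVisor's emulated kernel rather than the host:

    ```shell
    # Inside a gVisor sandbox, dmesg shows gVisor's own boot banner
    kubectl exec nginx-gvisor -- dmesg | head -n 3
    ```

    If the pod were running under plain runc, dmesg would instead show the host kernel's log (or be denied).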

