
Lab 3.4 step 16, pod pending

After running step 16, the pod is not running; it stays Pending.
Could you guys help me fix it?

student@master:~$ kubectl get deploy,pod
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   0/1     1            0           32m

NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-d46f5678b-ttzhl   0/1     Pending   0          16m
pod/nginx-f89759699-t9qr9   0/1     Pending   0          32m

thanks


Comments

  • Hi @Jingze_Ma,

    Running the following command may reveal why your pods are in a pending state:

    kubectl describe pod <pending-pod-name>

At the bottom of the output you will see an Events section; that is where you may find those clues.
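
    For example, to list only the pods stuck in Pending and then inspect one of them (the pod name is taken from your listing above):

    # Show only pods whose phase is Pending, then check one pod's events:
    kubectl get pods --field-selector=status.phase=Pending
    kubectl describe pod nginx-d46f5678b-ttzhl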

    Regards,
    -Chris

  • @chrispokorni said:
    Running the following command may reveal why your pods are in a pending state:

    kubectl describe pod <pending-pod-name>

    Hi @chrispokorni,
    thanks for your reply. I ran that command, and in the Events section I saw:

    Events:
      Type     Reason            Age                 From               Message
      ----     ------            ----                ----               -------
      Warning  FailedScheduling  6s (x880 over 21h)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

    Could you please advise me on what I should do?
    thanks

    Jingze

  • @Jingze_Ma said:
    Could you please advise me on what I should do?

    @chrispokorni,
    fixed, thanks
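
    For anyone who lands here with the same FailedScheduling event: on a single-node cluster the usual fix is to remove the control-plane taint from the master so regular pods can schedule there. A sketch, assuming the v1.18-era taint key shown in the event above:

    # Remove the taint from every node that carries it;
    # the trailing "-" means "remove this taint":
    kubectl taint nodes --all node-role.kubernetes.io/master-
    # Alternatively, join a worker node and the scheduler will place the pod there.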

  • I hope it's OK that I append my problem here, as my initial problem is exactly the same.

    student@master:~$ kubectl get deployments,pod
    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx   0/1     1            0           29m

    NAME                        READY   STATUS              RESTARTS   AGE
    pod/nginx-f89759699-mbt5v   0/1     ContainerCreating   0          29m

    This is an overview of all my pods and nodes:

    student@master:~$ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
    default       nginx-f89759699-kgvkc                      0/1     ContainerCreating   0          4m37s
    kube-system   calico-kube-controllers-7dbc97f587-w4pkt   1/1     Running             1          13h
    kube-system   calico-node-5ts9j                          0/1     Completed           1          3h7m
    kube-system   calico-node-hj89f                          0/1     Running             1          13h
    kube-system   coredns-66bff467f8-j6xtn                   1/1     Running             1          14h
    kube-system   coredns-66bff467f8-lk65s                   1/1     Running             1          14h
    kube-system   etcd-master                                1/1     Running             1          14h
    kube-system   kube-apiserver-master                      1/1     Running             1          14h
    kube-system   kube-controller-manager-master             1/1     Running             1          14h
    kube-system   kube-proxy-qt6bm                           1/1     Running             1          14h
    kube-system   kube-proxy-r6f77                           0/1     Error               1          3h7m
    kube-system   kube-scheduler-master                      1/1     Running             1          14h

    student@master:~$ kubectl get nodes
    NAME     STATUS   ROLES    AGE    VERSION
    master   Ready    master   15h    v1.18.1
    worker   Ready    <none>   4h1m   v1.18.1

    When I check the pod 'nginx-f89759699-mbt5v' for errors, I see the following event:
    Warning  FailedCreatePodSandBox  3m17s (x117 over 28m)  kubelet, worker  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "nginx-f89759699-mbt5v": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"

    I'm not sure what the error message means. Can anyone help me with that?

    (I followed the lab guide to set up the Kubernetes cluster. I'm on a GCE environment, running Ubuntu 18.04.)

  • Hi @ctschacher,

    It seems that some of your control plane pods are not ready: the two calico-node pods and one of the kube-proxy pods. You could run kubectl -n kube-system describe pod <pod-name> for the two calico-node pods and the kube-proxy pod, and the Events section of each output may tell you why they are not ready.

    Then you could attempt to delete the two calico pods and allow the controller to re-create them for you - this method would ideally start them back up and resolve any glitches or minor dependency issues.
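
    For example (pod names taken from your listing above):

    # Inspect the not-ready pods; the Events section is at the bottom of each output:
    kubectl -n kube-system describe pod calico-node-5ts9j
    kubectl -n kube-system describe pod kube-proxy-r6f77
    # Delete the calico pods; their DaemonSet controller will re-create them:
    kubectl -n kube-system delete pod calico-node-5ts9j calico-node-hj89f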

    Regards,
    -Chris

  • OK, it seems the problem is solved. All pods are READY 1/1 now. Thank you for your help!
    I learned a lesson: always check the events for every faulty pod. It sounds so simple, and yet I had not fully embraced it.

    One Calico pod said:

    Events:
      Type    Reason          Age                       From             Message
      ----    ------          ----                      ----             -------
      Normal  SandboxChanged  2m35s (x2185 over 7h55m)  kubelet, worker  Pod sandbox changed, it will be killed and re-created.

    That reminded me that I had played around with different start options for the Docker runtime on my worker node. After I changed them to the same settings as on my master node and deleted the pod (from the master node) so it would be re-created, it finally worked.
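
    For anyone hitting the same "cgroup should be a valid slice" error: it usually points to a cgroup-driver mismatch between Docker and the kubelet on that node. A sketch of aligning them, assuming Docker is the runtime and the master uses the systemd driver (use cgroupfs instead if that is what your master runs):

    # On the worker, set Docker's cgroup driver to match the master's:
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker
    # Then delete the stuck pod so its Deployment re-creates it:
    kubectl delete pod nginx-f89759699-mbt5v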

