Lab 3.4 step 16, pod pending


After running step 16, the pod is not running; it stays in Pending.
Could you please help me fix it?

student@master:~$ kubectl get deploy,pod
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   0/1     1            0           32m

NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-d46f5678b-ttzhl   0/1     Pending   0          16m
pod/nginx-f89759699-t9qr9   0/1     Pending   0          32m

thanks

Comments

  • chrispokorni

    Hi @Jingze_Ma,

    Running the following command may reveal why your pods are in a pending state:

    kubectl describe pod <pending-pod-name>

    At the bottom of the output you will see an Events section; that is where you may find those clues.
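
    For example, with the pending pod from your output above (the exact pod name on your cluster will differ), plus a time-sorted view of recent cluster events:

    kubectl describe pod nginx-d46f5678b-ttzhl
    kubectl get events --sort-by=.metadata.creationTimestamp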

    Regards,
    -Chris

  • Jingze_Ma

    @chrispokorni said:
    Hi @Jingze_Ma,

    Running the following command may reveal why your pods are in a pending state:

    kubectl describe pod <pending-pod-name>

    At the bottom of the output you will see an Events section; that is where you may find those clues.

    Regards,
    -Chris

    Hi @chrispokorni,
    Thanks for your reply. I ran that command, and in the Events section I see:

    Events:
      Type     Reason            Age                 From               Message
      ----     ------            ----                ----               -------
      Warning  FailedScheduling  6s (x880 over 21h)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

    Could you please advise what I should do?
    Thanks

    Jingze

  • Jingze_Ma

    @chrispokorni,
    fixed, thanks
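
    For anyone else who hits the same FailedScheduling taint message: the usual causes are that the worker node never joined the cluster, or that the cluster only has a control-plane node and it still carries the master taint. A sketch of the common checks and remedies (node and taint names taken from the output above, adjust as needed):

    # check whether a worker has joined and whether the master node still carries the taint
    kubectl get nodes
    kubectl describe node master | grep -i taint

    # either join a worker with the kubeadm join command from the lab, or
    # allow regular pods on the control-plane node by removing the taint
    kubectl taint nodes master node-role.kubernetes.io/master-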

  • ctschacher

    I hope it's ok that I append my problem here as my initial problem is exactly the same.

    student@master:~$ kubectl get deployments,pod
    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx   0/1     1            0           29m
    
    NAME                        READY   STATUS              RESTARTS   AGE
    pod/nginx-f89759699-mbt5v   0/1     ContainerCreating   0          29m
    

    This is an overview of all my pods:

    student@master:~$ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
    default       nginx-f89759699-kgvkc                      0/1     ContainerCreating   0          4m37s
    kube-system   calico-kube-controllers-7dbc97f587-w4pkt   1/1     Running             1          13h
    kube-system   calico-node-5ts9j                          0/1     Completed           1          3h7m
    kube-system   calico-node-hj89f                          0/1     Running             1          13h
    kube-system   coredns-66bff467f8-j6xtn                   1/1     Running             1          14h
    kube-system   coredns-66bff467f8-lk65s                   1/1     Running             1          14h
    kube-system   etcd-master                                1/1     Running             1          14h
    kube-system   kube-apiserver-master                      1/1     Running             1          14h
    kube-system   kube-controller-manager-master             1/1     Running             1          14h
    kube-system   kube-proxy-qt6bm                           1/1     Running             1          14h
    kube-system   kube-proxy-r6f77                           0/1     Error               1          3h7m
    kube-system   kube-scheduler-master                      1/1     Running             1          14h
    
    student@master:~$ kubectl get nodes
    NAME     STATUS   ROLES    AGE    VERSION
    master   Ready    master   15h    v1.18.1
    worker   Ready    <none>   4h1m   v1.18.1
    

    When I check for errors of my pod 'nginx-f89759699-mbt5v' I see the following events:
    Warning FailedCreatePodSandBox 3m17s (x117 over 28m) kubelet, worker Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "nginx-f89759699-mbt5v": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"

    I'm not sure what the error message means. Can anyone help me with that?

    (I followed the lab guide to set up the Kubernetes cluster. I'm on a GCE environment, running Ubuntu 18.04.)

  • chrispokorni

    Hi @ctschacher,

    It seems that some of your kube-system pods - the two calico pods and one of the kube-proxy pods - are not ready. You could run kubectl -n kube-system describe pod <pod-name> for the two calico pods and the kube-proxy pod, and from the Events section of each output you may be able to determine why they are not ready.

    Then you could attempt to delete the two calico pods and allow the controller to re-create them for you - this method would ideally start them back up and resolve any glitches or minor dependency issues.
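
    For example, with the pod names from the listing above (they will change as the pods get re-created):

    kubectl -n kube-system delete pod calico-node-5ts9j calico-node-hj89f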

    Regards,
    -Chris

  • ctschacher

    Ok, it seems that the problem is solved. All pods show READY 1/1 now. Thank you for your help!
    I learnt a lesson: just check the events for every faulty pod. It sounds so simple, and yet I had not fully embraced it.

    One Calico pod said:

    Events:
      Type    Reason          Age                       From             Message
      ----    ------          ----                      ----             -------
      Normal  SandboxChanged  2m35s (x2185 over 7h55m)  kubelet, worker  Pod sandbox changed, it will be killed and re-created.
    

    That reminded me that I had played around with different start options for the Docker runtime on my worker node. After I changed them to the same settings as on my master node and deleted the pod (from the master node) so it would be re-created, it was finally working.
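
    In case it helps anyone else, here is a sketch of what "the same settings on both nodes" can look like, assuming the systemd cgroup driver setup that the error message points at (your exact daemon.json may differ):

    # compare the Docker cgroup driver on master and worker
    sudo docker info | grep -i "cgroup driver"

    # /etc/docker/daemon.json, kept identical on both nodes
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

    # restart the runtime and kubelet on the node that was changed
    sudo systemctl restart docker
    sudo systemctl restart kubelet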
