Welcome to the Linux Foundation Forum!

Lab 9.1 - deployment doesn't create pods!

Hi all,
here I am again with an issue on the course.

Following the lab session, I wrote the YAML file for the deployment (nginx-one.yaml) and created it using the command

kubectl create -f nginx-one.yaml

after creating the namespace

The next step is to check the status of the pods using the command kubectl -n accounting get pods.
The output in the lab shows a list with two different pods in Pending.

My output is "No resources found in accounting namespace.".

I tried to check the deployment I created, but cannot see any issues...

kubectl -n accounting describe deployments.apps nginx-one
Name:                   nginx-one
Namespace:              accounting
CreationTimestamp:      Fri, 09 Apr 2021 12:35:36 +0200
Labels:                 system=secondary
Annotations:
Selector:               system=secondary
Replicas:               2 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  system=secondary
  Containers:
   nginx:
    Image:      nginx:1.16.1
    Port:       8080/TCP
    Host Port:  0/TCP
    Environment:
    Mounts:
  Volumes:
OldReplicaSets:
NewReplicaSet:
Events:
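For context, my nginx-one.yaml looks roughly like this (reconstructed from the describe output above; the nodeSelector entry is my reading of the LFD259 lab, so treat it as an assumption rather than the lab text verbatim):

```shell
# Hypothetical reconstruction of the lab's nginx-one.yaml; the nodeSelector
# line is an assumption based on the course, not shown in the output above.
cat <<'EOF' > nginx-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-one
  labels:
    system: secondary
  namespace: accounting
spec:
  selector:
    matchLabels:
      system: secondary
  replicas: 2
  template:
    metadata:
      labels:
        system: secondary
    spec:
      nodeSelector:
        system: secondary     # pods stay Pending until a node carries this label
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 8080
          protocol: TCP
EOF
```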

Can someone give me a pointer to understand what's happening?

Thanks in advance!

Andrea C.

Comments

  • serewicz (Posts: 1,000)

    Hello,

    Please let me know what version of the course you are looking at. When I look at the exercise, the next command is not to look at the pods but rather at the nodes. Step two is kubectl get nodes --show-labels in my book. If you were to follow the steps in the book as written, you would realize that the pods are not running yet.

    Please follow the steps and let us know if you continue to have issues.

    Regards,

  • chrispokorni (Posts: 2,346)

    Hi @andrea.calvario,

    The output of your describe command above shows that there are no replicas available (2 desired, 0 available). At this point your pods cannot be scheduled (expected behavior) because of a missing node label, which gets assigned in a following step.
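    For reference, the labeling step that comes later in the lab looks roughly like this (a sketch only; the node name is taken from the outputs in this thread, and the label from your deployment's selector):

```shell
# Give the worker node the label the deployment's nodeSelector is waiting for
# (node name from this thread; adjust to your own cluster)
kubectl label node in7rud3r-vmuk8s-n2 system=secondary

# The scheduler should then place the Pending pods on that node
kubectl -n accounting get pods -o wide
```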

    Regards,
    -Chris

  • Hi Serewicz, and thanks for the support.
    I'm currently on "Kubernetes for Developers (LFD259)", Lab 9.1: Deploy A New Service.
    That is correct, but I passed that step without any problem; there was no particular indication about it. Let me describe the steps in the exercise book along with my execution:
    1. vim nginx-one.yaml
    I wrote the YAML file for nginx-one, copying it from the book.
    2. kubectl get nodes --show-labels
    This is my output; the book gives nothing particular to do after this step, only to take a look at the output (which the book omits):
    NAME                 STATUS                     ROLES                  AGE   VERSION   LABELS
    in7rud3r-vmuk8s      Ready,SchedulingDisabled   control-plane,master   86d   v1.20.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
    in7rud3r-vmuk8s-n2   NotReady                                          58d   v1.20.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s-n2,kubernetes.io/os=linux

    3. kubectl create -f nginx-one.yaml
    Same output as the book
    4. kubectl create ns accounting
    Same output as the book
    5. kubectl create -f nginx-one.yaml
    Same output as the book
    6. kubectl -n accounting get pods
    The book's output shows two pods, but in my execution I have no pods and receive the message "No resources found in accounting namespace."

    Thanks for your support Serewicz!

  • Thanks for your answer Chris,

    but in the exercise book, the next step after the one in my previous comment is "kubectl -n accounting describe pod nginx-one-74dd9d578d-fcpmv", which I can't execute, because I have no pods to run it on.

    Am I missing something?

  • chrispokorni (Posts: 2,346)
    edited April 2021

    Hi @andrea.calvario,

    Can you please provide the output of kubectl get namespaces?

    Thanks,
    -Chris

  • Sure Chris,

    kubectl get namespaces
    NAME              STATUS   AGE
    accounting        Active   5h31m
    default           Active   86d
    kube-node-lease   Active   86d
    kube-public       Active   86d
    kube-system       Active   86d
    low-usage-limit   Active   57d
    small             Active   2d23h

    The accounting namespace is the one I created this morning while running the exercise!

  • chrispokorni (Posts: 2,346)

    Thanks.

    I also noticed that your control-plane node shows scheduling disabled, while the worker is not ready - part of this may be the result of missed steps when bootstrapping the Kubernetes cluster from the first lab exercise.

    Regards,
    -Chris

  • It's strange; I completed all the other labs more or less without particular problems... could this be caused by some misconfiguration of the VM I used?

    Do you know the exact step I may have missed?

  • Sorry Chris: if, as you say, scheduling is disabled on my node, can you tell me the commands to re-enable it, so I can check whether that fixes things?

  • chrispokorni (Posts: 2,346)

    Hi @andrea.calvario,

    I would recommend revisiting LFS258 - Kubernetes Fundamentals - Lab Exercise 3.3, steps 3 and 4, to ensure all the taints are removed from the control-plane node. If multiple taints are found on the control-plane node, repeat the steps until all taints are removed.
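    As a sketch of those steps: removing a taint requires a trailing dash after the taint key, and a node in SchedulingDisabled state can also be cleared with kubectl uncordon (the node name below is taken from this thread):

```shell
# List current taints on all nodes
kubectl describe node | grep -i taint

# Remove a taint from every node: note the trailing '-' after the key;
# without it kubectl reports "error: at least one taint update is required"
kubectl taint nodes --all node.kubernetes.io/unschedulable-
kubectl taint nodes --all node.kubernetes.io/unreachable-

# A cordoned node (SchedulingDisabled) can also be re-enabled directly
kubectl uncordon in7rud3r-vmuk8s
```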

    Then list all your pods again with the kubectl get pods --all-namespaces command and also list your nodes with kubectl get nodes --show-labels.

    Regards,
    -Chris

  • Thanks Chris, I'll try it as soon as possible and keep you updated!

  • Hi Chris. I tried to execute steps 3 and 4 of Lab 3.3 as you suggested; this is the output:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe node | grep -i taint
    Taints:             node.kubernetes.io/unschedulable:NoSchedule
    Taints:             node.kubernetes.io/unreachable:NoExecute
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl taint nodes --all node.kubernetes.io/unschedulable
    error: at least one taint update is required
    in7rud3r@in7rud3r-VMUK8s:~$ kubectl taint nodes --all node.kubernetes.io/unreachable
    error: at least one taint update is required

    It seems that I can't remove the taints.
    The list of pods from all the namespaces is the following:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
    NAMESPACE         NAME                                       READY   STATUS        RESTARTS   AGE
    default           hog-9f86b59cb-khkqw                        0/1     Terminating   0          60d
    default           nginx-6696fb8664-w9hkq                     0/1     Pending       0          6d14h
    kube-system       calico-kube-controllers-7dbc97f587-6c6f2   0/1     Pending       0          60d
    kube-system       calico-kube-controllers-7dbc97f587-zf5dm   0/1     Terminating   0          61d
    kube-system       calico-node-cl7r5                          0/1     Running       18         61d
    kube-system       calico-node-hnb98                          1/1     Running       56         89d
    kube-system       coredns-74ff55c5b-7n8js                    0/1     Pending       0          60d
    kube-system       coredns-74ff55c5b-b7q4j                    0/1     Terminating   0          61d
    kube-system       coredns-74ff55c5b-t2lxh                    0/1     Pending       0          60d
    kube-system       coredns-74ff55c5b-z7c6j                    0/1     Terminating   0          61d
    kube-system       etcd-in7rud3r-vmuk8s                       1/1     Running       4          61d
    kube-system       kube-apiserver-in7rud3r-vmuk8s             1/1     Running       31         61d
    kube-system       kube-controller-manager-in7rud3r-vmuk8s    1/1     Running       9          61d
    kube-system       kube-proxy-7vlqx                           1/1     Running       0          61d
    kube-system       kube-proxy-qczsp                           1/1     Running       3          61d
    kube-system       kube-scheduler-in7rud3r-vmuk8s             1/1     Running       9          61d
    low-usage-limit   limited-hog-7c5ddc8c74-rndkj               0/1     Pending       0          60d

    As expected, nothing again from the accounting namespace.
    The last command you asked me to execute gives this output:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get nodes --show-labels
    NAME                 STATUS                     ROLES                  AGE   VERSION   LABELS
    in7rud3r-vmuk8s      Ready,SchedulingDisabled   control-plane,master   89d   v1.20.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
    in7rud3r-vmuk8s-n2   NotReady                                          61d   v1.20.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s-n2,kubernetes.io/os=linux

    Does this give us any other clue about the issue and how to resolve it?

    Thanks again for your support!

    Andy

  • chrispokorni (Posts: 2,346)

    Hi @andrea.calvario,

    If most lab exercises worked without any issues, at this point your cluster may no longer have enough resources; as a result, your workloads are stuck in Terminating or Pending states, as are the calico and coredns plugin agents.

    You may be able to run the top command on your control-plane and worker nodes separately to see which processes are using the most node resources.

    Also, what are the sizes of your nodes (CPU, MEM, disk) ?

    Also, running a kubectl describe pod <pod-name> for a few pods that are in pending or terminating state, what error(s) do you see in the events section of the output?
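    As a sketch, those checks could look like this (the pod name and namespace are placeholders):

```shell
# Busiest processes on a node; run on the control-plane and the worker separately
top -b -n 1 | head -15

# What the scheduler thinks each node can offer
kubectl describe nodes | grep -A 5 -i allocatable

# Scheduling errors usually appear at the bottom, in the Events section
kubectl -n accounting describe pod <pod-name> | grep -A 10 Events
```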

    Regards,
    -Chris

  • Hi @chrispokorni,

    this morning a strange thing happened: following your suggestions, I ran the command again to list the pods in all namespaces, in order to "describe" those in Pending or Terminating, and to my surprise the two nginx-one pods from the Lab 9.1 I was doing last week have appeared (this doesn't solve my problem though, because they are still Pending).
    Note that, as usual, yesterday after carrying out the checks you suggested I hibernated the VM; in fact, yesterday's commands are still in the shell history.

    Command launched yesterday

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
    NAMESPACE         NAME                                       READY   STATUS        RESTARTS   AGE
    default           hog-9f86b59cb-khkqw                        0/1     Terminating   0          60d
    default           nginx-6696fb8664-w9hkq                     0/1     Pending       0          6d14h
    kube-system       calico-kube-controllers-7dbc97f587-6c6f2   0/1     Pending       0          60d
    kube-system       calico-kube-controllers-7dbc97f587-zf5dm   0/1     Terminating   0          61d
    kube-system       calico-node-cl7r5                          0/1     Running       18         61d
    kube-system       calico-node-hnb98                          1/1     Running       56         89d
    kube-system       coredns-74ff55c5b-7n8js                    0/1     Pending       0          60d
    kube-system       coredns-74ff55c5b-b7q4j                    0/1     Terminating   0          61d
    kube-system       coredns-74ff55c5b-t2lxh                    0/1     Pending       0          60d
    kube-system       coredns-74ff55c5b-z7c6j                    0/1     Terminating   0          61d
    kube-system       etcd-in7rud3r-vmuk8s                       1/1     Running       4          61d
    kube-system       kube-apiserver-in7rud3r-vmuk8s             1/1     Running       31         61d
    kube-system       kube-controller-manager-in7rud3r-vmuk8s    1/1     Running       9          61d
    kube-system       kube-proxy-7vlqx                           1/1     Running       0          61d
    kube-system       kube-proxy-qczsp                           1/1     Running       3          61d
    kube-system       kube-scheduler-in7rud3r-vmuk8s             1/1     Running       9          61d
    low-usage-limit   limited-hog-7c5ddc8c74-rndkj               0/1     Pending       0          60d

    Command launched this morning

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
    NAMESPACE         NAME                                       READY   STATUS        RESTARTS   AGE
    accounting        nginx-one-fb4bdb45d-dmlkd                  0/1     Pending       0          18h
    accounting        nginx-one-fb4bdb45d-dr9m4                  0/1     Pending       0          18h
    default           hog-9f86b59cb-khkqw                        0/1     Terminating   0          61d
    default           nginx-6696fb8664-w9hkq                     0/1     Pending       0          7d13h
    kube-system       calico-kube-controllers-7dbc97f587-6c6f2   0/1     Pending       0          61d
    kube-system       calico-kube-controllers-7dbc97f587-zf5dm   0/1     Terminating   0          62d
    kube-system       calico-node-cl7r5                          0/1     Running       18         62d
    kube-system       calico-node-hnb98                          1/1     Running       56         90d
    kube-system       coredns-74ff55c5b-7n8js                    0/1     Pending       0          61d
    kube-system       coredns-74ff55c5b-b7q4j                    0/1     Terminating   0          62d
    kube-system       coredns-74ff55c5b-t2lxh                    0/1     Pending       0          61d
    kube-system       coredns-74ff55c5b-z7c6j                    0/1     Terminating   0          62d
    kube-system       etcd-in7rud3r-vmuk8s                       1/1     Running       4          62d
    kube-system       kube-apiserver-in7rud3r-vmuk8s             1/1     Running       31         62d
    kube-system       kube-controller-manager-in7rud3r-vmuk8s    1/1     Running       9          62d
    kube-system       kube-proxy-7vlqx                           1/1     Running       0          62d
    kube-system       kube-proxy-qczsp                           1/1     Running       3          62d
    kube-system       kube-scheduler-in7rud3r-vmuk8s             1/1     Running       9          62d
    low-usage-limit   limited-hog-7c5ddc8c74-rndkj               0/1     Pending       0          61d

    As you can see, however, the pods are still pending.

    Anyway, here is the info you asked for in your last comment.

    Each VM has 2 processors, 4 GB of memory, and a 120 GB HDD, with 10 GB used so far.
    You can find additional detailed info in the attached files (one for the master and one for the worker, as you asked) and the description of all the pods in Pending and Terminating states (as you'll see, although the two pods appeared, they cannot be described).

    Does this give us any new information to proceed? Do I just have to wait for the pods to be created, or can I do something to speed the process up? This could be a real problem for me: I'm afraid I won't have time to complete the course if it is so slow.

    Thank you so much, Chris, for your support; I hope we can resolve this so I can move on.

    Andy!

  • chrispokorni (Posts: 2,346)

    Hi @andrea.calvario,

    Have you ever tried to reboot (or stop and then start) your VMs? In the past kubelet has not responded well to VM/node hibernating.

    Regards,
    -Chris

  • So, I tried to restart (reboot) the machine. The strange thing is that after the reboot my kubelet service was not working; I had to turn off swap (using sudo swapoff -a) and then restart the kubelet service (using service kubelet restart).
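    For the record, I believe the usual way to make that survive reboots is to disable the swap entry in /etc/fstab, since disabled swap is a common kubeadm prerequisite (a sketch, not from the lab):

```shell
# Turn swap off for the running session
sudo swapoff -a

# Comment out any swap line in /etc/fstab so swap stays off after a reboot
sudo sed -i '/swap/ s/^\([^#]\)/#\1/' /etc/fstab

# Restart kubelet once swap is disabled
sudo systemctl restart kubelet
```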

    Anyway, the pods are still pending:

    in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
    NAMESPACE         NAME                                       READY   STATUS        RESTARTS   AGE
    accounting        nginx-one-fb4bdb45d-dmlkd                  0/1     Pending       0          21h
    accounting        nginx-one-fb4bdb45d-dr9m4                  0/1     Pending       0          21h
    default           hog-9f86b59cb-khkqw                        0/1     Terminating   0          62d
    default           nginx-6696fb8664-w9hkq                     0/1     Pending       0          7d16h
    kube-system       calico-kube-controllers-7dbc97f587-6c6f2   0/1     Pending       0          62d
    kube-system       calico-kube-controllers-7dbc97f587-zf5dm   0/1     Terminating   0          62d
    kube-system       calico-node-cl7r5                          0/1     Running       18         62d
    kube-system       calico-node-hnb98                          1/1     Running       58         90d
    kube-system       coredns-74ff55c5b-7n8js                    0/1     Pending       0          62d
    kube-system       coredns-74ff55c5b-b7q4j                    0/1     Terminating   0          62d
    kube-system       coredns-74ff55c5b-t2lxh                    0/1     Pending       0          62d
    kube-system       coredns-74ff55c5b-z7c6j                    0/1     Terminating   0          62d
    kube-system       etcd-in7rud3r-vmuk8s                       1/1     Running       5          62d
    kube-system       kube-apiserver-in7rud3r-vmuk8s             1/1     Running       32         62d
    kube-system       kube-controller-manager-in7rud3r-vmuk8s    1/1     Running       10         62d
    kube-system       kube-proxy-7vlqx                           1/1     Running       0          62d
    kube-system       kube-proxy-qczsp                           1/1     Running       4          62d
    kube-system       kube-scheduler-in7rud3r-vmuk8s             1/1     Running       10         62d
    low-usage-limit   limited-hog-7c5ddc8c74-rndkj               0/1     Pending       0          62d

    Any idea?

    Thanks!

    Andy

  • chrispokorni (Posts: 2,346)

    Hi @andrea.calvario,

    From the outputs you provided, several things may be impacting your cluster. A major issue is that the node/VM IP addresses managed by your hypervisor overlap the Pod network managed by the calico network plugin. It is critical that a cluster is configured without such an overlap. This was already recommended in an earlier post but has not been fixed. Please bootstrap a new cluster following these recommendations.
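    A sketch of how the overlap can be checked on a kubeadm cluster (the file path and the example CIDR are assumptions, not from this thread):

```shell
# Node/VM addresses handed out by the hypervisor (INTERNAL-IP column)
kubectl get nodes -o wide

# Pod network CIDR the cluster was initialized with
sudo grep cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml

# When re-bootstrapping, pick a pod CIDR that does not overlap the node IPs;
# e.g. if the nodes sit on 192.168.x.x, something like:
#   sudo kubeadm init --pod-network-cidr=10.0.0.0/16
# and set the matching CALICO_IPV4POOL_CIDR in calico.yaml before applying it
```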

    Regards,
    -Chris
