Lab 9.1 - deployment doesn't create pods!
Hi all,
here I am again with an issue in the course.
Following the lab session, I created the YAML file for the deployment (nginx-one.yaml) and created the resources using the command
kubectl create -f nginx-one.yaml
after creating the namespace.
The next step is to check the status of the pods using the command kubectl -n accounting get pods.
The output shown in the lab guide lists two different pods in Pending state.
My output is "No resources found in accounting namespace.".
I tried to check the deployment that was created, but cannot see any issues...
kubectl -n accounting describe deployments.apps nginx-one
Name: nginx-one
Namespace: accounting
CreationTimestamp: Fri, 09 Apr 2021 12:35:36 +0200
Labels: system=secondary
Annotations:
Selector: system=secondary
Replicas: 2 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: system=secondary
Containers:
nginx:
Image: nginx:1.16.1
Port: 8080/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:
OldReplicaSets:
NewReplicaSet:
Events:
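(Side note for anyone with the same issue: two general commands, not from the lab, that can show whether the Deployment ever created a ReplicaSet are:
kubectl -n accounting get rs
kubectl -n accounting describe rs
In the describe output above, the Replicas line already shows 0 total and the NewReplicaSet field is empty.)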
Can someone give me some pointers to understand what's happening?
Thanks in advance!
Andrea C.
Comments
-
Hello,
Please let me know what version of the course you are looking at. When I look at the exercise, the next command is not to look at the pods, but rather to look at the nodes. Step two is kubectl get nodes --show-labels in my book. If you follow the steps in the book as written, you will see that the pods are not expected to be running yet.
Please follow the steps and let us know if you continue to have issues.
Regards,
-
Hi @andrea.calvario,
The output of your describe command above shows that there are no replicas available (2 desired, 0 available). At this point your pods cannot be scheduled (expected behavior) because of a missing node label, which gets assigned in a following step.
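Just as an illustration of what that later step typically looks like (the node name below is only a placeholder, and this assumes the deployment's nodeSelector is system=secondary as in the lab), the label would be applied with something like:
kubectl label node <worker-node-name> system=secondary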
Regards,
-Chris
-
Hi Serewicz, and thanks for the support.
I'm actually on the "Kubernetes for Developers (LFD259)" and I'm on the lab 9.1: Deploy A New Service.
That's correct, but I got past that step without problems; there's no particular indication about it. Let me describe the steps in the exercise book together with my execution:
1. vim nginx-one.yaml
I wrote the YAML file for nginx-one, copying it from the book.
2. kubectl get nodes --show-labels
This is my output; the book says nothing particular to do after this step, only to take a look at the output (which the book omits):
NAME STATUS ROLES AGE VERSION LABELS
in7rud3r-vmuk8s Ready,SchedulingDisabled control-plane,master 86d v1.20.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
in7rud3r-vmuk8s-n2 NotReady 58d v1.20.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s-n2,kubernetes.io/os=linux
3. kubectl create -f nginx-one.yaml
Same output as in the book.
4. kubectl create ns accounting
Same output as in the book.
5. kubectl create -f nginx-one.yaml
Same output as in the book.
6. kubectl -n accounting get pods
The output in the book shows two pods, but in my execution I have no pods and receive the message "No resources found in accounting namespace."
Thanks for your support, Serewicz!
-
Thanks for your answer Chris,
but in the exercise book the next step after the ones in my previous comment is "kubectl -n accounting describe pod nginx-one-74dd9d578d-fcpmv", which I can't execute because I have no pods to run it on.
Am I missing something?
-
Sure Chris,
kubectl get namespaces
NAME STATUS AGE
accounting Active 5h31m
default Active 86d
kube-node-lease Active 86d
kube-public Active 86d
kube-system Active 86d
low-usage-limit Active 57d
small Active 2d23h
The accounting namespace is the one I created this morning, during the execution of the exercise!
-
Thanks.
I also noticed that your control-plane node shows scheduling disabled, while the worker is not ready - part of this may be the result of missed steps when bootstrapping the Kubernetes cluster from the first lab exercise.
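If you want to dig into the NotReady worker (just a general suggestion, not a lab step), checking kubelet on that node, for example with:
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -50
usually shows why it has not joined the cluster properly.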
Regards,
-Chris
-
It's strange, I did all the other labs more or less without any particular problems... could this be caused by some misconfiguration of the VM I'm using?
Do you know exactly which step I might have missed?
-
Sorry Chris, if, as you say, scheduling is disabled on my node, can you tell me the commands to enable it again, so I can check whether that fixes things?
-
Hi @andrea.calvario,
I would recommend revisiting LFS258 - Kubernetes Fundamentals - Lab Exercise 3.3, steps 3 and 4, to ensure all the taints are removed from the control-plane node. If multiple taints are found on the control-plane node, repeat the steps until all taints are removed.
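For reference, the removal syntax for a taint is the taint key (or key:effect) followed by a trailing dash, and a node showing SchedulingDisabled is re-enabled with uncordon; roughly (the node name below is just a placeholder):
kubectl taint nodes <node-name> <taint-key>-
kubectl uncordon <node-name>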
Then list all your pods again with the kubectl get pods --all-namespaces command, and also list your nodes with kubectl get nodes --show-labels.
Regards,
-Chris
-
Thanks Chris, I'll try it as soon as possible and I'll keep you updated!
-
Hi Chris, I tried to execute steps 3 and 4 of Lab 3.3 as you suggested; this is the output:
in7rud3r@in7rud3r-VMUK8s:~$ kubectl describe node | grep -i taint
Taints: node.kubernetes.io/unschedulable:NoSchedule
Taints: node.kubernetes.io/unreachable:NoExecute
in7rud3r@in7rud3r-VMUK8s:~$ kubectl taint nodes --all node.kubernetes.io/unschedulable
error: at least one taint update is required
in7rud3r@in7rud3r-VMUK8s:~$ kubectl taint nodes --all node.kubernetes.io/unreachable
error: at least one taint update is required
It seems that I can't remove the taint.
The list of the pods from all the namespaces is the following:
in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hog-9f86b59cb-khkqw 0/1 Terminating 0 60d
default nginx-6696fb8664-w9hkq 0/1 Pending 0 6d14h
kube-system calico-kube-controllers-7dbc97f587-6c6f2 0/1 Pending 0 60d
kube-system calico-kube-controllers-7dbc97f587-zf5dm 0/1 Terminating 0 61d
kube-system calico-node-cl7r5 0/1 Running 18 61d
kube-system calico-node-hnb98 1/1 Running 56 89d
kube-system coredns-74ff55c5b-7n8js 0/1 Pending 0 60d
kube-system coredns-74ff55c5b-b7q4j 0/1 Terminating 0 61d
kube-system coredns-74ff55c5b-t2lxh 0/1 Pending 0 60d
kube-system coredns-74ff55c5b-z7c6j 0/1 Terminating 0 61d
kube-system etcd-in7rud3r-vmuk8s 1/1 Running 4 61d
kube-system kube-apiserver-in7rud3r-vmuk8s 1/1 Running 31 61d
kube-system kube-controller-manager-in7rud3r-vmuk8s 1/1 Running 9 61d
kube-system kube-proxy-7vlqx 1/1 Running 0 61d
kube-system kube-proxy-qczsp 1/1 Running 3 61d
kube-system kube-scheduler-in7rud3r-vmuk8s 1/1 Running 9 61d
low-usage-limit limited-hog-7c5ddc8c74-rndkj 0/1 Pending 0 60d
As expected, still nothing from the accounting namespace.
The last command you asked me to execute gives this output:
in7rud3r@in7rud3r-VMUK8s:~$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
in7rud3r-vmuk8s Ready,SchedulingDisabled control-plane,master 89d v1.20.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
in7rud3r-vmuk8s-n2 NotReady 61d v1.20.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=in7rud3r-vmuk8s-n2,kubernetes.io/os=linux
Does this give us any other suggestion about the issue and how to resolve it?
Thanks again for your support!
Andy
-
Hi @andrea.calvario,
If most lab exercises worked without any issues, then at this point your cluster may no longer have enough resources; as a result your workload is stuck in Terminating or Pending state, and the plugin agents calico and coredns are in Terminating and Pending states as well.
You may be able to run the top command on your control-plane and worker nodes separately, to see what processes are using the most node resources.
Also, what are the sizes of your nodes (CPU, MEM, disk)?
Also, running kubectl describe pod <pod-name> for a few pods that are in Pending or Terminating state, what error(s) do you see in the Events section of the output?
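For example (the namespace flag and sort option here are just illustrative, not a lab step):
kubectl -n accounting describe pod <pod-name>
kubectl -n accounting get events --sort-by=.metadata.creationTimestamp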
Regards,
-Chris
-
Hi @chrispokorni,
this morning a strange thing happened: proceeding with the suggestions you gave me, I ran the command again to list the pods in all the namespaces, in order to "describe" those in Pending or Terminating state, and to my surprise I see that the two nginx-one pods from Lab 9.1, which I was working on last week, have appeared (this doesn't solve my problem though, because they are still Pending).
Note that, as usual, yesterday, after carrying out the checks you had suggested, I hibernated the VM; in fact the shell still shows yesterday's commands.
Command launched yesterday:
in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hog-9f86b59cb-khkqw 0/1 Terminating 0 60d
default nginx-6696fb8664-w9hkq 0/1 Pending 0 6d14h
kube-system calico-kube-controllers-7dbc97f587-6c6f2 0/1 Pending 0 60d
kube-system calico-kube-controllers-7dbc97f587-zf5dm 0/1 Terminating 0 61d
kube-system calico-node-cl7r5 0/1 Running 18 61d
kube-system calico-node-hnb98 1/1 Running 56 89d
kube-system coredns-74ff55c5b-7n8js 0/1 Pending 0 60d
kube-system coredns-74ff55c5b-b7q4j 0/1 Terminating 0 61d
kube-system coredns-74ff55c5b-t2lxh 0/1 Pending 0 60d
kube-system coredns-74ff55c5b-z7c6j 0/1 Terminating 0 61d
kube-system etcd-in7rud3r-vmuk8s 1/1 Running 4 61d
kube-system kube-apiserver-in7rud3r-vmuk8s 1/1 Running 31 61d
kube-system kube-controller-manager-in7rud3r-vmuk8s 1/1 Running 9 61d
kube-system kube-proxy-7vlqx 1/1 Running 0 61d
kube-system kube-proxy-qczsp 1/1 Running 3 61d
kube-system kube-scheduler-in7rud3r-vmuk8s 1/1 Running 9 61d
low-usage-limit limited-hog-7c5ddc8c74-rndkj 0/1 Pending 0 60d
Command launched this morning:
in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
accounting nginx-one-fb4bdb45d-dmlkd 0/1 Pending 0 18h
accounting nginx-one-fb4bdb45d-dr9m4 0/1 Pending 0 18h
default hog-9f86b59cb-khkqw 0/1 Terminating 0 61d
default nginx-6696fb8664-w9hkq 0/1 Pending 0 7d13h
kube-system calico-kube-controllers-7dbc97f587-6c6f2 0/1 Pending 0 61d
kube-system calico-kube-controllers-7dbc97f587-zf5dm 0/1 Terminating 0 62d
kube-system calico-node-cl7r5 0/1 Running 18 62d
kube-system calico-node-hnb98 1/1 Running 56 90d
kube-system coredns-74ff55c5b-7n8js 0/1 Pending 0 61d
kube-system coredns-74ff55c5b-b7q4j 0/1 Terminating 0 62d
kube-system coredns-74ff55c5b-t2lxh 0/1 Pending 0 61d
kube-system coredns-74ff55c5b-z7c6j 0/1 Terminating 0 62d
kube-system etcd-in7rud3r-vmuk8s 1/1 Running 4 62d
kube-system kube-apiserver-in7rud3r-vmuk8s 1/1 Running 31 62d
kube-system kube-controller-manager-in7rud3r-vmuk8s 1/1 Running 9 62d
kube-system kube-proxy-7vlqx 1/1 Running 0 62d
kube-system kube-proxy-qczsp 1/1 Running 3 62d
kube-system kube-scheduler-in7rud3r-vmuk8s 1/1 Running 9 62d
low-usage-limit limited-hog-7c5ddc8c74-rndkj 0/1 Pending 0 61d
As you can see, however, the pods are still Pending.
Anyway, here is the info you asked for in your last comment.
Each VM has 2 processors, 4 GB of memory and a 120 GB HDD, of which about 10 GB is used so far.
Anyway, you can find additional detailed info in the attached files (one for the master and one for the worker, as you asked) and the description of all the pods in Pending and Terminating state (as you'll see, even though the two pods have appeared, they cannot be described).
Can this give us some new information to proceed? Do I have to wait for the creation of the pods, or can I do something to accelerate this process? This could be a real issue for me if I stay blocked like this; I'm afraid I won't have time to complete the course if it's so slow.
Thank you so much Chris for your support; I hope we can resolve this, so I'll be able to go ahead.
Andy!
-
Hi @andrea.calvario,
Have you ever tried to reboot (or stop and then start) your VMs? In the past kubelet has not responded well to VM/node hibernating.
Regards,
-Chris
-
So, I tried to restart (reboot) the machine. The strange thing is that after the reboot my kubelet service was not working; I needed to turn off the swap (using sudo swapoff -a) and then restart the kubelet service (using service kubelet restart).
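(A side note, assuming the standard kubeadm setup: swap gets re-enabled at boot unless it is also disabled in /etc/fstab, which is why kubelet fails after every reboot until swapoff is run again. Commenting out the swap line there, for example with
sudo sed -i '/ swap / s/^/#/' /etc/fstab
should make the change persistent.)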
Anyway, the pods are still pending:
in7rud3r@in7rud3r-VMUK8s:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
accounting nginx-one-fb4bdb45d-dmlkd 0/1 Pending 0 21h
accounting nginx-one-fb4bdb45d-dr9m4 0/1 Pending 0 21h
default hog-9f86b59cb-khkqw 0/1 Terminating 0 62d
default nginx-6696fb8664-w9hkq 0/1 Pending 0 7d16h
kube-system calico-kube-controllers-7dbc97f587-6c6f2 0/1 Pending 0 62d
kube-system calico-kube-controllers-7dbc97f587-zf5dm 0/1 Terminating 0 62d
kube-system calico-node-cl7r5 0/1 Running 18 62d
kube-system calico-node-hnb98 1/1 Running 58 90d
kube-system coredns-74ff55c5b-7n8js 0/1 Pending 0 62d
kube-system coredns-74ff55c5b-b7q4j 0/1 Terminating 0 62d
kube-system coredns-74ff55c5b-t2lxh 0/1 Pending 0 62d
kube-system coredns-74ff55c5b-z7c6j 0/1 Terminating 0 62d
kube-system etcd-in7rud3r-vmuk8s 1/1 Running 5 62d
kube-system kube-apiserver-in7rud3r-vmuk8s 1/1 Running 32 62d
kube-system kube-controller-manager-in7rud3r-vmuk8s 1/1 Running 10 62d
kube-system kube-proxy-7vlqx 1/1 Running 0 62d
kube-system kube-proxy-qczsp 1/1 Running 4 62d
kube-system kube-scheduler-in7rud3r-vmuk8s 1/1 Running 10 62d
low-usage-limit limited-hog-7c5ddc8c74-rndkj 0/1 Pending 0 62d
Any idea?
Thanks!
Andy
-
Hi @andrea.calvario,
From your provided outputs, several things may impact your cluster. A major issue is the fact that the node/VM IP addresses managed by your hypervisor are overlapping the Pod network managed by the calico network plugin. It is critical that a cluster is configured without such overlap. This has already been recommended in an earlier post, but has not been fixed. Please bootstrap a new cluster following these recommendations.
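As a rough illustration only (the address ranges below are placeholders, not your actual values): Calico's default Pod network is 192.168.0.0/16, so the VMs themselves should sit on a non-overlapping range such as 10.0.0.0/24, and the control plane would then be initialized with something like:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
before applying the calico manifest.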
Regards,
-Chris