Unable to start pod/container in lab 2.3 - Error message is "Error from server (BadRequest) .."
Hi,
I have completed lab 2.2 to set up the Kubernetes cluster. When I try to create a pod/container in lab 2.3, I keep seeing this error:
Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: ContainerCreating
When I do a kubectl describe pod nginx, I see these errors:
.....
Warning FailedCreatePodSandBox 95s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container k8s_POD_nginx_default_e1106b28-303c-4e75-afc2-d6d14bd67913_0 in pod sandbox k8s_nginx_default_e1106b28-303c-4e75-afc2-d6d14bd67913_0(507780f27bf6a769b6e7178ebe52a032e8967f2af9d720f1931933e0a202c917): error creating overlay mount to /var/lib/containers/storage/overlay/0c9cccedaee7f6a42d1546dc06d3100072fb4ac860040aeb7b58d85d3e39c9ac/merged, mount_data="nodev,metacopy=on,lowerdir=/var/lib/containers/storage/overlay/l/4NMXH7DOMDNBJNKEOKA65YCBZS,upperdir=/var/lib/containers/storage/overlay/0c9cccedaee7f6a42d1546dc06d3100072fb4ac860040aeb7b58d85d3e39c9ac/diff,workdir=/var/lib/containers/storage/overlay/0c9cccedaee7f6a42d1546dc06d3100072fb4ac860040aeb7b58d85d3e39c9ac/work": invalid argument
.....
What am I doing wrong?
Thanks
TW
Best Answers
Hello,
I notice you have AppArmor enabled. That could be the cause of some headaches; does the problem persist when you disable it?
Since all the failed pods are on your worker node, I would suspect it is either AppArmor, a GCE VPC firewall issue, or a networking issue where the nodes are using IP addresses that overlap with the host's.
Could you disable AppArmor on all nodes, ensure your VPC allows all traffic, and check the IP ranges used by the primary interface (something like ens4) on both nodes, then show the results?
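Something along these lines on each node should cover the AppArmor and IP range checks (a sketch; ens4 is the usual GCE interface name and may differ on your VMs):
sudo systemctl stop apparmor      # stop AppArmor for the current boot
sudo systemctl disable apparmor   # keep it from starting again on reboot
ip addr show ens4                 # show the address and range on the primary interface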
Regards,
Hi,
Thanks for your help. I recreated the VMs in GCE and it is working now. I must have misconfigured the VPC the first time. I managed to pass the exam after going through the labs.
Rgds
TW
Answers
Hi @tanwee,
Is this a generic symptom observed on multiple pods or just this one? Please provide the output of the following command:
kubectl get pods -A -o wide
Regards,
-Chris
Hi Chris,
This error is occurring for any pod that I try to create on the cluster. The output of the command is:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx 0/1 ContainerCreating 0 52s worker
kube-system calico-kube-controllers-5d995d45d6-6mk6b 1/1 Running 1 2d23h 192.168.242.65 cp
kube-system calico-node-s824n 0/1 Init:0/3 0 2d23h 10.2.0.5 worker
kube-system calico-node-zkxrn 1/1 Running 1 2d23h 10.2.0.4 cp
kube-system coredns-78fcd69978-4svtg 1/1 Running 1 2d23h 192.168.242.66 cp
kube-system coredns-78fcd69978-m4nsp 1/1 Running 1 2d23h 192.168.242.67 cp
kube-system etcd-cp 1/1 Running 1 2d23h 10.2.0.4 cp
kube-system kube-apiserver-cp 1/1 Running 1 2d23h 10.2.0.4 cp
kube-system kube-controller-manager-cp 1/1 Running 1 2d23h 10.2.0.4 cp
kube-system kube-proxy-fn5xm 0/1 ContainerCreating 0 2d23h 10.2.0.5 worker
kube-system kube-proxy-trxxb 1/1 Running 1 2d23h 10.2.0.4 cp
kube-system kube-scheduler-cp 1/1 Running 1 2d23h 10.2.0.4 cp
Rgds
Tan Wee
Hello,
From the look of things, Calico is not running on your worker node. There are a few reasons this could happen, but chances are it has to do with a networking configuration error or a firewall between your instances.
What are you using to run the lab exercises: GCE, AWS, DigitalOcean, VMware, VirtualBox, two Linux laptops?
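For what it is worth, the stuck calico-node pod can be inspected in the meantime with something along these lines (pod name taken from your earlier output):
kubectl -n kube-system describe pod calico-node-s824n
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp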
Regards,
Hi,
I created the 2 VMs in GCE following the GCE Lab setup video.
Thanks
TW
Hi @tanwee,
Thank you for the provided output. It seems that none of the containers scheduled to the worker node are able to start. The node itself may not be ready.
What may help are the outputs of the following two commands:
kubectl get nodes
kubectl describe node worker
Regards,
-Chris
I would also double-check that the VPC is allowing all traffic between your VMs.
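A quick way to review what is currently in place, assuming the gcloud SDK is set up for your project:
gcloud compute firewall-rules list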
Regards,
Hi,
This is the output of kubectl describe node worker. You can see the last message:
Node worker status is now: NodeReady
Rgds
TW
Name: worker
Roles:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=worker
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 20 Nov 2021 12:45:36 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: worker
AcquireTime:
RenewTime: Wed, 24 Nov 2021 12:52:02 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 24 Nov 2021 12:48:48 +0000 Wed, 24 Nov 2021 12:48:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 24 Nov 2021 12:48:48 +0000 Wed, 24 Nov 2021 12:48:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 24 Nov 2021 12:48:48 +0000 Wed, 24 Nov 2021 12:48:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 24 Nov 2021 12:48:48 +0000 Wed, 24 Nov 2021 12:48:48 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.2.0.5
Hostname: worker
Capacity:
cpu: 2
ephemeral-storage: 20145724Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7977Mi
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 18566299208
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7877Mi
pods: 110
System Info:
Machine ID: 9df579dd8f5e6ed7ca105568417ac070
System UUID: 9DF579DD-8F5E-6ED7-CA10-5568417AC070
Boot ID: e3e3b6a9-26d9-4c48-bd0a-c0233a067a5c
Kernel Version: 4.15.0-1006-gcp
OS Image: Ubuntu 18.04 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.22.1
Kubelet Version: v1.22.1
Kube-Proxy Version: v1.22.1
PodCIDR: 192.168.1.0/24
PodCIDRs: 192.168.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24h
kube-system calico-node-s824n 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4d
kube-system kube-proxy-fn5xm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 4d (x2 over 4d) kubelet Node worker status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4d (x2 over 4d) kubelet Node worker status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4d (x2 over 4d) kubelet Node worker status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4d kubelet Updated Node Allocatable limit across pods
Normal Starting 4d kubelet Starting kubelet.
Normal NodeReady 4d kubelet Node worker status is now: NodeReady
Normal NodeAllocatableEnforced 24h kubelet Updated Node Allocatable limit across pods
Normal Starting 24h kubelet Starting kubelet.
Normal NodeHasSufficientMemory 24h (x2 over 24h) kubelet Node worker status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24h (x2 over 24h) kubelet Node worker status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24h (x2 over 24h) kubelet Node worker status is now: NodeHasSufficientPID
Warning Rebooted 24h kubelet Node worker has been rebooted, boot id: 603e4fdf-67f1-4f49-8986-5e9f749a6d95
Normal NodeNotReady 24h kubelet Node worker status is now: NodeNotReady
Normal NodeReady 24h kubelet Node worker status is now: NodeReady
Normal Starting 3m27s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m26s (x2 over 3m26s) kubelet Node worker status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m26s (x2 over 3m26s) kubelet Node worker status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m26s (x2 over 3m26s) kubelet Node worker status is now: NodeHasSufficientPID
Warning Rebooted 3m26s kubelet Node worker has been rebooted, boot id: e3e3b6a9-26d9-4c48-bd0a-c0233a067a5c
Normal NodeNotReady 3m26s kubelet Node worker status is now: NodeNotReady
Normal NodeAllocatableEnforced 3m26s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m16s kubelet Node worker status is now: NodeReady
Glad it all worked out and congratulations on passing the exam @tanwee!
Regards,
-Chris
Hello, I have the same problem using GCE
kubectl get pods -A -o wide
Any advice on how I can debug why that calico-node-82fl pod is not running?
I checked the firewall and it has an ingress allow-all rule, but I am not sure how to debug it further.
Hello,
When you say you added an Ingress allow all, are you talking about the GCE VPC? Be sure to allow all traffic between nodes from the Google perspective.
Other than that, what IP ranges did you choose? Did you assign 192.168 addresses to your nodes by any chance?
Any deviation from the lab setup and exercise?
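If in doubt, a rule along these lines opens all traffic between the instances (a sketch; the network name is a placeholder for whatever you created, and the source range assumes the lab's 10.2.0.0/16 subnet):
gcloud compute firewall-rules create lab-allow-internal --network=<your-vpc-network> --allow=all --source-ranges=10.2.0.0/16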
Regards,
Yes, I created a new GCE VPC network with a 10.2.0.0/16 subnet, then added a firewall rule allowing ingress on all ports, and made sure I used that network for my instances.
I am able to ping one node from the other. It feels like a network issue, but lacking the experience I am not able to find the root cause. The current UI is a little different from the GCE lab setup video, but I tried to follow it as closely as possible. I am now trying this guide: https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-public-cloud/gce. They use a slightly different network setup. I will let you know if it works.
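For reference, the equivalent network setup through the gcloud CLI looks roughly like this (a sketch; the names and region are placeholders, not values from the lab video):
gcloud compute networks create lab-net --subnet-mode=custom
gcloud compute networks subnets create lab-subnet --network=lab-net --range=10.2.0.0/16 --region=us-central1
plus an allow-all firewall rule on that network, as discussed above.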
Using that guide and the gcloud SDK, I was able to create the VPC network and two instances, then ran the provisioning scripts from the LFD259 solutions. Calico starts successfully now:
Hi @sgurenkov,
The guide from tigera.io/project-calico/ may still be using Docker as the container runtime, which is different from the cri-o runtime recommended for this class. In the near future Docker will no longer be supported as a runtime for Kubernetes, which is why the labs have migrated to a different container runtime.
The 10.8x.0.0 network connecting the calico-kube-controllers and the coredns pods was most likely initiated by Podman, and during the init phase the pods were somehow assigned IP addresses from Podman's bridge network instead of being exposed over their respective nodes' IP addresses - the expected behavior. A simple delete of these pods typically allows the IP address assignment to be corrected (see the sketch just below).
Regards,
-Chris
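A minimal sketch of that cleanup, assuming the pod names shown in the earlier output (the hash suffixes will differ on your cluster); the owning controllers recreate the pods and they pick up fresh addresses:
kubectl -n kube-system delete pod calico-kube-controllers-5d995d45d6-6mk6b
kubectl -n kube-system delete pod coredns-78fcd69978-4svtg coredns-78fcd69978-m4nsp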
Hi everyone, I am also facing the same issue. Can you please help?
I am using two VirtualBox VMs connected with a NAT network. AppArmor is uninstalled on both instances. All installation steps and requirements are exactly as in the lab exercises, and I have removed the VMs and started fresh multiple times.
Thanks in advance!
Result of kubectl get nodes:
Result of kubectl get pods -A -o wide:
Result of kubectl describe node wn1:
Name: wn1
Roles:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=wn1
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 13 Jan 2022 15:56:16 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: wn1
AcquireTime:
RenewTime: Thu, 13 Jan 2022 16:32:22 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 13 Jan 2022 16:31:59 +0000 Thu, 13 Jan 2022 15:56:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 13 Jan 2022 16:31:59 +0000 Thu, 13 Jan 2022 15:56:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 13 Jan 2022 16:31:59 +0000 Thu, 13 Jan 2022 15:56:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 13 Jan 2022 16:31:59 +0000 Thu, 13 Jan 2022 15:56:26 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.2.5
Hostname: wn1
Capacity:
cpu: 2
ephemeral-storage: 25668836Ki
hugepages-2Mi: 0
memory: 4039160Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 23656399219
hugepages-2Mi: 0
memory: 3936760Ki
pods: 110
System Info:
Machine ID: 05291b4c92144126979989eab08c9a58
System UUID: 7CDF5996-062F-4049-B6C0-087A3C62288F
Boot ID: a43595d1-f2b7-40dc-bef2-3063db315ff0
Kernel Version: 4.15.0-166-generic
OS Image: Ubuntu 18.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.22.1
Kubelet Version: v1.22.1
Kube-Proxy Version: v1.22.1
PodCIDR: 192.168.1.0/24
PodCIDRs: 192.168.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-zph59 250m (12%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-proxy-82vv9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 36m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 36m (x2 over 36m) kubelet Node wn1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x2 over 36m) kubelet Node wn1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x2 over 36m) kubelet Node wn1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 35m kubelet Node wn1 status is now: NodeReady
Last lines from the result of kubectl describe pod calico-node-zph59 --namespace=kube-system:
Warning FailedCreatePodSandBox 2m5s (x151 over 35m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container k8s_POD_calico-node-zph59_kube-system_e8d117b7-aa0a-432f-8376-fe45ce85a4fe_0 in pod sandbox k8s_calico-node-zph59_kube-system_e8d117b7-aa0a-432f-8376-fe45ce85a4fe_0(d7ebf9a863e5dfa8018a6e1a233f0726a4ec9f16c73596e6d183f29313f3cd0c): error creating overlay mount to /var/lib/containers/storage/overlay/71684365cde8ce52493971416573fd46038082aaa807b64f55690d1744e53a78/merged, mount_data="nodev,metacopy=on,lowerdir=/var/lib/containers/storage/overlay/l/N6RBQYMDVUFBOBV74KB3GTYZF7,upperdir=/var/lib/containers/storage/overlay/71684365cde8ce52493971416573fd46038082aaa807b64f55690d1744e53a78/diff,workdir=/var/lib/containers/storage/overlay/71684365cde8ce52493971416573fd46038082aaa807b64f55690d1744e53a78/work": invalid argument
Hi Everyone, I have solved the issue by adding
sudo sed -i 's/,metacopy=on//g' /etc/containers/storage.conf
to k8sSecond.sh after
sudo apt-get install -y cri-o cri-o-runc podman buildah
This was an issue related to Ubuntu 18.04.
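For anyone applying the fix by hand rather than through the script, the effect is roughly the following (a sketch; the exact mountopt line in /etc/containers/storage.conf can differ between CRI-O versions). The metacopy overlay feature is not available on the older kernel these Ubuntu 18.04 images ship with, which is why the overlay mount fails with "invalid argument":
# mountopt before: "nodev,metacopy=on"   after: "nodev"
sudo sed -i 's/,metacopy=on//g' /etc/containers/storage.conf
sudo systemctl restart crio   # restart CRI-O so the new storage options take effect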
Hi @elmoussaoui,
What type of VMs are you using, and which Ubuntu 18.04 did you have installed (desktop/server)?
Regards,
-Chris
Hi @chrispokorni,
Two VirtualBox VMs connected with a NAT network, running ubuntu-18.04.6-live-server-amd64.
Regards,
Hi @elmoussaoui,
This seems to be encountered on local installs, while cloud Ubuntu 18.04 server images do not display the same behavior.
Regards,
-Chris