Question: LAB 4.1
Hello everyone.
I need your help with lab 4.1, point 6:
"Make sure Ambassador is ready before you continue by entering the command below.You will see a status for both the ambassador- and ambassador-operator- names.If you don’t see a Running status for both with a Ready value of 1/1, wait while thestartup continues, and an updated status of Running and 1/1 will be reported when it’sready. "
With the command kubectl get pods -n ambassador -w, it displays:
NAME READY STATUS
ambassador-operator-8484bc8c86-q4l29 0/1 ContainerCreating
But when I check my master node, it's in "NotReady" status:
NAME STATUS ROLES AGE VERSION
consul-control-plane NotReady master 9d v1.18.2
Here is the description of the node:
Name: consul-control-plane
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
ingress-ready=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=consul-control-plane
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 15 Oct 2021 20:44:59 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: consul-control-plane
AcquireTime:
RenewTime: Mon, 25 Oct 2021 09:49:42 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Mon, 25 Oct 2021 09:49:22 +0000 Fri, 15 Oct 2021 20:44:57 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 25 Oct 2021 09:49:22 +0000 Fri, 15 Oct 2021 20:44:57 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 25 Oct 2021 09:49:22 +0000 Fri, 15 Oct 2021 20:44:57 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 25 Oct 2021 09:49:22 +0000 Fri, 15 Oct 2021 20:44:57 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 172.18.0.2
Hostname: consul-control-plane
Capacity:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
System Info:
Machine ID: d3a349af4c374bee873ac5afe1d78bc2
System UUID: 66501c5f-c0a8-48fd-942f-2731288a6079
Boot ID: be3394a3-7813-4435-9a3a-7411b26c56fc
Kernel Version: 5.11.0-1020-gcp
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.3.3-14-g449e9269
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
ambassador ambassador-operator-8484bc8c86-q4l29 0 (0%) 0 (0%) 0 (0%) 0 (0%) 114m
emojivoto emoji-65df4d68f7-g5tgk 100m (2%) 0 (0%) 0 (0%) 0 (0%) 3d15h
emojivoto vote-bot-7c59767698-rgvn9 10m (0%) 0 (0%) 0 (0%) 0 (0%) 3d15h
emojivoto voting-768f496cd8-mzn2c 100m (2%) 0 (0%) 0 (0%) 0 (0%) 3d15h
emojivoto web-545f869fc4-k5qb7 100m (2%) 0 (0%) 0 (0%) 0 (0%) 3d15h
kube-system coredns-66bff467f8-lgwt6 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 9d
kube-system coredns-66bff467f8-nckcg 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 56m
kube-system etcd-consul-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d
kube-system kindnet-9sbrw 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 9d
kube-system kube-apiserver-consul-control-plane 250m (6%) 0 (0%) 0 (0%) 0 (0%) 9d
kube-system kube-controller-manager-consul-control-plane 200m (5%) 0 (0%) 0 (0%) 0 (0%) 9d
kube-system kube-proxy-kqx79 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d
kube-system kube-scheduler-consul-control-plane 100m (2%) 0 (0%) 0 (0%) 0 (0%) 9d
local-path-storage local-path-provisioner-bd4bb6b75-d4rct 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 1160m (28%) 100m (2%)
memory 190Mi (1%) 390Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal Starting 15m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 15m (x8 over 15m) kubelet Node consul-control-plane status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m (x8 over 15m) kubelet Node consul-control-plane status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m (x7 over 15m) kubelet Node consul-control-plane status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
Thank you so much.
Best Regards
Mike
Answers
-
I'm still stuck on the same task (lab 4.1, item 6). I am trying to fix it, but so far, nothing. I would appreciate it if anyone who knows the solution could help me.
Thank you.
$ kubectl get pods -n ambassador
NAME READY STATUS RESTARTS AGE
ambassador-operator-8484bc8c86-pgcpj 0/1 Pending 0 14m
$ kubectl describe pod -n ambassador ambassador-operator-8484bc8c86-pgcpj
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
-
Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE etc
-
I installed it on my machine.
model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz RAM 16GB
kubectl version v1.22.3
OS Ubuntu 20.04.3 LTS (Focal Fossa)
-
Additional information
Before lab 3.2
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-linkerd-control-plane 1/1 Running 1 53s
kube-apiserver-linkerd-control-plane 1/1 Running 1 53s
kube-controller-manager-linkerd-control-plane 1/1 Running 1 53s
kube-scheduler-linkerd-control-plane 1/1 Running 1 53s
After lab 3.2
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-4ksk7 0/1 Pending 0 3d23h
coredns-66bff467f8-7bdlh 0/1 Pending 0 3d23h
etcd-linkerd-control-plane 1/1 Running 3 3d23h
kindnet-8xl75 0/1 CrashLoopBackOff 2 3d23h
kube-apiserver-linkerd-control-plane 1/1 Running 3 3d23h
kube-controller-manager-linkerd-control-plane 1/1 Running 3 3d23h
kube-proxy-64vbv 0/1 CrashLoopBackOff 6 3d23h
kube-scheduler-linkerd-control-plane 1/1 Running 3 3d23h
$ kubectl describe pods -n kube-system kindnet-8xl75
Warning FailedMount 4m25s (x2 over 4m26s) kubelet MountVolume.SetUp failed for volume "kindnet-token-bpck8": failed to sync secret cache: timed out waiting for the condition
-
I ran into the same problem when I did lab 3.2 on my PC (3 Ubuntu 20.04 VMs: 1 control plane, 2 workers, with kube*=v1.21-00). Eventually I followed the lab and used the kind cluster instead, and those two Ambassador pods started successfully.
-
Hi danielbryantuk
I installed it on my machine.
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 158
model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz
...
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
...
-
Hello,
If you only have one node in your cluster, then that node is the control plane, and that control plane has a taint. If you look at the output of your node describe, you will find this line: Taints: node.kubernetes.io/not-ready:NoSchedule, which limits the pods that can be scheduled on that node.
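As a quick check (standard kubectl; just substitute your own node name), you could run:
kubectl get nodes
kubectl describe node consul-control-plane | grep -i taint
If the only taint is node.kubernetes.io/not-ready, the usual root cause is what your describe output already shows: the CNI plugin is not initialized, and the taint clears on its own once the network pods come up.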
How many nodes are in your cluster?
What is the output of kubectl get pod --all-namespaces?
Regards,
-
Hi @serewicz,
It worked as @proliant said. Honestly, I didn't understand the cause or the solution.
But thank you for responding.
$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
emojivoto emoji-66ccdb4d86-plw5b 1/1 Running 0 5m12s
emojivoto vote-bot-69754c864f-nnr77 1/1 Running 0 5m12s
emojivoto voting-f999bd4d7-b9lvk 1/1 Running 0 5m12s
emojivoto web-79469b946f-jclqf 1/1 Running 0 5m12s
kube-system coredns-558bd4d5db-4nxlw 1/1 Running 2 13m
kube-system coredns-558bd4d5db-9x4lr 1/1 Running 2 13m
kube-system etcd-linkerd-control-plane 1/1 Running 3 18m
kube-system kindnet-96r8v 1/1 Running 2 13m
kube-system kube-apiserver-linkerd-control-plane 1/1 Running 3 18m
kube-system kube-controller-manager-linkerd-control-plane 1/1 Running 3 18m
kube-system kube-proxy-xgz6k 1/1 Running 2 13m
kube-system kube-scheduler-linkerd-control-plane 1/1 Running 3 18m
local-path-storage local-path-provisioner-547f784dff-2pft4 1/1 Running 2 13m
-
Hi everyone.
I have found a solution to our problem.
Like you, @ErlisonSantos, I had my kube-proxy and kindnet pods in CrashLoopBackOff status.
I think the problem comes from creating the cluster with kind. I created a new cluster with kind via the following command:
kind create cluster --name=cluster-test
and I got the same situation (the same pods in error).
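(As a read-only sanity check before changing anything, you can print the current value with the command below; the exact number will vary by machine.)
sysctl net.netfilter.nf_conntrack_max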
After several leads, I found the following information on GitHub: manually set the parameter before creating the kind cluster.
The command to do it:
sudo sysctl net/netfilter/nf_conntrack_max=131072
Why? You can follow the conversation in the link below.
I recreated a cluster with kind, and this time it all worked.
So I re-ran this command in each context, then stopped and restarted the containers with the command indicated in the lab:
docker stop $(docker ps -a -f name=xxx-control-plane -q)
docker start $(docker ps -a -f name=xxx-control-plane -q)
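One extra note, in case it helps: the sysctl change above does not survive a reboot. A standard way to make it persistent (I have not tested this on the lab VM, so treat it as a sketch) is to drop it into a sysctl.d file:
echo 'net.netfilter.nf_conntrack_max=131072' | sudo tee /etc/sysctl.d/99-conntrack.conf
sudo sysctl --system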
I hope I could help you.
Here is the source:
https://github.com/kubernetes-sigs/kind/issues/2240
Best regards
Mike
-
This is another solution, the exact solution I posted on another thread at LFS244.
penguin@vm066:~$ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vm067 Ready control-plane,master 2d19h v1.21.1 192.168.1.67 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic containerd://1.5.5
vm068 Ready <none> 2d19h v1.21.1 192.168.1.68 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic containerd://1.5.5
vm069 Ready <none> 2d18h v1.21.1 192.168.1.69 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic containerd://1.5.5
penguin@vm066:~$ k get pods -A | egrep -e 'NAMESPACE|ambassador|default'
NAMESPACE NAME READY STATUS RESTARTS AGE
ambassador ambassador-6bf6958bc-g7glz 1/1 Running 0 73m
ambassador ambassador-agent-7fdf56587b-8zg9k 1/1 Running 0 73m
ambassador ambassador-operator-7b77fbcfc-74shj 1/1 Running 0 75m
default nfs-subdir-external-provisioner-6b7ff5bd4c-jgkx6 1/1 Running 1 2d18h
penguin@vm066:~$ showmount -e localhost
Export list for localhost:
/vdb1 *
penguin@vm066:~$ df -h /vdb1
Filesystem Size Used Avail Use% Mounted on
/dev/vdb1 49G 53M 47G 1% /vdb1
penguin@vm066:~$
The problem is related to dynamic volume provisioning when we don't use a kind cluster.
You can see at line 11 of the output above that there is a pod called nfs-subdir-external-provisioner-*.
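For reference, if you are running your own cluster (not kind) and need dynamic provisioning, the nfs-subdir-external-provisioner is typically installed via Helm roughly like this (the NFS server address below is a placeholder, and /vdb1 is just the export shown in my output, so adjust both to your own setup):
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<your-nfs-server> --set nfs.path=/vdb1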
-
Try to use the latest version of kind:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-$(uname)-amd64
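After the download, the usual follow-up from the kind install instructions (adjust the destination path if you prefer another location) is:
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind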
-
@danielbryantuk said:
Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE, etc.
I am running it on GCP and still have the same issue. I repeated the exercise and it's still the same; some pods are still crashing:
-
@yorimina said:
sudo sysctl net/netfilter/nf_conntrack_max=131072
docker stop/start $(docker ps -a -f name=xxx-control-plane -q)
The same worked here, thanks.