question LAB 4.1
Hello everyone.
I need your help with Lab 4.1, point 6:
"Make sure Ambassador is ready before you continue by entering the command below. You will see a status for both the ambassador- and ambassador-operator- names. If you don't see a Running status for both with a Ready value of 1/1, wait while the startup continues, and an updated status of Running and 1/1 will be reported when it's ready."
With the command kubectl get pods -n ambassador -w it displays:
NAME READY STATUS
ambassador-operator-8484bc8c86-q4l29 0/1 ContainerCreating
But when I check my master node, it's in status "NotReady":
NAME STATUS ROLES AGE VERSION
consul-control-plane NotReady master 9d v1.18.2
Here is the description of the node:
Name: consul-control-plane
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
ingress-ready=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=consul-control-plane
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 15 Oct 2021 20:44:59 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: consul-control-plane
AcquireTime:
RenewTime: Mon, 25 Oct 2021 09:49:42 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
MemoryPressure   False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 172.18.0.2
Hostname: consul-control-plane
Capacity:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
System Info:
Machine ID: d3a349af4c374bee873ac5afe1d78bc2
System UUID: 66501c5f-c0a8-48fd-942f-2731288a6079
Boot ID: be3394a3-7813-4435-9a3a-7411b26c56fc
Kernel Version: 5.11.0-1020-gcp
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.3.3-14-g449e9269
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
ambassador           ambassador-operator-8484bc8c86-q4l29             0 (0%)        0 (0%)      0 (0%)         0 (0%)        114m
emojivoto            emoji-65df4d68f7-g5tgk                           100m (2%)     0 (0%)      0 (0%)         0 (0%)        3d15h
emojivoto            vote-bot-7c59767698-rgvn9                        10m (0%)      0 (0%)      0 (0%)         0 (0%)        3d15h
emojivoto            voting-768f496cd8-mzn2c                          100m (2%)     0 (0%)      0 (0%)         0 (0%)        3d15h
emojivoto            web-545f869fc4-k5qb7                             100m (2%)     0 (0%)      0 (0%)         0 (0%)        3d15h
kube-system          coredns-66bff467f8-lgwt6                         100m (2%)     0 (0%)      70Mi (0%)      170Mi (1%)    9d
kube-system          coredns-66bff467f8-nckcg                         100m (2%)     0 (0%)      70Mi (0%)      170Mi (1%)    56m
kube-system          etcd-consul-control-plane                        0 (0%)        0 (0%)      0 (0%)         0 (0%)        9d
kube-system          kindnet-9sbrw                                    100m (2%)     100m (2%)   50Mi (0%)      50Mi (0%)     9d
kube-system          kube-apiserver-consul-control-plane              250m (6%)     0 (0%)      0 (0%)         0 (0%)        9d
kube-system          kube-controller-manager-consul-control-plane     200m (5%)     0 (0%)      0 (0%)         0 (0%)        9d
kube-system          kube-proxy-kqx79                                 0 (0%)        0 (0%)      0 (0%)         0 (0%)        9d
kube-system          kube-scheduler-consul-control-plane              100m (2%)     0 (0%)      0 (0%)         0 (0%)        9d
local-path-storage   local-path-provisioner-bd4bb6b75-d4rct           0 (0%)        0 (0%)      0 (0%)         0 (0%)        9d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 1160m (28%) 100m (2%)
memory 190Mi (1%) 390Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal Starting 15m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 15m (x8 over 15m) kubelet Node consul-control-plane status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m (x8 over 15m) kubelet Node consul-control-plane status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m (x7 over 15m) kubelet Node consul-control-plane status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
Thank you so much.
Best Regards
Mike
Answers
-
I'm still stuck on the same task (Lab 4.1, item 6). I am trying to fix it, but so far, nothing. I would appreciate it if anyone who knows the solution could help me.
Thank you.
$ kubectl get pods -n ambassador
NAME READY STATUS RESTARTS AGE
ambassador-operator-8484bc8c86-pgcpj   0/1     Pending   0          14m

$ kubectl describe pod -n ambassador ambassador-operator-8484bc8c86-pgcpj
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Warning  FailedScheduling  10m  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
-
Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE etc
-
I installed on my machine.
model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz RAM 16GB
kubectl version v1.22.3
OS Ubuntu 20.04.3 LTS (Focal Fossa)
-
Additional information
Before lab 3.2
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-linkerd-control-plane 1/1 Running 1 53s
kube-apiserver-linkerd-control-plane 1/1 Running 1 53s
kube-controller-manager-linkerd-control-plane 1/1 Running 1 53s
kube-scheduler-linkerd-control-plane            1/1     Running   1          53s

After lab 3.2
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-4ksk7 0/1 Pending 0 3d23h
coredns-66bff467f8-7bdlh 0/1 Pending 0 3d23h
etcd-linkerd-control-plane 1/1 Running 3 3d23h
kindnet-8xl75 0/1 CrashLoopBackOff 2 3d23h
kube-apiserver-linkerd-control-plane 1/1 Running 3 3d23h
kube-controller-manager-linkerd-control-plane 1/1 Running 3 3d23h
kube-proxy-64vbv 0/1 CrashLoopBackOff 6 3d23h
kube-scheduler-linkerd-control-plane            1/1     Running   3          3d23h

$ kubectl describe pods -n kube-system kindnet-8xl75
Warning  FailedMount  4m25s (x2 over 4m26s)  kubelet  MountVolume.SetUp failed for volume "kindnet-token-bpck8" : failed to sync secret cache: timed out waiting for the condition
-
I ran into the same problem when I did Lab 3.2 on my PC (3 Ubuntu 20.04 VMs: 1 control plane, 2 workers, with kube*=v1.21-00). Eventually I followed the lab and used the kind cluster instead, and those two Ambassador pods started successfully.
-
Hi danielbryantuk
I installed on my machine.
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 158
model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz
...

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
...
-
Hello,
If you only have one node in your cluster, then that node is the control plane, and that control plane has a taint. If you look at the output of your node describe, you will find this line: Taints: node.kubernetes.io/not-ready:NoSchedule, which limits the pods that can be scheduled on that node.
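As a minimal sketch of how to check this (the node name is taken from the output posted above; yours may differ):

# Show the taint the scheduler complained about:
kubectl describe node consul-control-plane | grep -A 2 Taints

# The not-ready taint is applied automatically while the node's Ready condition is False,
# so the real fix is the network plugin that caused it; check its pods:
kubectl get pods -n kube-system -o wide | grep -E 'kindnet|kube-proxy'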
How many nodes are in your cluster?
What is the output of kubectl get pod --all-namespaces?

Regards,
-
Hi @serewicz,
It worked like @proliant said. Honestly, I understood neither the cause nor the solution.
But thank you for responding.
$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
emojivoto emoji-66ccdb4d86-plw5b 1/1 Running 0 5m12s
emojivoto vote-bot-69754c864f-nnr77 1/1 Running 0 5m12s
emojivoto voting-f999bd4d7-b9lvk 1/1 Running 0 5m12s
emojivoto web-79469b946f-jclqf 1/1 Running 0 5m12s
kube-system coredns-558bd4d5db-4nxlw 1/1 Running 2 13m
kube-system coredns-558bd4d5db-9x4lr 1/1 Running 2 13m
kube-system etcd-linkerd-control-plane 1/1 Running 3 18m
kube-system kindnet-96r8v 1/1 Running 2 13m
kube-system kube-apiserver-linkerd-control-plane 1/1 Running 3 18m
kube-system kube-controller-manager-linkerd-control-plane 1/1 Running 3 18m
kube-system kube-proxy-xgz6k 1/1 Running 2 13m
kube-system kube-scheduler-linkerd-control-plane 1/1 Running 3 18m
local-path-storage   local-path-provisioner-547f784dff-2pft4          1/1     Running   2          13m
-
Hi everyone.
I have found a solution to our problem.
Like you, @ErlisonSantos, I had my kube-proxy and kindnet pods in CrashLoopBackOff status.
I think the problem comes from creating the cluster with kind. I created a new cluster with kind via the following command:
kind create cluster --name=cluster-test
and I got the same situation (the same pods in error).
After several leads, I found the following information on GitHub: manually set the parameter before creating the kind cluster.
The command to do it:
sudo sysctl net/netfilter/nf_conntrack_max=131072
Why? You can follow the conversation in the link below.
I recreated a cluster with kind and there it all worked.
So I re-entered this command in each context, and stopped and restarted the container with the command indicated in the lab:
docker start/stop $(docker ps -a -f name=xxx-control-plane -q)
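Putting the steps together, a minimal sketch of the sequence (the cluster and container names here are only examples; substitute your own):

# 1. Raise the conntrack limit on the host before the kind node starts:
sudo sysctl net/netfilter/nf_conntrack_max=131072

# 2a. For a fresh start, recreate the kind cluster:
kind delete cluster --name=cluster-test
kind create cluster --name=cluster-test

# 2b. Or, for an existing cluster, restart the node container so kube-proxy picks up the new value:
docker stop $(docker ps -a -f name=cluster-test-control-plane -q)
docker start $(docker ps -a -f name=cluster-test-control-plane -q)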
I hope I could help you.
here is the source:
https://github.com/kubernetes-sigs/kind/issues/2240
Best regards
Mike
-
This is another solution, the exact solution I posted on another thread at LFS244.
penguin@vm066:~$ k get nodes -o wide
NAME    STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
vm067   Ready    control-plane,master   2d19h   v1.21.1   192.168.1.67   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.5.5
vm068   Ready    <none>                 2d19h   v1.21.1   192.168.1.68   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.5.5
vm069   Ready    <none>                 2d18h   v1.21.1   192.168.1.69   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.5.5
penguin@vm066:~$ k get pods -A | egrep -e 'NAMESPACE|ambassador|default'
NAMESPACE    NAME                                               READY   STATUS    RESTARTS   AGE
ambassador   ambassador-6bf6958bc-g7glz                         1/1     Running   0          73m
ambassador   ambassador-agent-7fdf56587b-8zg9k                  1/1     Running   0          73m
ambassador   ambassador-operator-7b77fbcfc-74shj                1/1     Running   0          75m
default      nfs-subdir-external-provisioner-6b7ff5bd4c-jgkx6   1/1     Running   1          2d18h
penguin@vm066:~$ showmount -e localhost
Export list for localhost:
/vdb1 *
penguin@vm066:~$ df -h /vdb1
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb1        49G   53M   47G   1% /vdb1
penguin@vm066:~$
The problem is related to dynamic volume provisioning when we don't use a kind cluster.
You can see in the pod listing above that there is a pod called nfs-subdir-external-provisioner-*.
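For reference, a minimal sketch of how such an NFS provisioner can be installed with Helm (the server address and export path below are placeholders, not values from the lab):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.66 \
  --set nfs.path=/vdb1
-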
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-$(uname)-amd64
Try to use the latest version of kind.
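For completeness, a minimal sketch of finishing the install after the curl step (the usual chmod/move into a directory on your PATH):

chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
kind version
-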
@danielbryantuk said:
Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE etc

I am running it in GCP and still have the same issue. I repeated the exercise and it's still the same; some pods are still crashing:
-
@yorimina said:
sudo sysctl net/netfilter/nf_conntrack_max=131072
docker start/stop $(docker ps -a -f name=xxx-control-plane -q)

The same worked here, thanks.