Welcome to the Linux Foundation Forum!

Question: Lab 4.1

Hello everyone.

I need your help with lab 4.1, step 6:

"Make sure Ambassador is ready before you continue by entering the command below. You will see a status for both the ambassador- and ambassador-operator- names. If you don't see a Running status for both with a Ready value of 1/1, wait while the startup continues, and an updated status of Running and 1/1 will be reported when it's ready."

With the command kubectl get pods -n ambassador -w, it displays:

NAME READY STATUS
ambassador-operator-8484bc8c86-q4l29 0/1 ContainerCreating

But when I check my master node, it is in status NotReady:
NAME STATUS ROLES AGE VERSION
consul-control-plane NotReady master 9d v1.18.2

Here is the description of the node:

Name: consul-control-plane
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
ingress-ready=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=consul-control-plane
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 15 Oct 2021 20:44:59 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: consul-control-plane
AcquireTime:
RenewTime: Mon, 25 Oct 2021 09:49:42 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 172.18.0.2
Hostname: consul-control-plane
Capacity:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
System Info:
Machine ID: d3a349af4c374bee873ac5afe1d78bc2
System UUID: 66501c5f-c0a8-48fd-942f-2731288a6079
Boot ID: be3394a3-7813-4435-9a3a-7411b26c56fc
Kernel Version: 5.11.0-1020-gcp
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.3.3-14-g449e9269
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace            Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------            ----                                           ------------  ----------  ---------------  -------------  ---
ambassador           ambassador-operator-8484bc8c86-q4l29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114m
emojivoto            emoji-65df4d68f7-g5tgk                         100m (2%)     0 (0%)      0 (0%)           0 (0%)         3d15h
emojivoto            vote-bot-7c59767698-rgvn9                      10m (0%)      0 (0%)      0 (0%)           0 (0%)         3d15h
emojivoto            voting-768f496cd8-mzn2c                        100m (2%)     0 (0%)      0 (0%)           0 (0%)         3d15h
emojivoto            web-545f869fc4-k5qb7                           100m (2%)     0 (0%)      0 (0%)           0 (0%)         3d15h
kube-system          coredns-66bff467f8-lgwt6                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     9d
kube-system          coredns-66bff467f8-nckcg                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     56m
kube-system          etcd-consul-control-plane                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kindnet-9sbrw                                  100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      9d
kube-system          kube-apiserver-consul-control-plane            250m (6%)     0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kube-controller-manager-consul-control-plane   200m (5%)     0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kube-proxy-kqx79                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kube-scheduler-consul-control-plane            100m (2%)     0 (0%)      0 (0%)           0 (0%)         9d
local-path-storage   local-path-provisioner-bd4bb6b75-d4rct         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                1160m (28%)  100m (2%)
memory             190Mi (1%)   390Mi (2%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-1Gi      0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                   Age                From     Message
----    ------                   ---                ----     -------
Normal  Starting                 15m                kubelet  Starting kubelet.
Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet  Node consul-control-plane status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet  Node consul-control-plane status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet  Node consul-control-plane status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  15m                kubelet  Updated Node Allocatable limit across pods
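The Ready=False condition above blames the container network ("cni plugin not initialized"), so the not-ready taint will stay until the network plugin comes up. A few commands that may help narrow this down on a kind cluster; the app=kindnet label and running the node as a Docker container named after the node are assumptions based on kind's defaults:

```shell
# Check the CNI daemonset pods (kind installs kindnet by default)
kubectl get pods -n kube-system -l app=kindnet -o wide

# Look at kubelet logs inside the kind node container for CNI errors
docker exec consul-control-plane journalctl -u kubelet --no-pager | tail -n 20

# Verify a CNI config file actually exists on the node
docker exec consul-control-plane ls /etc/cni/net.d/
```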

Thank you so much.

Best Regards

Mike

Answers

  • I'm still stuck on the same task (lab 4.1, item 6). I have been trying to fix it, but so far without success. I would appreciate it if anyone who knows the solution could help me.

    Thank you.

    $ kubectl get pods -n ambassador
    NAME READY STATUS RESTARTS AGE
    ambassador-operator-8484bc8c86-pgcpj 0/1 Pending 0 14m

    $ kubectl describe pod -n ambassador ambassador-operator-8484bc8c86-pgcpj
    ...
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
    Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
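The FailedScheduling events above match the node condition: the node lifecycle controller adds the node.kubernetes.io/not-ready taint automatically while the node's Ready condition is False and removes it once the node recovers, so fixing the node (usually the CNI) is what unblocks scheduling. To watch both sides, something along these lines:

```shell
# Node readiness and the automatically managed taint
kubectl get nodes
kubectl describe node consul-control-plane | grep -A2 Taints

# Scheduling events for the stuck pod, newest last
kubectl get events -n ambassador --sort-by=.lastTimestamp
```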

  • Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE etc

  • Hi @danielbryantuk

    I installed on my machine.

    model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz RAM 16GB

    kubectl version v1.22.3

    OS Ubuntu 20.04.3 LTS (Focal Fossa)

  • Additional information

    Before lab 3.2

    $ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    etcd-linkerd-control-plane 1/1 Running 1 53s
    kube-apiserver-linkerd-control-plane 1/1 Running 1 53s
    kube-controller-manager-linkerd-control-plane 1/1 Running 1 53s
    kube-scheduler-linkerd-control-plane 1/1 Running 1 53s

    After lab 3.2

    $ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-66bff467f8-4ksk7 0/1 Pending 0 3d23h
    coredns-66bff467f8-7bdlh 0/1 Pending 0 3d23h
    etcd-linkerd-control-plane 1/1 Running 3 3d23h
    kindnet-8xl75 0/1 CrashLoopBackOff 2 3d23h
    kube-apiserver-linkerd-control-plane 1/1 Running 3 3d23h
    kube-controller-manager-linkerd-control-plane 1/1 Running 3 3d23h
    kube-proxy-64vbv 0/1 CrashLoopBackOff 6 3d23h
    kube-scheduler-linkerd-control-plane 1/1 Running 3 3d23h

    $ kubectl describe pods -n kube-system kindnet-8xl75
    Warning FailedMount 4m25s (x2 over 4m26s) kubelet MountVolume.SetUp failed for volume "kindnet-token-bpck8" : failed to sync secret cache: timed out waiting for the condition
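For a pod stuck in CrashLoopBackOff, the logs from the previous attempt usually show why it keeps dying. A sketch, using the pod names from the output above (yours will differ):

```shell
# Logs from the last failed run of the kindnet pod
kubectl logs -n kube-system kindnet-8xl75 --previous

# Same for kube-proxy
kubectl logs -n kube-system kube-proxy-64vbv --previous

# Full event history for the pod
kubectl describe pod -n kube-system kindnet-8xl75
```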

  • I ran into the same problem when I did lab 3.2 on my PC (3 Ubuntu 20.02 VMs: 1 control plane, 2 workers, with kube*=v1.21-00). Eventually I followed the lab and used the kind cluster instead, and both ambassador pods started successfully.

  • Hi danielbryantuk

    I installed on my machine.

    $ cat /proc/cpuinfo
    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 158
    model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz
    ...

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

    $ cat /etc/os-release
    NAME="Ubuntu"
    VERSION="20.04.3 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.3 LTS"
    VERSION_ID="20.04"
    ...

serewicz

    Hello,

If you only have one node in your cluster, then that node is the control plane, and that control plane has a taint. If you look at the output of your node describe, you will find this line: Taints: node.kubernetes.io/not-ready:NoSchedule, which limits the pods that can be scheduled on that node.

    How many nodes are in your cluster?
    What is the output of kubectl get pod --all-namespaces?
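To gather the information asked for above in one pass, something like the following should work (names will differ per cluster):

```shell
kubectl get nodes -o wide                 # node count and Ready state
kubectl get pods --all-namespaces         # pod health across all namespaces
kubectl describe nodes | grep -i taint    # taints on every node
```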

    Regards,
