
Question: Lab 4.1

Hello everyone.

I need your help with Lab 4.1, step 6:

"Make sure Ambassador is ready before you continue by entering the command below.You will see a status for both the ​ambassador-​ and ​ambassador-operator-​ names.If you don’t see a ​Running​ status for both with a Ready value of ​1/1​, wait while thestartup continues, and an updated status of ​Running​ and ​1/1​ will be reported when it’sready. "

With the command kubectl get pods -n ambassador -w, it displays:

NAME READY STATUS
ambassador-operator-8484bc8c86-q4l29 0/1 ContainerCreating
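
For reference, an equivalent way to wait until the pods are ready (a sketch; the 300-second timeout is an arbitrary choice of mine) is:

kubectl wait --for=condition=Ready pods --all -n ambassador --timeout=300s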

But when I check my master node, it is in "NotReady" status:
NAME STATUS ROLES AGE VERSION
consul-control-plane NotReady master 9d v1.18.2

Here is the description of the node:

Name: consul-control-plane
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
ingress-ready=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=consul-control-plane
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 15 Oct 2021 20:44:59 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: consul-control-plane
AcquireTime:
RenewTime: Mon, 25 Oct 2021 09:49:42 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            False   Mon, 25 Oct 2021 09:49:22 +0000   Fri, 15 Oct 2021 20:44:57 +0000   KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 172.18.0.2
Hostname: consul-control-plane
Capacity:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 30308240Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15358012Ki
pods: 110
System Info:
Machine ID: d3a349af4c374bee873ac5afe1d78bc2
System UUID: 66501c5f-c0a8-48fd-942f-2731288a6079
Boot ID: be3394a3-7813-4435-9a3a-7411b26c56fc
Kernel Version: 5.11.0-1020-gcp
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.3.3-14-g449e9269
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace            Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------            ----                                           ------------  ----------  ---------------  -------------  ---
ambassador           ambassador-operator-8484bc8c86-q4l29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114m
emojivoto            emoji-65df4d68f7-g5tgk                         100m (2%)     0 (0%)      0 (0%)           0 (0%)         3d15h
emojivoto            vote-bot-7c59767698-rgvn9                      10m (0%)      0 (0%)      0 (0%)           0 (0%)         3d15h
emojivoto            voting-768f496cd8-mzn2c                        100m (2%)     0 (0%)      0 (0%)           0 (0%)         3d15h
emojivoto            web-545f869fc4-k5qb7                           100m (2%)     0 (0%)      0 (0%)           0 (0%)         3d15h
kube-system          coredns-66bff467f8-lgwt6                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     9d
kube-system          coredns-66bff467f8-nckcg                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     56m
kube-system          etcd-consul-control-plane                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kindnet-9sbrw                                  100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      9d
kube-system          kube-apiserver-consul-control-plane            250m (6%)     0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kube-controller-manager-consul-control-plane   200m (5%)     0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kube-proxy-kqx79                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
kube-system          kube-scheduler-consul-control-plane            100m (2%)     0 (0%)      0 (0%)           0 (0%)         9d
local-path-storage   local-path-provisioner-bd4bb6b75-d4rct         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                1160m (28%)  100m (2%)
memory             190Mi (1%)   390Mi (2%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-1Gi      0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                   Age                From     Message
----    ------                   ---                ----     -------
Normal  Starting                 15m                kubelet  Starting kubelet.
Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet  Node consul-control-plane status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet  Node consul-control-plane status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet  Node consul-control-plane status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  15m                kubelet  Updated Node Allocatable limit across pods
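
Since the Ready condition above says "cni plugin not initialized", I guess the CNI pod (kindnet on a kind cluster) is the next thing to look at; a rough check, using the pod name from the listing above, would be something like:

kubectl get pods -n kube-system | grep -E 'kindnet|kube-proxy'
kubectl logs -n kube-system kindnet-9sbrw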

Thank you so much.

Best Regards

Mike


Answers

  • I'm still stuck on the same task (Lab 4.1, step 6). I've been trying to fix it, but so far nothing has worked. I would appreciate it if anyone who knows the solution could help me.

    Thank you.

    $ kubectl get pods -n ambassador
    NAME READY STATUS RESTARTS AGE
    ambassador-operator-8484bc8c86-pgcpj 0/1 Pending 0 14m

    $ kubectl describe pod -n ambassador ambassador-operator-8484bc8c86-pgcpj
    ...
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
    Warning FailedScheduling 10m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

  • Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE etc

  • Hi @danielbryantuk

    I installed it on my machine.

    model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz RAM 16GB

    kubectl version v1.22.3

    OS Ubuntu 20.04.3 LTS (Focal Fossa)

  • Additional information

    Before lab 3.2

    $ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    etcd-linkerd-control-plane 1/1 Running 1 53s
    kube-apiserver-linkerd-control-plane 1/1 Running 1 53s
    kube-controller-manager-linkerd-control-plane 1/1 Running 1 53s
    kube-scheduler-linkerd-control-plane 1/1 Running 1 53s

    After lab 3.2

    $ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-66bff467f8-4ksk7 0/1 Pending 0 3d23h
    coredns-66bff467f8-7bdlh 0/1 Pending 0 3d23h
    etcd-linkerd-control-plane 1/1 Running 3 3d23h
    kindnet-8xl75 0/1 CrashLoopBackOff 2 3d23h
    kube-apiserver-linkerd-control-plane 1/1 Running 3 3d23h
    kube-controller-manager-linkerd-control-plane 1/1 Running 3 3d23h
    kube-proxy-64vbv 0/1 CrashLoopBackOff 6 3d23h
    kube-scheduler-linkerd-control-plane 1/1 Running 3 3d23h

    $ kubectl describe pods -n kube-system kindnet-8xl75
    Warning FailedMount 4m25s (x2 over 4m26s) kubelet MountVolume.SetUp failed for volume "kindnet-token-bpck8" : failed to sync secret cache: timed out waiting for the condition
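
    To dig further into the CrashLoopBackOff, a couple of generic checks might help (a sketch; the pod name is the one from the listing above):

    $ kubectl logs -n kube-system kindnet-8xl75 --previous
    $ kubectl get events -n kube-system --sort-by=.lastTimestamp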

  • I ran into the same problem when I did Lab 3.2 on my PC (3 Ubuntu 20.02 VMs: 1 control plane, 2 workers, with kube*=v1.21-00). Eventually I followed the lab and used the kind cluster instead, and the two Ambassador pods started successfully.

  • Hi danielbryantuk

    I installed it on my machine.

    $ cat /proc/cpuinfo
    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 158
    model name : Intel(R) Xeon(R) E-2224 CPU @ 3.40GHz
    ...

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

    $ cat /etc/os-release
    NAME="Ubuntu"
    VERSION="20.04.3 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.3 LTS"
    VERSION_ID="20.04"
    ...

  • Hello,

    If you only have one node in your cluster, then that node is the control plane, and that control plane has a taint. If you look at the output of your node describe, you will find this line: Taints: node.kubernetes.io/not-ready:NoSchedule, which limits which pods can be scheduled on that node.

    How many nodes are in your cluster?
    What is the output of kubectl get pod --all-namespaces?
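
    For example, a quick way to view the taints (a sketch using the node name from your output) is:

    $ kubectl describe node consul-control-plane | grep -i taint
    $ kubectl get nodes -o jsonpath='{.items[*].spec.taints}'

    Note that the node.kubernetes.io/not-ready taint is added and removed automatically as the node's Ready condition changes, so fixing whatever keeps the node NotReady (here, the CNI) is the real solution rather than removing the taint by hand.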

    Regards,

  • Hi @serewicz,

    it worked as @proliant said. Honestly, I understood neither the cause nor the solution.

    But thank you for responding.

    $ kubectl get pod --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    emojivoto emoji-66ccdb4d86-plw5b 1/1 Running 0 5m12s
    emojivoto vote-bot-69754c864f-nnr77 1/1 Running 0 5m12s
    emojivoto voting-f999bd4d7-b9lvk 1/1 Running 0 5m12s
    emojivoto web-79469b946f-jclqf 1/1 Running 0 5m12s
    kube-system coredns-558bd4d5db-4nxlw 1/1 Running 2 13m
    kube-system coredns-558bd4d5db-9x4lr 1/1 Running 2 13m
    kube-system etcd-linkerd-control-plane 1/1 Running 3 18m
    kube-system kindnet-96r8v 1/1 Running 2 13m
    kube-system kube-apiserver-linkerd-control-plane 1/1 Running 3 18m
    kube-system kube-controller-manager-linkerd-control-plane 1/1 Running 3 18m
    kube-system kube-proxy-xgz6k 1/1 Running 2 13m
    kube-system kube-scheduler-linkerd-control-plane 1/1 Running 3 18m
    local-path-storage local-path-provisioner-547f784dff-2pft4 1/1 Running 2 13m

  • Hi @serewicz

    That's true, I only have one node (the master):

    kubectl get nodes -n kube-system
    NAME STATUS ROLES AGE VERSION
    consul-control-plane NotReady master 45h v1.18.2

    Thank you in advance,

    Best Regards

    mike

  • Hi everyone.

    I have found a solution to our problem.
    Like you, @ErlisonSantos, I had my kube-proxy and kindnet pods in CrashLoopBackOff status.
    I think the problem comes from creating the cluster with kind. I created a new cluster with kind via the following command:

    kind create cluster --name=cluster-test

    and I got the same situation (the same pods in error).
    After several leads, I found the following information on GitHub:

    Manually set the parameter before creating the kind cluster.

    The command to do it:

    sudo sysctl net/netfilter/nf_conntrack_max=131072

    Why? You can follow the conversation in the link below.

    I recreated a cluster with kind, and this time everything worked.

    So I re-entered this command in each context, then stopped and restarted the container with the command indicated in the lab:

    docker stop $(docker ps -a -f name=xxx-control-plane -q)
    docker start $(docker ps -a -f name=xxx-control-plane -q)

    I hope I could help you.

    Here is the source:

    https://github.com/kubernetes-sigs/kind/issues/2240
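
    For what it's worth, a way to make that setting persist across host reboots (a sketch, assuming a standard Ubuntu host; the file name is an arbitrary choice) is:

    $ echo 'net.netfilter.nf_conntrack_max=131072' | sudo tee /etc/sysctl.d/99-conntrack.conf
    $ sudo sysctl --system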

    Best regards

    Mike

  • This is another solution; it is the same one I posted in another thread for LFS244.

    1. penguin@vm066:~$ k get nodes -o wide
    2. NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    3. vm067 Ready control-plane,master 2d19h v1.21.1 192.168.1.67 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic containerd://1.5.5
    4. vm068 Ready <none> 2d19h v1.21.1 192.168.1.68 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic containerd://1.5.5
    5. vm069 Ready <none> 2d18h v1.21.1 192.168.1.69 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic containerd://1.5.5
    6. penguin@vm066:~$ k get pods -A | egrep -e 'NAMESPACE|ambassador|default'
    7. NAMESPACE NAME READY STATUS RESTARTS AGE
    8. ambassador ambassador-6bf6958bc-g7glz 1/1 Running 0 73m
    9. ambassador ambassador-agent-7fdf56587b-8zg9k 1/1 Running 0 73m
    10. ambassador ambassador-operator-7b77fbcfc-74shj 1/1 Running 0 75m
    11. default nfs-subdir-external-provisioner-6b7ff5bd4c-jgkx6 1/1 Running 1 2d18h
    12. penguin@vm066:~$ showmount -e localhost
    13. Export list for localhost:
    14. /vdb1 *
    15. penguin@vm066:~$ df -h /vdb1
    16. Filesystem Size Used Avail Use% Mounted on
    17. /dev/vdb1 49G 53M 47G 1% /vdb1
    18. penguin@vm066:~$

    The problem is related to dynamic volume provisioning when we don't use a kind cluster.
    You can see on line 11 that there is a pod called nfs-subdir-external-provisioner-*.
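
    If you are on a non-kind cluster, a quick check is whether a default StorageClass exists, and if not, installing a provisioner such as the one on line 11. A rough sketch (chart name and values taken from the kubernetes-sigs nfs-subdir-external-provisioner project; <nfs-server> and <export-path> are placeholders for your own NFS export):

    $ kubectl get storageclass
    $ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    $ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
        --set nfs.server=<nfs-server> --set nfs.path=<export-path>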

  • Try using the latest version of kind:

    curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-$(uname)-amd64
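
    To finish the install (the standard kind quick-start steps; this assumes /usr/local/bin is on your PATH):

    chmod +x ./kind
    sudo mv ./kind /usr/local/bin/kind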

  • @danielbryantuk said:
    Hi! I was one of the original course creators and will try to help. Could you confirm what type of cluster you are running the example on, please? e.g. minikube, GKE etc

    I am running it in GCP and have the same issue. I repeated the exercise and still got the same result: some pods keep crashing.
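
    A few generic commands that usually narrow down crashing pods (a sketch; <namespace> and <pod> are placeholders for the failing pod):

    $ kubectl get pods -A
    $ kubectl describe pod -n <namespace> <pod>
    $ kubectl logs -n <namespace> <pod> --previous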

  • @yorimina said:
    sudo sysctl net/netfilter/nf_conntrack_max=131072
    docker stop $(docker ps -a -f name=xxx-control-plane -q)
    docker start $(docker ps -a -f name=xxx-control-plane -q)

    The same worked here, thanks!
