Lab 12.1 vip pod in Pending status

While doing Lab 12.1, my vip pod was stuck in Pending status.
The issue was that there were not enough resources, specifically CPU.
To get it working, I had to edit the vip.yaml file, adding "resources.requests" and "resources.limits" to it.
For whoever is facing the same issue, the content of the vip.yaml file is attached to this discussion.
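As a rough illustration only (not the attached file itself; the container name, image, and values below are placeholders), the requests and limits go under each container in the pod spec:

  apiVersion: v1
  kind: Pod
  metadata:
    name: vip
  spec:
    containers:
    - name: busybox              # placeholder container; the real vip.yaml may differ
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:                # minimum CPU/memory the scheduler must find free on a node
          cpu: "100m"
          memory: "64Mi"
        limits:                  # hard cap enforced on the running container
          cpu: "250m"
          memory: "128Mi"

The scheduler only places the pod on a node whose free allocatable CPU and memory cover the requests, which is why an overcommitted node leaves the pod in Pending.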

Comments

  • chrispokorni

    Hi @Turigebbia,

    What other workload is running on the nodes?
    kubectl get pods -A -o wide

    Regards,
    -Chris

  • I'm following Lab 13.3 now, so the monitoring pods were not there at that time.
    This is the output:

    ubuntu@k8scp:~/metrics-server$ kubectl get pods -A -o wide
    NAMESPACE    NAME                                     READY  STATUS   RESTARTS     AGE  IP             NODE    NOMINATED NODE   READINESS GATES
    kube-system  cilium-5vklj                             1/1    Running  7 (91m ago)  52d  10.0.3.248     k8scp
    kube-system  cilium-operator-56bdb99ff6-mb4vl         1/1    Running  5 (91m ago)  50d  10.0.3.248     k8scp
    kube-system  cilium-operator-56bdb99ff6-xbxd9         1/1    Running  3 (16h ago)  50d  10.0.3.38      worker
    kube-system  cilium-pm74p                             1/1    Running  6 (16h ago)  52d  10.0.3.38      worker
    kube-system  coredns-76f75df574-dt8lw                 1/1    Running  2 (91m ago)  10d  192.168.0.32   k8scp
    kube-system  coredns-76f75df574-hxgwm                 1/1    Running  1 (91m ago)  14h  192.168.0.83   k8scp
    kube-system  etcd-k8scp                               1/1    Running  5 (91m ago)  50d  10.0.3.248     k8scp
    kube-system  kube-apiserver-k8scp                     1/1    Running  5 (91m ago)  50d  10.0.3.248     k8scp
    kube-system  kube-controller-manager-k8scp            1/1    Running  5 (91m ago)  50d  10.0.3.248     k8scp
    kube-system  kube-proxy-k9g9s                         1/1    Running  3 (16h ago)  50d  10.0.3.38      worker
    kube-system  kube-proxy-vnq52                         1/1    Running  4 (91m ago)  50d  10.0.3.248     k8scp
    kube-system  kube-scheduler-k8scp                     1/1    Running  5 (91m ago)  50d  10.0.3.248     k8scp
    kube-system  metrics-server-85477c4f6-js2gm           1/1    Running  0            11m  192.168.0.205  k8scp
    linkerd-viz  metrics-api-f65cc5f94-xql67              2/2    Running  3 (89m ago)  15h  192.168.0.245  k8scp
    linkerd-viz  prometheus-5c6766c88-dxjqx               1/1    Running  1 (91m ago)  14h  192.168.0.139  k8scp
    linkerd-viz  tap-674cfddfdb-tl77r                     2/2    Running  4 (89m ago)  15h  192.168.0.36   k8scp
    linkerd-viz  tap-injector-6bd67b7bcb-7pdj7            2/2    Running  2 (91m ago)  14h  192.168.0.113  k8scp
    linkerd-viz  web-7fd44d6fcd-k5dtm                     2/2    Running  2 (91m ago)  15h  192.168.0.238  k8scp
    linkerd      linkerd-destination-5c8bf585b5-k98qr     4/4    Running  4 (91m ago)  14h  192.168.0.48   k8scp
    linkerd      linkerd-identity-5c4965cf78-cnnkn        2/2    Running  2 (91m ago)  14h  192.168.0.92   k8scp
    linkerd      linkerd-proxy-injector-75f49fcf4f-zp5nc  2/2    Running  2 (91m ago)  14h  192.168.0.144  k8scp

  • chrispokorni

    Hi @Turigebbia,

    The workload seems normal. The only discrepancy is the control plane node name: in your case it is k8scp, which is intended to be only an alias in preparation for the HA control plane in Chapter 16.

    Can you describe your nodes and provide the output? This time please format the output as Code from the action ribbon above the comment text box.
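
    For example (this describes all nodes in the cluster):
    kubectl describe nodes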

    Regards,
    -Chris

  • Hi @chrispokorni, sorry for the late reply. After a few more hours of troubleshooting, I realized the worker node was not reachable anymore. I am not sure why Kubernetes did not immediately show "NotReady" next to the worker node when running the "kubectl get nodes" command. It turns out I had edited the AWS security group attached to the instances with a typo, so they could not communicate with each other. After fixing it, everything was working properly again.
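
    In case it helps anyone else hitting a similar Pending pod, these standard kubectl commands are a quick way to check node reachability and the scheduler's reasoning (the pod and node names below are the ones from this thread; adjust them to your cluster):

    kubectl get nodes -o wide        # a NotReady STATUS points to a node or network problem
    kubectl describe pod vip         # the Events section shows why the scheduler left the pod Pending
    kubectl describe node worker     # Conditions and Allocatable show node health and free CPU/memory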
