LFD259 --- Lab 3.3. Configure Probes --- goproxy not able to start

The readinessProbe worked fine through Step 8. The issue starts at Step 9 with the livenessProbe.

I need a clue to debug why goproxy is not running (a direct answer would also be appreciated). How do I debug starting from the output of kubectl describe pod, and which logs should I look at to understand why the container is not running as expected? simpleapp.yaml is attached as simpleapp.txt.

State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
...
Ready:          False
    Restart Count:  5
    Liveness:       tcp-socket :8080 delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdphk (ro)
...
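
For reference, the probe settings in the describe output above correspond to a manifest fragment roughly like the following (the container name and image are taken from later in the thread; this is a sketch, not the attached simpleapp.yaml — the timeouts map directly onto the delay/timeout/period values shown by describe):

```yaml
containers:
- name: goproxy
  image: k8s.gcr.io/goproxy:0.1
  ports:
  - containerPort: 8080
  readinessProbe:
    tcpSocket:
      port: 8080           # delay=5s timeout=1s period=10s in describe
    initialDelaySeconds: 5
    timeoutSeconds: 1
    periodSeconds: 10
  livenessProbe:
    tcpSocket:
      port: 8080           # delay=15s timeout=1s period=20s in describe
    initialDelaySeconds: 15
    timeoutSeconds: 1
    periodSeconds: 20
```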

Detailed logs attached as log1.txt.
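
The questions above are usually answered by a sequence like this (the pod name is a placeholder for the actual try1 pod, and the commands must be run against the live cluster):

```shell
# Placeholder name; substitute the actual pod from "kubectl get pods"
POD=try1-xxxxxxxxxx-xxxxx

# The Events section at the bottom of describe usually names the
# failing probe, image-pull error, or OOM kill
kubectl describe pod "$POD"

# Logs of the current container, and of the previous (crashed) instance —
# for a CrashLoopBackOff, --previous is usually the one that explains the exit
kubectl logs "$POD" -c goproxy
kubectl logs "$POD" -c goproxy --previous

# Cluster-wide events, newest last
kubectl get events --sort-by=.metadata.creationTimestamp
```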

Answers

  • Hi @nirmalyad123,

    From your attached output it seems that kubectl runs as root, which is not a recommended method to manage the cluster.

    After the -- touch /tmp/healthy loop, what is the status of the try1 pods?

    Regards,
    -Chris

  • nirmalyad123 (edited December 2021)

    Hi @chrispokorni,

I re-initiated the cluster so that kubectl could be run as a non-root user. The same issue persists.

I followed the steps at https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe.
Still the same issue. The netstat logs are attached; there is no entry for port 8080.

The Pod status cycles between CrashLoopBackOff and Error, finally settling on CrashLoopBackOff.

Running the container alone with podman:

    ubuntu@s00:~/app1$ sudo podman run k8s.gcr.io/goproxy:0.1
    ERRO[0000] error loading cached network config: network "podman" not found in CNI cache
    WARN[0000] falling back to loading from existing plugins on disk
    ERRO[0000] Error tearing down partially created network namespace for container 1098f67ee88cc48d6ad509cd79f5b6b431db4f7a35936be7805531e189135d1d: CNI network "podman" not found
    Error: error configuring network namespace for container 1098f67ee88cc48d6ad509cd79f5b6b431db4f7a35936be7805531e189135d1d: CNI network "podman" not found

Running it with sudo docker run k8s.gcr.io/goproxy:0.1 instead:

    Unable to find image 'k8s.gcr.io/goproxy:0.1' locally
    0.1: Pulling from goproxy
    ebefbb1a8dca: Pull complete
    a3ed95caeb02: Pull complete
    Digest: sha256:5334c7ad43048e3538775cb09aaf184f5e8acf4b0ea60e3bc8f1d93c209865a5
    Status: Downloaded newer image for k8s.gcr.io/goproxy:0.1

    NUC:~$ sudo docker container ls
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    4f92b1f015c4 k8s.gcr.io/goproxy:0.1 "/goproxy" 2 minutes ago Up 2 minutes 8080/tcp suspicious_aryabhata
    NUC:~$
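
Since the container runs fine under Docker, reachability of port 8080 can be confirmed with a plain TCP connect, which is all a kubelet tcpSocket probe does (container name taken from the listing above; assumes the default bridge network):

```shell
# Resolve the container's IP on the default bridge network
IP=$(sudo docker inspect -f '{{.NetworkSettings.IPAddress}}' suspicious_aryabhata)

# Attempt a TCP connection to the container port, as the probe would
nc -zv "$IP" 8080 && echo "port 8080 open"
```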

Also, there is a warning in the following output:

    ubuntu@s00:~/app1$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    configmap/calico-config unchanged
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
    clusterrole.rbac.authorization.k8s.io/calico-node unchanged
    clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
    daemonset.apps/calico-node configured
    serviceaccount/calico-node unchanged
    deployment.apps/calico-kube-controllers unchanged
    serviceaccount/calico-kube-controllers unchanged
    Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    poddisruptionbudget.policy/calico-kube-controllers configured
    ubuntu@s00:~/app1$

  • Hi @nirmalyad123,

    What Pod network is used by Calico in your cluster? And what are the IP addresses of your control-plane and worker VMs?

    Keep in mind that port 8080 is a container port, not a host port.

Also, mixing podman and docker may not help. In this course, podman is used to build the container image, interact with the container registry, and run a container in Lab 3, while crictl is used to troubleshoot the containers that Kubernetes runs on the cri-o runtime.
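
To illustrate both points — reaching a container port without a host port, and inspecting a crash-looping container with crictl — the commands look roughly like this (pod name and container ID are placeholders):

```shell
# Reach the container port from the workstation without any host port,
# via a temporary port-forward (local 9090 -> container 8080):
kubectl port-forward pod/try1-xxxxxxxxxx-xxxxx 9090:8080 &
# ...then connect to localhost:9090

# On the node, inspect the failing container directly with crictl:
sudo crictl ps -a                  # list containers, including exited ones
sudo crictl logs <container-id>    # logs of the failing goproxy container
sudo crictl inspect <container-id> # full runtime state, exit code included
```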

    Regards,
    -Chris

  • Hi @chrispokorni,

I was trying on an ARM architecture (Raspberry Pi 4 B, 8 GB RAM) with both Ubuntu 18 and 20. On that setup, this assignment got stuck on goproxy.

Trying with Ubuntu 18 on AWS, the issue was not seen.

    Thanks for sharing different debug approaches.

Regards,
Nirmalya

  • Hi @nirmalyad123,

    Raspberry Pi is not a supported environment for the labs of this course.
    However, AWS EC2 instances are recommended, together with GCE instances as alternatives. Labs have also been successfully completed on local hypervisors such as VirtualBox and KVM.

    Regards,
    -Chris
