
Container fails for no apparent reason

suser Posts: 67
edited April 2020 in LFD259 Class Forum

Hello,
I applied the YAML configuration from https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/probe/exec-liveness.yaml
but first I edited the line
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
and changed it to
- touch /tmp/healthy; sleep 30
Surprisingly, the container still fails after 36 seconds, even though the file /tmp/healthy is never removed (the livenessProbe checks the file every 5 seconds).
Why is this happening?
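For reference, the probe section of that manifest looks roughly like this (reproduced from the link above, so treat the exact values as approximate):

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5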

Stefan

Answers

  • serewicz Posts: 1,000

    Hello,

    Please let us know where in the course you found this so we can help troubleshoot. The more details the better.

    Regards,

  • chrispokorni Posts: 2,155

    Hi Stefan,

    I recommend first following the exercises as they are presented in the lab manual, reading the concepts presented in the course, and also reading the supporting official documentation. Once you are familiar and comfortable with the topic, then attempt to deviate from the lab exercise by making changes to the code, understanding the implications of each change, and confirming your expectations by monitoring the behavior of your resources.

    Keep in mind that some code lines are pure Linux commands, which work the same way inside Kubernetes and containers as they normally would in a typical virtual or physical environment.
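    For example, the edited line from the probe exercise behaves exactly the same in any POSIX shell; a quick sketch:

    $ sh -c 'touch /tmp/healthy; sleep 30'   # runs both commands, then the shell exits
    $ echo $?                                # 0: a clean exit; nothing keeps the process alive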

    Regards,
    -Chris

  • suser Posts: 67

    Chris,
    The example above is taken from the resources allowed during the exam (https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), so I am not sure whether I am out of scope with this one:
    If I remove the two Linux commands "rm -rf /tmp/healthy; sleep 600" from the YAML configuration and then delete and (re)create the pod, the pod still fails and I do not understand why (the docs say it should fail only because the /tmp/healthy file gets deleted).

    I am confused, sorry for the trouble.

    Stefan

  • chrispokorni Posts: 2,155

    Did you try removing the probe section altogether? What happens with your container then?
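    If it still fails with the probe gone, the probe was never the cause. As a sketch, a variant whose command does not exit (the sleep length here is arbitrary) should stay in the Running state:

    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/busybox
        args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 3600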

  • suser Posts: 67

    Thank you for your suggestion Chris!
    My YAML:

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        test: liveness
      name: liveness-exec
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/busybox
        args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30;

    The result is that the container keeps failing while there is no probe at all:

    Events:
      Type     Reason     Age                   From               Message
      ----     ------     ----                  ----               -------
      Normal   Scheduled  17m                   default-scheduler  Successfully assigned default/liveness-exec to kw1
      Normal   Created    15m (x4 over 17m)     kubelet, kw1       Created container liveness
      Normal   Started    15m (x4 over 17m)     kubelet, kw1       Started container liveness
      Normal   Pulling    13m (x5 over 17m)     kubelet, kw1       Pulling image "k8s.gcr.io/busybox"
      Normal   Pulled     13m (x5 over 17m)     kubelet, kw1       Successfully pulled image "k8s.gcr.io/busybox"
      Warning  BackOff    2m29s (x52 over 16m)  kubelet, kw1       Back-off restarting failed container

    Stefan

  • suser Posts: 67

    Hi again :)

    I have the same problem over and over, every single time I set sh commands on a pod's containers.
    For example, all of the pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps simply complete after running their command, or, if I set restartPolicy: Always, they keep crashing with reason CrashLoopBackOff.
    (The containers work fine if I do not set any command on them.)

    I cannot figure out why, and I need a little help:

    YAML:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: special-config
    data:
      how: very
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-test-pod
    spec:
      containers:
      - name: test-container
        image: nginx
        ports:
        - containerPort: 88
        command: [ "/bin/sh", "-c", "env" ]
        env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: how
      restartPolicy: Always

    kubectl describe pod dapi-test-pod
    Name:         dapi-test-pod
    Namespace:    default
    Priority:     0
    Node:         kw1/10.1.10.31
    Start Time:   Thu, 21 May 2020 01:02:17 +0000
    Labels:       <none>
    Annotations:  cni.projectcalico.org/podIP: 192.168.159.83/32
                  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
    Status:       Running
    IP:           192.168.159.83
    IPs:
      IP:  192.168.159.83
    Containers:
      test-container:
        Container ID:   docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
        Image:          nginx
        Image ID:       docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
        Port:           88/TCP
        Host Port:      0/TCP
        Command:
          sh
          -c
          env
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Thu, 21 May 2020 01:13:21 +0000
          Finished:     Thu, 21 May 2020 01:13:21 +0000
        Ready:          False
        Restart Count:  7
        Environment:
          SPECIAL_LEVEL_KEY:  <set to the key 'how' of config map 'special-config'>  Optional: false
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      default-token-zqbsw:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-zqbsw
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason     Age                   From               Message
      ----     ------     ----                  ----               -------
      Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/dapi-test-pod to kw1
      Normal   Pulling    12m (x4 over 13m)     kubelet, kw1       Pulling image "nginx"
      Normal   Pulled     12m (x4 over 13m)     kubelet, kw1       Successfully pulled image "nginx"
      Normal   Created    12m (x4 over 13m)     kubelet, kw1       Created container test-container
      Normal   Started    12m (x4 over 13m)     kubelet, kw1       Started container test-container
      Warning  BackOff    3m16s (x49 over 13m)  kubelet, kw1       Back-off restarting failed container

    Stefan
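    The describe output above points at the cause: the container ran env, printed the environment, and exited cleanly (Last State: Terminated, Reason: Completed, Exit Code: 0), and restartPolicy: Always then keeps restarting the finished container until the kubelet backs off with CrashLoopBackOff. A minimal sketch of the usual way to run such a one-shot command, assuming restartPolicy: Never so the completed pod is left alone:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-test-pod
    spec:
      containers:
      - name: test-container
        image: nginx
        command: [ "/bin/sh", "-c", "env" ]
        env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: how
      restartPolicy: Never   # let the one-shot command complete instead of restarting it

    The printed environment can then be read back with:

    kubectl logs dapi-test-pod | grep SPECIAL_LEVEL_KEY   # SPECIAL_LEVEL_KEY=very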

  • suser Posts: 67

    Thank you very much for your insight @Pascal.
    To me the whole circus looks like overkill just for setting some variables. Where is the advantage? Is there a simpler way to achieve this, and what are the trade-offs then?

    Stefan
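    One commonly used shortcut is envFrom, which imports every key of a ConfigMap as an environment variable in a single stanza; a minimal sketch, reusing the special-config ConfigMap from above (the trade-off is that you can no longer rename or cherry-pick individual keys, and each key must be a valid variable name):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-test-pod
    spec:
      containers:
      - name: test-container
        image: nginx
        envFrom:
        - configMapRef:
            name: special-config   # every key/value pair becomes an env variable
      restartPolicy: Always

    With no command override, the nginx container keeps running, and the variables can be checked with kubectl exec dapi-test-pod -- env.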
