Container fails for no apparent reason

Hello,
I applied this YAML configuration: https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/probe/exec-liveness.yaml
but first I edited the line
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
and changed it to
- touch /tmp/healthy; sleep 30
Surprisingly, the container still fails after 36 seconds, even though the file /tmp/healthy is never removed (the livenessProbe checks the file every 5 seconds).
Why is this happening?
Stefan
Best Answers
-
chrispokorni Posts: 800
Hi Stefan,
In your prior post, your liveness container behaves according to its PodSpec: it performs a task, and then it completes. Depending on the restart policy, a continuous loop will have your container restarted.
From the most recent output, it seems that the test-container running the nginx image behaves as expected, based on your PodSpec. If you need help understanding container behavior, I recommend reading up on key instructions used when building container images. The following instructions should clarify the difference between a container running with its default configuration and an executable container:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
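As a rough illustration of that difference in PodSpec terms (pod and container names here are hypothetical): command overrides the image's ENTRYPOINT and args overrides its CMD, so setting either replaces the long-lived default process the image would otherwise start:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-demo   # hypothetical name
spec:
  containers:
  - name: demo
    image: nginx
    # command replaces the image's ENTRYPOINT; args replaces its CMD.
    # With neither set, nginx runs its default long-lived server process;
    # with this override, the shell exits once echo and sleep finish,
    # and the container is considered terminated.
    command: ["/bin/sh", "-c"]
    args: ["echo hello; sleep 30"]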
Regards,
-Chris
-
pamvdam Posts: 3
Stefan,
Like above, the issue is that your Pod has one single container with the NGINX image, but you are starting the command "/bin/sh -c env". Ergo, this will print the env vars and then exit. It is a Pod with a finite workload. The process exits after having done its job. As it is the main process in the container, the container will also exit. It is a finite process. As the restart policy is Always, the container will get restarted with the same result: it exits. You can play with this by, for example, substituting your command content with "sleep 3600". This will let the container live for 1h, then exit, and the circus will start all over.
This is normal, 'designed' behaviour. A finite workload like this is best handled by a Job/CronJob (see the Job sketch below). An infinite workload (one that never ends, like a web app always listening for connections on a socket) is best suited to a ReplicaSet or Deployment.
Kind regards,
Pascal
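A minimal Job sketch along those lines (the name is hypothetical); unlike a bare Pod with restartPolicy: Always, a Job treats a clean exit as completion rather than a failure to restart:
apiVersion: batch/v1
kind: Job
metadata:
  name: env-printer   # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: env-printer
        image: nginx
        command: ["/bin/sh", "-c", "env"]
      # Jobs only accept Never or OnFailure here; exit code 0
      # marks the Job complete instead of triggering a restart loop.
      restartPolicy: Never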
-
pamvdam Posts: 3
In your post of 16 April, after 600s your command will 'fall through'. The sleep completes after 10 minutes and the shell exits with an exit code (probably 0). But that is not what the Pod is expecting, so it restarts. So either the container gets killed by the probe and restarted, or eventually it gets restarted simply because your command has finished.
If you have more questions, just let me know.
Kind regards,
Pascal
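For reference, this is the manifest from the linked exec-liveness example as published in the Kubernetes docs; the timeline comment is added here to annotate Pascal's explanation:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    # t~0s: file created; t~30s: file removed, so the probe starts failing
    # and the kubelet kills the container; even without the rm, the shell
    # (the main process) exits at t~630s and the container is restarted anyway.
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5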
Answers
Hello,
Please let us know where in the course you found this so we can help troubleshoot. The more details, the better.
Regards,
Hi Stefan,
I recommend first following the exercises as they are presented in the lab manual, reading the concepts presented in the course, and reading the supporting official documentation. Once familiar and comfortable with a topic, attempt to deviate from the lab exercise by making changes to the code, understanding the implications of each change, and confirming your expectations by monitoring the behavior of your resources.
Keep in mind that some code lines are pure Linux commands, which work the same way inside Kubernetes and containers as they would normally do in a typical virtual or physical environment.
Regards,
-Chris
Chris,
The example above is taken from the resources we are allowed to use on the exam (https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), so I am not sure whether I am out of scope with this one:
If I remove the two Linux commands "rm -rf /tmp/healthy; sleep 600" from the YAML configuration and then delete and (re)create the pod, the pod still fails and I do not understand why (the docs say it should fail only because the /tmp/healthy file gets deleted).
I am confused, sorry for the trouble.
Stefan
Did you try removing the probe section altogether? What happens with your container then?
Thank you for your suggestion Chris!
My yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30;
The result is that the container keeps failing even though there is no probe at all:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/liveness-exec to kw1
Normal Created 15m (x4 over 17m) kubelet, kw1 Created container liveness
Normal Started 15m (x4 over 17m) kubelet, kw1 Started container liveness
Normal Pulling 13m (x5 over 17m) kubelet, kw1 Pulling image "k8s.gcr.io/busybox"
Normal Pulled 13m (x5 over 17m) kubelet, kw1 Successfully pulled image "k8s.gcr.io/busybox"
Warning BackOff 2m29s (x52 over 16m) kubelet, kw1 Back-off restarting failed container
Stefan
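As Pascal suggests in the accepted answers above, extending the command keeps the shell (the container's main process) alive; a minimal variation of the manifest above along those lines:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    # The trailing sleep keeps the main process running for an hour,
    # so the kubelet has no reason to restart the container until then.
    - touch /tmp/healthy; sleep 3600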
Hi again
I run into the same problem every single time I set sh commands on a pod's containers.
For example, all of the pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps either just complete after running their command or, if I set restartPolicy: Always, keep crashing with reason CrashLoopBackOff.
(Containers work fine if I do not set any command on them.)
I cannot figure out why, and I need a little help:
yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    ports:
    - containerPort: 88
    command: [ "/bin/sh", "-c", "env" ]
    env:
    # Define the environment variable
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
          name: special-config
          # Specify the key associated with the value
          key: how
  restartPolicy: Always
kubectl describe pod dapi-test-pod
Name: dapi-test-pod
Namespace: default
Priority: 0
Node: kw1/10.1.10.31
Start Time: Thu, 21 May 2020 01:02:17 +0000
Labels: <none>
Annotations: cni.projectcalico.org/podIP: 192.168.159.83/32
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status: Running
IP: 192.168.159.83
IPs:
IP: 192.168.159.83
Containers:
test-container:
Container ID: docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
Image: nginx
Image ID: docker-pullable://[email protected]:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
Port: 88/TCP
Host Port: 0/TCP
Command:
sh
-c
env
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 21 May 2020 01:13:21 +0000
Finished: Thu, 21 May 2020 01:13:21 +0000
Ready: False
Restart Count: 7
Environment:
SPECIAL_LEVEL_KEY: <set to the key 'how' of config map 'special-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zqbsw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbsw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/dapi-test-pod to kw1
Normal Pulling 12m (x4 over 13m) kubelet, kw1 Pulling image "nginx"
Normal Pulled 12m (x4 over 13m) kubelet, kw1 Successfully pulled image "nginx"
Normal Created 12m (x4 over 13m) kubelet, kw1 Created container test-container
Normal Started 12m (x4 over 13m) kubelet, kw1 Started container test-container
Warning BackOff 3m16s (x49 over 13m) kubelet, kw1 Back-off restarting failed container
Stefan
Thank you very much for your insight @Pascal.
To me, this whole circus looks like overkill just for setting some variables. Where is the benefit? Is there a simpler way to achieve this, and what are the trade-offs?
Stefan
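For what it's worth, the same docs page linked above also covers envFrom, which pulls every key of a ConfigMap into the environment without per-variable boilerplate; a minimal sketch (pod name hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo   # hypothetical name
spec:
  containers:
  - name: test-container
    image: nginx
    # envFrom imports all keys of the ConfigMap as environment variables;
    # with no command override, nginx keeps running its default server process.
    envFrom:
    - configMapRef:
        name: special-config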