Container fails for no apparent reason
Hello,
I applied this YAML configuration: https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/probe/exec-liveness.yaml
but first I edited the line
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
and changed it to
- touch /tmp/healthy; sleep 30
Surprisingly, the container still fails after 36 seconds, even though the file /tmp/healthy is never removed (the file is checked by the livenessProbe every 5 seconds).
Why is this happening?
Stefan
Best Answers
-
Hi Stefan,
In your prior post, your liveness container behaves according to its PodSpec: it performs a task, and then it completes. Depending on the restart policy, a continuous loop will have your container restarted.
From the most recent output, it seems that the test-container running the nginx image behaves as expected, based on your PodSpec. If you need help understanding container behavior, I recommend reading up on key instructions used when building container images. The following instructions should clarify the difference between a container running with its default configuration and an executable container:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
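As an illustration (not from the original posts), compare a Pod that keeps the nginx image's default ENTRYPOINT/CMD with one that overrides them with a short-lived shell command; the names below are only examples:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-default        # illustrative name
spec:
  containers:
  - name: web
    image: nginx             # no command/args: the image defaults run a long-lived server
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-print-env      # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    command: [ "/bin/sh", "-c", "env" ]   # prints the environment, then the shell exits,
                                          # so the container exits too

The first Pod keeps running because the server process never ends; the second completes as soon as env finishes.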
Regards,
-Chris
-
Stefan,
As above, the issue is that your Pod has a single container with the nginx image, but you are starting the command:
"/bin/sh -c env"
Ergo, this will print the environment variables and then exit. It is a Pod with a finite workload. The process exits after having done its job, and since it is the main process in the container, the container exits as well. It is a finite process. As the restart policy is Always, the container will get restarted with the same result: it exits. You can play with this by, for example, substituting your command content with "sleep 3600". This will let the container live for 1h, then it exits and the circus starts all over again.
This is normal, 'designed' behaviour. This kind of finite workload is best handled with a Job/CronJob (see the sketch below). An infinite workload (one that never ends on its own, like a web app always listening for connections on a socket) is best handled with a ReplicaSet or Deployment.
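For a finite workload like printing the environment once, a minimal Job might look like the following sketch (names are illustrative, not from the thread):

apiVersion: batch/v1
kind: Job
metadata:
  name: print-env            # illustrative name
spec:
  backoffLimit: 4            # give up after a few failed attempts
  template:
    spec:
      restartPolicy: Never   # a Job requires Never or OnFailure
      containers:
      - name: print-env
        image: nginx
        command: [ "/bin/sh", "-c", "env" ]

The Pod runs to completion and the Job records it as succeeded instead of restarting it forever.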
Kind regards,
Pascal
-
In your post of 16 April, after 600s your command will 'fall through': the sleep is complete after 10 minutes and the shell exits with an exit code (probably 0). But that is not what the Pod is expecting, so it restarts the container. So either the container gets killed by the probe and restarted, or it eventually gets restarted simply because your command has finished (see the sketch below).
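If the intent is to have only the liveness probe trigger restarts, one option (a sketch based on the exec-liveness example referenced above, not a fix from the thread) is to keep the main process alive indefinitely after the probe file is removed:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    # Create the file, remove it after 30s, then keep the shell alive so the
    # container never exits on its own; only the failing probe restarts it.
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; while true; do sleep 3600; done
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

After roughly 35 seconds the probe starts failing and the kubelet restarts the container, while the shell itself never exits.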
If you have more questions, just let me know.
Kind regards,
Pascal
Answers
-
Hello,
Please let us know where in the course you found this, so we can help troubleshoot. The more details, the better.
Regards,
-
Hi Stefan,
I recommend first following the exercises as they are presented in the lab manual, reading the concepts presented in the course, and also reading the supporting official documentation. Once you are familiar and comfortable with the topic, attempt to deviate from the lab exercise by making changes to the code, understanding the implications of each change, and confirming your expectations by monitoring the behavior of your resources.
Keep in mind that some code lines are pure Linux commands, which work the same way inside Kubernetes and containers as they would normally do in a typical virtual or physical environment.
Regards,
-Chris
-
Chris,
The example above is taken from resources allowed on the exam (https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), so I am not sure whether I am out of scope with this one:
If I remove the two Linux commands "rm -rf /tmp/healthy; sleep 600" from the YAML configuration and then delete/(re)create the pod, this particular pod still fails and I do not understand why (the documentation says it should fail only because the /tmp/healthy file is deleted). I am confused, sorry for the trouble.
Stefan
-
Did you try removing the probe section altogether? What happens with your container then?
-
Thank you for your suggestion Chris!
My yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30;
The result is that the container keeps failing even though there is no probe at all:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/liveness-exec to kw1
Normal Created 15m (x4 over 17m) kubelet, kw1 Created container liveness
Normal Started 15m (x4 over 17m) kubelet, kw1 Started container liveness
Normal Pulling 13m (x5 over 17m) kubelet, kw1 Pulling image "k8s.gcr.io/busybox"
Normal Pulled 13m (x5 over 17m) kubelet, kw1 Successfully pulled image "k8s.gcr.io/busybox"
Warning BackOff 2m29s (x52 over 16m) kubelet, kw1 Back-off restarting failed container
Stefan
-
Hi again,
I run into the same problem every single time I set sh commands on a Pod's containers.
For example, all of the Pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps either just complete after the command runs or, if I set restartPolicy: Always, keep crashing with reason CrashLoopBackOff.
(Containers work fine if I do not set any command on them.) I cannot understand why, and I need a little help:
yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    ports:
    - containerPort: 88
    command: [ "/bin/sh", "-c", "env" ]
    env:
    # Define the environment variable
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
          name: special-config
          # Specify the key associated with the value
          key: how
  restartPolicy: Always

kubectl describe pod dapi-test-pod
Name: dapi-test-pod
Namespace: default
Priority: 0
Node: kw1/10.1.10.31
Start Time: Thu, 21 May 2020 01:02:17 +0000
Labels: <none>
Annotations: cni.projectcalico.org/podIP: 192.168.159.83/32
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status: Running
IP: 192.168.159.83
IPs:
IP: 192.168.159.83
Containers:
test-container:
Container ID: docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
Image: nginx
Image ID: docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
Port: 88/TCP
Host Port: 0/TCP
Command:
sh
-c
env
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 21 May 2020 01:13:21 +0000
Finished: Thu, 21 May 2020 01:13:21 +0000
Ready: False
Restart Count: 7
Environment:
SPECIAL_LEVEL_KEY: <set to the key 'how' of config map 'special-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zqbsw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbsw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/dapi-test-pod to kw1
Normal Pulling 12m (x4 over 13m) kubelet, kw1 Pulling image "nginx"
Normal Pulled 12m (x4 over 13m) kubelet, kw1 Successfully pulled image "nginx"
Normal Created 12m (x4 over 13m) kubelet, kw1 Created container test-container
Normal Started 12m (x4 over 13m) kubelet, kw1 Started container test-container
Warning BackOff 3m16s (x49 over 13m) kubelet, kw1 Back-off restarting failed container
Stefan