Container fails for no apparent reason
Hello,
I applied this YAML configuration: https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/probe/exec-liveness.yaml
but first I edited the line
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
and changed it to
- touch /tmp/healthy; sleep 30
Surprisingly, the container still fails after 36 seconds, even though the file /tmp/healthy is never removed (the livenessProbe checks the file every 5 seconds).
Why is this happening?
Stefan
Best Answers
-
Hi Stefan,
In your prior post, your liveness container behaves according to its PodSpec: it performs a task, and then it completes. Depending on the restart policy, a continuous loop will have your container restarted.
From the most recent output, it seems that the test-container running the nginx image behaves as expected, based on your PodSpec. If you need help understanding container behavior, I recommend reading up on key instructions used when building container images. The following instructions should clarify the difference between a container running with its default configuration and an executable container:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
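As a rough illustration of that difference (an untested sketch, not from the lab manual; the pod names are just placeholders), compare the same image with and without a command override:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-default        # placeholder name
spec:
  containers:
  - name: web
    image: nginx             # default CMD starts nginx, which keeps running
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-env-once       # placeholder name
spec:
  containers:
  - name: web
    image: nginx
    command: [ "/bin/sh", "-c", "env" ]   # overrides the default: prints the environment and exits
The first pod stays Running because its main process never exits; the second completes immediately, and with the default restartPolicy: Always the kubelet keeps restarting it until it ends up in CrashLoopBackOff.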
Regards,
-Chris
-
Stefan,
As noted above, the issue is that your Pod has one single container with the nginx image, but you are starting the command:
"/bin/sh -c env". Ergo, this will print the environment variables and then exit. It is a Pod with a finite workload. The process exits after having done its job. As it is the main process in the container, the container will also exit. It is a finite process. As the restart policy is Always, the container will get restarted with the same result: it exits. You can play with this by, for example, substituting your command content with "sleep 3600". This will let the container live for 1h, then exit, and the circus will start all over.
This is normal, 'designed' behaviour. This kind of workload (which is finite) is best run as a Job/CronJob. Workload that is infinite (that actually never ends, like a web app always listening for connections on a socket) is best suited to a ReplicaSet or Deployment.
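For example (an untested sketch, names are placeholders), the same finite "env" workload expressed as a Job completes cleanly instead of ending up in a restart loop:
apiVersion: batch/v1
kind: Job
metadata:
  name: env-once             # placeholder name
spec:
  backoffLimit: 1
  template:
    spec:
      containers:
      - name: test-container
        image: nginx
        command: [ "/bin/sh", "-c", "env" ]
      restartPolicy: Never    # Jobs use Never or OnFailure, not Always
Once the command exits with code 0, the Job is marked Completed and nothing tries to restart the container.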
Kind regards,
Pascal
-
In your post of 16 April, after 600s your command will 'fall through'. The sleep is a completed command after 10 minutes and it exits with an exit code (probably 0). But that is not what the Pod is expecting, so it restarts the container. So either the container gets killed by the probe and restarted, or eventually it gets restarted simply because your command has finished.
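If you want only the probe to decide when the container gets restarted, the command itself has to keep running. Here is a rough sketch based on the exec-liveness example (the trailing loop is my addition, untested):
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    # keep the main process alive indefinitely; after /tmp/healthy is
    # removed, only the liveness probe can trigger a restart
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; while true; do sleep 3600; done
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Here the shell never exits on its own, so any restart you see comes from the probe failing, not from the command finishing.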
If you have more questions, just let me know.
Kind regards,
Pascal
Answers
-
Hello,
Please let us know where in the course you found this so we can help troubleshoot. The more details, the better.
Regards,
-
Hi Stefan,
I recommend first following the exercises as they are presented in the lab manual, reading the concepts presented in the course, and also reading the supporting official documentation. Once familiar and comfortable with the topic, attempt to deviate from the lab exercise by making changes to the code, understanding the implications of each change, and confirming your expectations by monitoring the behavior of your resources.
Keep in mind that some code lines are pure Linux commands, which work the same way inside Kubernetes and containers as they would normally do in a typical virtual or physical environment.
Regards,
-Chris
-
Chris,
The example above is taken from the allowed resources I may use on the exam (https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), and I am not sure if I am out of scope with this one:
If I remove the two Linux commands "rm -rf /tmp/healthy; sleep 600" from the YAML configuration, then delete and (re)create the pod, this particular pod still fails and I do not understand why (the documentation says it should fail only because the /tmp/healthy file gets deleted).
I am confused, sorry for the trouble.
Stefan
-
Did you try removing the probe section altogether? What happens with your container then?
-
Thank you for your suggestion, Chris!
My yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30;
The result is that the container keeps failing while there is no probe at all:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/liveness-exec to kw1
Normal Created 15m (x4 over 17m) kubelet, kw1 Created container liveness
Normal Started 15m (x4 over 17m) kubelet, kw1 Started container liveness
Normal Pulling 13m (x5 over 17m) kubelet, kw1 Pulling image "k8s.gcr.io/busybox"
Normal Pulled 13m (x5 over 17m) kubelet, kw1 Successfully pulled image "k8s.gcr.io/busybox"
Warning BackOff 2m29s (x52 over 16m) kubelet, kw1 Back-off restarting failed container
Stefan
-
Hi again,
I have the same problem over and over, every single time I set sh commands on a pod's containers.
For example, all the pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps just go to Completed after the command, or, if I set restartPolicy: Always, they keep crashing with reason CrashLoopBackOff.
(Containers work fine if I do not set any command on them.)
I cannot get why and I need a little help:
yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      ports:
        - containerPort: 88
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: how
  restartPolicy: Always

kubectl describe pod dapi-test-pod
Name: dapi-test-pod
Namespace: default
Priority: 0
Node: kw1/10.1.10.31
Start Time: Thu, 21 May 2020 01:02:17 +0000
Labels:
Annotations: cni.projectcalico.org/podIP: 192.168.159.83/32
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status: Running
IP: 192.168.159.83
IPs:
IP: 192.168.159.83
Containers:
test-container:
Container ID: docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
Image: nginx
Image ID: docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
Port: 88/TCP
Host Port: 0/TCP
Command:
sh
-c
env
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 21 May 2020 01:13:21 +0000
Finished: Thu, 21 May 2020 01:13:21 +0000
Ready: False
Restart Count: 7
Environment:
SPECIAL_LEVEL_KEY: Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zqbsw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbsw
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/dapi-test-pod to kw1
Normal Pulling 12m (x4 over 13m) kubelet, kw1 Pulling image "nginx"
Normal Pulled 12m (x4 over 13m) kubelet, kw1 Successfully pulled image "nginx"
Normal Created 12m (x4 over 13m) kubelet, kw1 Created container test-container
Normal Started 12m (x4 over 13m) kubelet, kw1 Started container test-container
Warning BackOff 3m16s (x49 over 13m) kubelet, kw1 Back-off restarting failed container
Stefan