Ex. 4.2 Working with CPU and Memory Constraints
I understand that we are deploying an app with one container, vish/stress. The container has a request of cpu: "0.5" and memory: "2500Mi", and a limit of cpu: "1" and memory: "4Gi":
apiVersion: apps/v1
kind: Deployment
<..>
spec:
  containers:
  - image: vish/stress
    imagePullPolicy: Always
    name: stress
    resources:
      limits:
        cpu: "1"
        memory: "4Gi"
      requests:
        cpu: "0.5"
        memory: "2500Mi"
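For reference, the Mi/Gi quantities in the manifest normalize as follows. This is a minimal sketch of the binary-suffix arithmetic only; Kubernetes' own quantity parser accepts many more suffixes (Ki, M, G, m, ...):

```python
# Minimal sketch of the binary (power-of-1024) suffixes used in the manifest above.
UNITS = {"Mi": 1024**2, "Gi": 1024**3}

def to_bytes(quantity: str) -> int:
    """Convert a quantity like '2500Mi' or '4Gi' to bytes (Mi/Gi only)."""
    for suffix, mult in UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * mult
    raise ValueError(f"unsupported quantity: {quantity}")

print(to_bytes("2500Mi"))            # request: 2621440000 bytes
print(to_bytes("4Gi"))               # limit:   4294967296 bytes
print(to_bytes("4Gi") // 1024**2)    # limit in MiB: 4096
```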
We are also passing arguments to the stress application (https://github.com/vishh/stress/blob/master/main.go) running in the container:
    args:
    - -cpus
    - "2"
    - -mem-total
    - "950Mi"
    - -mem-alloc-size
    - "100Mi"
    - -mem-alloc-sleep
    - "1s"
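Note the mismatch between the args and the limits: stress is asked to spawn two CPU-bound threads, while limits.cpu caps the container at one core, so the CFS quota throttles the pair to roughly half a core each. A back-of-the-envelope sketch, not an exact scheduler model:

```python
# Sketch: expected per-thread CPU share when the container's CPU limit
# is lower than the number of busy threads it spawns.
cpu_limit_cores = 1.0   # from limits.cpu: "1"
busy_threads = 2        # from args: -cpus "2"

per_thread = min(1.0, cpu_limit_cores / busy_threads)
print(f"each thread gets ~{per_thread:.0%} of a core")  # ~50%
```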
In the log (kubectl logs hog-854946d4cb-mttm4) I see:
I1117 08:34:15.502674 1 main.go:26] Allocating "950Mi" memory, in "100Mi" chunks, with a 1s sleep between allocations
I1117 08:34:15.502796 1 main.go:39] Spawning a thread to consume CPU
I1117 08:34:15.502839 1 main.go:39] Spawning a thread to consume CPU
I1117 08:34:28.344881 1 main.go:29] Allocated "950Mi" memory
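The timestamps are consistent with the arguments: 950Mi in 100Mi chunks is ten allocations, with a 1 s sleep between them per the first log line, which fits the ~13 s gap between the first and last log entries. A quick sketch of the arithmetic:

```python
# Sketch: minimum run time implied by the stress arguments above.
mem_total_mib = 950     # -mem-total "950Mi"
chunk_mib = 100         # -mem-alloc-size "100Mi"
sleep_s = 1.0           # -mem-alloc-sleep "1s"

chunks = -(-mem_total_mib // chunk_mib)   # ceiling division -> 10 allocations
sleep_time_s = (chunks - 1) * sleep_s     # sleeps happen *between* allocations
print(chunks, sleep_time_s)               # 10 allocations, ~9 s of sleeps
```

The remaining few seconds are the allocations themselves.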
Top on the control plane is showing:
top - 09:00:44 up 5 days, 12:23, 2 users, load average: 0.18, 0.18, 0.27
Tasks: 157 total, 1 running, 156 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.4 us, 1.9 sy, 0.0 ni, 94.2 id, 0.0 wa, 0.0 hi, 0.3 si, 0.2 st
MiB Mem : 7947.0 total, 548.7 free, 966.2 used, 6432.2 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 6683.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3191595 root 20 0 1185024 383236 76164 S 3.3 4.7 92:49.73 kube-apiserver
3191764 root 20 0 1863396 105784 66944 S 2.3 1.3 42:17.89 kubelet
3191519 root 20 0 10.7g 71320 25688 S 2.0 0.9 52:08.51 etcd
6293 root 20 0 1746420 60532 44848 S 1.7 0.7 110:13.71 calico-node
3374 root 20 0 1869056 70856 39032 S 1.3 0.9 52:05.57 containerd
3191091 root 20 0 826352 106800 62008 S 1.3 1.3 35:58.17 kube-controller
559 root 20 0 395552 13540 11368 S 0.3 0.2 6:53.98 udisksd
5695 root 20 0 712456 11236 8192 S 0.3 0.1 9:54.90 containerd-shim
224978 root 20 0 1305108 14448 9448 S 0.3 0.2 0:19.37 amazon-ssm-agen
3191557 root 20 0 759884 51768 35056 S 0.3 0.6 6:40.16 kube-scheduler
3946218 ubuntu 20 0 19168 9792 8196 S 0.3 0.1 0:10.71 systemd
3947027 root 20 0 752520 45700 34312 S 0.3 0.6 0:04.92 coredns
1 root 20 0 171712 13168 8356 S 0.0 0.2 17:03.94 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.06 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 netns
7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-events_highpri
9 root 0 -20 0 0 0 I 0.0 0.0 1:18.07 kworker/0:1H-kblockd
So in my top output, should I see containerd's memory and CPU utilisation increasing towards the cpu: "1" and memory: "4Gi" specified in the limits?
Comments
-
Oh, I just realised I didn't check top on the worker node. The above output was only from the control plane!
-
After ex. 4.3 I have two stress processes running on the worker node.
My instances have 8GB RAM, and the limit we specified in the namespace/container is:
limits:
  cpu: "1"
  memory: 500Mi
The memory limit corresponds to (500/8000)*100 = 6.25 % of memory utilisation. So shouldn't each container be using only 6.25 % of memory, and not the 11.7 % shown in the top output?
top - 09:52:28 up 5 days, 13:15, 3 users, load average: 5.67, 4.84, 3.27
Tasks: 148 total, 3 running, 145 sleeping, 0 stopped, 0 zombie
%Cpu(s): 41.4 us, 58.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 7.8 total, 1.4 free, 2.3 used, 4.1 buff/cache
GiB Swap: 0.0 total, 0.0 free, 0.0 used. 5.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3147600 root 20 0 958540 954804 3180 R 99.0 11.7 77:56.60 stress
3169331 root 20 0 958540 954596 3180 S 99.0 11.7 9:54.14 stress
3135369 root 20 0 1787872 99932 66712 S 1.0 1.2 1:20.70 kubelet
3374 root 20 0 1795004 66296 38332 S 0.7 0.8 43:36.20 containerd
5607 root 20 0 1746420 61460 45488 S 0.7 0.8 114:09.80 calico-node
3168572 ubuntu 20 0 13940 6040 4572 S 0.3 0.1 0:00.14 sshd
1 root 20 0 171296 13028 8344 S 0.0 0.2 10:36.25 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.05 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 netns
7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-events_hig+
9 root 0 -20 0 0 0 I 0.0 0.0 0:33.92 kworker/0:1H-events_hig+
10 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_tasks_rude_
-
Hi @lakeasshutossh,
On an 8GB node with a 1950Mi total-memory arg on the stress container, a memory limit of 1G should cause your container to be OOM-killed when it reaches ~11.7%, while with a 2G memory limit it should run uninterrupted, reaching ~24.7%.
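The ~11.7 % figure can be reproduced from the worker-node top output above, since top's %MEM column is a process's RES divided by total physical memory. A minimal check, using the RES value of one stress process and the 7947 MiB total shown on the control-plane header:

```python
# Sketch: reproduce top's %MEM for one stress process from the output above.
res_kib = 954804      # RES column of one stress process (KiB)
total_mib = 7947.0    # "MiB Mem : 7947.0 total" on an 8GB instance

pct = (res_kib / 1024) / total_mib * 100   # %MEM = RES / total memory
print(f"{pct:.1f} %MEM")                   # ~11.7
```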
Regards,
-Chris