Ex. 4.2 Working with CPU and Memory Constraints
I understand that we are deploying an app with one container, vish/stress.
The container has a request of cpu: "0.5" and memory: "2500Mi", and a limit of cpu: "1" and memory: "4Gi":
apiVersion: apps/v1
kind: Deployment
<..>
spec:
  containers:
  - image: vish/stress
    imagePullPolicy: Always
    name: stress
    resources:
      limits:
        cpu: "1"
        memory: "4Gi"
      requests:
        cpu: "0.5"
        memory: "2500Mi"
We are also specifying arguments to the stress application (https://github.com/vishh/stress/blob/master/main.go) running in the container:
args:
- -cpus
- "2"
- -mem-total
- "950Mi"
- -mem-alloc-size
- "100Mi"
- -mem-alloc-sleep
- "1s"
In the log I see:
kubectl logs hog-854946d4cb-mttm4
I1117 08:34:15.502674 1 main.go:26] Allocating "950Mi" memory, in "100Mi" chunks, with a 1s sleep between allocations
I1117 08:34:15.502796 1 main.go:39] Spawning a thread to consume CPU
I1117 08:34:15.502839 1 main.go:39] Spawning a thread to consume CPU
I1117 08:34:28.344881 1 main.go:29] Allocated "950Mi" memory
Running top on the control plane shows:
top - 09:00:44 up 5 days, 12:23, 2 users, load average: 0.18, 0.18, 0.27
Tasks: 157 total, 1 running, 156 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.4 us, 1.9 sy, 0.0 ni, 94.2 id, 0.0 wa, 0.0 hi, 0.3 si, 0.2 st
MiB Mem : 7947.0 total, 548.7 free, 966.2 used, 6432.2 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 6683.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3191595 root 20 0 1185024 383236 76164 S 3.3 4.7 92:49.73 kube-apiserver
3191764 root 20 0 1863396 105784 66944 S 2.3 1.3 42:17.89 kubelet
3191519 root 20 0 10.7g 71320 25688 S 2.0 0.9 52:08.51 etcd
6293 root 20 0 1746420 60532 44848 S 1.7 0.7 110:13.71 calico-node
3374 root 20 0 1869056 70856 39032 S 1.3 0.9 52:05.57 containerd
3191091 root 20 0 826352 106800 62008 S 1.3 1.3 35:58.17 kube-controller
559 root 20 0 395552 13540 11368 S 0.3 0.2 6:53.98 udisksd
5695 root 20 0 712456 11236 8192 S 0.3 0.1 9:54.90 containerd-shim
224978 root 20 0 1305108 14448 9448 S 0.3 0.2 0:19.37 amazon-ssm-agen
3191557 root 20 0 759884 51768 35056 S 0.3 0.6 6:40.16 kube-scheduler
3946218 ubuntu 20 0 19168 9792 8196 S 0.3 0.1 0:10.71 systemd
3947027 root 20 0 752520 45700 34312 S 0.3 0.6 0:04.92 coredns
1 root 20 0 171712 13168 8356 S 0.0 0.2 17:03.94 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.06 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 netns
7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-events_highpri
9 root 0 -20 0 0 0 I 0.0 0.0 1:18.07 kworker/0:1H-kblockd
So in my top output, should I see containerd memory and CPU utilisation increasing towards the cpu: "1" and memory: "4Gi" specified in the limits?
Comments
Oh I just realised I didn't check the top on the worker node. The above output was only from the control plane!
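For reference, a quick way to see which worker the pod landed on before running top there (sketch):

kubectl get pod hog-854946d4cb-mttm4 -o wide   # the NODE column shows where it is scheduled
# then run top (or ps aux | grep stress) on that worker node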
After ex. 4.3 I have two processes for stress running on the worker node:
My instances have 8GB RAM and the limit we specified in the namespace/container is:
limits:
  cpu: "1"
  memory: 500Mi
The memory limit corresponds to (500/8000)*100 = 6.25% of memory utilisation.
So shouldn't each container be using only 6.25% of memory and not 11.7% as shown in the top output?
top - 09:52:28 up 5 days, 13:15, 3 users, load average: 5.67, 4.84, 3.27
Tasks: 148 total, 3 running, 145 sleeping, 0 stopped, 0 zombie
%Cpu(s): 41.4 us, 58.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 7.8 total, 1.4 free, 2.3 used, 4.1 buff/cache
GiB Swap: 0.0 total, 0.0 free, 0.0 used. 5.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3147600 root 20 0 958540 954804 3180 R 99.0 11.7 77:56.60 stress
3169331 root 20 0 958540 954596 3180 S 99.0 11.7 9:54.14 stress
3135369 root 20 0 1787872 99932 66712 S 1.0 1.2 1:20.70 kubelet
3374 root 20 0 1795004 66296 38332 S 0.7 0.8 43:36.20 containerd
5607 root 20 0 1746420 61460 45488 S 0.7 0.8 114:09.80 calico-node
3168572 ubuntu 20 0 13940 6040 4572 S 0.3 0.1 0:00.14 sshd
1 root 20 0 171296 13028 8344 S 0.0 0.2 10:36.25 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.05 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 netns
7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-events_hig+
9 root 0 -20 0 0 0 I 0.0 0.0 0:33.92 kworker/0:1H-events_hig+
10 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_tasks_rude_
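As a sanity check on my percentages (top's %MEM column is RES as a share of the node's total physical memory), a rough sketch of the arithmetic:

# RES of each stress process is ~954804 KiB; the node has ~7947 MiB total
echo "954804 / 1024 / 7947 * 100" | bc -l   # ~11.7 %, i.e. ~932 MiB per process
echo "500 / 7947 * 100" | bc -l             # ~6.3 %, what a fully used 500Mi would show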
Hi @lakeasshutossh,
On an 8GB-memory node with a 1950Mi -mem-total arg on the stress container, a memory limit of 1G should OOM-kill your container when it reaches ~11.7%, and with a 2G memory limit it should run uninterrupted, reaching ~24.7%.
Regards,
-Chris
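If it helps, a sketch of how to try that out (assuming the deployment is still named hog; the pod name is the one from the earlier logs):

kubectl get pod hog-854946d4cb-mttm4 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'   # shows OOMKilled if the limit was hit
kubectl set resources deployment hog --limits=cpu=1,memory=2Gi   # raise the limit and let the pod restart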