Setting Pod Resource Limits and Requirements
Hello,
Everything works, but one point is still unclear to me.
Step 1
I set the container resources like this:
resources:
  limits:
    cpu: "2"            # with a limit of 2 CPUs
    memory: "2500Mi"
  requests:
    cpu: "1"            # the container requests 1 CPU
    memory: "1950Mi"
args:
- -cpus
- "1"                   # we give the stress process 1 CPU worth of work
- -mem-total
- "1950Mi"
- -mem-alloc-size
- "100Mi"
- -mem-alloc-sleep
- "1s"
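For context, these resources and args fragments sit inside a pod template roughly like the sketch below; the image name here is an assumption, since the actual image is not shown in this post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stressmeout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stressmeout
  template:
    metadata:
      labels:
        app: stressmeout
    spec:
      containers:
      - name: stress
        image: vish/stress        # assumption: substitute the stress image you are actually using
        resources:
          limits:
            cpu: "2"
            memory: "2500Mi"
          requests:
            cpu: "1"
            memory: "1950Mi"
        args:
        - -cpus
        - "1"
        - -mem-total
        - "1950Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"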
At run time I look at the kubectl describe output of the worker node:
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default stressmeout-797fb5bc6b-zqdwg 1 (50%) 2 (100%) 1950Mi (26%) 2500Mi (33%) 28s
kube-system calico-node-hzm52 250m (12%) 0 (0%) 0 (0%) 0 (0%) 25d
kube-system kube-proxy-mvwvk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1250m (62%) 2 (100%)
memory 1950Mi (26%) 2500Mi (33%)
I can see CPU requests at 50% for the pod and 62% for the node total, which seems to make sense: 1 CPU is requested and 1 is set via the args ('-cpus 1'), against a limit of 2 CPUs.
But when I run the top command on the worker node, I see 100% CPU used:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6913 root 20 0 2015532 1.918g 3116 S 99.7 26.3 7:01.63 stress
Step 2
I lower the CPU request of the container:
resources:
  limits:
    cpu: "2"            # with a limit of 2 CPUs
    memory: "2500Mi"
  requests:
    cpu: "0.5"          # the container now requests only 500 millicpu
    memory: "1950Mi"
args:
- -cpus
- "1"                   # we still give 1 CPU worth of work to the container
Then I look at the kubectl describe output of the worker node again:
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default stressmeout-7569595655-wjd9h 500m (25%) 2 (100%) 1950Mi (26%) 2500Mi (33%) 2m56s
kube-system calico-node-hzm52 250m (12%) 0 (0%) 0 (0%) 0 (0%) 25d
kube-system kube-proxy-mvwvk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 2 (100%)
memory 1950Mi (26%) 2500Mi (33%)
OK, that makes sense: the pod's CPU request is now only 25% of the node, and the total requested CPU is 750m.
But here is what I don't understand:
when I watch the pod's process on the worker node with top, it is still using 100% of a CPU... why?
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6913 root 20 0 2015532 1.918g 3116 S 99 26.3 7:01.63 stress
Thank you.
Comments
-
Hi @djedje,
Exceeding resource limits does not guarantee a container's termination and/or a pod's eviction. Memory and CPU limits are treated differently: while consuming more memory than the limit may trigger an OOMKill, consuming more CPU than the limit will not trigger a kill of the container.
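To illustrate the difference, here is a hypothetical variant of your spec (a sketch, not taken from your post), where the process asks for more CPU and more memory than its limits; only the memory overrun gets the container killed:

resources:
  limits:
    cpu: "1"
    memory: "500Mi"     # hard cap: allocating beyond this triggers an OOMKill
  requests:
    cpu: "500m"
    memory: "200Mi"
args:
- -cpus
- "2"                   # tries to use 2 CPUs against a limit of 1: the process is throttled, not killed
- -mem-total
- "800Mi"               # tries to allocate 800Mi against a limit of 500Mi: the container is OOMKilled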
The documentation offers more details.
Regards,
-Chris
-
Hello @chrispokorni,
Thank you for the help. The link is interesting, but my question remains.
A few more examples:
1-----------------
resources:
  limits:
    cpu: "1"
    memory: "500Mi"
  requests:
    cpu: "0.5"
    memory: "200Mi"
args:
- -cpus
- "0.5"
- -mem-total
- "350Mi"
Here the pod does not even start; it reports an Error status. Does the container need a minimum of 1 CPU to run?
2------------------
resources:
  limits:
    cpu: "1"
    memory: "500Mi"
  requests:
    cpu: "0.5"
    memory: "200Mi"
args:
- -cpus
- "1"
- -mem-total
- "350Mi"
Here the container runs as expected, but when I run top on the worker node, the pod's process uses 100% of a CPU:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13313 root 20 0 324332 320464 3116 S 99.0 4.2 0:09.51 stress
3------------------
resources:
  limits:
    cpu: "2"
    memory: "500Mi"
  requests:
    cpu: "1"
    memory: "200Mi"
args:
- -cpus
- "1"
- -mem-total
- "350Mi"
Instead of requesting 500m CPU, I request 1 CPU (the -cpus arg is still 1 as before), and top still reports 100% CPU.
I find that strange: whether I request 500m CPU or 1 CPU, top still shows 100% CPU consumption???
4-------------------
resources:
  limits:
    cpu: "2"
    memory: "500Mi"
  requests:
    cpu: "1"
    memory: "200Mi"
args:
- -cpus
- "2"
- -mem-total
- "350Mi"
I increase the -cpus argument to 2, which now matches the container's CPU limit, run top, and get 200% CPU consumption (the pod still runs without issues).
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21720 root 20 0 324332 320724 3120 S 188.4 4.2 0:42.43 stress
So in my case the minimum CPU the pod uses is 100%, whatever the CPU request...?
And with an 'args: -cpus' value below 1, the pod reports an Error status...?
Reading the link does not answer this. What am I missing?
Thank you.
-
Hello,
The limit exists to stop a pod from using more resources than allowed (terminating the container for memory, throttling it for CPU). The request is how much is held back from other pods so that it is always available to this pod. The pod may use more or less than its request, but never more than its limit.
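As a rough summary sketch, using the numbers from your second step (the comments just restate the behaviour described above):

resources:
  requests:
    cpu: "500m"        # reserved for the pod by the scheduler; the pod may use more when spare CPU exists
    memory: "1950Mi"   # memory set aside for the pod on the node
  limits:
    cpu: "2"           # hard ceiling: CPU usage above this is throttled, the container keeps running
    memory: "2500Mi"   # hard ceiling: allocating more than this gets the container OOMKilled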
Regards,
-
Hi @djedje,
It seems that the stress image does not like decimal CPU arguments, such as "0.5" and "1.5", but works well with integers like "1" and "2".
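For example (just a sketch of the two cases reported in this thread, not taken from your manifests): Kubernetes itself accepts fractional CPU values in the resources fields, but the -cpus argument handed to the stress binary seems to need a whole number:

resources:
  requests:
    cpu: "500m"   # fractional CPU is fine here; Kubernetes understands millicores
args:
- -cpus
- "1"             # works: the image accepts whole CPUs
# - "0.5"         # reported above to leave the pod in an Error state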
But once the container is running, the kubectl top command returns the expected CPU and memory utilization for both the node and the pod.
Regards,
-Chris
-
Thank you.