Lab 2.6 open point + slice resource limit impact not visible
Hi all,
First, I'm not sure if the forum is the right place, but I wanted to give feedback on a possible mistake in the lab PDF document for this section: the two figures "Figure 2.24: systemd-delta showing unit file override" and "Figure 2.27: systemd-delta showing unit file and dropin file" are identical, if I am not mistaken.
Then, I have a problem seeing any effect in the last exercise when using a systemd slice with a low CPU quota.
I can confirm the service is launched using foo.slice - seen under CGroup with "systemctl status foo" as well as with the provided script "track-stress.sh" - but there is no visible effect on overall CPU consumption.
Has anyone seen this problem, and what could be the causes?
This was tested under CentOS 8 and Ubuntu 20.04 LTS with similar results.
Thanks,
Sylvain
Comments
-
Thank you for the feedback! Yes, 2.24 and 2.27 are the same in the book, but we know from adding the drop-in file that the stress counters should show 1 process for each section. This has been fixed in the version due to hit the streets very soon.
On the "slice" lab, are you seeing "system.slice" or "foo.slice"? I'm betting on "system.slice". You see, CentOS 8 moved the files around a bit.
Add:

Slice=foo.slice
CPUQuota=30%

as the last 2 lines in your:

/etc/systemd/system/foo.service.d/00-foo.conf

The test should show foo.slice, and the 30% limit will be enforced.
Yes, this fix is in the new version.
Sorry you are having some difficulties; we test the labs often and appreciate feedback to plug any problems that surface.
Regards Lee
-
Thanks Lee for your feedback,
I can confirm "foo.slice" is used - as printed by the provided script and also seen with the 'systemctl status foo' command showing "CGroup: /foo.slice/foo.service".
I assume the CPUQuota is not enforced here in my case and something is missing.
Looking at the output of the command "systemctl show foo | grep CPU" listed below, I assume the parameter setting is not correct:

CPUUsageNSec=[not set]
EffectiveCPUs=
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
AllowedCPUs=
LimitCPU=infinity
LimitCPUSoft=infinity
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
CPUAffinity=
CPUSchedulingResetOnFork=no
-
Now that is interesting. But it is working: look at the %CPU in track-STRESS.sh; it adds up to 30%. If you change the percentage in the service file, it will be reflected in track-STRESS.sh.
As to the systemctl show command and seeing the parameter, I'll have to look into that, as it is not obvious where the value is.
Lee
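As a side note, the key=value output of systemctl show can be checked mechanically. A minimal sketch (the sample lines are taken from the output pasted above; quota_configured is a hypothetical helper, not part of systemd):

```python
# Minimal sketch: parse "systemctl show foo | grep CPU" style key=value output
# and report whether a CPU quota is actually configured. An unset quota shows
# up as CPUQuotaPerSecUSec=infinity; a set one as a time value such as 330ms.
sample = """\
CPUAccounting=no
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
"""

def quota_configured(show_output):
    props = dict(line.split("=", 1) for line in show_output.splitlines() if "=" in line)
    return props.get("CPUQuotaPerSecUSec", "infinity") != "infinity"

print(quota_configured(sample))                      # False: no quota applied here
print(quota_configured("CPUQuotaPerSecUSec=330ms"))  # True: a quota is set
```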
-
I'm reasonably certain the value is stored in:
/sys/fs/cgroup/cpu,cpuacct/foo.slice/foo.service/cpu.cfs_quota_us
Mine is set to 33% and the above file contains 33000.
As for why we cannot see it, that is another investigation.
Regards, Lee
-
Think I found it. My value of CPUQuotaPerSecUSec=330ms looks right, as that would be 0.33 seconds, or 33% of a second.
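The arithmetic tying the three representations together can be sketched as follows (quota_values is a hypothetical helper; it assumes the default CFS period of 100 ms):

```python
# Sketch of the arithmetic connecting CPUQuota= (a percentage) to the two
# values seen in this thread: CPUQuotaPerSecUSec (microseconds of CPU time
# per wall-clock second) and cpu.cfs_quota_us (microseconds of CPU time per
# CFS period). Assumes the default cpu.cfs_period_us of 100000 (100 ms).

def quota_values(cpu_quota_percent, period_us=100000):
    per_sec_usec = cpu_quota_percent * 1_000_000 // 100  # e.g. 33% -> 330000 us = 330ms
    cfs_quota_us = cpu_quota_percent * period_us // 100  # e.g. 33% -> 33000
    return per_sec_usec, cfs_quota_us

print(quota_values(33))  # (330000, 33000)
```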
-
Well, my initial concern is that the overall CPU consumption does not change whether foo.slice is in use or not, even with CPUQuota as low as 10% - see below the output of the track-STRESS script for both cases.
I guess the CPU quota is not enforced, and I cannot see any foo.slice folder under /sys/fs/cgroup/cpu,cpuacct/ when foo.slice is running.
$ sudo find /sys/fs/cgroup/ -name "foo.slice"
/sys/fs/cgroup/memory/foo.slice
/sys/fs/cgroup/pids/foo.slice
/sys/fs/cgroup/devices/foo.slice
/sys/fs/cgroup/systemd/foo.slice

Can you show what parameters you use in your foo.slice config file?
Also, for your working case, can you confirm what value you get with the command "systemctl show foo | grep -i quota" when foo.slice is running? My outputs below.

With system.slice:
./track-STRESS.sh is running
The pid for stress is 738953
PID PPID COMMAND VSZ %CPU PSR SLICE
738953 1 stress 7952 0.0 3 system.slice
738954 738953 stress 7952 98.5 3 system.slice
738955 738953 stress 7952 97.1 0 system.slice
738956 738953 stress 139028 98.4 1 system.slice

With foo.slice + CPUQuota at 10%:
./track-STRESS.sh is running
The pid for stress is 736929
PID PPID COMMAND VSZ %CPU PSR SLICE
736929 1 stress 7952 0.0 2 foo.slice
736930 736929 stress 7952 98.5 2 foo.slice
736931 736929 stress 7952 97.5 0 foo.slice
736932 736929 stress 139028 98.6 3 foo.slice

My config files under /etc/systemd/system/foo.service.d:
$ cat 00-foo.conf
[Service]
ExecStart=
ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M
Slice=foo.slice

$ cat foo.slice
[Unit]
Description=stress slice
[Slice]
CPUQuota=10%
-
1. Delete the file "foo.slice".
2. Add:

Slice=foo.slice
CPUQuota=30%

as the last 2 lines in your:

/etc/systemd/system/foo.service.d/00-foo.conf

This assumes you are using CentOS-8. We used the file foo.slice with CentOS-7 and do not need it for CentOS-8.
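For reference, the resulting drop-in would then look roughly like this (a sketch combining the existing 00-foo.conf shown above with the two added lines; verify against your own lab files):

```ini
# /etc/systemd/system/foo.service.d/00-foo.conf (sketch)
[Service]
ExecStart=
ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M
Slice=foo.slice
CPUQuota=30%
```

Remember to run "systemctl daemon-reload" and restart the service afterwards so the change takes effect.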
-
Thanks... Indeed, as written in my first post, I initially tested this and it was not working with CentOS 8 and Ubuntu 20.04 LTS.
Now tested and working under CentOS 8 with your last comment, where the overall %CPU is now in line with CPUQuota.
I am not sure, however, how to check that the CPU quota is enforced by looking at the service once it is running. I was assuming CPUQuota would be reflected somewhere in the output of a command like 'systemctl show foo'.
Correction: not tested with Ubuntu 20.04.
-
Other than the CPUQuotaPerSecUSec value, which should track with the CPUQuota percentage, I'm looking for an alternative.
The script track-STRESS.sh was created to collect the CPU utilization used by the primary and spawned processes so we could see that the quota is working.
Lee
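For what it's worth, the aggregate can also be checked by summing the %CPU column of the script's output. A small sketch (the sample rows are hypothetical, shaped like the tables pasted earlier, for a 30% quota; total_cpu_percent is a made-up helper):

```python
# Sketch: sum the %CPU column (5th field) of track-STRESS.sh-style output and
# compare the aggregate against the configured CPUQuota. Sample rows are
# hypothetical, in the column order shown earlier in the thread:
# PID PPID COMMAND VSZ %CPU PSR SLICE
sample = """\
736929      1 stress   7952  0.0 2 foo.slice
736930 736929 stress   7952  9.8 2 foo.slice
736931 736929 stress   7952  9.9 0 foo.slice
736932 736929 stress 139028 10.1 3 foo.slice
"""

def total_cpu_percent(table):
    return sum(float(line.split()[4]) for line in table.splitlines() if line.strip())

print(round(total_cpu_percent(sample), 1))  # 29.8 -- close to a 30% quota
```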