Lab 2.6 open point + slice resource limit impact not visible
Hi all,
First, I'm not sure if the forum is the right place, but I wanted to report a possible mistake in the lab PDF for this section: the two figures "Figure 2.24: systemd-delta showing unit file override" and "Figure 2.27: systemd-delta showing unit file and dropin file" are identical, if I am not mistaken.
Second, I cannot see any effect in the last exercise when using a systemd slice with a low CPU quota.
I can confirm the service is launched in foo.slice (it appears under CGroup in the output of "systemctl status foo", and also in the provided "track-stress.sh" script), but there is no visible effect on overall CPU consumption.
Has anyone seen this problem, and what could be the causes?
This was tested under CentOS 8 and Ubuntu 20.04 LTS with similar results.
Thanks,
Sylvain
Comments
-
Thank you for the feedback! Yes, Figures 2.24 and 2.27 are the same in the book, but we know from adding the drop-in file that the stress counters should show one process for each section. This has been fixed in the version due to hit the streets very soon.
On the "slice" lab, are you seeing "system.slice" or "foo.slice"? I'm betting on "system.slice". You see, CentOS 8 moved the files around a bit.
Add:
Slice=foo.slice
CPUQuota=30%
as the last two lines in your:
/etc/systemd/system/foo.service.d/00-foo.conf
The test should then show foo.slice, and the 30% limit will be enforced.
Yes, this fix is in the new version.
Sorry for the difficulties; we test the labs often and appreciate feedback that helps us plug any problems that surface.
Regards Lee
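Putting Lee's two lines together with the rest of the drop-in, the complete file might look like the sketch below (the ExecStart lines follow the stress invocation quoted elsewhere in this thread; adjust them to match your own file):

```ini
# /etc/systemd/system/foo.service.d/00-foo.conf
[Service]
# Clear the inherited ExecStart, then set the lab's stress command
ExecStart=
ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M
# Run the service in its own slice and cap its CPU time
Slice=foo.slice
CPUQuota=30%
```

After editing, run `sudo systemctl daemon-reload` and `sudo systemctl restart foo` so the new settings take effect.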
-
Thanks Lee for your feedback,
I can confirm "foo.slice" is used, as printed by the provided script and also seen with the 'systemctl status foo' command, which shows "CGroup: /foo.slice/foo.service".
I assume the CPUQuota is not enforced here in my case and something is missing.
Looking at the output of the command "systemctl show foo | grep CPU" (listed below), I assume the parameter setting is not correct:
CPUUsageNSec=[not set]
EffectiveCPUs=
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
AllowedCPUs=
LimitCPU=infinity
LimitCPUSoft=infinity
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
CPUAffinity=
CPUSchedulingResetOnFork=no
-
Now that is interesting. But it is working: look at the %CPU in track-STRESS.sh, it adds up to 30%. If you wish, change the percentage in the service file and it will be reflected in track-STRESS.sh.
As to the systemctl show command and seeing the parameter there, I'll have to look into that, as it is not obvious where the value is.
Lee
-
I'm reasonably certain the value is stored in:
/sys/fs/cgroup/cpu,cpuacct/foo.slice/foo.service/cpu.cfs_quota_us
Mine is set to 33% and the file above contains 33000.
As for why we cannot see it, that is another investigation.
Regards Lee
-
Think I found it. My value of CPUQuotaPerSecUSec=330ms looks right, as that would be 0.33 seconds, or 33% of a second.
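That conversion is easy to check numerically. Under cgroup v1, CFS enforces the quota as cpu.cfs_quota_us microseconds of CPU time per cpu.cfs_period_us period (100 ms by default), so a quota file value of 33000 works out to 33%. A minimal sketch with the values hard-coded rather than read from the cgroup files:

```shell
#!/bin/sh
# Sketch: relate cgroup v1 CFS quota values to a CPUQuota percentage.
# On a live system these numbers would be read from files such as
#   /sys/fs/cgroup/cpu,cpuacct/foo.slice/foo.service/cpu.cfs_quota_us
# Here they are hard-coded assumptions for illustration.
quota_us=33000     # cpu.cfs_quota_us (33 ms of CPU time per period)
period_us=100000   # cpu.cfs_period_us (CFS default, 100 ms)

percent=$(( quota_us * 100 / period_us ))
echo "Effective CPUQuota: ${percent}%"
```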
-
Well, my initial concern is that the overall CPU consumption does not change whether foo.slice is in use or not, even with CPUQuota as low as 10%; see below the output of the track-STRESS script for both cases.
I guess the CPU quota is not enforced. Also, I cannot see any foo.slice folder under /sys/fs/cgroup/cpu,cpuacct/ when foo.slice is running.
$ sudo find /sys/fs/cgroup/ -name "foo.slice"
/sys/fs/cgroup/memory/foo.slice
/sys/fs/cgroup/pids/foo.slice
/sys/fs/cgroup/devices/foo.slice
/sys/fs/cgroup/systemd/foo.slice
Can you show what parameters you use in your foo.slice config file?
Also, in your working case, what value do you get from the command "systemctl show foo | grep -i quota" when foo.slice is running? My outputs are below.
With system.slice:
./track-STRESS.sh is running
The pid for stress is 738953
PID PPID COMMAND VSZ %CPU PSR SLICE
738953 1 stress 7952 0.0 3 system.slice
738954 738953 stress 7952 98.5 3 system.slice
738955 738953 stress 7952 97.1 0 system.slice
738956 738953 stress 139028 98.4 1 system.slice
With foo.slice + CPUQuota at 10%:
./track-STRESS.sh is running
The pid for stress is 736929
PID PPID COMMAND VSZ %CPU PSR SLICE
736929 1 stress 7952 0.0 2 foo.slice
736930 736929 stress 7952 98.5 2 foo.slice
736931 736929 stress 7952 97.5 0 foo.slice
736932 736929 stress 139028 98.6 3 foo.slice
My config files under /etc/systemd/system/foo.service.d:
$ cat 00-foo.conf
[Service]
ExecStart=
ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M
Slice=foo.slice
$ cat foo.slice
[Unit]
Description=stress slice
[Slice]
CPUQuota=10%
-
1. Delete the file "foo.slice".
2. Add:
Slice=foo.slice
CPUQuota=30%
as the last two lines in your:
/etc/systemd/system/foo.service.d/00-foo.conf
This assumes you are using CentOS 8. We used the file foo.slice with CentOS 7 and do not need it for CentOS 8.
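With the 30% quota in place, one way to confirm the kernel actually received it is to compare against the raw CFS quota file mentioned earlier in the thread. A sketch of the expected value, assuming cgroup v1 and the default 100 ms period (the value is computed here rather than read, so the cgroup path appears only as a comment):

```shell
#!/bin/sh
# Sketch: the cpu.cfs_quota_us value that a CPUQuota=30% setting should
# produce under cgroup v1 with the default period. On a live system,
# compare the result against
#   /sys/fs/cgroup/cpu,cpuacct/foo.slice/foo.service/cpu.cfs_quota_us
cpuquota_percent=30
period_us=100000   # CFS default period (100 ms)

expected_quota_us=$(( cpuquota_percent * period_us / 100 ))
echo "Expect cpu.cfs_quota_us = ${expected_quota_us}"
```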
-
Thanks... Indeed, as written in my first post, it was initially tested and not working with CentOS 8 and Ubuntu 20.04 LTS.
Now tested and working under CentOS 8 with your last comment; the overall %CPU is now in line with CPUQuota.
I'm not sure, however, how to check that the CPU quota is enforced by looking at the service once it is running. I was assuming CPUQuota would be reflected somewhere in the output of a command like 'systemctl show foo'.
Correction: not tested with Ubuntu 20.04.
-
Other than the CPUQuotaPerSecUSec value, which should track with the CPUQuota percentage, I'm looking for an alternative.
The track-STRESS.sh script was created to collect the CPU utilization of the primary and spawned processes so we could see that the quota is working.
Lee
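One readable check is the CPUQuotaPerSecUSec property that surfaced earlier in the thread: it is the CPU time the service may use per second of wall-clock time, so "300ms" corresponds to 30%. A sketch of the conversion, with the property value hard-coded rather than taken from `systemctl show foo -p CPUQuotaPerSecUSec`:

```shell
#!/bin/sh
# Sketch: convert a CPUQuotaPerSecUSec value such as "300ms" into a
# percentage. On a live system the value would come from
#   systemctl show foo -p CPUQuotaPerSecUSec
# Here it is a hard-coded assumption (the property can also read e.g.
# "1s" or "infinity", which this minimal sketch does not handle).
value="300ms"
ms=${value%ms}                  # strip the "ms" suffix -> 300
percent=$(( ms * 100 / 1000 ))  # 1000 ms of CPU per second = 100%
echo "CPUQuota is ${percent}%"
```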