
Lab 2.6 open point + slice resource limit impact not visible

Hi all,

First, I'm not sure if the forum is the right place, but I wanted to give feedback on a possible mistake in the lab PDF for this section: the two figures "Figure 2.24: systemd-delta showing unit file override" and "Figure 2.27: systemd-delta showing unit file and dropin file" are identical, if I am not mistaken.

Also, I cannot see any effect in the last exercise when using a systemd slice with a low CPU quota.
I can confirm the service is launched under foo.slice, as seen in the cgroup output of "systemctl status foo" and in the provided "track-stress.sh" script, but there is no visible effect on overall CPU consumption.
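For reference, the slice assignment can also be cross-checked from the kernel's view of the process (a generic sketch, not taken from the lab material):

```shell
# Generic check: which cgroup (and therefore which slice) a process belongs to.
# In the lab, substitute the stress PID reported by track-stress.sh for $$ (this shell).
pid=$$
cat /proc/$pid/cgroup
```

On a correctly configured system the output lines should mention foo.slice for the stress processes.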

Has anyone seen this problem, and what could the causes be?
This was tested under CentOS 8 and Ubuntu 20.04 LTS with similar results.



  • lee42x
    lee42x Posts: 380

Thank you for the feedback! Yes, 2.24 and 2.27 are the same in the book, but we know from adding the drop-in file that the stress counters should show 1 process for each section. This has been fixed in the version due to hit the streets very soon.

On the "slice" lab, are you seeing "system.slice" or "foo.slice"? I'm betting on "system.slice". You see, CentOS 8 moved the files around a bit.



    As the last 2 lines in your:

The test should then show foo.slice, and the 30% limit will be enforced.

Yes, this fix is in the new version.

Sorry you're having some difficulties. We test the labs often and appreciate feedback to plug any problems that surface.

    Regards Lee

  • smarques
    smarques Posts: 10

    Thanks Lee for your feedback,

I can confirm "foo.slice" is used, as printed by the provided script and also seen with the 'systemctl status foo' command showing "CGroup: /foo.slice/foo.service".

I assume the CPUQuota is not being enforced in my case and something is missing.
Looking at the output of "systemctl show foo | grep CPU" (listed below), I assume the parameter setting is not correct.

    CPUUsageNSec=[not set]
    CPUWeight=[not set]
    StartupCPUWeight=[not set]
    CPUShares=[not set]
    StartupCPUShares=[not set]

  • lee42x
    lee42x Posts: 380

Now that is interesting. But it is working: look at the %CPU in track-STRESS.sh, it adds up to 30%. If you change the percentage in the service file, it will be reflected in track-STRESS.sh.
As for the systemctl show command and seeing the parameter, I'll have to look into that, as it is not obvious where the value is stored.


  • lee42x
    lee42x Posts: 380

I'm reasonably certain the value is stored in:
    Mine is set to 33% and the above link is 33000.
    As for why we cannot see it, that is another investigation.

    Regards Lee

  • lee42x
    lee42x Posts: 380

I think I found it. My value of CPUQuotaPerSecUSec=330ms looks right, as that would be 0.33 seconds, or 33% of a second.
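The arithmetic behind that reading can be sketched: CPUQuota= is a percentage of one CPU-second, and systemd reports it as the microseconds of CPU time allowed per wall-clock second:

```shell
# CPUQuota=33% means 33% of 1,000,000 us of CPU time per wall-clock second.
quota_pct=33
usec_per_sec=$(( quota_pct * 1000000 / 100 ))
echo "${usec_per_sec}us"   # 330000us, which systemctl displays as 330ms
```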

  • smarques
    smarques Posts: 10

Well, my initial concern is that the overall CPU consumption does not change whether foo.slice is used or not, even with CPUQuota as low as 10%; see below the output of the track-STRESS script for both cases.

I guess the CPU quota is not enforced, and I cannot see any foo.slice folder under /sys/fs/cgroup/cpu,cpuacct/ when foo.slice is running.

    $ sudo find /sys/fs/cgroup/ -name "foo.slice"

Can you show what parameters you use in your foo.slice config file?
Also, on your working setup, can you confirm what value you get from "systemctl show foo | grep -i quota" while foo.slice is running?
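For completeness, once a CPUQuota= is actually applied, the enforced value should also surface in the cgroup filesystem. The paths below assume the standard layouts (v1 "cpu,cpuacct" controller on older setups, unified v2 hierarchy on newer ones) and are a sketch, not lab text:

```shell
# cgroup v1 (cpu,cpuacct controller): quota in us per period (period usually 100000)
cat /sys/fs/cgroup/cpu,cpuacct/foo.slice/cpu.cfs_quota_us 2>/dev/null
# cgroup v2 (unified hierarchy): the cpu.max file holds "<quota> <period>"
cat /sys/fs/cgroup/foo.slice/cpu.max 2>/dev/null
# Sanity check of the expected v1 quota value for CPUQuota=10%:
period=100000
pct=10
echo $(( period * pct / 100 ))    # 10000
```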

    My outputs below.

With system.slice:
    ./track-STRESS.sh is running
    The pid for stress is 738953
    738953 1 stress 7952 0.0 3 system.slice
    738954 738953 stress 7952 98.5 3 system.slice
    738955 738953 stress 7952 97.1 0 system.slice
    738956 738953 stress 139028 98.4 1 system.slice

With foo.slice + CPUQuota at 10%:
    ./track-STRESS.sh is running
    The pid for stress is 736929
    736929 1 stress 7952 0.0 2 foo.slice
    736930 736929 stress 7952 98.5 2 foo.slice
    736931 736929 stress 7952 97.5 0 foo.slice
    736932 736929 stress 139028 98.6 3 foo.slice

My config files under /etc/systemd/system/foo.service.d:
    $ cat 00-foo.conf
    ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M

    $ cat foo.slice
    Description=stress slice
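For comparison, a working CentOS 8 setup would presumably carry the quota and slice settings in the service drop-in itself, along these lines. This is a sketch of my understanding, not the lab's exact text; note that a drop-in overriding ExecStart needs a [Service] header and an empty ExecStart= line to reset any earlier definition:

```ini
# /etc/systemd/system/foo.service.d/00-foo.conf  (hypothetical sketch)
[Service]
# An empty ExecStart= clears a previously defined ExecStart before replacing it.
ExecStart=
ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M
# Place the service in foo.slice and cap it at 10% of one CPU:
Slice=foo.slice
CPUQuota=10%
```

After editing, a `systemctl daemon-reload` and `systemctl restart foo` would be needed for the change to take effect.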

  • lee42x
    lee42x Posts: 380

1. Delete the file "foo.slice"

2. Add:


    As the last 2 lines in your:

This assumes you are using CentOS-8. We used the file foo.slice with CentOS-7 and do not need it for CentOS-8.

  • smarques
    smarques Posts: 10

Thanks... Indeed, it was tested initially and not working under CentOS 8 and Ubuntu 20.04 LTS, as written in my first post.

Now tested and working under CentOS 8 with your last comment, where the overall %CPU is now in line with CPUQuota.

I'm not sure, however, how to check that the CPU quota is enforced by looking at the service once it is running. I was assuming CPUQuota would be reflected somewhere in the output of a command like 'systemctl show foo'.

Correction: not tested with Ubuntu 20.04.

  • lee42x
    lee42x Posts: 380

Other than the CPUQuotaPerSecUSec value, which should track with the CPUQuota percentage, I'm looking for an alternative.
The track-STRESS.sh script was created to collect the CPU utilization of the primary and spawned processes, so we could see that the quota is working.
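The script itself isn't reproduced in this thread, but the idea can be sketched in a line of shell (hypothetical, not the actual track-STRESS.sh):

```shell
# Sum the %CPU of all processes named "stress"; with CPUQuota=10% in effect,
# the total should hover near 10 instead of roughly 300 for --cpu 1 --vm 1 --io 1.
ps -eo comm,pcpu | awk '$1 == "stress" { sum += $2 } END { printf "total %%CPU: %.1f\n", sum }'
```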


