Lab 8 review: how do I evaluate the right settings?

In the file I have this:

  resources:
    limits:
      cpu: "1"
      memory: "1Gi"
    requests:
      cpu: "2.5"
      memory: "500Mi"
  args:
  - -cpus
  - "2"
  - -mem-total
  - "1950Mi"

If I have to fix something, is there a way to identify whether the user made a mistake in the args, the limits, or the requests?

Let's say I have to fix that. I could change the values in all three of those options, so which one should I consider to be the good one?

Here we can quickly see that the requested CPU is greater than the CPU limit, so we have a problem. But I could reduce the requests value and the args values to match the CPU limit.
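
For example, taking the limits as the fixed point and bringing the requests and the args back under them would look roughly like this (a sketch only; the values the lab actually expects may differ):

  resources:
    limits:
      cpu: "1"
      memory: "1Gi"
    requests:
      cpu: "0.5"        # requests must not exceed limits
      memory: "500Mi"
  args:
  - -cpus
  - "1"                 # stress workload kept within the CPU limit
  - -mem-total
  - "950Mi"             # kept below the 1Gi memory limit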

So my question: is there a way to identify what I can't change?
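
One way to catch the requests/limits part of the mistake without guessing is to let the API server validate the manifest (the file name here is just a placeholder):

  kubectl apply -f stress.yaml --dry-run=server

A container whose requests exceed its limits is rejected outright, with an error along the lines of "must be less than or equal to cpu limit", which points at the inconsistent fields. A mistake in the args, on the other hand, is perfectly valid YAML and only shows up at runtime (an OOMKilled container, SystemOOM node events), so kubectl describe pod and kubectl get events are the places to look for that.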

Comments

  • In my setup I have 1 master and 2 workers, but each worker has 2 CPUs. If I deploy the app with 2 CPUs, I get insufficient CPU, so I reduce it to 1.5. But if I have to do something like that in the exam, can I play with the values in args too? I just have to make it work, that's it?
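
    The "insufficient CPU" symptom shows up as a FailedScheduling event on the Pod, so something like this (the pod name is only an example) confirms it is the scheduler comparing the requests against the node's allocatable CPU, not the args:

      kubectl describe pod stress-pod | grep -A5 Events
      kubectl get events --field-selector reason=FailedScheduling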

  • Here is the setup on my node:

    Capacity:
      cpu:                2
      ephemeral-storage:  30435260Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             1975380Ki
      pods:               110
    Allocatable:
      cpu:                2
      ephemeral-storage:  28049135570
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             1872980Ki
      pods:               110
    ...
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests     Limits
      --------           --------     ------
      cpu                1550m (77%)  1700m (85%)
      memory             790Mi (43%)  1736Mi (94%)
      ephemeral-storage  0 (0%)       0 (0%)
      hugepages-1Gi      0 (0%)       0 (0%)
      hugepages-2Mi      0 (0%)       0 (0%)
    Events:
      Type     Reason     Age                 From     Message
      ----     ------     ---                 ----     -------
      Warning  SystemOOM  29s (x11 over 12m)  kubelet  (combined from similar events): System OOM encountered, victim process: stress, pid: 27825

    I'm using these settings:

    resources:
      limits:
        cpu: "1.5"
        memory: "1.5Gi"
      requests:
        cpu: "1"
        memory: "500Mi"
    args:
    - -cpus
    - "1"
    - -mem-total
    - "1350Mi"
    - -mem-alloc-size
    - "100Mi"
    - -mem-alloc-sleep
    - "1s"
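
    With a 1.5Gi memory limit on a node that only has about 1.8Gi allocatable, the SystemOOM warnings above mean the node itself is running out of memory, even if the container stays inside its own limit. A quick way to tell whether the container is also being killed (the pod name is only an example) is:

      kubectl describe pod stress-pod | grep -A3 'Last State'
      kubectl get pod stress-pod -o jsonpath='{.status.containerStatuses[0].lastState}'

    If the last state shows Reason: OOMKilled, the -mem-total value has pushed the container past its memory limit; if the Pod keeps running, lowering -mem-total and/or the memory limit simply gives the node more headroom.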
  • Hi @SebastienDionne,

    First you would need to identify if there are any issues, at all. What are the best commands that display possible problems?

    Then, when trying to resolve the issues, digging into all API resources involved would reveal which properties are not correctly set, or not following the rules of a policy.

    In your case, what is the significance of limit and request? What is their relationship with the Node's resources? Which ones can you quickly edit in the Kubernetes CLI to fix a possible resource issue?

    Regards,
    -Chris
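
    (For anyone following along, commands in this spirit are a reasonable starting point; the resource names are placeholders:)

      kubectl describe node <node-name>    # Allocatable vs. allocated resources, SystemOOM events
      kubectl describe pod <pod-name>      # FailedScheduling, OOMKilled, current limits/requests
      kubectl get events --sort-by=.metadata.creationTimestamp
      kubectl edit deployment <name>       # quick in-place edit of limits, requests and args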

  • OK, thanks, so I was OK. My pod doesn't crash, so I completed the class.
