Welcome to the Linux Foundation Forum!

Lab 4.2 - 4.3 What does setting the resource limits really change?

Posts: 1
edited June 2022 in LFS258 Class Forum

I have just completed Chapter 4, but after doing the labs I am still confused about the actual impact of setting resource constraints. Nothing is shown about pods being evicted, node allocation, or whatever this is actually for. Whether I set the limits or not, I see the exact same behavior when I deploy the hog application.

What is the actual impact of setting resource limits?

Answers

  • Posts: 57

    @vmayer In 4.2 you set the limit to 1 CPU but requested a load of 2 CPUs; as a result you see only 100% (not the 200% that would equal 2 CPUs) of CPU usage via top. That means the limits work:

    limits:
      cpu: "1"
      memory: "4Gi"
    requests:
      cpu: "0.5"
      memory: "2500Mi"

    args:
    - -cpus
    - "2"
    - -mem-total
    - "950Mi"
    - -mem-alloc-size
    - "100Mi"
    - -mem-alloc-sleep
    - "1s"
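    The two snippets above are fragments of the hog Deployment's container spec. As a rough sketch of how they fit together (the image name and the surrounding Deployment boilerplate are assumptions, not copied from the lab file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hog
  template:
    metadata:
      labels:
        app: hog
    spec:
      containers:
      - name: hog
        image: vish/stress          # assumed stress image; use the one from your lab manifest
        resources:
          limits:                   # hard ceiling enforced via cgroups on the node
            cpu: "1"
            memory: "4Gi"
          requests:                 # what the scheduler reserves when placing the pod
            cpu: "0.5"
            memory: "2500Mi"
        args:                       # ask the stress tool for 2 CPUs of load...
        - -cpus
        - "2"
        - -mem-total                # ...and 950Mi of memory, allocated in 100Mi steps
        - "950Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
```

    With this spec the container asks for 2 CPUs of load but is capped at 1, which is why top shows roughly 100% rather than 200%.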

    In Lab 4.3 you set limits both in the deployment manifest and on the namespace, but because "Per-deployment settings override the global namespace settings", the namespace limit does not apply and the pod consumes the full 950Mi. If you remove the per-deployment limits, however, the new pod will not stay running. E.g., do the following:

    kubectl delete -n low-usage-limit deployments.apps hog
    cp hog2.yaml hog3.yaml && \
    vim hog3.yaml

    Remove the following section, leaving an empty resources: {} in its place:

    limits:
      cpu: "1"
      memory: "4Gi"
    requests:
      cpu: "0.5"
      memory: "2500Mi"

    Then:

    kubectl create -f hog3.yaml

    Monitor the node load with top and check the status of the new pod with kubectl get pod -A. The pod will be killed (OOMKilled) every time it exceeds the 500Mi limit set on the namespace.
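    The 500Mi ceiling comes from a LimitRange object applied to the low-usage-limit namespace earlier in the lab. A sketch of what such an object looks like (the object name and exact values here are assumptions; compare with the YAML from your own lab):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: low-resource-range        # assumed name
  namespace: low-usage-limit
spec:
  limits:
  - type: Container
    default:                      # becomes the limit for containers that declare none
      cpu: "1"
      memory: "500Mi"
    defaultRequest:               # becomes the request for containers that declare none
      cpu: "0.5"
      memory: "100Mi"
```

    Because hog3.yaml declares no resources of its own, the container inherits the 500Mi default limit from the namespace; the stress tool then tries to allocate 950Mi and is OOMKilled each time it crosses that line. You can confirm the inherited values and the kill reason with kubectl describe pod -n low-usage-limit on the new pod.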

