Lab 4.2 - 4.3 What does setting the resource limits really change?

vmayer Posts: 1
edited June 2022 in LFS258 Class Forum

I have just completed chapter 4, but after doing the labs I am still confused about the actual impact on setting resource constraints. There is nothing shown about pods being evicted or node allocation or whatever this is actually for. If I set the limits or not, I will have the exact same impact when I deploy the hog application.

What is the actual impact of setting resource limits?


  • oleksazhel
    oleksazhel Posts: 57

    @vmayer In Lab 4.2 you set the limit to 1 CPU but request a load of 2 CPUs; as a result you see only 100% CPU usage in top (not the 200% that would correspond to 2 CPUs). That shows the limit is working:

              resources:
                limits:
                  cpu: "1"
                  memory: "4Gi"
                requests:
                  cpu: "0.5"
                  memory: "2500Mi"
              args:
                - -cpus
                - "2"
                - -mem-total
                - "950Mi"
                - -mem-alloc-size
                - "100Mi"
                - -mem-alloc-sleep
                - "1s"
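
    To make the answer to the original question explicit: a CPU limit does not kill anything, the kernel simply throttles the container, while a memory limit gets the container OOM-killed when exceeded. For reference, here is a minimal standalone pod sketch with the same shape as the lab snippet (the pod name and the vish/stress image are my assumptions, not copied from the lab file):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: hog                # assumed name
    spec:
      containers:
      - name: hog
        image: vish/stress     # assumed image; a stress tool as in the lab
        resources:
          limits:
            cpu: "1"           # CPU is throttled at 1 core, so top tops out near 100%
            memory: "4Gi"      # exceeding this would get the container OOM-killed
          requests:
            cpu: "0.5"         # used by the scheduler for node placement
            memory: "2500Mi"
        args:
        - -cpus
        - "2"                  # asks for 2 CPUs of load, but the limit caps it at 1
    ```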

    In Lab 4.3 you set limits both via the deployment manifest and via the namespace, but because "Per-deployment settings override the global namespace settings" there is no effective limitation and the pod eats 950MB. However, if you remove the per-deployment limits, the namespace limit takes over and the new pod will not stay running. E.g., do the following:

    kubectl delete -n low-usage-limit deployments.apps hog
    cp hog2.yaml hog3.yaml &&\
    vim hog3.yaml

    Remove the following section and add {} after resources:, so the line reads resources: {}:

                limits:
                  cpu: "1"
                  memory: "4Gi"
                requests:
                  cpu: "0.5"
                  memory: "2500Mi"
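
    After deleting those lines, the container carries no per-container settings, so the namespace defaults apply; the stanza in hog3.yaml should be left as:

    ```yaml
        resources: {}   # no per-container limits; namespace defaults now apply
    ```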


    kubectl create -f hog3.yaml

    Monitor node load with top and check the status of the new pod with kubectl get pod -A. The pod will be killed every time it exceeds the 500Mi limit set on the namespace.
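
    The 500Mi cap comes from the LimitRange object created on the low-usage-limit namespace earlier in the lab. A sketch of such an object (the object name and exact field values here are assumptions meant to illustrate the mechanism, not a copy of the lab file) looks like:

    ```yaml
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: low-resource-range   # assumed name
      namespace: low-usage-limit
    spec:
      limits:
      - type: Container
        default:                 # applied as the limit when a container sets none
          cpu: "1"
          memory: "500Mi"        # the cap the hog pod keeps hitting
        defaultRequest:
          cpu: "0.5"
          memory: "100Mi"
    ```

    Because the container in hog3.yaml now declares resources: {}, it inherits the namespace's default memory limit and is OOM-killed once the stress tool allocates past it.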
