Exercise 9.4 - some confusing things?

Going through the aforementioned exercise, I see some rather confusing conclusions being drawn, and I'm not sure whether I'm misunderstanding something myself or the wording is simply not very accurate.

Namely:
Step 27 says "As we were able to deploy more pods even with apparent hard quota set..." which seems to imply that the quota isn't being respected during pod deployment.
However, isn't the quota actually supposed to enforce the restrictions during PVC creation (i.e. earlier than pod deployment)?
Why would the quota be considered "apparent" here?
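
For context, the hard storage quota in play here is something like the following (reconstructed from the error output further down; the name storagequota and the 100Mi cap come from that message, the rest is my best guess at the lab's manifest):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: storagequota          # name as reported in the Forbidden error below
    namespace: small
  spec:
    hard:
      requests.storage: 100Mi   # cap on the total storage all PVCs in the namespace may request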

In this exercise:
1. At step 18 the PV was created with 1 GiB capacity.
2. At step 21 the PVC was created with a claim of 200 MiB.
3. Then at steps 22-23 the quota of 500 MiB is removed and a new one of 100 MiB is added.
4. Then at step 25 we recreate the deployment. However, at this stage the PVC had already been created and bound before the smaller quota was enforced, hadn't it? At this point, diverging from the lab path a bit, I deleted and recreated the PV and PVC inside the small namespace (the PVC manifest is sketched after this list), and I got the error I expected:

  $ kubectl -n small create -f pvc.yaml
  Error from server (Forbidden): error when creating "pvc.yaml": persistentvolumeclaims "pvc-one" is forbidden: exceeded quota: storagequota, requested: requests.storage=200Mi, used: requests.storage=0, limited: requests.storage=100Mi

So the quota seems not to be "apparent" but rather enforced only on newly created PVCs in the namespace, and not retroactively (I'm not sure whether retroactive enforcement would even be the expected behavior).
5. Then step 34 goes ahead and says "The quota only takes effect if there is also a resource limit in effect", but the result above seems to disprove that.
6. To confirm that, after step 38 I went a bit off-track and didn't immediately delete the PV and PVC, but instead reduced the storage quota to 100 MiB again and re-deployed nfs-pod.yaml. It again deployed without any issues (as I expected, since the PVC was already created and bound).
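
For reference, the pvc.yaml I recreated looks roughly like this (the name pvc-one and the 200Mi request are taken from the error output; the accessModes value is an assumption based on the lab's NFS setup):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-one              # name as reported in the Forbidden error
  spec:
    accessModes:
      - ReadWriteMany          # assumption; NFS-backed PVs typically use RWX
    resources:
      requests:
        storage: 200Mi         # exceeds the new 100Mi quota, hence the rejection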

If anything, the conclusion I would draw about this particular aspect of quota enforcement is not that hard quotas are "apparent" or unrespected when a LimitRange is missing, but rather that storage quotas do not apply retroactively to PVCs that already existed when the quota was applied.

I'm not sure whether I am missing something obvious here or whether this exercise needs amending as far as some of its stated conclusions go.

I just thought I'd put this out there, since for newbies trying to burn the right concepts into our memory, the last thing we need is a wrong one in the mix.
So, maybe someone can confirm or dispel my deductions.

Best regards,
Adrian

Comments

  • Hi @admusin,

    When in doubt about certain topics, I highly recommend inspecting the official Kubernetes documentation to clarify any apparent ambiguity in the lab guide.

    However, you are correct in your findings: the quota does not impact existing resources; it impacts any resource created after the quota was put in place.
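
    You can see this state directly: after lowering the quota beneath an existing claim, describing the quota shows Used above Hard, and only new requests are rejected. A sketch, with the values from the lab steps above:

      $ kubectl -n small describe resourcequota storagequota
      Name:             storagequota
      Namespace:        small
      Resource          Used   Hard
      --------          ----   ----
      requests.storage  200Mi  100Mi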

    Keep in mind, however, the clear distinction between the ResourceQuota and the LimitRange. The ResourceQuota helps to limit the combined resource consumption of all applications in a Namespace, while the LimitRange helps to set and control per-Pod resource constraints in the Namespace.
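
    To make the distinction concrete, here is a minimal pair (names and values are illustrative only, not from the lab):

      # ResourceQuota: caps the combined requests of everything in the Namespace
      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: ns-quota                 # hypothetical name
      spec:
        hard:
          requests.memory: 2Gi
      ---
      # LimitRange: constrains (and defaults) resources per individual container
      apiVersion: v1
      kind: LimitRange
      metadata:
        name: per-container-limits     # hypothetical name
      spec:
        limits:
          - type: Container
            defaultRequest:
              memory: 128Mi            # injected when a container omits requests
            default:
              memory: 256Mi            # injected when a container omits limits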

    Regards,
    -Chris

  • Hi Chris,

    Thanks for confirming this and for the elaboration on the distinction between the two resources.

    Best regards,
    Adrian

  • Hi!
    Sorry for returning to an older discussion, but it helped me. I got the same results.

    Also, one note about step 34, which says "The quota only takes effect if there is also a resource limit in effect". As far as I can tell, neither the absence nor the presence of the LimitRange object matters here: all steps work as expected even if I don't create a LimitRange.
    I guess the author is referring to the particular case where the ResourceQuota limits CPU/RAM. In that case a LimitRange helps start a pod that has no limits/requests in its spec, by setting them to some default value (https://kubernetes.io/docs/concepts/policy/resource-quotas/). But this sentence in the step confused me and may be misleading.
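
    A sketch of that case (names are made up; behavior per the linked docs): with a quota like the one below active, a pod whose containers declare no resources is rejected at admission for failing to specify requests.cpu, unless a LimitRange supplies the default for it:

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: compute-quota     # hypothetical name
      spec:
        hard:
          requests.cpu: "1"     # with this set, every new pod must specify requests.cpu
      ---
      apiVersion: v1
      kind: LimitRange
      metadata:
        name: cpu-defaults      # hypothetical name
      spec:
        limits:
          - type: Container
            defaultRequest:
              cpu: 100m         # injected into containers that omit requests.cpu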

  • Hi @lioneyes,

    For clarification:

    In a Pod's spec, the container resources define resource constraints only for the container they belong to, meaning that if three distinct container images are declared by a pod spec, with only the second container having resources.requests and/or resources.limits declared, these constraints apply only to the second container. A feature that is currently alpha will eventually allow declaring these resource constraints for the entire pod.
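
    For example (an illustrative pod spec, not from the lab; names and images are placeholders), the resources block below constrains only the second container:

      apiVersion: v1
      kind: Pod
      metadata:
        name: two-containers           # hypothetical name
      spec:
        containers:
          - name: app
            image: nginx               # no resources: unconstrained, unless a LimitRange applies
          - name: sidecar
            image: busybox
            command: ["sleep", "infinity"]
            resources:
              requests:
                memory: 64Mi           # applies to this container only
              limits:
                memory: 128Mi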

    The LimitRange is a policy that sets resource requests and limits for all pods or containers in a namespace, and it extends beyond the resources definition. It defines default constraints that apply to all pod containers launched in the namespace without explicit resources.requests and/or resources.limits, and it also validates the explicit resources of a pod spec, preventing the pod from launching if any violations are found.

    The ResourceQuota sets hard limits on the combined resource consumption of the namespace, applying to all applications launched in it. The ResourceQuota system is effective when either individual pod resources are defined or LimitRanges are in place to enforce defaults.

    Regards,
    -Chris
