Exercise 9.4 - some confusing things?

Going through the aforementioned exercise, I noticed some rather confusing conclusions being drawn, and I'm not sure whether I'm misunderstanding something myself or the wording is simply not very accurate.

Namely:
Step 27 says "As we were able to deploy more pods even with apparent hard quota set..." which seems to imply that the quota isn't being respected during pod deployment.
However, isn't the quota actually supposed to enforce its restrictions at PVC creation time (i.e., earlier than pod deployment)?
Why would the quota be considered "apparent" here?
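
From my reading of the Kubernetes docs, a hard storage quota along these lines (this is only my sketch of what the lab's storagequota roughly looks like; the names and values are assumptions) is checked by the quota admission controller when the PVC is created, not when a pod later mounts the already-bound claim:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storagequota
  namespace: small
spec:
  hard:
    requests.storage: 100Mi        # total storage all PVCs in the namespace may request
    persistentvolumeclaims: "10"   # optional cap on the number of PVCs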

In this exercise:
1. At step 18 the PV was created at 1GiB capacity.
2. At step 21 the PVC was created with a claim of 200MiB.
3. Then at steps 22-23 the quota of 500 MiB is removed and a new one of 100 MiB is added.
4. Then at step 25 we recreate the deployment. However, at this stage the PVC had already been created and bound before the smaller quota was enforced, hadn't it? At this point, diverging from the lab path a bit, I deleted and recreated the PV and PVC inside the small namespace, and I got the error I expected:

$ kubectl -n small create -f pvc.yaml
Error from server (Forbidden): error when creating "pvc.yaml": persistentvolumeclaims "pvc-one" is forbidden: exceeded quota: storagequota, requested: requests.storage=200Mi, used: requests.storage=0, limited: requests.storage=100Mi

So the quota does not seem to be merely "apparent"; rather, it is enforced only on newly created PVCs in the namespace and is not retroactive (I'm not sure whether retroactive enforcement would even be the expected behavior).
5. Then step 34 goes on to say "The quota only takes effect if there is also a resource limit in effect", but the result above seems to disprove that.
6. To confirm that, after step 38 I went a bit off-track: instead of immediately deleting the PV and PVC, I reduced the storage quota to 100 MiB again, re-deployed nfs-pod.yaml, and it once more deployed without any issues (as I expected, since the PVC was already created and bound). See the quota-check commands right after this list.
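
For reference, the quota accounting at that point can be double-checked with something along these lines (object names are the ones from my setup above; exact output omitted):

$ kubectl -n small describe resourcequota storagequota
# Used can end up larger than Hard here: the pre-existing PVC's 200Mi request
# is counted against the quota, but the already-bound claim is not rejected.

$ kubectl -n small get pvc pvc-one
# The existing claim stays Bound; only new PVCs that would push usage over
# the hard limit are refused at creation time.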

If anything, the conclusion I could draw about this particular aspect of quota enforcement is not that hard quotas become "apparent" or are ignored when a LimitRange is missing, but rather that storage quotas do not apply retroactively to PVCs that already existed when the quota was put in place.

I'm not sure whether I'm missing something very obvious here or this exercise needs some amending as far as some of its stated conclusions go.

I just thought I'd put this out there, since for newbies trying to burn the right concepts into our memory, the last thing we need is a wrong one in the mix.
So, maybe someone can confirm or dispel my deductions.

Best regards,
Adrian

Comments

  • chrispokorni

    Hi @admusin,

    When in doubt about certain topics, I highly recommend inspecting the official Kubernetes documentation to clarify any apparent ambiguity in the lab guide.

    However, you are correct in your findings: the quota does not impact existing resources; it impacts any resource created after the quota was put in place.

    Keep in mind however the clear distinction between the ResourceQuota and the LimitRange. The ResourceQuota helps to limit combined resource consumption by all applications in a Namespace, while the LimitRange helps to set and control per Pod resource constraints in the Namespace.
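
    For example, roughly (illustrative values only, not necessarily the exact manifests from the lab):

    # LimitRange: per-container defaults and constraints within the Namespace
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: low-resource-range
    spec:
      limits:
      - type: Container
        default:            # limit applied to containers that do not set their own
          cpu: "1"
          memory: 500Mi
        defaultRequest:     # request applied to containers that do not set their own
          cpu: 500m
          memory: 100Mi

    # ResourceQuota: combined consumption of all workloads in the Namespace
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: computequota
    spec:
      hard:
        requests.cpu: "1"
        requests.memory: 1Gi
        limits.cpu: "2"
        limits.memory: 2Gi

    The LimitRange defaults are also what allow pods that declare no requests or limits of their own to be admitted once a compute quota is in place.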

    Regards,
    -Chris

  • admusin

    Hi Chris,

    Thanks for confirming this and for the elaboration on the distinction between the two resources.

    Best regards,
    Adrian
