Lab 12.1 purpose unclear

I'm working my way through lab exercise 12.1 and finding myself extremely confused about its purpose. For whatever reason, it's using a single pod containing 4 containers to demonstrate the effects of nodeSelector. There's also a line saying "Determine where the new containers have been deployed. They should be more evenly spread this time."

To the best of my understanding, given everything this and previous chapters have said, the containers will all be on the same node. But this lab exercise is doing a very good job of making me question that. It cannot demonstrate spread between nodes because there's only a single pod, but the multiple containers and the wording imply that the containers could be running on different nodes. Is there something I'm missing, or is this just confusing?

I understand that, because a bunch of system workloads is already running on the cp node, the vip pod will likely run on the worker node once the selector is removed, but this isn't what the lab says is happening, and it's not really demonstrated by the exercise.
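
For reference, the lab's pod spec has roughly this shape (container names, images, and the label value are my own approximation, not the exact lab YAML). Because all four containers belong to one Pod, they can only ever run on whichever single node that Pod is scheduled to:

apiVersion: v1
kind: Pod
metadata:
  name: vip
spec:
  # The nodeSelector constrains the whole Pod, not individual containers;
  # removing it only changes which single node the Pod may land on.
  nodeSelector:
    status: vip          # label value assumed for illustration
  containers:
  - name: vip1
    image: busybox
    command: ["sleep", "3600"]
  - name: vip2
    image: busybox
    command: ["sleep", "3600"]
  - name: vip3
    image: busybox
    command: ["sleep", "3600"]
  - name: vip4
    image: busybox
    command: ["sleep", "3600"]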

Comments

  • My understanding is that this lab is demonstrating the use of "nodeSelector" to define which node(s) a pod can run on.

    Actually, I found that even if I remove "nodeSelector", all 4 containers always run on the master-1 node. Only when I set "other" in the nodeSelector as follows do those 4 containers run on the work-1 node:
    nodeSelector:
      status: other
    The lab says the 4 containers can spread out across both nodes once "nodeSelector" is deleted, but that is not what I found when doing the lab.
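
    For the selector to match, the target node has to carry that label in the first place (the lab presumably labels the nodes in an earlier step). Something along these lines, with your own node name substituted:

    # label the worker node so it matches the nodeSelector above
    kubectl label node work-1 status=other
    # verify which nodes carry the label
    kubectl get nodes -l status=other --show-labels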

  • Hi @caishaoping,

    Without a nodeSelector property found in the Pod spec, the Scheduler assigns Pods to nodes while attempting to even out the resource (CPU and memory) utilization across the nodes of the cluster - keep in mind that the scheduling algorithm takes into account the resources consumed by the existing workload of the cluster at scheduling time.

    If one node is underutilized in comparison to the other nodes of the cluster, it will be targeted with new workload Pods until its resource utilization reaches a level comparable to that of the other nodes.
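
    One way to see what the Scheduler is working with is to check each node's already-allocated resources and, if the metrics-server add-on is installed, the live usage as well (generic commands, not specific to this lab):

    # CPU/memory requests and limits already allocated on each node
    kubectl describe nodes | grep -A 8 "Allocated resources"
    # live usage; only works if metrics-server is deployed
    kubectl top nodes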

    Regards,
    -Chris

  • Hi @chrispokorni,
    Thanks for your explanation.
    Regards
    Shao

  • jsm3031

    I came to ask about this as well. This lab claims that the containers will spread out when the node selector is removed, but a pod's containers will always be co-located with the pod, will they not? I think the lab needs to be updated; should we not be spawning multiple pods to illustrate how the load is spread? Or at least the lab should not say the containers will spread out.

    From the PDF "9. Create the pod again. Containers should now be spawning on either node."
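
    As a rough sketch (names and images assumed, not the official fix), a Deployment with several single-container replicas would actually let the Scheduler spread the workload across nodes:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vip
    spec:
      replicas: 4          # four separate Pods, which the Scheduler can place on different nodes
      selector:
        matchLabels:
          app: vip
      template:
        metadata:
          labels:
            app: vip
        spec:
          containers:
          - name: vip
            image: busybox
            command: ["sleep", "3600"]

    Then kubectl get pods -o wide shows which node each replica landed on.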

  • serewicz

    Hello,

    You are correct. I will update the lab to create multiple pods instead, and fix the wording. IIRC the YAML changed to fix another issue five or six versions ago and it no longer makes sense.

    Thank you for sending this in.

    Regards,

  • Was wondering if anyone else ran across this. The lab has not been resolved as of 11/1/2022.

  • Lab 12.2: Using Taints to Control Pod Deployment also needs to be reviewed and corrected

    Thanks,

    Gilbert

  • Napsty

    @grios123 said:
    Was wondering if anyone else ran across this. The lab has not been resolved as of 11/1/2022.

    Yep. Still the same problem/wrong description. All containers are running on the same node, as you described, instead of "spawning on either node" as written in the lab.

  • dianaa

    Hi @serewicz, as of today, January 5, 2023, Lab 12 is still not updated. It still expects containers from a single pod to be scheduled on different nodes.
    So, yes, @grios123, I still see this issue as not resolved.

  • dianaa

    To add to my previous post:
    In my opinion, instead of counting the number of containers on each node (sudo crictl ps | wc -l), a better solution is to run kubectl get pods -o wide -n project-tiger and look at the NODE column.
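
    For comparison, the two approaches look like this (namespace as in the command above):

    # what the lab currently does: count containers on the node itself
    sudo crictl ps | wc -l
    # checking placement from the control plane instead; the NODE column shows where each pod runs
    kubectl get pods -o wide -n project-tiger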
