
Lab 11.1 has errors

williamkoontz Posts: 11
edited August 16 in LFS258 Class Forum

In step 6, the PDF shows a "vip.yaml" that creates a SINGLE POD named "vip" containing 4 busybox containers and a nodeSelector that only allows the pod to run on the node previously labeled with "status=vip".
So far so good, the pod only runs on the "vip" node.
Step 8 has us delete the pod and change the pod spec to remove the nodeSelector.
Step 9 is where the correction is needed... it says that after recreating the pod, "Containers should now be spawning on both nodes."
IMPOSSIBLE! The smallest unit Kubernetes can schedule is a POD!
This lab needs to be changed so that more than one pod is being created so we can actually see something spawning on both nodes.
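For what it's worth, one way to make the step behave as described would be to replace the single Pod with a multi-replica workload, so there are several pods for the scheduler to spread across nodes. The following is only a sketch of that suggestion, not the lab's actual manifest; the name, replica count, and labels are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vip                 # hypothetical, echoing the lab's pod name
spec:
  replicas: 4               # several pods, so they can land on both nodes
  selector:
    matchLabels:
      app: vip
  template:
    metadata:
      labels:
        app: vip
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]    # keep busybox from exiting
      # nodeSelector:                 # left commented out so the
      #   status: vip                 # scheduler may use any node
```

With something like this, `kubectl get pods -o wide` would show which node each pod landed on.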

Comments

  • chrispokorni Posts: 86
    edited August 16

    Hi,
    Aside from understanding the scheduling of pods, and the impact of node selectors, understanding the concept of pod vs container is equally important.
    In short, a pod encapsulates one or many containers.
    The following links may help in clarifying the concepts:
    https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/
    https://kubernetes.io/docs/concepts/workloads/pods/pod/
    To your point, in Kubernetes we manage and verify the pods with kubectl get pods, and in Docker we verify the containers with docker ps.
    Also, when a pod is scheduled on a node in Kubernetes, the end result is one or more running containers on that same node.
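    To sketch that encapsulation (the names below are illustrative, not from the lab):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod          # one schedulable unit...
    spec:
      containers:             # ...wrapping more than one container
      - name: busybox-1
        image: busybox
        command: ["sleep", "3600"]
      - name: busybox-2
        image: busybox
        command: ["sleep", "3600"]
    ```

    All containers of a pod are always co-located: once the pod is scheduled, both busybox containers run on that same node.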

    As you mentioned, in step 6 the vip.yaml file defines a Pod with 4 containers. In step 7, when the vip Pod is scheduled on the master because of the node selector, it creates the 4 containers on the master, which are then verified to be running on the master by issuing the docker ps command.
    In step 8 by deleting the vip Pod, the 4 containers are also terminated and deleted. This can be verified again by running the docker ps command.
    When editing the vip.yaml file and commenting out the node selector, the scheduler no longer excludes the worker, so the vip Pod and its 4 containers could be scheduled on either of the available nodes in the cluster: master or worker.
    In step 10 we are determining on which of the 2 nodes (master or worker) the 4 containers have been deployed, again with the use of docker ps command.
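    The edit described in step 8 amounts to commenting out two lines of the pod spec, roughly like this (a sketch, assuming the lab's "status=vip" label):

    ```yaml
    spec:
      containers:
      - name: busybox
        image: busybox
      # nodeSelector:          # commented out, so the scheduler may
      #   status: vip          # place the pod on master or worker
    ```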

    I hope this helps.
    Regards,
    -Chris

  • Chris, sorry, I must not have made my point clear.

    I know what is supposed to happen and fully understand the difference between pods and containers.

    I take issue with the specific sentence in step 9, which contradicts your interpretation of the lab above.
    Below is a quote from the lab; I have **bolded** the incorrect statement.

    8. Delete the pod then edit the file, commenting out the nodeSelector lines. It may take a while for the containers to fully terminate.
      [email protected]:~$ kubectl delete pod vip
      pod "vip" deleted
      [email protected]:~$ vim vip.yaml
      ....

      nodeSelector:
        status: vip

    9. Create the pod again. **Containers should now be spawning on both nodes.** You may see pods for the daemonsets as well.
      [email protected]:~$ kubectl get pods
      NAME   READY   STATUS        RESTARTS   AGE
      vip    0/4     Terminating   0          5m
      [email protected]:~$ kubectl get pods
      No resources found.
      [email protected]:~$ kubectl create -f vip.yaml
      pod/vip created
  • We can make a suggestion to the course author and maintainer to revise that sentence prior to the next course update release.
    Regards,
    -Chris
