
Lab 3.21 replica added to master node

Is this expected behavior from the scale function? One new replica was scheduled onto the worker node and one onto the master node.

student@lfcs-controller:~$ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-7848d4b86f-dfb58   1/1     Running   0          16m   192.168.119.199   lfcs-worker       <none>           <none>
nginx-7848d4b86f-l4mcp   1/1     Running   0          46s   192.168.248.80    lfcs-controller   <none>           <none>
nginx-7848d4b86f-pdxbr   1/1     Running   0          46s   192.168.119.200   lfcs-worker       <none>           <none>
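
For context, the listing above would follow a scale operation along these lines (the exact invocation is whatever the lab specifies; a replica count of 3 matches the three pods shown):

student@lfcs-controller:~$ kubectl scale deployment nginx --replicas=3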

Comments

  • chrispokorni Posts: 2,155

    Hi @creedperry,

    When deploying a multi-instance application, or when scaling an existing application to multiple replicas, the pod replicas are assigned to nodes by the kube-scheduler. The scheduler filters the cluster's nodes against each pod's requirements, ranks the remaining candidates by their available resources and state, and binds the pod to the highest-ranked node.
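
    To see which node the scheduler chose for a given pod, you can inspect its events; the Scheduled event records the binding decision (pod name taken from the listing above):

    student@lfcs-controller:~$ kubectl describe pod nginx-7848d4b86f-l4mcp
    student@lfcs-controller:~$ kubectl get events --field-selector reason=Scheduled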

    Regards,
    -Chris

  • Thank you for the reply. I assumed the kube-scheduler would only deploy pods to worker nodes, not to the controller itself. Is this typical behavior of the kube-scheduler, and is it potentially cause for concern (assigning running services and workloads to the controller)?

  • chrispokorni Posts: 2,155

    Hi @creedperry,

    You are correct: in a production environment the scheduler would distribute the workload across worker nodes only. However, in our learning environment the control-plane/master node has been converted into a worker node as well, by removing the master taint in step 3 of lab exercise 3.3.
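
    For reference, the untainting step looks roughly like this (the exact taint key depends on the Kubernetes version: newer releases use node-role.kubernetes.io/control-plane, older ones node-role.kubernetes.io/master):

    student@lfcs-controller:~$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-

    If you later want to keep workloads off the control plane again, the taint can be restored:

    student@lfcs-controller:~$ kubectl taint nodes lfcs-controller node-role.kubernetes.io/control-plane=:NoSchedule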

    Regards,
    -Chris
