Kubernetes Architecture - Kubelet in control plane?


I just started The Linux Foundation's LFS258 Kubernetes Fundamentals class and I am a little confused about kubelets and the control plane.
In the official Kubernetes components overview (https://kubernetes.io/docs/concepts/overview/components/) there is no kubelet in the control plane, but in the course's architecture overview there is one.

I thought kubelets would only be necessary on worker nodes.

So, what is correct?
Is it possible to have kubelets in the control plane, because the control plane is also a node? Or is there another reason?

Thank you for your help!

Best Answer

  • chrispokorni
    chrispokorni Posts: 2,176
    Answer ✓

    Hi @foehlschlaeger,

    I agree that the documentation's ambiguous description of the kubelet node agent does not quite clarify its purpose.

Both the kubelet and kube-proxy node agents are found on every node of the cluster, including control-plane node(s) and worker nodes. One of the kubelet's tasks is to coordinate container lifecycle events with the container runtime: start, stop, delete, and so on. Since the control-plane agents (API server, scheduler, controller manager, etcd) are in fact containers, it is easy to see why a kubelet agent is needed on a control-plane node as well.
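To illustrate (the paths here reflect a typical kubeadm setup; the exact file contents vary by version, and the image tag and flags below are just examples): on a kubeadm-provisioned control-plane node the kubelet watches /etc/kubernetes/manifests and runs whatever static pod manifests it finds there, such as a trimmed kube-apiserver manifest like this:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (trimmed, illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0  # version is an example
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
    # ...a real manifest carries many more flags...
```

Because these are static pods, the kubelet itself starts them rather than the scheduler, which is exactly why a control-plane node needs a kubelet and a container runtime.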



  • anprkvc

    Looks like you have inconsistent documentation in

    Kubernetes Architecture / Worker Nodes

    "All worker nodes run the kubelet and kube-proxy, as well as the container engine, such as containerd or cri-o".

"The kubelet interacts with the underlying Docker Engine also installed on all the nodes, and makes sure that the containers that need to run are actually running."

Should the second passage say "container engine"? Docker is just one (the most popular) of the container engines that can be used with Kubernetes.

  • pnts
    pnts Posts: 33
    edited November 2022

I'm posting a reply to share my current understanding of the architecture.
Take it as my wanting to participate in the discussion about Kubernetes architecture rather than as a claim to have the right answer to your question.

    I consider kube-apiserver, kube-controller-manager and kube-scheduler to be the control plane agents. I consider kubelet and kube-proxy to be worker node agents.

When we deploy with kubeadm init, the control-plane agents run as pods, so the node also needs the worker node agents and a container runtime.

But that depends on how we run the agents. We could just as well download the binaries and run the control-plane agents as systemd services. That's the approach taken by Kelsey Hightower in Kubernetes the Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way/

When following Kelsey's approach, the control-plane nodes will not show up in kubectl get nodes. Only the worker nodes run the kubelet and register as nodes, and only the worker nodes have a container runtime. Public access to applications goes through the worker nodes, perhaps via a reverse proxy with all worker nodes as upstream servers.
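For comparison, here is a minimal sketch of the systemd-service approach (the binary path, flags, and values are assumptions for illustration; the actual units in Kubernetes the Hard Way carry a much fuller set of flags):

```ini
# /etc/systemd/system/kube-apiserver.service (illustrative sketch)
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --service-cluster-ip-range=10.32.0.0/24
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With this layout the API server is just a host process, so the control-plane machine needs neither a kubelet nor a container runtime, and it never registers as a node.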

I wonder what a typical Kubernetes setup looks like in the wild. Is kubeadm heavily used as a deployment tool? Is it uncommon to see the agents run as systemd services? Is it common to run "application" pods on control-plane nodes?
