
Worker node roles as master and control-plane in version 1.20

varooran Posts: 9
edited April 2021 in LFS258 Class Forum

I was working through the LFS258 labs, starting from 3.1, to create a Kubernetes cluster with version 1.19. Everything looked fine and was consistent with the lab notes, but then I didn't do the labs for a long time. When I restarted, kubectl complained about the .kube/config file being missing, and I couldn't debug the problem, as it was consuming too much time. So I decided to create a new cluster with version 1.20 (I switched to 1.20 to match the current CKA exam environment).

But this time, when I added the worker node, its roles were automatically set to control-plane,master, as shown below, whereas the lab notes show "<none>". Does anyone know why the roles are different in 1.20? Is this a new change in 1.20, or have I configured something incorrectly?

I also get two etcds, two API servers, and two controllers running in the cluster. How do I make sure a node is a worker node?

varooran@master:~$ kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   2d13h   v1.20.0
worker   Ready    control-plane,master   41h     v1.20.0

Please help, as I couldn't find anything about this in the Kubernetes docs.

Comments

  • varooran Posts: 9
    edited April 2021

I repeated this with 1.19 and got the same result. Any help or hint is much appreciated.

  • varooran Posts: 9

Resolved.

  • chrispokorni Posts: 2,165

    Hi @varooran,

    The kubeadm join command can be used to add both control-plane and worker nodes to the cluster, depending on the flags it is given.
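
    For example, the join command printed by kubeadm init adds a worker node by default; passing the --control-plane flag (together with --certificate-key) is what makes the joining node a control-plane node. A minimal sketch, using placeholder values for the endpoint, token, hash, and key:

    # Joins the cluster as a worker node (ROLES shows <none>):
    sudo kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # Joins as an additional control-plane node:
    sudo kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <key>

    If a node was joined with the wrong flags, the cleanest recovery is usually to run kubeadm reset on that node, delete its Node object with kubectl delete node, and join it again without --control-plane. The ROLES column of kubectl get nodes simply reflects the node-role.kubernetes.io/* labels, so you can also inspect or remove those labels directly, e.g.:

    kubectl label node worker node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-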

    Regards,
    -Chris
