Worker node's role shows as control-plane,master in version 1.20
I was working through the LFS258 labs, starting from 3.1, to create a Kubernetes cluster on version 1.19. Everything looked fine and consistent with the lab notes, but then I didn't do the labs for a long time. When I restarted, kubectl complained that the .kube/config file was missing. I couldn't debug/troubleshoot the problem without it consuming too much time, so I decided to create a new cluster on version 1.20 (I switched to 1.20 to match the current CKA exam environment).
But this time, when I added the worker node, its role was automatically assigned as control-plane,master, as shown below, while the lab notes say it should be "&lt;none&gt;". Does anyone know why the roles are different in 1.20? Is this a new change in 1.20, or have I configured something incorrectly?
I now have two etcds, two API servers, and two controller-managers running in the cluster. How can I make sure a node is actually a worker node?
[email protected]:~$ kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   2d13h   v1.20.0
worker   Ready    control-plane,master   41h     v1.20.0
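From what I understand, the ROLES column is derived from node labels of the form node-role.kubernetes.io/&lt;role&gt;, and a node joined with the plain kubeadm join command normally gets no role label (hence "&lt;none&gt;"). Seeing control-plane,master on the worker makes me suspect it joined as a control-plane node. This is the check I'm running, assuming kubectl access to the cluster and the node name `worker` from the output above:

```shell
# Show the labels behind the ROLES column for the worker node;
# role labels look like node-role.kubernetes.io/<role>
kubectl get node worker --show-labels

# If the labels alone are wrong, a trailing "-" removes them:
kubectl label node worker node-role.kubernetes.io/master- node-role.kubernetes.io/control-plane-
```

That said, since I see a second etcd and API server, I assume the node really did join as a control plane (e.g. via the `--control-plane` variant of kubeadm join), so removing the labels alone probably wouldn't be enough and I'd need to re-join it with the plain worker join command. Is that correct?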
Please help, as I couldn't find anything about this in the Kubernetes docs.