Issue with k8sMaster.sh on Azure VM

I executed the script step by step and everything was fine except the node status: it never got ready.
I assume the issue is the following: "NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" (see below).
That's the output of:
kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-master
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 03 Jan 2019 18:13:03 +0000
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  OutOfDisk       False   Thu, 03 Jan 2019 18:20:53 +0000   Thu, 03 Jan 2019 18:13:02 +0000   KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Thu, 03 Jan 2019 18:20:53 +0000   Thu, 03 Jan 2019 18:13:02 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Thu, 03 Jan 2019 18:20:53 +0000   Thu, 03 Jan 2019 18:13:02 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Thu, 03 Jan 2019 18:20:53 +0000   Thu, 03 Jan 2019 18:13:02 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           False   Thu, 03 Jan 2019 18:20:53 +0000   Thu, 03 Jan 2019 18:13:02 +0000   KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  10.0.0.4
  Hostname:    k8s-master
Capacity:
  attachable-volumes-azure-disk:  16
  cpu:                            1
  ephemeral-storage:              30428648Ki
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         944140Ki
  pods:                           110
Allocatable:
  attachable-volumes-azure-disk:  16
  cpu:                            1
  ephemeral-storage:              28043041951
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         841740Ki
  pods:                           110
System Info:
  Machine ID:                 654ff64976f040a6acae661503aa9786
  System UUID:                A3E82C61-AE64-BA4D-AD00-F1B4C059EF48
  Boot ID:                    c74801bd-e52f-4daa-92bd-d1993b4cc89c
  Kernel Version:             4.15.0-1036-azure
  OS Image:                   Ubuntu 16.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.6.1
  Kubelet Version:            v1.12.1
  Kube-Proxy Version:         v1.12.1
PodCIDR:  192.168.0.0/24
Non-terminated Pods:  (6 in total)
  Namespace    Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                ------------  ----------  ---------------  -------------
  kube-system  coredns-869f847d58-jvbl7            100m (10%)    0 (0%)      70Mi (8%)        170Mi (20%)
  kube-system  etcd-k8s-master                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-apiserver-k8s-master           250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-controller-manager-k8s-master  200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-proxy-jzj5x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-scheduler-k8s-master           100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                       Requests    Limits
  --------                       --------    ------
  cpu                            650m (65%)  0 (0%)
  memory                         70Mi (8%)   170Mi (20%)
  attachable-volumes-azure-disk  0           0
Events:
  Type    Reason                   Age                    From                    Message
  ----    ------                   ----                   ----                    -------
  Normal  Starting                 8m31s                  kubelet, k8s-master     Starting kubelet.
  Normal  NodeAllocatableEnforced  8m31s                  kubelet, k8s-master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    8m30s (x6 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  8m30s (x6 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m30s (x5 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m30s (x6 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasSufficientPID
  Normal  Starting                 6m2s                   kube-proxy, k8s-master  Starting kube-proxy.
Comments
Hello,
The labs have not been tested on Azure, so there are some unknowns. I would start by looking at a few things:
Please view the state of all of your pods. Are some of them not running, or showing other issues?
Have you ensured there are no firewalls or restrictions between nodes?
What does kubectl get events show, if anything?
If you look in the log files on the node, with journalctl, are there errors or messages to help troubleshoot?
Finding an error, or perhaps a pod which is not running, can be a useful starting point for the troubleshooting process.
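For example, something along these lines should surface most of that information (a rough sketch of generic checks, not lab-specific steps):

# state of all pods in all namespaces, and where they were scheduled
kubectl get pods --all-namespaces -o wide

# recent cluster events
kubectl get events --all-namespaces

# kubelet logs on the node, assuming kubelet runs as a systemd service
sudo journalctl -u kubelet --no-pager | tail -n 50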
Regards,
Hi @crixo ,
Looking at your output, the Taints (lines 12 - 14) indicate that you seem to be hitting an issue which was supposed to be fixed, but I guess it was only fixed temporarily: the nodes carry an extra taint that prevents them from becoming Ready.
In Lab 2.1, section [Deploy a Master Node using Kubeadm], at the end of step 2 the master is expected to be in a NotReady state, just as shown in the lab output:
kubectl get node
NAME STATUS ROLE ...
ckad-1 NotReady master ...
After completing the section [Deploy a Minion Node], and continuing with [Configure the Master Node], in step 7 both nodes should show NotReady. At this point, steps 8 - 12 will guide you through removing the taints which prevent the nodes from going into the Ready state, and at the end of step 12 both your master and minion should be Ready.
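For reference, a rough sketch of what those steps boil down to; the lab has the exact commands, and the key names below are the ones visible in your describe output:

# list the taints currently set on the nodes
kubectl describe nodes | grep -i taint

# a taint is removed by repeating its key with a trailing dash;
# nodes that do not carry a given taint will simply report "not found"
kubectl taint nodes --all node.kubernetes.io/not-ready-
kubectl taint nodes --all node-role.kubernetes.io/master-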
Regards,
-Chris
Hi @chrispokorni,
Removing the taints solved the problem, but I also had to upgrade the k8s tools (on master and worker) to version 1.13.0-00, as suggested here: https://forum.linuxfoundation.org/discussion/855684/section-2-1-5-cannot-get-resource-error-thrown-during-sudo-kubeadm-join
I changed both k8sMaster.sh and k8sSecond.sh as follows:
sudo apt-get install -y kubeadm=1.13.0-00 kubelet=1.13.0-00 kubectl=1.13.0-00
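Something like the following can confirm the tools actually ended up at the expected version (not part of the lab scripts, just a quick check):

# report the installed versions of the three tools
kubeadm version -o short
kubectl version --client --short
kubelet --version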
Now, with the following Azure VM sizes, I'm able to create the cluster and continue the lab.
Thanks a lot for the support
Hi @crixo ,
The labs have only been tested as released, on K8s v1.12.1. The kubeadm issue between versions 1.12.1 and 1.13 can be resolved with a fix posted earlier in the forum, without an upgrade to 1.13, so you can complete the labs on 1.12:
sudo kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
The labs may also work fine on 1.13, but in case they don't, that's the fix.
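As a quick sanity check (not a lab step), kubeadm can also list the control plane images it will use for the pinned release, which makes it easy to see that everything stays on 1.12.1:

# show which images kubeadm will pull for the pinned version
sudo kubeadm config images list --kubernetes-version 1.12.1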
Regards,
-Chris
Hi @chrispokorni,
Adding --kubernetes-version 1.12.1 solved the problem, and now I'm able to run the first part of the lab without upgrading to 1.13.
I wonder if there's any option to avoid the taints on both nodes (it would save some time after provisioning the VMs).
Who is adding the taints? My guess is kubeadm; if so, is there any option to avoid it?
Another option could be adding additional tolerations to calico.yaml in order to tolerate the taints, if those cannot be removed. Am I on the right track?
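For reference, I was thinking of first checking which taints calico already tolerates, with something like this (assuming the manifest created the usual calico-node DaemonSet in kube-system; the name may differ in your calico.yaml):

# print the tolerations on calico's pod template
kubectl -n kube-system get daemonset calico-node -o jsonpath='{.spec.template.spec.tolerations}'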
@crixo
I am glad it worked and you are able to continue with the labs on 1.12.
The taints have been going through changes lately. Before 1.12 there was only one taint; when 1.12 was released there were two taints; then immediately after it went back to one taint. Now it has changed again.
Kubernetes is a fast-moving project and features could change within a week.
Tolerations would not necessarily work, or would only work as long as you remembered to add them to every deployment/pod definition, because taints affect pod scheduling in general, not only calico. By removing the taints once, you don't have to worry about them in the future.
-Chris