issue w/ k8sMaster.sh on azure VM
I executed the script step by step and everything worked fine except the node status: it never becomes Ready.
I assume the issue is the following: "NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" (see below).
That's the output of:
kubectl describe node k8s-master

Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-master
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 03 Jan 2019 18:13:03 +0000
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  OutOfDisk       False   Thu, 03 Jan 2019 18:20:53 +0000  Thu, 03 Jan 2019 18:13:02 +0000  KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Thu, 03 Jan 2019 18:20:53 +0000  Thu, 03 Jan 2019 18:13:02 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Thu, 03 Jan 2019 18:20:53 +0000  Thu, 03 Jan 2019 18:13:02 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Thu, 03 Jan 2019 18:20:53 +0000  Thu, 03 Jan 2019 18:13:02 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           False   Thu, 03 Jan 2019 18:20:53 +0000  Thu, 03 Jan 2019 18:13:02 +0000  KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  10.0.0.4
  Hostname:    k8s-master
Capacity:
  attachable-volumes-azure-disk:  16
  cpu:                            1
  ephemeral-storage:              30428648Ki
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         944140Ki
  pods:                           110
Allocatable:
  attachable-volumes-azure-disk:  16
  cpu:                            1
  ephemeral-storage:              28043041951
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         841740Ki
  pods:                           110
System Info:
  Machine ID:                 654ff64976f040a6acae661503aa9786
  System UUID:                A3E82C61-AE64-BA4D-AD00-F1B4C059EF48
  Boot ID:                    c74801bd-e52f-4daa-92bd-d1993b4cc89c
  Kernel Version:             4.15.0-1036-azure
  OS Image:                   Ubuntu 16.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.6.1
  Kubelet Version:            v1.12.1
  Kube-Proxy Version:         v1.12.1
PodCIDR:  192.168.0.0/24
Non-terminated Pods: (6 in total)
  Namespace    Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                ------------  ----------  ---------------  -------------
  kube-system  coredns-869f847d58-jvbl7            100m (10%)    0 (0%)      70Mi (8%)        170Mi (20%)
  kube-system  etcd-k8s-master                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-apiserver-k8s-master           250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-controller-manager-k8s-master  200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-proxy-jzj5x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-scheduler-k8s-master           100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                       Requests    Limits
  --------                       --------    ------
  cpu                            650m (65%)  0 (0%)
  memory                         70Mi (8%)   170Mi (20%)
  attachable-volumes-azure-disk  0           0
Events:
  Type    Reason                   Age                    From                    Message
  ----    ------                   ---                    ----                    -------
  Normal  Starting                 8m31s                  kubelet, k8s-master     Starting kubelet.
  Normal  NodeAllocatableEnforced  8m31s                  kubelet, k8s-master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    8m30s (x6 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  8m30s (x6 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m30s (x5 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m30s (x6 over 8m31s)  kubelet, k8s-master     Node k8s-master status is now: NodeHasSufficientPID
  Normal  Starting                 6m2s                   kube-proxy, k8s-master  Starting kube-proxy.
Comments
Hello,
The labs have not been tested on Azure, so there are some unknowns. I would start by looking at a few things:
Please view the state of all of your pods. Are there any that are not running or showing other issues?
Have you ensured there are no firewalls or restrictions between nodes?
What does kubectl get events show, if anything?
If you look in the log files on the node, with journalctl, are there errors or messages to help troubleshoot? Should you be able to find an error, or perhaps a pod which is not running, that can be a useful starting point for the troubleshooting process.
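As a quick sketch, the checks above could be run roughly like this (the kubelet unit name is assumed; adjust to your setup):

```shell
# State of all pods across namespaces; look for Pending/CrashLoopBackOff
kubectl get pods --all-namespaces -o wide

# Recent cluster events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp

# Last kubelet log entries on the node (assumes the systemd unit is "kubelet")
sudo journalctl -u kubelet --no-pager | tail -n 50
```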
Regards,
Hi @crixo ,
Looking at your output, lines 12 - 14 indicate that you seem to be experiencing an issue which was supposed to be fixed, but I guess it was only fixed temporarily - where the nodes had an extra taint that prevented them from becoming Ready.
In Lab 2.1, section [Deploy a Master Node using Kubeadm], at the end of step 2 the master was expected to be in a NotReady state, just as shown in the lab output:
kubectl get node
NAME STATUS ROLE ...
ckad-1 NotReady master ...
After completing the section [Deploy a Minion Node] and continuing with [Configure the Master Node], in step 7 both nodes should show NotReady. At this point, steps 8 - 12 will guide you through removing the taints which prevent the nodes from going into the Ready state, and at the end of step 12 both your master and minion should be in the Ready state.
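For reference, taint removal generally looks like this - a sketch only, using the taint keys visible in the describe output above (your lab steps may use slightly different keys):

```shell
# Show the current taints on the master
kubectl describe node k8s-master | grep -i taint

# Remove a taint by repeating its key:effect with a trailing "-"
kubectl taint node k8s-master node.kubernetes.io/not-ready:NoExecute-

# Remove the master scheduling taint from all nodes
kubectl taint nodes --all node-role.kubernetes.io/master-
```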
Regards,
-Chris
Hi @chrispokorni,
removing the taints solved the problem, but I also had to upgrade the k8s tools (master and worker) to version 1.13.0-00, as suggested here: https://forum.linuxfoundation.org/discussion/855684/section-2-1-5-cannot-get-resource-error-thrown-during-sudo-kubeadm-join
I changed both k8sMaster.sh and k8sSecond.sh as follows:
sudo apt-get install -y kubeadm=1.13.0-00 kubelet=1.13.0-00 kubectl=1.13.0-00
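One extra step that may be worth adding to the scripts (my assumption, not in the originals): holding the packages so a later apt upgrade doesn't move them off the pinned version.

```shell
# Pin kubeadm/kubelet/kubectl at the installed version so
# "apt-get upgrade" won't silently bump them later
sudo apt-mark hold kubeadm kubelet kubectl
```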
Now, with the following Azure VM sizes:
MASTER_SKU='Standard_B2s' AGENT_SKU='Standard_B1s'
I'm able to create the cluster and continue the lab.
Thanks a lot for the support!
Hi @crixo ,
The labs have only been tested as released, on K8s v1.12.1. The kubeadm issue between the two versions, 1.12.1 and 1.13, can be resolved with a fix posted earlier in the forum, without an upgrade to 1.13, so that you can complete the labs on 1.12:
sudo kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
There may be a chance that the labs work fine on 1.13, but in case they don't, that's the fix.
Regards,
-Chris
Hi @chrispokorni,
adding --kubernetes-version 1.12.1 solved the problem and now I'm able to run the first part of the lab w/o upgrading to 1.13.
I wonder if there's any option to avoid the taints on both nodes (it would save some time after VM provisioning).
Who's adding the taints? My guess is kubeadm; if so, is there any option to avoid it?
Another option could be adding an additional toleration to calico.yaml in order to tolerate the taints, if those cannot be removed. Am I on the right track?
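To make the idea concrete, a blanket toleration could be patched onto the Calico DaemonSet - purely a sketch, and the DaemonSet name (calico-node) and its namespace are assumptions on my part:

```shell
# Append a catch-all toleration (tolerates every taint) to the
# assumed calico-node DaemonSet in kube-system
kubectl -n kube-system patch daemonset calico-node --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"operator":"Exists"}}]'
```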
@crixo
I am glad it worked and you are able to continue with the labs on 1.12.
The taints have been going through changes lately. Before 1.12 there was only one taint; when 1.12 was released there were two taints, then immediately afterwards it went back to one taint. Now it has changed again.
Kubernetes is a fast-moving project and features could change within a week.
Tolerations would not necessarily work, or would only work as long as you remembered to add them to every deployment/pod definition, because taints affect pod scheduling in general, not only Calico. By removing the taints once, you don't have to worry about them in the future.
-Chris
Hi @pistle,
As you can tell, the course does not make use of AKS. The shell script installs Kubernetes components and initializes the control plane node for you.
Regards,
-Chris