core-dns is not getting ready on worker / minion node
Hi,
CoreDNS is not getting to the ready state on the minion / worker node. This is causing kubeadm
to fail while upgrading the cluster. Please find the following logs for more details:
baqai@k8smaster:~/util/LFS258/SOLUTIONS/s_03$ kubectl get po -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
calico-kube-controllers-7dbc97f587-72gcf   1/1     Running   0          78m     192.168.16.130    k8smaster   <none>           <none>
calico-node-tmxjx                          1/1     Running   0          78m     192.168.159.145   k8smaster   <none>           <none>
calico-node-xftlm                          1/1     Running   0          4m57s   192.168.159.146   node        <none>           <none>
coredns-66bff467f8-dgtvc                   0/1     Running   0          4m17s   192.168.167.129   node        <none>           <none>
coredns-66bff467f8-l8m74                   1/1     Running   0          7m4s    192.168.16.132    k8smaster   <none>           <none>
etcd-k8smaster                             1/1     Running   0          83m     192.168.159.145   k8smaster   <none>           <none>
kube-apiserver-k8smaster                   1/1     Running   0          83m     192.168.159.145   k8smaster   <none>           <none>
kube-controller-manager-k8smaster          1/1     Running   0          83m     192.168.159.145   k8smaster   <none>           <none>
kube-proxy-jtlpt                           1/1     Running   0          83m     192.168.159.145   k8smaster   <none>           <none>
kube-proxy-n5zt9                           1/1     Running   0          4m57s   192.168.159.146   node        <none>           <none>
kube-scheduler-k8smaster                   1/1     Running   1          83m     192.168.159.145   k8smaster   <none>           <none>
baqai@k8smaster:~/util/LFS258/SOLUTIONS/s_03$ kubectl logs coredns-66bff467f8-dgtvc
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0126 10:49:28.943386       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-01-26 10:48:58.942842255 +0000 UTC m=+0.024568617) (total time: 30.000427822s):
Trace[2019727887]: [30.000427822s] [30.000427822s] END
E0126 10:49:28.943419       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0126 10:49:28.943821       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-01-26 10:48:58.94341507 +0000 UTC m=+0.025141409) (total time: 30.000395886s):
Trace[1427131847]: [30.000395886s] [30.000395886s] END
E0126 10:49:28.943828       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0126 10:49:28.944259       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-01-26 10:48:58.943549088 +0000 UTC m=+0.025275450) (total time: 30.000697364s):
Trace[939984059]: [30.000697364s] [30.000697364s] END
E0126 10:49:28.944269       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
Comments
-
Hi @furqanbaqai,
It seems the connection attempts are timing out when accessing the kubernetes Service ClusterIP.
Did you try deleting the Pod and forcing the controller to replace it?
What happens when you run curl https://10.96.0.1:443 and kubectl describe svc kubernetes?
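A minimal sketch of those checks, assuming a kubeadm cluster like the one above (the pod name comes from the earlier output, and the k8s-app=kube-dns label is assumed to be the CoreDNS selector):

# Force the Deployment controller to replace the stuck CoreDNS Pod:
kubectl -n kube-system delete pod coredns-66bff467f8-dgtvc

# Watch where the replacement lands and whether it becomes Ready
# (the k8s-app=kube-dns label is an assumption; adjust if needed):
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide -w

# Probe the kubernetes Service ClusterIP; a 403 from the API server still
# proves the network path works (-k skips certificate verification):
curl -k https://10.96.0.1:443

# Inspect the Service and its endpoint:
kubectl describe svc kubernetes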
Regards,
-Chris
-
Hi @chrispokorni ,
Thanks for the response. All CoreDNS pods run perfectly on the master node; when I delete one of them, it is rescheduled on the secondary node and this error appears. For the other questions:
1. I ran curl and got the following result:
baqai@oftl-ub180464:/var/log$ curl https://10.96.0.1:443 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
2. Service kubernetes description:
baqai@k8smaster:/var/log/calico/cni$ kubectl describe svc kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.159.145:6443
Session Affinity:  None
Events:            <none>
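Since the ClusterIP responds from the master while the failing CoreDNS replica runs on the worker, it may also help to repeat the same probe from the worker host; a rough sketch, assuming SSH access to the host named node in the outputs above:

# From the worker, probe the Service ClusterIP and the API server directly;
# if only the ClusterIP call hangs, Service/overlay traffic from the worker
# is what is broken (the 10-second timeout is arbitrary):
ssh node 'curl -k --max-time 10 https://10.96.0.1:443'
ssh node 'curl -k --max-time 10 https://192.168.159.145:6443'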
Just to highlight, the same pattern is observed in v1.19.5.
Thanks in advance.
-
Hi @furqanbaqai,
Thanks for the detailed outputs. There are a few things here that seem to be causing your issues.
- The IP addresses of your hosts/Nodes/VMs are overlapping with the default Pod IP network managed by Calico, which is 192.168.0.0/16. There should be no overlap of any kind between the Node IP network and the Pod network. The recommendation is to either provision new VMs with IP addresses that do not overlap the Pod network 192.168.0.0/16, OR to re-deploy your cluster while re-configuring Calico and the kubeadm-config.yaml file with a new Pod network, in order to avoid overlaps (see the sketch after this list).
- You may run into issues later on because of your host naming convention. k8smaster is intended to be used only as an alias for the control plane (which in early labs is represented by the master1 node, and later by an entire cluster of 3 masters and an HAProxy server). You seem to have introduced k8smaster in your environment as the hostname of your master node as well.
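A quick way to see the overlap described above, assuming a kubeadm cluster where the pod network was recorded at init time in the kubeadm-config ConfigMap:

# Node IPs -- here 192.168.159.145 and 192.168.159.146:
kubectl get nodes -o wide

# Pod network configured at kubeadm init time -- the Calico default
# 192.168.0.0/16 contains both node IPs, which is the conflict:
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i podSubnet

# Calico's own pool can be checked as well, if calicoctl is installed:
# calicoctl get ippool -o wide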
Regards,
-Chris
-
Hi @chrispokorni ,
Thanks for your response. Let me try this out and provide you with feedback.
-
Hi @chrispokorni ,
Thank you for your help and support. This is to confirm that, after changing the CIDR in my kubeadm-config.yaml file to a range that does not conflict with the IP range of the local VMs, CoreDNS is getting scheduled on the second node as well. I'll proceed and upgrade the cluster to a newer version according to the exercise.
My kubeadm-config.yaml is below for reference, along with the output of kubectl get po --all-namespaces:
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-7dbc97f587-9p6v8   1/1     Running   0          16m     10.6.0.67         lfs25801   <none>           <none>
calico-node-9qfrw                          1/1     Running   0          16m     192.168.159.145   lfs25801   <none>           <none>
calico-node-djdch                          1/1     Running   0          7m17s   192.168.159.146   lfs25802   <none>           <none>
coredns-66bff467f8-2ljdn                   1/1     Running   0          4m2s    10.6.0.193        lfs25802   <none>           <none>
coredns-66bff467f8-sp5pc                   1/1     Running   0          18m     10.6.0.66         lfs25801   <none>           <none>
etcd-lfs25801                              1/1     Running   0          18m     192.168.159.145   lfs25801   <none>           <none>
kube-apiserver-lfs25801                    1/1     Running   0          18m     192.168.159.145   lfs25801   <none>           <none>
kube-controller-manager-lfs25801           1/1     Running   0          18m     192.168.159.145   lfs25801   <none>           <none>
kube-proxy-kxlht                           1/1     Running   0          7m17s   192.168.159.146   lfs25802   <none>           <none>
kube-proxy-w9djn                           1/1     Running   0          18m     192.168.159.145   lfs25801   <none>           <none>
kube-scheduler-lfs25801                    1/1     Running   0          18m     192.168.159.145   lfs25801   <none>           <none>
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.15
controlPlaneEndpoint: "lfs25801:6443"
networking:
  podSubnet: 10.6.0.0/24
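For reference, a sketch of how a config like this is typically applied when re-deploying, keeping Calico's pool in sync with podSubnet (file names as used in this thread; the sed assumes the stock calico.yaml still carries the 192.168.0.0/16 default, so verify the edit by hand):

# Point Calico at the new pod network before applying the manifest:
sed -i 's|192.168.0.0/16|10.6.0.0/24|g' calico.yaml

# Initialize the control plane from the config shown above, then apply Calico:
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs
kubectl apply -f calico.yaml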
-
Hi @furqanbaqai,
The IP addresses of your pods and nodes look good this time.
However, you may have missed 2 steps in Lab 3.1:
1. Step 12, where an alias is mapped to the Private IP address of the Master Node in the /etc/hosts file. The same alias and IP pair is expected to be used later in Lab 3.2 Step 6 in the /etc/hosts file of the Minion Node.
2. Step 13, where the alias (not the hostname of the Master Node) from Step 12 is included in the kubeadm-config.yaml manifest (see the sketch below).
For consistency, I presume the calico.yaml manifest has been updated with:
- name: CALICO_IPV4POOL_CIDR
  value: "10.6.0.0/24"
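A hedged sketch of those two lab steps, using the alias and the master's private IP that appear in this thread (substitute your own values):

# Lab 3.1 Step 12 -- map the alias to the Master's private IP on the master,
# and repeat the same line on the minion in Lab 3.2 Step 6:
echo "192.168.159.145 k8smaster" | sudo tee -a /etc/hosts

# Lab 3.1 Step 13 -- the alias (not the hostname) goes into kubeadm-config.yaml:
#   controlPlaneEndpoint: "k8smaster:6443"
grep controlPlaneEndpoint kubeadm-config.yaml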
Regards,
-Chris