Welcome to the Linux Foundation Forum!

CoreDNS CrashLoopBackOff

I have set up the cluster without error and am running through Lab 2.1. I noticed that the CoreDNS pods are failing.

I am running the nodes on bare metal.

Debug info is:

kubectl get pod -n kube-system

  NAME                                       READY   STATUS             RESTARTS   AGE
  calico-etcd-wr2cf                          1/1     Running            3          13h
  calico-kube-controllers-57c8947c94-g2lbc   1/1     Running            3          13h
  calico-node-lsjm9                          2/2     Running            17         13h
  calico-node-zhgnd                          2/2     Running            9          13h
  coredns-576cbf47c7-56thg                   0/1     CrashLoopBackOff   54         13h
  coredns-576cbf47c7-nmznf                   0/1     CrashLoopBackOff   54         13h
  etcd-nuc1                                  1/1     Running            4          13h
  kube-apiserver-nuc1                        1/1     Running            4          13h
  kube-controller-manager-nuc1               1/1     Running            3          13h
  kube-proxy-ct89j                           1/1     Running            3          13h
  kube-proxy-lbdxr                           1/1     Running            5          13h
  kube-scheduler-nuc1                        1/1     Running            3          13h

kubectl describe pods -n kube-system coredns-576cbf47c7-56thg

  Name:               coredns-576cbf47c7-56thg
  Namespace:          kube-system
  Priority:           0
  PriorityClassName:  <none>
  Node:               nuc1/10.10.0.53
  Start Time:         Sat, 29 Dec 2018 23:06:32 +1100
  Labels:             k8s-app=kube-dns
                      pod-template-hash=576cbf47c7
  Annotations:        <none>
  Status:             Running
  IP:                 192.168.21.71
  Controlled By:      ReplicaSet/coredns-576cbf47c7
  Containers:
    coredns:
      Container ID:  docker://5491ac6a53be7f653036af7baaecfb318679882d3ad4b60c7c02b8846f3a4f9d
      Image:         k8s.gcr.io/coredns:1.2.2
      Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a
      Ports:         53/UDP, 53/TCP, 9153/TCP
      Host Ports:    0/UDP, 0/TCP, 0/TCP
      Args:
        -conf
        /etc/coredns/Corefile
      State:          Waiting
        Reason:       CrashLoopBackOff
      Last State:     Terminated
        Reason:       Error
        Exit Code:    1
        Started:      Sun, 30 Dec 2018 12:39:49 +1100
        Finished:     Sun, 30 Dec 2018 12:39:50 +1100
      Ready:          False
      Restart Count:  54
      Limits:
        memory:  170Mi
      Requests:
        cpu:     100m
        memory:  70Mi
      Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
      Environment:  <none>
      Mounts:
        /etc/coredns from config-volume (ro)
        /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-zwdp6 (ro)
  Conditions:
    Type              Status
    Initialized       True
    Ready             False
    ContainersReady   False
    PodScheduled      True
  Volumes:
    config-volume:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      coredns
      Optional:  false
    coredns-token-zwdp6:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  coredns-token-zwdp6
      Optional:    false
  QoS Class:       Burstable
  Node-Selectors:  <none>
  Tolerations:     CriticalAddonsOnly
                   node-role.kubernetes.io/master:NoSchedule
                   node.kubernetes.io/not-ready:NoExecute for 300s
                   node.kubernetes.io/unreachable:NoExecute for 300s
  Events:
    Type     Reason           Age                     From               Message
    ----     ------           ----                    ----               -------
    Normal   Scheduled        13h                     default-scheduler  Successfully assigned kube-system/coredns-576cbf47c7-56thg to nuc1
    Warning  NetworkNotReady  13h (x8 over 13h)       kubelet, nuc1      network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
    Normal   Pulled           13h (x4 over 13h)       kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
    Normal   Created          13h (x4 over 13h)       kubelet, nuc1      Created container
    Normal   Started          13h (x4 over 13h)       kubelet, nuc1      Started container
    Warning  BackOff          12h (x255 over 13h)     kubelet, nuc1      Back-off restarting failed container
    Normal   SandboxChanged   3h31m (x2 over 3h31m)   kubelet, nuc1      Pod sandbox changed, it will be killed and re-created.
    Warning  BackOff          3h31m (x3 over 3h31m)   kubelet, nuc1      Back-off restarting failed container
    Normal   Pulled           3h30m (x2 over 3h31m)   kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
    Normal   Created          3h30m (x2 over 3h31m)   kubelet, nuc1      Created container
    Normal   Started          3h30m (x2 over 3h31m)   kubelet, nuc1      Started container
    Normal   Pulled           3h29m (x4 over 3h30m)   kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
    Normal   Created          3h29m (x4 over 3h30m)   kubelet, nuc1      Created container
    Normal   Started          3h29m (x4 over 3h30m)   kubelet, nuc1      Started container
    Warning  BackOff          3h5m (x124 over 3h30m)  kubelet, nuc1      Back-off restarting failed container
    Warning  FailedMount      92m                     kubelet, nuc1      MountVolume.SetUp failed for volume "coredns-token-zwdp6" : couldn't propagate object cache: timed out waiting for the condition
    Normal   SandboxChanged   91m (x2 over 92m)       kubelet, nuc1      Pod sandbox changed, it will be killed and re-created.
    Normal   Pulled           90m (x4 over 91m)       kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
    Normal   Created          90m (x4 over 91m)       kubelet, nuc1      Created container
    Normal   Started          90m (x4 over 91m)       kubelet, nuc1      Started container
    Warning  BackOff          57m (x169 over 91m)     kubelet, nuc1      Back-off restarting failed container
    Normal   SandboxChanged   49m (x2 over 50m)       kubelet, nuc1      Pod sandbox changed, it will be killed and re-created.
    Normal   Pulled           47m (x4 over 49m)       kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
    Normal   Created          47m (x4 over 49m)       kubelet, nuc1      Created container
    Normal   Started          47m (x4 over 49m)       kubelet, nuc1      Started container
    Warning  BackOff          4m49s (x214 over 49m)   kubelet, nuc1      Back-off restarting failed container
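The describe output only reports Exit Code 1; the actual reason CoreDNS exits shows up in the container log of the previous (crashed) run. Using the first pod from the listing above:

```shell
# Fetch the log of the last failed run of one CoreDNS pod
# (needs a live cluster; pod name is from the listing above)
kubectl -n kube-system logs coredns-576cbf47c7-56thg --previous
```

When the crash is the loop-detection case described below, this log typically contains a line mentioning a detected forwarding loop.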

Comments

  • @bryonbaker ,
    You can try deleting the two CoreDNS pods; they will be re-created.
    Are you in Lab 2.1 of LFD259?
    Thanks,
    -Chris
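For reference, the deletion Chris suggests can be done in one command by selecting on the k8s-app=kube-dns label shown in the describe output (the Deployment's ReplicaSet recreates the pods immediately):

```shell
# Delete both CoreDNS pods at once; the ReplicaSet re-creates them
# (needs a live cluster)
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```

Note that if the underlying cause (such as the resolver loop discussed below) is still present, the new pods will crash the same way.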

  • edited January 2019

    Hi,
    The issue is actually thoroughly documented on the CoreDNS web site. CoreDNS detects a forwarding loop, because the resolver in the node's /etc/resolv.conf points back at a loopback address, and terminates. This is expected behaviour.

    The solution is to change the DNS setting in /etc/resolv.conf. For those using Ubuntu, I have documented what to do here, as it can be tricky - especially with the Ubuntu Desktop edition.
    There are other ways to solve it, but in the end I set up an external DNS server with bind9 for resolving hostnames. Overkill, I know.
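The loop condition described above is easy to confirm. On Ubuntu, systemd-resolved's stub listener writes a loopback nameserver into /etc/resolv.conf, which CoreDNS inherits via the kubelet. A minimal sketch of the check, run against a recreated sample file so it is self-contained (on a real node you would grep /etc/resolv.conf itself):

```shell
# Recreate the resolv.conf that systemd-resolved's stub listener produces
# on Ubuntu (sample path is for illustration only)
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 127.0.0.53
EOF

# A 127.x nameserver means CoreDNS would forward queries back to itself,
# which its loop detection turns into an immediate exit
if grep -Eq '^nameserver[[:space:]]+127\.' /tmp/resolv.conf.sample; then
    echo "loopback resolver detected"
fi
```

On a kubeadm cluster, one common alternative to editing /etc/resolv.conf directly is to point the kubelet's --resolv-conf flag at the real upstream file that systemd-resolved maintains (typically /run/systemd/resolve/resolv.conf), restart the kubelet, and then delete the CoreDNS pods so the recreated ones pick up the new file.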

