Exercise 3.3: Finish Cluster Setup - Problem with coreDNS

Hello folks,

I am following Ex 3.3 step 6 to determine whether the DNS and Calico pods are ready for use, and I am getting the errors below. Any ideas are welcome!

~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS              RESTARTS   AGE
kube-system   calico-node-bdqnk              1/2     CrashLoopBackOff    1905       6d18h
kube-system   calico-node-q7xxq              1/2     CrashLoopBackOff    753        2d16h
kube-system   calico-node-z9hzj              1/2     Error               1904       6d18h
kube-system   coredns-86c58d9df4-677v8       0/1     ContainerCreating   0          13m
kube-system   coredns-86c58d9df4-t7b2h      0/1     ContainerCreating   0          13m
kube-system   etcd-cicd                      1/1     Running             0          6d18h
kube-system   kube-apiserver-cicd            1/1     Running             0          6d18h
kube-system   kube-controller-manager-cicd   1/1     Running             0          6d18h
kube-system   kube-proxy-6bq6x               1/1     Running             0          2d16h
kube-system   kube-proxy-jvk7w               1/1     Running             0          6d18h
kube-system   kube-proxy-rkvms               1/1     Running             0          6d18h
kube-system   kube-scheduler-cicd            1/1     Running             0          6d18h

Comments

  • serewicz  Posts: 506

    Hello,

    There are a few reasons the pods may not be fully running. It depends on how far along in the labs you are, as there are later steps to run that ensure you have two nodes and that both are willing to run all pods.

    When you ran kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16 did you have any errors?
    Did you find any errors when you did the kubeadm join from the worker node?
    Were you able to remove the taints on both nodes?
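
    Regarding the third question, a quick sketch of how taints can be checked and removed (node names and taint keys will depend on your cluster; the master taint key below matches the v1.12-era clusters used in these labs):

    ```shell
    # Show any taints currently set on the nodes:
    kubectl describe nodes | grep -i taint

    # Remove the master taint from all nodes so the control-plane node
    # will also schedule regular pods (the trailing "-" removes the taint):
    kubectl taint nodes --all node-role.kubernetes.io/master-
    ```

    If the taint is already gone, the second command reports "not found", which is harmless.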

    Regards,

  • Hi, some of these errors are expected at step 6; step 7 then helps fix them.
    Are your nodes local VMs or cloud VM instances?
    -Chris

  • I have a similar problem. CoreDNS is not starting up.
    I am running the two nodes on bare metal. I am still in LAB 2.1 but everything installed and set up without error.

    kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
    default       firstpod-7d88d7b6cf-bj8js                  1/1     Running            0          10m
    kube-system   calico-etcd-wr2cf                          1/1     Running            3          13h
    kube-system   calico-kube-controllers-57c8947c94-g2lbc   1/1     Running            3          13h
    kube-system   calico-node-lsjm9                          2/2     Running            17         13h
    kube-system   calico-node-zhgnd                          2/2     Running            9          13h
    kube-system   coredns-576cbf47c7-56thg                   0/1     CrashLoopBackOff   53         13h
    kube-system   coredns-576cbf47c7-nmznf                   0/1     CrashLoopBackOff   53         13h
    kube-system   etcd-nuc1                                  1/1     Running            4          13h
    kube-system   kube-apiserver-nuc1                        1/1     Running            4          13h
    kube-system   kube-controller-manager-nuc1               1/1     Running            3          13h
    kube-system   kube-proxy-ct89j                           1/1     Running            3          13h
    kube-system   kube-proxy-lbdxr                           1/1     Running            5          13h
    kube-system   kube-scheduler-nuc1                        1/1     Running            3          13h
    

    When I describe the CoreDNS pod, I see the following in the events:

          Warning  FailedMount      83m                     kubelet, nuc1      MountVolume.SetUp failed for volume "coredns-token-zwdp6" : couldn't propagate object cache: timed out waiting for the condition
          Normal   SandboxChanged   83m (x2 over 83m)       kubelet, nuc1      Pod sandbox changed, it will be killed and re-created.
          Normal   Pulled           81m (x4 over 83m)       kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
          Normal   Created          81m (x4 over 83m)       kubelet, nuc1      Created container
          Normal   Started          81m (x4 over 83m)       kubelet, nuc1      Started container
          Warning  BackOff          48m (x166 over 83m)     kubelet, nuc1      Back-off restarting failed container
          Normal   SandboxChanged   40m (x2 over 41m)       kubelet, nuc1      Pod sandbox changed, it will be killed and re-created.
          Normal   Pulled           39m (x4 over 40m)       kubelet, nuc1      Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
          Normal   Created          39m (x4 over 40m)       kubelet, nuc1      Created container
          Normal   Started          39m (x4 over 40m)       kubelet, nuc1      Started container
          Warning  BackOff          59s (x194 over 40m)     kubelet, nuc1      Back-off restarting failed container
    
  • @bryonbaker ,
    You can try deleting the two coredns pods; they will be re-created automatically.
    Are you in Lab 2.1 of LFD259?
    Thanks,
    -Chris
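
    A sketch of that suggestion, using the pod names from the output above (the CoreDNS Deployment's ReplicaSet re-creates the pods after deletion):

    ```shell
    # Delete both CoreDNS pods; fresh replacements are scheduled automatically:
    kubectl -n kube-system delete pod coredns-576cbf47c7-56thg coredns-576cbf47c7-nmznf

    # Watch the replacement pods come up:
    kubectl -n kube-system get pods -w
    ```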
