
nginx-ingress-controller CrashLoopBackOff (Lab 9)

AngeloC Posts: 2
edited December 2017 in LFS258 Class Forum

Hi All,

When I use a multi-node cluster (e.g., 1 master and 1 worker node), I get the following error with the nginx-ingress-controller Pod:

 

$ kubectl get po -w

NAME                             READY     STATUS
nginx-ingress-controller-p86r8   1/1       Running
nginx-ingress-controller-p86r8   0/1       Error
nginx-ingress-controller-p86r8   0/1       CrashLoopBackOff
nginx-ingress-controller-p86r8   1/1       Running
nginx-ingress-controller-p86r8   0/1       Error
nginx-ingress-controller-p86r8   0/1       CrashLoopBackOff

$ kubectl get po,rc,svc

NAME                                READY     STATUS
po/default-http-backend-29mvr       1/1       Running
po/nginx-ingress-controller-p86r8   0/1       Error

NAME                          DESIRED   CURRENT   READY
rc/default-http-backend       1         1         1
rc/nginx-ingress-controller   1         1         1

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)
svc/default-http-backend   ClusterIP   10.106.8.239   <none>        80/TCP
svc/kubernetes             ClusterIP   10.96.0.1      <none>        443/TCP

I’m using the backend.yaml manifest provided in Lab 9.

(I should mention that the Pod-to-Pod network is configured with the Weave Net add-on and works fine, so I don’t think the issue is related to Weave Net.)
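When a Pod goes into CrashLoopBackOff, a useful first step is to pull the logs from the previous (failed) container run and check the Pod's events. A minimal sketch, using the Pod name from the output above:

```shell
# Logs from the last failed run of the crashing container
kubectl logs nginx-ingress-controller-p86r8 --previous

# Restart count, last state, and recent events for the Pod
kubectl describe po nginx-ingress-controller-p86r8

# Cluster events sorted oldest-first, which often show why the container exited
kubectl get events --sort-by=.metadata.creationTimestamp
```

The `--previous` flag matters here: once the container restarts, plain `kubectl logs` shows the new (briefly Running) instance rather than the run that crashed.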

 

However, when I use minikube, everything works fine.

 

Has anyone had the same issue? Any solutions?

 

Thanks,

Angelo

Comments

  • $ kubectl get nodes -o wide

    NAME                        STATUS   ROLES    AGE   VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
    ubuntu-master.example.com   Ready    master   4d    v1.8.4    <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://1.13.1
    ubuntu-node1.example.com    Ready    <none>   4d    v1.8.4    <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://1.13.1

  • reifnir Posts: 2
    edited January 2018

    I have the same issue on minikube with server versions 1.9.0 and 1.8.0.

    If you delete and create a fresh cluster and then follow the instructions in Lab 9, you see this exact issue.

  • fcioanca Posts: 1,886
    edited January 2018

    Please make sure you have access to the latest lab exercises, which went live Thursday. All of the older labs were replaced with new and improved ones. Clear your browser cache to make sure you can access the new labs.
  • reifnir Posts: 2
    edited January 2018

    The manifests I was creating spun up an ingress controller and a default backend. In 1.9.0, it appears those workloads exist out of the box in the kube-system namespace.
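If the controller and backend already exist in kube-system, creating a second set from the lab manifest could explain the conflict. A quick way to check, assuming a minikube or kubeadm cluster:

```shell
# See whether an ingress controller or default backend already runs in kube-system
kubectl get po,rc,deploy -n kube-system | grep -i -e ingress -e default-http-backend

# On minikube, the built-in controller is managed as an addon
minikube addons list
```

If the grep returns existing workloads, deleting the lab's duplicate copies (or disabling the minikube ingress addon) should resolve the clash.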

  • serewicz Posts: 1,000
    edited January 2018

    Hello,

    Have you had this issue with v1.9.1, the version the updated labs were written for, using Flannel or Calico? I'm trying to find out whether this remains an issue, and to narrow down the differences before starting the troubleshooting process.

    Regards,
