Lab 6.2: haproxy DaemonSet pod in a crash loop due to missing ingressClass permissions

The haproxy ingress controller was created, as instructed by lab 6.2, with:
kubectl create -f https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml

But the pod ends up in a crash loop, due to missing ingressClass permissions:

star@ubuntu-2cpu-8gmem:~/LFS260/lab/lab6$ kubectl logs haproxy-ingress-q6dsx -n ingress-controller
I0630 07:34:20.991995       7 launch.go:218]
Name:       HAProxy
Release:    v0.12.5
Build:      git-7b9aacd
Repository: https://github.com/jcmoraisjr/haproxy-ingress
I0630 07:34:20.992471       7 launch.go:221] watching for ingress resources with 'kubernetes.io/ingress.class' annotation: haproxy
I0630 07:34:20.992483       7 launch.go:228] watching for ingress resources with IngressClass' controller name: haproxy-ingress.github.io/controller
I0630 07:34:20.992499       7 launch.go:233] ignoring ingress resources without any class reference - --watch-ingress-without-class is false
I0630 07:34:20.993219       7 launch.go:499] Creating API client for https://10.96.0.1:443
I0630 07:34:21.021565       7 launch.go:511] Running in Kubernetes Cluster version v1.20 (v1.20.1) - git (clean) commit c4d752765b3bbac2237bf87cf0b1c2e307844666 - platform linux/amd64
I0630 07:34:21.653929       7 listers.go:134] loading object cache...
E0630 07:34:21.662997       7 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.11/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0630 07:34:22.958354       7 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.11/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0630 07:34:26.074623       7 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.11/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0630 07:34:31.426082       7 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.11/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0630 07:34:40.692972       7 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.11/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
I0630 07:34:49.283554       7 main.go:47] Shutting down with signal terminated
I0630 07:34:49.283594       7 controller.go:208] shutting down controller queues
E0630 07:34:49.283712       7 listers.go:132] initial cache sync has timed out or shutdown has requested
I0630 07:34:49.283744       7 controller.go:87] HAProxy Ingress successfully initialized
I0630 07:34:49.283756       7 main.go:40] Exiting (0)

Comments

  • serewicz

    Hello,

    It looks like an RBAC error. The HAProxy site's "Get Started with HAProxy" page now suggests using Helm to install the proxy. Until I can update the lab to use Helm, you could either follow the steps mentioned on their website or add a service account to allow the ingress controller to run. If HAProxy has been updated and is much different than the previous version, there may be further hiccups.

    The best approach would be to create a new role that allows list access to ingressclasses in networking.k8s.io, as the error mentions. Once list access is granted, there may be other permissions you need to keep adding to the role. It may be easier for the lab to assign the cluster-admin ClusterRole to system:serviceaccount:ingress-controller:ingress-controller, but as this would grant full admin privileges, it would be a non-production step.
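
    For reference, a rough sketch of both options (the role and binding names below are made up for illustration; the service account name comes from the error message above):

    # Option 1: grant only the access the error message asks for
    kubectl create clusterrole ingressclass-reader \
        --verb=list,watch --resource=ingressclasses.networking.k8s.io
    kubectl create clusterrolebinding ingressclass-reader \
        --clusterrole=ingressclass-reader \
        --serviceaccount=ingress-controller:ingress-controller

    # Option 2: lab-only shortcut that grants full admin rights - not for production
    kubectl create clusterrolebinding haproxy-ingress-admin \
        --clusterrole=cluster-admin \
        --serviceaccount=ingress-controller:ingress-controller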

    Regards,

  • Thank you! I added the ingressclasses permission and it works.

    A few things to note that may cause issues with https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml:

    In this YAML you need to remove the DaemonSet's nodeSelector (or add the matching label to your nodes); otherwise no pods will be launched. See the commands below.
    You may also need to invalidate the cache after recreating the DaemonSet/ClusterRole for the change to take effect.
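
    For example, a sketch of handling the nodeSelector (the label key/value here are placeholders; use whatever the DaemonSet actually expects):

    # See which node label the DaemonSet's nodeSelector expects
    kubectl get ds haproxy-ingress -n ingress-controller \
        -o jsonpath='{.spec.template.spec.nodeSelector}'
    # Either label a worker node to match ...
    kubectl label node <worker-node> role=ingress-controller
    # ... or drop the nodeSelector from the DaemonSet entirely
    kubectl -n ingress-controller patch ds haproxy-ingress --type=json \
        -p='[{"op":"remove","path":"/spec/template/spec/nodeSelector"}]'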

  • serewicz

    Hello,

    Thanks for the feedback! I'll be working on the update shortly and will check it out.

    Regards,

  • I hit the same snag with this lab exercise today and added a ClusterRoleBinding to 'cluster-admin' to work around it, as suggested. But as per the previous comments, it's probably not advisable to suggest this, as people might get the idea that it is OK for production use.

  • chrispokorni

    Hi @craig.jones,

    Even with the ingress class annotation in place, I noticed that the haproxy-ingress container may still fail because of a liveness probe failure. I reconfigured the probe to target port 10254 instead of 10253 and it no longer failed.
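
    One way to make that change, assuming the probe is an httpGet on the first (and only) container in the DaemonSet:

    kubectl -n ingress-controller patch ds haproxy-ingress --type=json \
        -p='[{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe/httpGet/port","value":10254}]'
    # or edit interactively and change the livenessProbe port from 10253 to 10254:
    kubectl -n ingress-controller edit ds haproxy-ingress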

    Regards,
    -Chris

  • dnx

    This exercise generally did not work out for me.

    I ended up using Helm to get haproxy working, as it was originally failing with a connection refused error on its liveness probe. Once I used the Helm example from the haproxy-ingress site, the pod and service started; a sketch of those steps follows.
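
    Roughly the steps from the haproxy-ingress getting-started page (the chart values below may have changed since, so check their site for current ones):

    helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
    helm repo update
    helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
        --create-namespace --namespace ingress-controller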

    But actually exposing the 'tester' pod and accessing it via curl on a host IP did not work for me.

    If I look at the 'tester' pod and service:

    pod/tester-77f475f4f4-qmn2q      1/1     Running   0          98m   172.16.10.202   k8sworker1   <none>           <none>
    service/tester       ClusterIP   10.105.233.255   <none>        80/TCP     97m    app=tester
    

    The ingress controller service:

    ingress-controller   haproxy-ingress   LoadBalancer   10.109.96.38     <pending>     80:32422/TCP,443:32462/TCP   28m
    

    10.109.96.38 is what I ended up being able to curl to get to the nginx page, but this is not an externally exposed IP; it's a cluster IP only.

    I'm a little lost on how to expose it via a host IP. I think I can work it out (I've set up nginx ingress controllers in the past that were exposed externally), but maybe the lab is giving some misdirection?

  • chrispokorni

    Hi @dnx,

    The Service does get configured with a node port. Have you tried that with the host IP?

    Regards,
    -Chris

  • dnx

    Sorry @chrispokorni I don't understand your response. Which Service - the Ingress Controller? What is 'that'?

    I've tried specifying various host IPs in the ingress manifest, as well as connecting to the ingress with the host IP, and none of it has worked.

  • dnx

    Ahhh, worked it out: I could connect on a node port when specifying port 32422, as seen in the haproxy-ingress Service above. It's been a while since I've set up ingress and services; the lab not specifying a port (it must assume some load balancer in front?) threw me off. A sketch of the request is below.
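
    A sketch of the kind of request that works against the node port (32422 is the HTTP node port from the haproxy-ingress Service shown earlier; the Host header must match whatever host your Ingress rule uses, so www.example.com and the node IP are placeholders):

    curl -H "Host: www.example.com" http://<worker-node-ip>:32422/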
