
11.2 Ingress Controller lab issue

Hi, I am trying to use helm to deploy the ingress controller as indicated in step 5, but I get this error:

~/ingress-nginx$ helm install myingress .
Error: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "myingress"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

I've checked all the resources out and haven't found such a resource. What can I do?

Answers

  • Hi @paristiz,

    It seems that the installation process found one of its defined resources already installed in the cluster. Was the helm install step executed earlier? Did it produce any errors then?

    You can use the helm uninstall command to remove a prior release.
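
    For example, assuming the release is still registered under the name myingress:

    helm list --all-namespaces
    helm uninstall myingress

    If no release shows up but the IngressClass "nginx" is left over from an earlier attempt, you could inspect it and, if nothing else depends on it, remove it so the install can recreate it:

    kubectl get ingressclass
    kubectl get ingressclass nginx -o yaml
    kubectl delete ingressclass nginx   # only if no other release or controller owns it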

    Regards,
    -Chris

    Hi Chris, thank you for your answer. I followed the instructions, specifically steps 4 and 5, which cover modifying the values.yaml file and installing myingress. The error was raised when I tried to install it, and nothing remains in helm:

    pablo@pablo-VirtualBox:~/ingress-nginx$ helm install myingress .
    Error: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "myingress"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

    pablo@pablo-VirtualBox:~/ingress-nginx$ helm list
    NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
    pablo@pablo-VirtualBox:~/ingress-nginx$

    As there isn't anything under helm, I can't use helm uninstall.

    Best regards,

    Pablo A.

  • serewicz

    Hello,

    I'm not sure what the issue is in particular, but perhaps we can do some troubleshooting. What version of helm are you using? Had you run this lab before, perhaps removed helm and re-added it? Do you have an ingress controller running in any namespace?
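
    Something along these lines should show your helm version and whether anything ingress-related is already present in the cluster:

    helm version --short
    helm list --all-namespaces
    kubectl get pods --all-namespaces | grep -i ingress
    kubectl get ingressclass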

    Regards,

  • melchior
    edited September 2021

    Found another issue related to that lab, which I will post as a separate entry.

  • Hi,
    Thank you all for the answers. I am sorry about the lack of feedback, but I was on vacation for a few weeks. After I came back, I read the other 11.2 post and used the solution proposed by Proliant:

    "The problem is solved after pulling an older version: helm fetch --version 3.36.0 ingress/ingress-nginx"

    Indeed, after pulling the older version the install no longer shows the previous error, but after some time it fails:

    pablo@pablo-VirtualBox:~/ingress-nginx$ helm install myingress .
    Error: failed pre-install: timed out waiting for the condition
    pablo@pablo-VirtualBox:~/ingress-nginx$ helm list
    NAME       NAMESPACE  REVISION  UPDATED                                  STATUS  CHART                 APP VERSION
    myingress  default    1         2021-10-17 23:10:43.034228936 -0300 -03  failed  ingress-nginx-3.36.0  0.49.0

    These are the pods in all the namespaces:

    pablo@pablo-VirtualBox:~/ingress-nginx$ kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    default myingress-ingress-nginx-admission-create-bjc2l 0/1 CrashLoopBackOff 5 7m36s
    default myingress-ingress-nginx-admission-patch-ccvq6 0/1 Completed 0 126m
    default web-one-fb4bdb45d-99hpn 1/1 Running 0 35m
    default web-one-fb4bdb45d-jsbbc 0/1 NodeAffinity 0 33h
    default web-two-fb4bdb45d-jfctd 1/1 Running 0 35m
    default web-two-fb4bdb45d-sk46h 0/1 NodeAffinity 0 33h
    kube-system calico-kube-controllers-7dbc97f587-855rq 1/1 Running 27 197d
    kube-system calico-node-stgg9 1/1 Running 30 203d
    kube-system calico-node-xpzpf 1/1 Running 12 203d
    kube-system coredns-f9fd979d6-hltnx 1/1 Running 13 98d
    kube-system coredns-f9fd979d6-wppvr 1/1 Running 13 98d
    kube-system etcd-pablo-virtualbox 1/1 Running 34 203d
    kube-system kube-apiserver-pablo-virtualbox 1/1 Running 37 203d
    kube-system kube-controller-manager-pablo-virtualbox 1/1 Running 56 203d
    kube-system kube-proxy-2hmfn 1/1 Running 30 203d
    kube-system kube-proxy-gbqtf 1/1 Running 12 203d
    kube-system kube-scheduler-pablo-virtualbox 1/1 Running 55 203d
    linkerd-viz grafana-b48ddb5d8-4njb7 1/1 Running 0 35m
    linkerd-viz grafana-b48ddb5d8-sn6wd 0/1 NodeAffinity 0 117m
    linkerd-viz metrics-api-9d86bf5f7-4nprg 0/1 CrashLoopBackOff 13 35m
    linkerd-viz metrics-api-9d86bf5f7-gvgdr 0/1 NodeAffinity 0 114m
    linkerd-viz prometheus-5b49dcd6bf-7mncz 0/1 NodeAffinity 0 114m
    linkerd-viz prometheus-5b49dcd6bf-bthvd 1/1 Running 0 35m
    linkerd-viz tap-77c87d66f4-95wv8 0/1 NodeAffinity 0 117m
    linkerd-viz tap-77c87d66f4-s2fsc 0/1 Running 13 35m
    linkerd-viz tap-injector-54d45cb47-btk7d 0/1 NodeAffinity 0 114m
    linkerd-viz tap-injector-54d45cb47-h6n5s 0/1 Running 13 35m
    linkerd-viz web-74c6bbd948-5ln9m 0/1 NodeAffinity 0 113m
    linkerd-viz web-74c6bbd948-rv77l 0/1 CrashLoopBackOff 13 35m
    linkerd linkerd-controller-75d67d694c-4j5rt 0/2 NodeAffinity 0 114m
    linkerd linkerd-controller-75d67d694c-qvzwn 2/2 Running 1 35m
    linkerd linkerd-destination-85894f6879-r9zll 0/2 NodeAffinity 0 114m
    linkerd linkerd-destination-85894f6879-vx2md 1/2 CrashLoopBackOff 13 35m
    linkerd linkerd-identity-c685df5b-fj6rb 0/2 NodeAffinity 0 114m
    linkerd linkerd-identity-c685df5b-nhpr2 2/2 Running 0 35m
    linkerd linkerd-proxy-injector-55977c4876-c689j 2/2 Running 0 35m
    linkerd linkerd-proxy-injector-55977c4876-kp9dl 0/2 NodeAffinity 0 115m
    linkerd linkerd-sp-validator-c9d67fc88-dmvg5 2/2 Running 1 35m
    linkerd linkerd-sp-validator-c9d67fc88-h94pt 0/2 NodeAffinity 0 114m

    I don't know how to proceed with this new error. Any help would be appreciated.

    Pablo A.

  • To complement the previous post, these are the last events of the pod related to myingress (myingress-ingress-nginx-admission-patch-ccvq6):

    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedScheduling 122m (x18 over 129m) default-scheduler 0/2 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.
    Normal Scheduled 121m default-scheduler Successfully assigned default/myingress-ingress-nginx-admission-patch-ccvq6 to pablo-virtualbox
    Warning FailedMount 121m kubelet, pablo-virtualbox MountVolume.SetUp failed for volume "myingress-ingress-nginx-admission-token-xg8zf" : failed to sync secret cache: timed out waiting for the condition
    Normal Pulling 121m kubelet, pablo-virtualbox Pulling image "docker.io/jettech/kube-webhook-certgen:v1.5.1"
    Normal Pulled 119m kubelet, pablo-virtualbox Successfully pulled image "docker.io/jettech/kube-webhook-certgen:v1.5.1" in 1m8.115190027s
    Normal Created 119m kubelet, pablo-virtualbox Created container patch
    Normal Started 119m kubelet, pablo-virtualbox Started container patch
    Normal SandboxChanged 119m kubelet, pablo-virtualbox Pod sandbox changed, it will be killed and re-created.
    Warning FailedCreatePodSandBox 119m kubelet, pablo-virtualbox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "myingress-ingress-nginx-admission-patch-ccvq6": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"\"": unknown

  • chrispokorni

    Hi @paristiz,

    It seems your nodes are not accepting workloads: one has a disk-pressure taint and the other is unreachable. Is one of your nodes running low on disk space? How is VirtualBox managing the virtual disk space for your nodes - statically or dynamically?

    The Events of the pods with CrashLoopBackOff and NodeAffinity statuses may provide more helpful details about the failure causes and error messages.
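
    For example, for the admission job pod that keeps crashing (substitute the actual pod names from your listing):

    kubectl describe pod myingress-ingress-nginx-admission-create-bjc2l -n default
    kubectl logs myingress-ingress-nginx-admission-create-bjc2l -n default --previous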

    What do you see with kubectl get nodes and kubectl describe node <node-name>?
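
    Something like the following, plus a quick check of free disk space on the affected node:

    kubectl get nodes
    kubectl describe node <node-name> | grep -i -A 5 taint
    df -h   # run on the affected node itself

    Once the node issues are resolved, the failed myingress release can be removed with helm uninstall myingress and the install retried.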

    Regards,
    -Chris
