Welcome to the Linux Foundation Forum!

Lab 13.3. How to fix metrics-server "no matches for kind "APIService"" error?

Hi!

I'm stuck on lab 13.3, step 3:

[08:32]user@ubuntu-vbox-k8s-master[metrics-server]$ kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
error: unable to recognize "https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml": no matches for kind "APIService" in version "apiregistration.k8s.io/v1beta1"

I've seen the note "Be aware as new versions are released there may be some changes to the process and
the created objects", but I could not fix it anyway.

Could anybody help?

Answers

  • serewicz
    serewicz Posts: 1,000

    Hello,

    What version of the software are you using? This is what I see when I run the steps with a setup as declared in the lab:

    root@cp:~# cd metrics-server/ ; less README.md
    root@cp:~/metrics-server# kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    serviceaccount/metrics-server created
    deployment.apps/metrics-server created
    service/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    root@cp:~/metrics-server#

  • I got the same error and the following seemed to fix it:
    1) run kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml to delete the components created by the 0.3.7 file
    2) run kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml as suggested in the project README.

    The problem seems to be that the apiregistration.k8s.io/v1beta1 API is deprecated and removed entirely as of Kubernetes 1.22, as noted in serewicz's output. I'm on 1.22, and I believe that's the version the course expects us to be on at this point, so the lab should probably be updated to not point us at a components file that won't work with that version.
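
    A quick way to confirm what the cluster actually serves (a general kubectl check, not part of the lab steps) is to list the registered API versions:

    ```shell
    # On Kubernetes 1.22+ this should print only apiregistration.k8s.io/v1;
    # the v1beta1 version referenced by the 0.3.7 manifest is no longer served.
    kubectl api-versions | grep apiregistration
    ```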

  • goosecoder
    goosecoder Posts: 2
    edited November 2021

    I agree, it seems the lessons need some updates. In a previous lab we upgraded our instances/nodes to Kubernetes 1.22.1, so at this point that YAML is expected to be compatible with that version. Thanks for posting the URL of the compatible YAML.

  • serewicz
    serewicz Posts: 1,000

    Hello,

    Step 2 of exercise 13.3 mentions the software changes and to reference the current README.md file for updated information.

    Step 3 of exercise 13.3 says

    "Be aware as new versions are released there may be some changes to the process and
    the created objects. Use the components.yaml to create the objects."

    If one reads the README and follows the most current directions, the labs work; at least they do as of today. With dynamic software there are times when an update or change causes an issue, and one would need to revisit the steps when the new fix is produced.

    Regards,

  • cvrupesh
    cvrupesh Posts: 6

    Also add --enable-aggregator-routing=true to the args. My issue was resolved after running kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml and adding that argument.
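
    Worth noting: --enable-aggregator-routing=true is a kube-apiserver flag, not a metrics-server one. A sketch of where it goes, assuming a kubeadm cluster with the default static pod manifest path:

    ```shell
    # Edit the kube-apiserver static pod manifest on the control plane node;
    # the kubelet restarts the apiserver automatically when the file changes.
    sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
    # then add under spec.containers[0].command:
    #   - --enable-aggregator-routing=true
    ```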

  • dicalleson
    dicalleson Posts: 17
    edited March 2023

    I had this and had to delete the metrics server by running this:

    kubectl delete service/metrics-server -n kube-system
    kubectl delete deployment.apps/metrics-server -n kube-system
    kubectl delete apiservices.apiregistration.k8s.io v1beta1.metrics.k8s.io
    kubectl delete clusterroles.rbac.authorization.k8s.io system:aggregated-metrics-reader
    kubectl delete clusterroles.rbac.authorization.k8s.io system:metrics-server
    kubectl delete clusterrolebinding metrics-server:system:auth-delegator
    kubectl delete clusterrolebinding system:metrics-server
    kubectl delete rolebinding metrics-server-auth-reader -n kube-system
    kubectl delete serviceaccount metrics-server -n kube-system

    Then you can use the correct command for your current version of K8s (mine is 1.25.1). See the README.md at https://github.com/kubernetes-sigs/metrics-server, which in my case says to install the "latest", so I run:

    kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    hope this helps someone...

  • I think we should use apply instead of create; I got this from Stack Overflow:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    

    and it worked!

  • maybel
    maybel Posts: 45

    Hi guys. My kubectl version is: Client Version: v1.26.1,
    Kustomize Version: v4.5.7,
    Server Version: v1.26.1.
    I used @nicocerquera's line of code, but I still have the same problem. I'd appreciate your help because I don't understand how to solve it.
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    student@cp:~/metrics-server$ kubectl -n kube-system get pods
    NAME                                       READY   STATUS    RESTARTS       AGE
    calico-kube-controllers-74677b4c5f-8sp68   1/1     Running   32 (31m ago)   62d
    calico-node-fk86z                          1/1     Running   33 (31m ago)   63d
    calico-node-qnkpr                          1/1     Running   33 (31m ago)   63d
    coredns-787d4945fb-q7dm4                   1/1     Running   24 (31m ago)   50d
    coredns-787d4945fb-scn4t                   1/1     Running   24 (31m ago)   50d
    etcd-cp                                    1/1     Running   32 (31m ago)   62d
    kube-apiserver-cp                          1/1     Running   32 (31m ago)   62d
    kube-controller-manager-cp                 1/1     Running   32 (31m ago)   62d
    kube-proxy-8jhd4                           1/1     Running   32 (31m ago)   62d
    kube-proxy-vc97n                           1/1     Running   32 (31m ago)   62d
    kube-scheduler-cp                          1/1     Running   32 (31m ago)   62d
    metrics-server-6f6cdbf67d-rffkr            0/1     Running   0              8m38s
    
    
     kubectl -n kube-system describe po metrics-server-6f6cdbf67d-rffkr
    Name:                 metrics-server-6f6cdbf67d-rffkr
    Namespace:            kube-system
    Priority:             2000000000
    Priority Class Name:  system-cluster-critical
    Service Account:      metrics-server
    Node:                 worker/10.2.0.5
    Start Time:           Wed, 17 May 2023 19:32:10 +0000
    Labels:               k8s-app=metrics-server
                          pod-template-hash=6f6cdbf67d
    Annotations:          cni.projectcalico.org/containerID: 2a4379c31899206a82e701468440e55cc8ef69feb789ef16b1b8b025f97dd4a6
                          cni.projectcalico.org/podIP: 192.168.171.115/32
                          cni.projectcalico.org/podIPs: 192.168.171.115/32
    Status:               Running
    IP:                   192.168.171.115
    IPs:
      IP:           192.168.171.115
    Controlled By:  ReplicaSet/metrics-server-6f6cdbf67d
    Containers:
      metrics-server:
        Container ID:  containerd://12d3fcbf84104ccff99a582ae30824bf4fc331ee1808826aa07a2f0a0fd19f3f
        Image:         registry.k8s.io/metrics-server/metrics-server:v0.6.3
        Image ID:      registry.k8s.io/metrics-server/metrics-server@sha256:c60778fa1c44d0c5a0c4530ebe83f9243ee6fc02f4c3dc59226c201931350b10
        Port:          4443/TCP
        Host Port:     0/TCP
        Args:
          --cert-dir=/tmp
          --secure-port=4443
          --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          --kubelet-use-node-status-port
          --metric-resolution=15s
        State:          Running
          Started:      Wed, 17 May 2023 19:32:12 +0000
        Ready:          False
        Restart Count:  0
        Requests:
          cpu:        100m
          memory:     200Mi
        Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
        Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
        Environment:  <none>
        Mounts:
          /tmp from tmp-dir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpcff (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      tmp-dir:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
        SizeLimit:  <unset>
      kube-api-access-rpcff:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Burstable
    Node-Selectors:              kubernetes.io/os=linux
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason     Age                 From               Message
      ----     ------     ----                ----               -------
      Normal   Scheduled  10m                 default-scheduler  Successfully assigned kube-system/metrics-server-6f6cdbf67d-rffkr to worker
      Normal   Pulling    10m                 kubelet            Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.6.3"
      Normal   Pulled     10m                 kubelet            Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" in 1.831113823s (1.831122166s including waiting)
      Normal   Created    10m                 kubelet            Created container metrics-server
      Normal   Started    10m                 kubelet            Started container metrics-server
      Warning  Unhealthy  36s (x66 over 10m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
    
  • chrispokorni
    chrispokorni Posts: 2,376

    Hi @maybel,

    Have you tried the edits recommended in the comment linked below?

    https://forum.linuxfoundation.org/discussion/comment/33291/#Comment_33291

    For the most part they are found in step 5 of the lab exercise, and perhaps missed based on the describe output.

    Regards,
    -Chris

  • maybel
    maybel Posts: 45

    Hi @chrispokorni, nice to see you! As you can see in line 22 of my output, I have the latest metrics-server version. So I don't know what else I can do. I deleted everything as dicalleson described and installed it again, but I got the same problem.
    I wonder if the lines below offer any clue about the problem.

    kubectl top pods
    Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
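
    When kubectl top fails with ServiceUnavailable, the aggregated APIService registration is usually the first thing to inspect; a general diagnostic sketch, not from the lab itself:

    ```shell
    # An Available condition of False here, combined with the failing readiness
    # probe in the describe output, points at the metrics-server pod itself
    # rather than at the API registration.
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl describe apiservice v1beta1.metrics.k8s.io | grep -A2 Available
    ```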
    
  • chrispokorni
    chrispokorni Posts: 2,376

    Hi @maybel,

    I meant the edits recommended in step 5 of the lab, which I added in the text box for clarity. They would appear somewhere after line 26, Args:.
    I did not see them reflected in your describe output. Upgrading to the latest metrics-server release alone is not sufficient. Read step 5 again and perform the suggested edits, and compare with my recommendation from the earlier comment (linked above) as well...
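
    For anyone who prefers a non-interactive route, the step 5 edit can also be applied with a JSON patch; a sketch assuming the deployment name and namespace from the lab, and that metrics-server is the first container in the pod spec:

    ```shell
    # Append --kubelet-insecure-tls to the metrics-server container args;
    # equivalent to the manual edit described in step 5 of the lab.
    kubectl -n kube-system patch deployment metrics-server --type='json' \
      -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
    ```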

    Regards,
    -Chris

  • maybel
    maybel Posts: 45
    spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
            imagePullPolicy: IfNotPresent
    

    I did step 5 of lab 13.3, and made some progress.

    NAME                                       READY   STATUS    RESTARTS        AGE
    calico-kube-controllers-74677b4c5f-8sp68   1/1     Running   33 (152m ago)   63d
    calico-node-fk86z                          1/1     Running   34 (152m ago)   64d
    calico-node-qnkpr                          1/1     Running   34 (152m ago)   64d
    coredns-787d4945fb-q7dm4                   1/1     Running   25 (152m ago)   51d
    coredns-787d4945fb-scn4t                   1/1     Running   25 (152m ago)   51d
    etcd-cp                                    1/1     Running   33 (152m ago)   63d
    kube-apiserver-cp                          1/1     Running   33 (152m ago)   63d
    kube-controller-manager-cp                 1/1     Running   33 (152m ago)   63d
    kube-proxy-8jhd4                           1/1     Running   33 (152m ago)   63d
    kube-proxy-vc97n                           1/1     Running   33 (152m ago)   63d
    kube-scheduler-cp                          1/1     Running   33 (152m ago)   63d
    metrics-server-6f6cdbf67d-z6c27            0/1     Running   0               56s
    metrics-server-7d4dc74cd9-d65cg            1/1     Running   0               2m42s
    
    

    The output below shows the Events of the pod metrics-server-6f6cdbf67d-z6c27 listed above.

    Events:
      Type     Reason     Age               From               Message
      ----     ------     ----              ----               -------
      Normal   Scheduled  102s              default-scheduler  Successfully assigned kube-system/metrics-server-6f6cdbf67d-z6c27 to worker
      Normal   Pulled     101s              kubelet            Container image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" already present on machine
      Normal   Created    101s              kubelet            Created container metrics-server
      Normal   Started    101s              kubelet            Started container metrics-server
      Warning  Unhealthy  2s (x9 over 72s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
    
    
  • maybel
    maybel Posts: 45

    @chrispokorni! It's working well!! I repeated step 5 of 13.3 because I had done something wrong, and it worked. Thank you so much.
