Welcome to the Linux Foundation Forum!

Lab 13.3. How to fix metrics-server "no matches for kind "APIService"" error?

Hi!

I'm stuck on lab 13.3, step 3:

  [08:32]user@ubuntu-vbox-k8s-master[metrics-server]$ kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
  clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
  clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
  rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
  serviceaccount/metrics-server created
  deployment.apps/metrics-server created
  service/metrics-server created
  clusterrole.rbac.authorization.k8s.io/system:metrics-server created
  clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
  error: unable to recognize "https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml": no matches for kind "APIService" in version "apiregistration.k8s.io/v1beta1"

I've seen "Be aware as new versions are released there may be some changes to the process and
the created objects" but could not fix it anyway.

Could anybody help?

Answers

  • Posts: 1,000

    Hello,

    What version of the software are you using? This is what I see when I run the steps with a setup as declared in the lab:

    root@cp:~# cd metrics-server/ ; less README.md
    root@cp:~/metrics-server# kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    serviceaccount/metrics-server created
    deployment.apps/metrics-server created
    service/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    root@cp:~/metrics-server#

  • I got the same error and the following seemed to fix it:
    1) run kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml to delete the components created by the 0.3.7 file
    2) run kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml as suggested in the project README.

    The problem seems to be that the apiregistration.k8s.io/v1beta1 API was deprecated in Kubernetes 1.19 and removed in 1.22, as noted in serewicz's output. The version I'm on is 1.22, and I believe that's what the course expects us to be on at this point. So the lab should probably be updated to not point us at a components file that won't work with our version.
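
    To make the failure concrete: the v0.3.7 components.yaml registers the metrics API with an APIService object under apiregistration.k8s.io/v1beta1, which no longer exists on 1.22+ clusters, while current releases ship the same registration under the GA group version. A sketch of the difference (the spec fields are illustrative; check the components.yaml you actually apply):

    ```yaml
    # Old (v0.3.7 components.yaml): rejected on Kubernetes 1.22+,
    # because apiregistration.k8s.io/v1beta1 was removed in 1.22.
    apiVersion: apiregistration.k8s.io/v1beta1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
    ---
    # Current releases: the same registration, under the GA API group version.
    # Note the object name still contains "v1beta1" -- that refers to the
    # metrics.k8s.io API being registered, not to apiregistration itself.
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      version: v1beta1
      service:
        name: metrics-server
        namespace: kube-system
      groupPriorityMinimum: 100
      versionPriority: 100
    ```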

  • Posts: 2
    edited November 2021

    I agree; it seems the lessons need some updates. In a previous lab we upgraded our instances/nodes to Kubernetes 1.22.1, so at this point the YAML is expected to be compatible with that version. Thanks for posting the URL of the compatible YAML.

  • Posts: 1,000

    Hello,

    Step 2 of exercise 13.3 mentions the software changes and to reference the current README.md file for updated information.

    Step 3 of exercise 13.3 says

    "Be aware as new versions are released there may be some changes to the process and
    the created objects. Use the components.yaml to create the objects."

    If one reads the README and follows the most current directions, the labs work; at least as of today they do. With dynamic software there are times when an update or change causes an issue, and one would need to revisit the steps when the new fix is produced.

    Regards,

  • Posts: 6

    Also add --enable-aggregator-routing=true to the args. I had the issue resolved after running kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml and adding the above argument.
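
    For anyone unsure where that argument goes: --enable-aggregator-routing=true is a kube-apiserver flag, not a metrics-server one. Assuming a kubeadm-built cluster (paths differ on other installs), the API server runs as a static pod, so you add the flag to its manifest on the control plane node and the kubelet restarts the pod automatically when the file changes:

    ```yaml
    # /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout, excerpt)
    spec:
      containers:
      - command:
        - kube-apiserver
        # ...existing flags unchanged...
        - --enable-aggregator-routing=true  # route aggregated API requests to the service's endpoints
    ```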

  • Posts: 17
    edited March 2023

    I had this and had to delete the metrics server by running this:

    kubectl delete service/metrics-server -n kube-system
    kubectl delete deployment.apps/metrics-server -n kube-system
    kubectl delete apiservices.apiregistration.k8s.io v1beta1.metrics.k8s.io
    kubectl delete clusterroles.rbac.authorization.k8s.io system:aggregated-metrics-reader
    kubectl delete clusterroles.rbac.authorization.k8s.io system:metrics-server
    kubectl delete clusterrolebinding metrics-server:system:auth-delegator
    kubectl delete clusterrolebinding system:metrics-server
    kubectl delete rolebinding metrics-server-auth-reader -n kube-system
    kubectl delete serviceaccount metrics-server -n kube-system

    Then you can use the correct command for your current version of Kubernetes (mine is 1.25.1); see the README.md at https://github.com/kubernetes-sigs/metrics-server, which in my case says to install the latest release, so I run:

    kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    hope this helps someone...

  • I think we should use apply instead of create; got this from Stack Overflow:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    and it worked!

  • Posts: 45

    Hi guys. My kubectl versions are: Client Version: v1.26.1, Kustomize Version: v4.5.7, Server Version: v1.26.1.
    I used @nicocerquera's command, but I still have the same problem. I'd appreciate your help, because I don't understand how to solve this.

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    student@cp:~/metrics-server$ kubectl -n kube-system get pods
    NAME READY STATUS RESTARTS AGE
    calico-kube-controllers-74677b4c5f-8sp68 1/1 Running 32 (31m ago) 62d
    calico-node-fk86z 1/1 Running 33 (31m ago) 63d
    calico-node-qnkpr 1/1 Running 33 (31m ago) 63d
    coredns-787d4945fb-q7dm4 1/1 Running 24 (31m ago) 50d
    coredns-787d4945fb-scn4t 1/1 Running 24 (31m ago) 50d
    etcd-cp 1/1 Running 32 (31m ago) 62d
    kube-apiserver-cp 1/1 Running 32 (31m ago) 62d
    kube-controller-manager-cp 1/1 Running 32 (31m ago) 62d
    kube-proxy-8jhd4 1/1 Running 32 (31m ago) 62d
    kube-proxy-vc97n 1/1 Running 32 (31m ago) 62d
    kube-scheduler-cp 1/1 Running 32 (31m ago) 62d
    metrics-server-6f6cdbf67d-rffkr 0/1 Running 0 8m38s
    1. kubectl -n kube-system describe po metrics-server-6f6cdbf67d-rffkr
    2. Name: metrics-server-6f6cdbf67d-rffkr
    3. Namespace: kube-system
    4. Priority: 2000000000
    5. Priority Class Name: system-cluster-critical
    6. Service Account: metrics-server
    7. Node: worker/10.2.0.5
    8. Start Time: Wed, 17 May 2023 19:32:10 +0000
    9. Labels: k8s-app=metrics-server
    10. pod-template-hash=6f6cdbf67d
    11. Annotations: cni.projectcalico.org/containerID: 2a4379c31899206a82e701468440e55cc8ef69feb789ef16b1b8b025f97dd4a6
    12. cni.projectcalico.org/podIP: 192.168.171.115/32
    13. cni.projectcalico.org/podIPs: 192.168.171.115/32
    14. Status: Running
    15. IP: 192.168.171.115
    16. IPs:
    17. IP: 192.168.171.115
    18. Controlled By: ReplicaSet/metrics-server-6f6cdbf67d
    19. Containers:
    20. metrics-server:
    21. Container ID: containerd://12d3fcbf84104ccff99a582ae30824bf4fc331ee1808826aa07a2f0a0fd19f3f
    22. Image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
    23. Image ID: registry.k8s.io/metrics-server/metrics-server@sha256:c60778fa1c44d0c5a0c4530ebe83f9243ee6fc02f4c3dc59226c201931350b10
    24. Port: 4443/TCP
    25. Host Port: 0/TCP
    26. Args:
    27. --cert-dir=/tmp
    28. --secure-port=4443
    29. --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    30. --kubelet-use-node-status-port
    31. --metric-resolution=15s
    32. State: Running
    33. Started: Wed, 17 May 2023 19:32:12 +0000
    34. Ready: False
    35. Restart Count: 0
    36. Requests:
    37. cpu: 100m
    38. memory: 200Mi
    39. Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    40. Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    41. Environment: <none>
    42. Mounts:
    43. /tmp from tmp-dir (rw)
    44. /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpcff (ro)
    45. Conditions:
    46. Type Status
    47. Initialized True
    48. Ready False
    49. ContainersReady False
    50. PodScheduled True
    51. Volumes:
    52. tmp-dir:
    53. Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    54. Medium:
    55. SizeLimit: <unset>
    56. kube-api-access-rpcff:
    57. Type: Projected (a volume that contains injected data from multiple sources)
    58. TokenExpirationSeconds: 3607
    59. ConfigMapName: kube-root-ca.crt
    60. ConfigMapOptional: <nil>
    61. DownwardAPI: true
    62. QoS Class: Burstable
    63. Node-Selectors: kubernetes.io/os=linux
    64. Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    65. node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    66. Events:
    67. Type Reason Age From Message
    68. ---- ------ ---- ---- -------
    69. Normal Scheduled 10m default-scheduler Successfully assigned kube-system/metrics-server-6f6cdbf67d-rffkr to worker
    70. Normal Pulling 10m kubelet Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.6.3"
    71. Normal Pulled 10m kubelet Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" in 1.831113823s (1.831122166s including waiting)
    72. Normal Created 10m kubelet Created container metrics-server
    73. Normal Started 10m kubelet Started container metrics-server
    74. Warning Unhealthy 36s (x66 over 10m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
  • Posts: 2,435

    Hi @maybel,

    Have you tried the edits recommended in the comment linked below?

    https://forum.linuxfoundation.org/discussion/comment/33291/#Comment_33291

    For the most part they are found in step 5 of the lab exercise, and perhaps missed based on the describe output.

    Regards,
    -Chris

  • Posts: 45

    Hi @chrispokorni, nice to see you! As you can see in line 22 of my output, I have the latest metrics-server version. So I don't know what else I can do. I deleted everything as Dicalleson did and installed it again, but I got the same problem.
    I wonder if the lines below offer any clue about the problem.

    kubectl top pods
    Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
  • Posts: 2,435

    Hi @maybel,

    I meant the edits recommended in step 5 of the lab, which I added in the text box for clarity. They would appear somewhere after line 26, Args:.
    I did not see them reflected in your describe output. Only upgrading to the latest metrics-server release is not sufficient. Read step 5 again and perform the suggested edits, and compare with my recommendation from the earlier comment (linked above) as well...

    Regards,
    -Chris

  • Posts: 45
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
        imagePullPolicy: IfNotPresent

    I did step 5 of exercise 13.3, and I made some progress.

    NAME READY STATUS RESTARTS AGE
    calico-kube-controllers-74677b4c5f-8sp68 1/1 Running 33 (152m ago) 63d
    calico-node-fk86z 1/1 Running 34 (152m ago) 64d
    calico-node-qnkpr 1/1 Running 34 (152m ago) 64d
    coredns-787d4945fb-q7dm4 1/1 Running 25 (152m ago) 51d
    coredns-787d4945fb-scn4t 1/1 Running 25 (152m ago) 51d
    etcd-cp 1/1 Running 33 (152m ago) 63d
    kube-apiserver-cp 1/1 Running 33 (152m ago) 63d
    kube-controller-manager-cp 1/1 Running 33 (152m ago) 63d
    kube-proxy-8jhd4 1/1 Running 33 (152m ago) 63d
    kube-proxy-vc97n 1/1 Running 33 (152m ago) 63d
    kube-scheduler-cp 1/1 Running 33 (152m ago) 63d
    metrics-server-6f6cdbf67d-z6c27 0/1 Running 0 56s
    metrics-server-7d4dc74cd9-d65cg 1/1 Running 0 2m42s

    The output below is the Events of the pod metrics-server-6f6cdbf67d-z6c27 above.

    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled 102s default-scheduler Successfully assigned kube-system/metrics-server-6f6cdbf67d-z6c27 to worker
    Normal Pulled 101s kubelet Container image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" already present on machine
    Normal Created 101s kubelet Created container metrics-server
    Normal Started 101s kubelet Started container metrics-server
    Warning Unhealthy 2s (x9 over 72s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
  • Posts: 45

    @chrispokorni, it's working well! I repeated step 5 of exercise 13.3 because I had done something wrong, and it worked. Thank you so much!
