Lab 13.3. How to fix metrics-server "no matches for kind "APIService"" error?
Hi!
I'm stuck on lab 13.3, step 3:
[08:32]user@ubuntu-vbox-k8s-master[metrics-server]$ kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
error: unable to recognize "https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml": no matches for kind "APIService" in version "apiregistration.k8s.io/v1beta1"
I've seen "Be aware as new versions are released there may be some changes to the process and
the created objects" but could not fix it anyway.
Could anybody help?
Answers
-
Hello,
What version of the software are you using? This is what I see when I run the steps with a setup as declared in the lab:
root@cp:~# cd metrics-server/ ; less README.md
root@cp:~/metrics-server# kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
root@cp:~/metrics-server#
-
I got the same error and the following seemed to fix it:
1) run
kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
to delete the components created by the v0.3.7 file
2) run
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
as suggested in the project README.
The problem seems to be that the apiregistration.k8s.io/v1beta1 API is out of beta and no longer served as of Kubernetes 1.22, as noted in serewicz's output. The version I'm on is 1.22, and I believe that's what the course expects us to be on at this point. So the lab should probably be updated so it no longer tells us to use a components file that won't work with our version.
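In case it helps, a quick way to confirm which apiregistration versions your own cluster still serves before applying a manifest (assuming kubectl is pointed at the lab cluster):
kubectl api-versions | grep apiregistration
# on a v1.22+ cluster this prints only apiregistration.k8s.io/v1,
# which is why the v1beta1 APIService object in the v0.3.7 manifest is rejected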
-
I agree, it seems the lessons need some updates. In a previous lesson's lab we upgraded our instances/nodes to Kubernetes 1.22.1, so at this point the yaml is expected to be compatible with that k8s version. Thanks for posting the URL of the compatible yaml.
-
Hello,
Step 2 of exercise 13.3 mentions the software changes and to reference the current README.md file for updated information.
Step 3 of exercise 13.3 says
"Be aware as new versions are released there may be some changes to the process and
the created objects. Use the components.yaml to create the objects."
Should one read the README and follow the most current directions, the labs work. At least as of today they do. With dynamic software there are times when an update or change causes an issue, and one would need to revisit the steps when a new fix is produced.
Regards,
-
Serewicz, that's a completely unacceptable answer. Yes, we can google it, read READMEs, and search around, but having to do this repeatedly for problems in the instructions does not make for a great learning experience.
You should update the instructions for this exercise and the others.
-
Also use - --enable-aggregator-routing=true in the args. I had the issue resolved after running kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml and adding the above argument.
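If it helps, I believe that flag is an argument to kube-apiserver, so on a kubeadm cluster it would go in the static pod manifest on the control plane node. A minimal sketch, assuming the default kubeadm path (your setup may differ):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-aggregator-routing=true
    # ...keep the existing flags unchanged; the kubelet restarts the apiserver after the file is saved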
-
I had this and had to delete the metrics server by running this:
kubectl delete service/metrics-server -n kube-system
kubectl delete deployment.apps/metrics-server -n kube-system
kubectl delete apiservices.apiregistration.k8s.io v1beta1.metrics.k8s.io
kubectl delete clusterroles.rbac.authorization.k8s.io system:aggregated-metrics-reader
kubectl delete clusterroles.rbac.authorization.k8s.io system:metrics-server
kubectl delete clusterrolebinding metrics-server:system:auth-delegator
kubectl delete clusterrolebinding system:metrics-server
kubectl delete rolebinding metrics-server-auth-reader -n kube-system
kubectl delete serviceaccount metrics-server -n kube-system
Then you can use the correct command for your current version of K8s (mine is 1.25.1), i.e. see the README.md at https://github.com/kubernetes-sigs/metrics-server, which in my case says to install the "latest", so I run...
kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
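To double-check the reinstall worked (assuming the default install in kube-system; it can take a minute or two after the new pod becomes Ready), you can run:
kubectl get apiservice v1beta1.metrics.k8s.io
# the AVAILABLE column should show True
kubectl top nodes
# should print CPU/memory figures instead of an error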
hope this helps someone...
-
I think we should use apply instead of create; got this from Stack Overflow:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
and it worked!
-
Hi guys. My kubectl version is: Client Version: v1.26.1
Kustomize Version: v4.5.7
Server Version: v1.26.1
I used the @nicocerquera line of code, but I still have the same problem. I appreciate your help because I don't understand how to solve it.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
student@cp:~/metrics-server$ kubectl -n kube-system get pods
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-74677b4c5f-8sp68   1/1     Running   32 (31m ago)   62d
calico-node-fk86z                          1/1     Running   33 (31m ago)   63d
calico-node-qnkpr                          1/1     Running   33 (31m ago)   63d
coredns-787d4945fb-q7dm4                   1/1     Running   24 (31m ago)   50d
coredns-787d4945fb-scn4t                   1/1     Running   24 (31m ago)   50d
etcd-cp                                    1/1     Running   32 (31m ago)   62d
kube-apiserver-cp                          1/1     Running   32 (31m ago)   62d
kube-controller-manager-cp                 1/1     Running   32 (31m ago)   62d
kube-proxy-8jhd4                           1/1     Running   32 (31m ago)   62d
kube-proxy-vc97n                           1/1     Running   32 (31m ago)   62d
kube-scheduler-cp                          1/1     Running   32 (31m ago)   62d
metrics-server-6f6cdbf67d-rffkr            0/1     Running   0              8m38s
kubectl -n kube-system describe po metrics-server-6f6cdbf67d-rffkr
Name:                 metrics-server-6f6cdbf67d-rffkr
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      metrics-server
Node:                 worker/10.2.0.5
Start Time:           Wed, 17 May 2023 19:32:10 +0000
Labels:               k8s-app=metrics-server
                      pod-template-hash=6f6cdbf67d
Annotations:          cni.projectcalico.org/containerID: 2a4379c31899206a82e701468440e55cc8ef69feb789ef16b1b8b025f97dd4a6
                      cni.projectcalico.org/podIP: 192.168.171.115/32
                      cni.projectcalico.org/podIPs: 192.168.171.115/32
Status:               Running
IP:                   192.168.171.115
IPs:
  IP:           192.168.171.115
Controlled By:  ReplicaSet/metrics-server-6f6cdbf67d
Containers:
  metrics-server:
    Container ID:  containerd://12d3fcbf84104ccff99a582ae30824bf4fc331ee1808826aa07a2f0a0fd19f3f
    Image:         registry.k8s.io/metrics-server/metrics-server:v0.6.3
    Image ID:      registry.k8s.io/metrics-server/metrics-server@sha256:c60778fa1c44d0c5a0c4530ebe83f9243ee6fc02f4c3dc59226c201931350b10
    Port:          4443/TCP
    Host Port:     0/TCP
    Args:
      --cert-dir=/tmp
      --secure-port=4443
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --kubelet-use-node-status-port
      --metric-resolution=15s
    State:          Running
      Started:      Wed, 17 May 2023 19:32:12 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpcff (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-rpcff:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  10m                 default-scheduler  Successfully assigned kube-system/metrics-server-6f6cdbf67d-rffkr to worker
  Normal   Pulling    10m                 kubelet            Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.6.3"
  Normal   Pulled     10m                 kubelet            Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" in 1.831113823s (1.831122166s including waiting)
  Normal   Created    10m                 kubelet            Created container metrics-server
  Normal   Started    10m                 kubelet            Started container metrics-server
  Warning  Unhealthy  36s (x66 over 10m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
-
Hi @maybel,
Have you tried the edits recommended in the comment linked below?
https://forum.linuxfoundation.org/discussion/comment/33291/#Comment_33291
For the most part they are found in step 5 of the lab exercise, and were perhaps missed based on the describe output.
Regards,
-Chris
-
Hi @chrispokorni, nice to see you! As you can see in line 22 of my code, I have the latest metrics-server version. So I don't know what else I can do. I deleted everything like Dicalleson and installed it again, but I got the same problem.
I wonder if the lines below represent any clue about the problem.
kubectl top pods
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
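In case it gives more clues, these commands show whether the aggregated API considers the metrics-server backend available and what metrics-server itself is logging (assuming the standard install in kube-system):
kubectl describe apiservice v1beta1.metrics.k8s.io
# the Conditions section explains why the service is not Available
kubectl -n kube-system logs deploy/metrics-server
# scrape errors against the kubelets (e.g. certificate problems) usually show up here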
-
Hi @maybel,
I meant the edits recommended in step 5 of the lab, which I added in the text box for clarity. They would appear somewhere after line 26, under
Args:
I did not see them reflected in your describe output. Only upgrading to the latest metrics-server release is not sufficient. Read step 5 again and perform the suggested edits, and compare with my recommendation from the earlier comment (linked above) as well...
Regards,
-Chris
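P.S. For reference, a minimal sketch of the kind of edit step 5 points at (the exact flags in the lab remain authoritative), assuming the default deployment name and namespace:
kubectl -n kube-system edit deployment metrics-server
# under spec.template.spec.containers[0].args add, for example:
#   - --kubelet-insecure-tls
# saving the edit rolls out a new metrics-server pod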
-
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-use-node-status-port
    - --metric-resolution=15s
    image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
    imagePullPolicy: IfNotPresent
I did step 5 of exercise 13.3, and there was some improvement.
NAME                                       READY   STATUS    RESTARTS        AGE
calico-kube-controllers-74677b4c5f-8sp68   1/1     Running   33 (152m ago)   63d
calico-node-fk86z                          1/1     Running   34 (152m ago)   64d
calico-node-qnkpr                          1/1     Running   34 (152m ago)   64d
coredns-787d4945fb-q7dm4                   1/1     Running   25 (152m ago)   51d
coredns-787d4945fb-scn4t                   1/1     Running   25 (152m ago)   51d
etcd-cp                                    1/1     Running   33 (152m ago)   63d
kube-apiserver-cp                          1/1     Running   33 (152m ago)   63d
kube-controller-manager-cp                 1/1     Running   33 (152m ago)   63d
kube-proxy-8jhd4                           1/1     Running   33 (152m ago)   63d
kube-proxy-vc97n                           1/1     Running   33 (152m ago)   63d
kube-scheduler-cp                          1/1     Running   33 (152m ago)   63d
metrics-server-6f6cdbf67d-z6c27            0/1     Running   0               56s
metrics-server-7d4dc74cd9-d65cg            1/1     Running   0               2m42s
The output below is the Events section of the pod metrics-server-6f6cdbf67d-z6c27 listed above.
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  102s              default-scheduler  Successfully assigned kube-system/metrics-server-6f6cdbf67d-z6c27 to worker
  Normal   Pulled     101s              kubelet            Container image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" already present on machine
  Normal   Created    101s              kubelet            Created container metrics-server
  Normal   Started    101s              kubelet            Started container metrics-server
  Warning  Unhealthy  2s (x9 over 72s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
-
@chrispokorni! it's working well!! I repeated step 13.3:5 because I did something wrong, and it worked. Thank you so much