Welcome to the Linux Foundation Forum!

[metrics-server] 403 Forbidden

Hello,
The metrics server doesn't work:

[email protected]:~$ k get po --all-namespaces 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE
kube-system   calico-kube-controllers-65898446b5-t57p7   1/1     Running   17 (82m ago)   22d
kube-system   calico-node-775nt                          1/1     Running   20 (82m ago)   28d
kube-system   calico-node-wc56w                          1/1     Running   19 (82m ago)   28d
kube-system   coredns-64897985d-q4zrk                    1/1     Running   3 (82m ago)    14d
kube-system   coredns-64897985d-t9fxt                    1/1     Running   3 (82m ago)    14d
kube-system   etcd-cpnode                                1/1     Running   19 (82m ago)   22d
kube-system   kube-apiserver-cpnode                      1/1     Running   25 (82m ago)   22d
kube-system   kube-controller-manager-cpnode             1/1     Running   19 (82m ago)   22d
kube-system   kube-proxy-8ph65                           1/1     Running   17 (82m ago)   22d
kube-system   kube-proxy-z6rn9                           1/1     Running   17 (82m ago)   22d
kube-system   kube-scheduler-cpnode                      1/1     Running   18 (82m ago)   22d
kube-system   metrics-server-75b6774694-ph58h            1/1     Running   0              27m

[email protected]:~$ kubectl -n kube-system edit deployment metrics-server
Edit cancelled, no changes made.

[email protected]:~$ kubectl -n kube-system logs metrics-server-75b6774694-ph58h 
I0605 16:28:46.583155       1 secure_serving.go:116] Serving securely on [::]:4443

[email protected]:~$ kubectl top pod --all-namespaces
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

[email protected]:~$ kubectl -n kube-system logs metrics-server-75b6774694-ph58h 
I0605 16:28:46.583155       1 secure_serving.go:116] Serving securely on [::]:4443
E0605 16:29:46.620009       1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:workernode: unable to fetch metrics from Kubelet workernode (192.168.122.11): request failed - "403 Forbidden", response: "Forbidden (user=system:serviceaccount:kube-system:metrics-server, verb=get, resource=nodes, subresource=stats)", unable to fully scrape metrics from source kubelet_summary:cpnode: unable to fetch metrics from Kubelet cpnode (192.168.122.100): request failed - "403 Forbidden", response: "Forbidden (user=system:serviceaccount:kube-system:metrics-server, verb=get, resource=nodes, subresource=stats)"]
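For context, the 403 means the kubelet's webhook authorizer denied this service account access to the stats subresource. The components.yaml I applied ships a ClusterRole that is supposed to grant it - a fragment as I recall it from the v0.3.7 manifest (the exact resource list may differ slightly between releases):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups: [""]
  # nodes/stats is the subresource named in the 403 above
  resources: ["pods", "nodes", "nodes/stats", "namespaces", "configmaps"]
  verbs: ["get", "list", "watch"]
```

If `kubectl get clusterrole system:metrics-server -o yaml` shows a rule like this in place, the denial is more likely a kubelet/version issue than missing RBAC.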

Any hint? I didn't manage to find a solution in the past posts.
Calico is using CIDR 192.168.0.0/24 and the nodes are on 192.168.122.xy.
The options --kubelet-insecure-tls and --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname are set, and the metrics-server version is v0.3.7.
The CRI is Docker.
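For reference, those options sit in the container args of the metrics-server Deployment - a fragment of how mine looks after the edit (the cert-dir and secure-port values shown are the usual v0.3.7 defaults, included here only for context):

```yaml
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
```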

Best Answers

  • leopastorsdg (Posts: 14) - edited June 2022 - Answer ✓

    Hi @thomas.bucaioni, I took a similar approach weeks ago and it worked fine for me - I just checked, and my cluster is at v1.22.1 (I must have used the previous lab guide).

    Anyway, the error message is now clearly different.

    It might be a case of version incompatibility. Take a look at the Metrics-Server Compatibility Matrix: metrics-server v0.3.x is officially supported only up to Kubernetes v1.21 (even though it worked for me on Kubernetes v1.22.1 with the previous version of the lab).

    Since you are reinstalling and redoing things anyway, trying a newer metrics-server version would be an interesting next step in troubleshooting.

  • chrispokorni (Posts: 1,657) - Answer ✓

    Hi @thomas.bucaioni and @leopastorsdg,

    The metrics-server v0.3.7 installation does seem to throw an API error on Kubernetes v1.23.1, an error that was not there before. However, the latest metrics-server version does not. So, removing metrics-server v0.3.7:

    kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

    and then installing the latest metrics-server should work as expected:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    The metrics-server Deployment edit (in step 5) only needs the --kubelet-insecure-tls argument to be added.
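    If you prefer not to edit interactively, the same change can be applied with a patch - a sketch, assuming metrics-server is the first container in the pod spec, as in the upstream manifest:

```shell
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
```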

    Regards,
    -Chris

Answers

  • @thomas.bucaioni

    Did you make a customized installation of Calico? (A "/16" subnet mask is expected unless you had to change it - you mentioned it is using "/24".)

    'Apparently' the metrics-server itself is working fine, and the system is denying access to the service account - which could be due to privileges, or maybe not.

    Confirm that all the networking is OK before proceeding - not only Pod IP addresses, but also Services, etc.

  • Hi @leopastorsdg
    The Calico manifest is deliberately set to /24 indeed; each time I tried /16, everything went wrong.
    Let me redo a bunch of labs to check everything is OK, for example the Linkerd one.

  • On a fresh install, I get:

    $ kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
    ...
    unable to recognize "https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml": no matches for kind "APIService" in version "apiregistration.k8s.io/v1beta1"
    
    $ kubectl top pod --all-namespaces
    error: Metrics API not available
    

    Is there a way to remove the metrics-server to try to reinstall it?
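    (For context: the 'no matches for kind "APIService"' error appears because apiregistration.k8s.io/v1beta1 was removed in Kubernetes v1.22, so the v0.3.7 manifest can no longer register the metrics API. Newer metrics-server manifests register it under v1 instead - a fragment along these lines, with field values as I recall them from the upstream manifest:)

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
```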

  • Just redid an install from scratch, starting from v1.22.1-00 and upgrading to v1.23.1-00, then jumped to Lab 13.3, but I get the same API error:

    $ kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    serviceaccount/metrics-server created
    deployment.apps/metrics-server created
    service/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    error: unable to recognize "https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml": no matches for kind "APIService" in version "apiregistration.k8s.io/v1beta1"
    

    What I did was labs 3.1, 3.2, 3.3, and 4.1, then 13.3.

  • Ubuntu is 20.04, this must be it

  • Could be. I am using Ubuntu 18.04.6

    It is advisable to change just one thing at a time when troubleshooting. If you have time, please see if you can test a newer version of metrics-server.

  • Hi @leopastorsdg
    Yes, good idea. First I'll retry under Ubuntu 18.04 and then try to install the metrics server in a newer version

  • Maybe you can go with the current version of Ubuntu. Wouldn't it be a good time to try?

  • Yep, let's go

  • chrispokorni (Posts: 1,657)

    Hi @thomas.bucaioni and @leopastorsdg,

    When re-provisioning the infrastructure and bootstrapping a new cluster, please keep in mind the following:

    • The labs are tested on Ubuntu 20.04 LTS to match the current version of the CKA exam environment. The prior version 18.04 LTS should still work. Ubuntu 22.04 LTS (latest) has not been tested yet for these lab exercises - it may very well work though...

    • The recommended CPU, memory, and disk requirements should be met at a minimum.

    • The most important resource, however - the networking - is the trickiest to get right. The two video guides from the intro chapter point out configuration settings for cloud environments (AWS and GCP), but the same considerations need to be applied when working with local hypervisors.

      One key requirement: IP addresses should never overlap between VMs/nodes, Pods, and Services. If VMs/nodes are on 192.168.122.x/24 (ranging from 192.168.122.0 to 192.168.122.254), then Pods can safely operate on 192.168.0.0/24 (ranging from 192.168.0.0 to 192.168.0.254), while the Services stay on the default 10.96.0.0/12 network (ranging from 10.96.0.0 to 10.111.255.254). In this scenario, allowing the Pods the full 192.168.0.0/16 would introduce an overlap that will eventually cause routing issues within the cluster. So, in order to keep the subnets distinct, 192.168.122.x/24 for VMs/nodes and 192.168.0.0/24 for Pods should work without any issues (only the size of the Pod network is impacted, being smaller than the default /16). Please ensure that both calico.yaml and kubeadm-config.yaml are populated with the correct 192.168.0.0/24 Pod network, and that the two lines defining it are uncommented in calico.yaml.

    • Second key networking requirement: firewalls should be disabled, or fully opened, to allow inbound/ingress traffic from all sources, to all ports, all protocols (as per the two video guides). When working with VirtualBox or other local hypervisors, ideally a single bridged NIC is sufficient per VM, and promiscuous mode is enabled to allow all traffic. This will allow all Kubernetes control plane agents to communicate with each other, all plugins (DNS, calico, etc...) and all our applications to do the same.

    • The initial Kubernetes version 1.22.1 (from Ch 3) and the upgrade to version 1.23.1 (in Ch 4) are relying on two earlier stable releases of Kubernetes. The latest release 1.24 introduced a ton of challenges, and it will be incorporated in the lab exercises once it becomes stable enough to do so.

    Both Kubernetes v1.22 and 1.23 should work with Docker as the container runtime.

    Going back to the metrics-server: version 0.3.7 used to work without issues. However, I will give it another try to see whether any upstream changes are impacting the compatibility between Kubernetes 1.22/1.23 and metrics-server 0.3.7 (although not the latest, it is a release successfully tested in these lab exercises).
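    For reference, the /24 Pod network described above lands in two places - a sketch of the relevant fragments (field names as in the usual kubeadm v1beta3 config and the Calico manifest; check them against your own copies):

```yaml
# kubeadm-config.yaml (fragment)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/24
---
# calico.yaml (fragment): the two lines that must be uncommented,
# inside the calico-node container's env section
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/24"
```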

    Regards,
    -Chris

  • Hi @chrispokorni
    Top works as expected now, thank you for giving a look
