
Section 13.3 - Setting up the dashboard

nlp
nlp Posts: 8
edited November 2022 in LFS258 Class Forum

Summary: I would love some help setting up/viewing my Kubernetes Dashboard. Right now, no matter what I do, I get ERR_CONNECTION_REFUSED when attempting to access it.

Details --

  1. I used helm pull to grab the kubernetes-dashboard chart.
  2. I modified the chart's values.yaml to make it a NodePort, then installed with helm install...:
service:
  type: NodePort
  # Dashboard service port
  externalPort: 443

(I named the installation "kdash"; otherwise my setup seems identical to the instructions)
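
For reference, that sequence corresponds roughly to the following (a sketch: the repo URL and the --untar flag follow the upstream chart docs rather than the lab PDF, so the exact commands may differ):

# add the upstream chart repo and pull a local, editable copy of the chart
$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
$ helm pull kubernetes-dashboard/kubernetes-dashboard --untar

# set service.type to NodePort in kubernetes-dashboard/values.yaml, then install
$ helm install kdash ./kubernetes-dashboard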

  3. I successfully added the cluster role binding (output reformatted for readability):
~$ kubectl get rolebindings,clusterrolebindings   --all-namespaces

NAME: clusterrolebinding.rbac.authorization.k8s.io/dashaccess                                             
ROLE: ClusterRole/cluster-admin 

NAME: clusterrolebinding.rbac.authorization.k8s.io/kdash-kubernetes-dashboard-metrics
ROLE: ClusterRole/kdash-kubernetes-dashboard-metrics      
  4. Other parameters check out too --
  • The service exists as a nodeport with an open port:
$ kubectl get svc
kdash-kubernetes-dashboard  NodePort  10.99.89.8  <none>  443:32599/TCP 

$ kubectl describe svc kdash-kubernetes-dashboard 
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.89.8
IPs:                      10.99.89.8
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32599/TCP
Endpoints:                192.168.169.236:8443
Session Affinity:         None
External Traffic Policy:  Cluster
  • The pod exists and is reachable:
$ kubectl get pods
kdash-kubernetes-dashboard-66446945c5-qhf8f   1/1     Running

$ kubectl describe pod kdash-kubernetes-dashboard-66446945c5-qhf8f 
Name:         kdash-kubernetes-dashboard-66446945c5-qhf8f
Node:         k8stest-raw-worker/10.2.0.5
Start Time:   Wed, 09 Nov 2022 18:25:04 +0000
Labels:       app.kubernetes.io/component=kubernetes-dashboard
              app.kubernetes.io/instance=kdash
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kubernetes-dashboard
              app.kubernetes.io/version=2.7.0
              helm.sh/chart=kubernetes-dashboard-5.11.0
              pod-template-hash=66446945c5
Annotations:  cni.projectcalico.org/containerID: 2d4d90457004e845d7d4f8111a6d11b1f9145a96fee7024e4093a71e1fb75a66
              cni.projectcalico.org/podIP: 192.168.169.236/32
              cni.projectcalico.org/podIPs: 192.168.169.236/32
              seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IP:           192.168.169.236
IPs:
  IP:           192.168.169.236
Controlled By:  ReplicaSet/kdash-kubernetes-dashboard-66446945c5
Containers:
  kubernetes-dashboard:
    Container ID:  containerd://1f8d2f2bb0a0a09e6d5cb6f0275cc1e0fcd9b77ee8acca40a8736bc520785f4c
    Image:         kubernetesui/dashboard:v2.7.0
    Image ID:      docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --namespace=default
      --auto-generate-certificates
      --metrics-provider=none
    State:          Running
      Started:      Wed, 09 Nov 2022 18:42:07 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 09 Nov 2022 18:41:17 +0000
      Finished:     Wed, 09 Nov 2022 18:41:17 +0000
    Ready:          True
    Restart Count:  4
  • Logs look normal:
$ kubectl logs kdash-kubernetes-dashboard-66446945c5-qhf8f 

2022/11/09 18:42:07 Starting overwatch
2022/11/09 18:42:07 Using namespace: default
2022/11/09 18:42:07 Using in-cluster config to connect to apiserver
2022/11/09 18:42:07 Using secret token for csrf signing
2022/11/09 18:42:07 Initializing csrf token from kubernetes-dashboard-csrf secret
2022/11/09 18:42:07 Successful initial request to the apiserver, version: v1.24.0
2022/11/09 18:42:07 Generating JWE encryption key
2022/11/09 18:42:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-default. Starting
2022/11/09 18:42:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace default
2022/11/09 18:42:07 Initializing JWE encryption key from synchronized object
2022/11/09 18:42:07 no metrics provider selected, will not check metrics.
2022/11/09 18:42:07 Auto-generating certificates
2022/11/09 18:42:07 Successfully created certificates
2022/11/09 18:42:07 Serving securely on HTTPS port: 8443

My CP IP:

$ curl ifconfig.io
34.134.162.12

But when I visit https://34.134.162.12:32599, I always get ERR_CONNECTION_REFUSED.

Other steps I've tried:
- Using the IP of my worker node: https://34.134.162.12:32599.
- Using different target ports: 80, 8080, 8443, etc.
- Using different browsers.
- Checking my firewall. All HTTP and HTTPS requests are allowed on all ports (I know this because I had to change these settings in a previous exercise to get the linkerd dashboard to appear).
- Deleting the dashboard pod.
- Deleting and reinstalling with helm.
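
The layer-by-layer checks of this path (pod, then ClusterIP service, then node IP plus NodePort) suggested in the answers below would look roughly like this with the addresses from the output above:

# on a worker node: pod IP and container port (bypasses the service entirely)
$ curl -k https://192.168.169.236:8443

# on a worker node: service ClusterIP and service port (tests the Service itself)
$ curl -k https://10.99.89.8:443

# from any machine that can reach the node: public node IP and NodePort
$ curl -k https://34.134.162.12:32599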

What am I missing? What steps should I take next?

Best Answers

  • pnts
    pnts Posts: 33
    Answer ✓

    @nlp
    Yes, but I can access your dashboard UI.
    So you've done something right.

    I'd look into:

    1. Client-side/browser issues connecting over HTTPS to a server with a self-signed certificate (see the sketch after this list). I'm using Firefox 105, which gives me a MOZILLA_PKIX_ERROR_SELF_SIGNED_CERT warning that I have to accept before I'm allowed to connect to your dashboard.
    2. 34.134.162.12 is in the Google Cloud IPv4 range. Are there any security groups / edge firewall rules that would allow me, but not you, to connect?
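
    A quick way to confirm from a terminal that it is the certificate, not connectivity, that the browser is objecting to (a sketch, assuming openssl and curl are available on the client):

    # the dashboard's --auto-generate-certificates option produces a self-signed
    # cert, so subject and issuer should come back identical here
    $ openssl s_client -connect 34.134.162.12:32599 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

    # if this succeeds while the browser refuses, the problem is trust/UI,
    # not the network path
    $ curl -k https://34.134.162.12:32599
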
  • chrispokorni
    chrispokorni Posts: 2,155
    Answer ✓

    Hi @nlp,

    After reading that @pnts was successful in accessing your dash, I tried as well with both Chrome and Firefox. On Chrome I had to click on the "Advanced" button and then the "Proceed... (unsafe)" link at the bottom of the page. On Firefox similarly I had to "Accept" a certificate warning prior to being able to access the login page of the dash.

    So your dash app works, the NodePort service exposes it publicly, and all you are left with is creating the access token and possibly using another browser to access the dash.

    Regards,
    -Chris

Answers

  • pnts
    pnts Posts: 33
    edited November 2022

    Is this a problem with the dashboard or with external access in general?
    Have you been able to create a service of type NodePort for a simple nginx deployment?

    You should be able to do curl -k https://192.168.169.236:8443 on a worker node.
    You should also be able to do curl -k https://10.99.89.8:443 on a worker node.
    Finally, you should be able to do curl -k https://34.134.162.12:32599 from any computer.
    Because I just did from my computer :-)

    I used Bearer token for authentication with my dashboard.
    It was very simple to get going:
    1. Creating a Service Account
    2. Creating a ClusterRoleBinding
    3. Getting a Bearer Token
    https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
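
    Condensed, those three steps look roughly like this (a sketch: admin-user is the example name from the linked doc, the binding name is just illustrative, and the namespace is default because that is where this chart was installed):

    # 1. a service account to log in with
    $ kubectl -n default create serviceaccount admin-user

    # 2. bind it to cluster-admin so the dashboard can show everything
    $ kubectl create clusterrolebinding admin-user-dashaccess --clusterrole=cluster-admin --serviceaccount=default:admin-user

    # 3. generate a bearer token to paste into the dashboard login screen
    $ kubectl -n default create token admin-user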

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @nlp,

    The output of your kubectl describe is missing the Events section, where possible reasons for the 4 restarts and any errors that may have caused the unexpected terminations should be listed.

    Regards,
    -Chris

  • nlp
    nlp Posts: 8

    @chrispokorni said:
    Hi @nlp,

    The output of your kubectl describe is missing the Events section, where possible reasons for the 4 restarts and any errors that may have caused the unexpected terminations should be listed.

    Regards,
    -Chris

    Chris, good eye. Here's the rest of the output from kubectl describe...; unfortunately Events doesn't tell us much:

      kubernetes-dashboard:
        Container ID:  containerd://1f8d2f2bb0a0a09e6d5cb6f0275cc1e0fcd9b77ee8acca40a8736bc520785f4c
        Image:         kubernetesui/dashboard:v2.7.0
        Image ID:      docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
        Port:          8443/TCP
        Host Port:     0/TCP
        Args:
          --namespace=default
          --auto-generate-certificates
          --metrics-provider=none
        State:          Running
          Started:      Wed, 09 Nov 2022 18:42:07 +0000
        Last State:     Terminated
          Reason:       Error
          Exit Code:    2
          Started:      Wed, 09 Nov 2022 18:41:17 +0000
          Finished:     Wed, 09 Nov 2022 18:41:17 +0000
        Ready:          True
        Restart Count:  4
        Limits:
          cpu:     2
          memory:  200Mi
        Requests:
          cpu:        100m
          memory:     200Mi
        Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
        Environment:  <none>
        Mounts:
          /certs from kubernetes-dashboard-certs (rw)
          /tmp from tmp-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7r4l6 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             True 
      ContainersReady   True 
      PodScheduled      True 
    Volumes:
      kubernetes-dashboard-certs:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kdash-kubernetes-dashboard-certs
        Optional:    false
      tmp-volume:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      kube-api-access-7r4l6:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Burstable
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:                      <none>
    
  • nlp
    nlp Posts: 8
    edited November 2022

    @pnts said:
    Is this a problem with the dashboard or with external access in general?
    Have you been able to create a service of type NodePort for a simple nginx deployment?

    You should be able to do curl -k https://192.168.169.236:8443 on a worker node.
    You should also be able to do curl -k https://10.99.89.8:443 on a worker node.
    Finally, you should be able to do curl -k https://34.134.162.12:32599 from any computer.
    Because I just did from my computer :-)

    I used Bearer token for authentication with my dashboard.
    It was very simple to get going:
    1. Creating a Service Account
    2. Creating a ClusterRoleBinding
    3. Getting a Bearer Token
    https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

    @pnts, thanks for chiming in.

    Is this a problem with the dashboard or with external access in general?

    I would have said, "I don't know!", but I can indeed curl -k https://34.134.162.12:32599, so external access in general seems fine; whatever is wrong must be specific to how I'm reaching this particular resource.

    Have you been able to create a service of type NodePort for a simple nginx deployment?

    Yes, I've been able to complete every exercise until now, including the linkerd dashboard, which presents similar challenges.

    I'll take a look at bearer tokens - good call.

    To the folks who maintain this course: are you planning to include a bearer token-based approach to this problem? Or any kind of troubleshooting help? It's frustrating to be left high and dry with instructions that don't work consistently, and no recourse. Even if bearer tokens do work, it would be instructive to understand why the default approach didn't. External access does tend to be trickier than internal, so exercises like this would benefit from greater depth.

  • pnts
    pnts Posts: 33

    @nlp
    You're fine.
    https://34.134.162.12:32599 is accessible on the public internet.

  • nlp
    nlp Posts: 8

    @pnts said:

    @nlp
    You're fine.
    https://34.134.162.12:32599 is accessible on the public internet.

    @pnts, you're right about the bearer token creation - it's dead simple. My problem is that I can't even access the UI in order to enter the bearer token. I appreciate the help; unfortunately it doesn't fix this issue.

  • nlp
    nlp Posts: 8

    Well this is an odd one.

    Over the last couple days I've tried accessing the dashboard in a "grid" fashion:

    • On my work laptop
    • On my personal laptop
    • On my phone

    On each of these devices, I tried multiple browsers:

    • Chrome
    • Firefox
    • Brave
    • Edge

    I cleared browser caches to eliminate that possibility. And just to be sure, I tried these steps both at home and at the office. Every combination yielded the same "connection refused" error.

    The breakthrough? Using Safari on macOS 11.7 at home. Who knew.

    @pnts and @chrispokorni, thank you! You've been a real help; hopefully this thread also saves future students who get tripped up on this exercise.

    To the people who maintain this course: this chapter has real deficiencies that need to be addressed:

    • The assignment instructions differ too sharply from those in the dashboard repo. Yes, the lesson advises "check the readme for updates," and that's fine for a free online mini-course or a Stack Overflow post. In a $300 class, the instructions need to be accurate - that's what we're paying for.
    • This lesson needs far more debugging advice, ideas for alternative access, and notes about common pitfalls. I've dealt with cert issues, caching issues, and cross-browser incompatibility before - none of that prepared me for how extraordinarily finicky this particular web app would be. You need to prepare students for this.
    • Fix stale details. The last time we discussed bearer tokens was in chapter 6, and if you follow this exercise, the bearer token you'll need will be tied to the default service account, not the new dashboard account that the instructions suggest. I bypassed this issue easily only because of timely tips from @pnts - following the instructions would have sent me down yet another rabbit hole.

    Omissions, elisions and inaccuracies like these will turn an enlightening lesson into a painful, unedifying slog. Yes, web administration involves frustration and dead ends. We buy courses like this to distill the relevant material and bypass some of those fiddly gotchas. This lesson did the opposite. I would have been better off ignoring the instructions and relying on forum posts and readmes.

  • Napsty
    Napsty Posts: 10

    Just adding some notes here. I was stuck at Lab 13.3 as well. I was able to access the installed dashboard using https://nodeip:highport; however, the token indicated by the lab PDF was nowhere to be found.

    First I created the clusterrolebinding "dashaccess" with clusterrole "cluster-admin" for the serviceaccount "default:kubernetes-dashboard" (in the current chart version, the serviceaccount is called kubernetes-dashboard):

    ck@cka1:~/kubernetes-dashboard$ kubectl create clusterrolebinding dashaccess --clusterrole=cluster-admin --serviceaccount=default:kubernetes-dashboard
    clusterrolebinding.rbac.authorization.k8s.io/dashaccess created
    

    I then needed to create a token manually, as there was none. A similar post here in the forums already suggested that newer Kubernetes versions don't create tokens in Secrets anymore (https://forum.linuxfoundation.org/discussion/862022/how-does-kubectl-create-token-work) and that they need to be created manually:

    ck@cka1:~/kubernetes-dashboard$ kubectl create token kubernetes-dashboard
    eyJhbGciOiJSUzI1NiIsImtpZCI6IndVb0ZRM2FsSzFPaE5Sb0lOR29XMFNsVjhRblJMbEd3anhYdHl5YTVyb3cifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY4NjEwNjY5LCJpYXQiOjE2Njg2MDcwNjksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6Imt1YmVybmV0ZXMtZGFzaGJvYXJkIiwidWlkIjoiZDI2MDgxMjctYTExNi00ZDljLTk2NTItZjRiNjdjMGQ5YTNkIn19LCJuYmYiOjE2Njg2MDcwNjksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVybmV0ZXMtZGFzaGJvYXJkIn0.fqIhUErlc2xGkVU9IqSHS1syH-YOG6FoOnxOFrGumDwFEXtD0rNDPtaMaq7rKcXoP-lA4EdU-bznq4q-mA2LiJU8Ymj7hmG891iyn9i5QA71A5BFqpjgC1FCRg5ta2sJMgbOw1GtOQeJJ-JNHv5L4SOuY8seFrfNApRdoV_IXBICLbpLvYbwBUfcP8dYnLUXwFEbeKxsAn02_mvVuXCVYEMOUGGimLd9ELod5RoZ-FBpHSCDy2qyvaOi916xlRfRFi0ppv3oDdlFP6oxbRLA1l49oPptVU7NaIfpJShQelwgbdPcb7rOGOJIKT3dtxcjk96MUx32lZ5xhK6VNjo2eQ
    
    

    Using this I was able to access the metrics dashboard in lab 13.3.
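
    For anyone checking whether they are in the same boat, a quick sketch (the service account name is the one used above; --duration needs a reasonably recent kubectl):

    # in current releases the service account lists no secrets, so there is
    # no long-lived token sitting in a Secret to copy out
    $ kubectl get serviceaccount kubernetes-dashboard -o jsonpath='{.secrets}'

    # request a short-lived token on demand instead (lifetime is typically 1h
    # by default; --duration extends it)
    $ kubectl create token kubernetes-dashboard --duration=2h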

    But the course definitely needs a rework.
