
Lab 3.1 - Kubeadm init Error creating kube-proxy service account

Hey guys,

I'm having trouble getting the control plane up and running. I've followed the lab steps so far without any errors, but the cluster fails to initialize. I get the following output:

  root@cp:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out #<-- Save output for future review
  [init] Using Kubernetes version: v1.24.1
  [preflight] Running pre-flight checks
  [WARNING SystemVerification]: missing optional cgroups: blkio
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [cp k8scp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.64.7]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [cp localhost] and IPs [192.168.64.7 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [cp localhost] and IPs [192.168.64.7 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Starting the kubelet
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 4.503459 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
  [upload-certs] Using certificate key:
  ca3cbd7a4e61124ccb144d974230c018d842f1327e518d798e34047313ba6ae2
  [mark-control-plane] Marking the node cp as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
  [mark-control-plane] Marking the node cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
  [bootstrap-token] Using token: htas18.2rgk0f9hjb211pm1
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  error execution phase addon/kube-proxy: error when creating kube-proxy service account: unable to create serviceaccount: client rate limiter Wait returned an error: context deadline exceeded
  To see the stack trace of this error execute with --v=5 or higher

I've tried creating the .kube directory with my non-root user. Some of the kube-system pods start up, but Calico and CoreDNS obviously don't work. Running kubeadm init without the config file completes just fine, but then I won't have any of the networking set up.
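
(For reference, the .kube setup I mean is the standard one kubeadm prints after a successful init:)

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config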

I'm following all the steps on VMs running locally on my machine, so I'm assuming some of the dependencies we installed earlier have been updated since the course materials were written. Any help troubleshooting this issue?
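
For context, here's the shape of the config I'm passing to --config. The kubernetesVersion and controlPlaneEndpoint match what init printed above; the podSubnet value below is only illustrative (mine came from the Calico setup) and may differ in your environment:

  # kubeadm-config.yaml (sketch)
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  kubernetesVersion: "1.24.1"
  controlPlaneEndpoint: "k8scp:6443"
  networking:
    podSubnet: 192.168.0.0/16   # illustrative; use the CIDR your CNI expects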


Comments

  • Hi! I have exactly the same issue.
    My OS: Ubuntu 22.04.1 LTS (server edition)
    I've tried with the latest Debian and I have the same issue.

    I work on VirtualBox 6.1. Promiscuous mode is set to Allow All.
    The OS firewall is disabled.

    I am blocked...

  • To follow up, I went a bit further.

    I skipped the installation of kube-proxy like this:

    kubeadm init --config=kubeadm-config.yaml --upload-certs --skip-phases=addon/kube-proxy \
    | tee kubeadm-init.out

    The installation then completes fine.
    After that, I install kube-proxy like this:

    kubeadm init phase addon kube-proxy \
    --control-plane-endpoint="my-hostname:6443" \
    --pod-network-cidr="MY_CIDR"

    I get this:

    I0116 21:50:32.971041 1370 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.24
    error execution phase addon/kube-proxy: error when creating kube-proxy service account: unable to create serviceaccount: Post "https://MY-HOSTNAME:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s": dial tcp MY-IP:6443: connect: connection refused
    To see the stack trace of this error execute with --v=5 or higher
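
    Since "connection refused" means nothing answered on 6443 at all, a quick sanity check before re-running the phase is to confirm the apiserver is actually up (assuming containerd and crictl, as in the lab environment):

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep 6443
    # Does the apiserver answer health checks? (-k skips certificate verification)
    curl -k https://localhost:6443/healthz
    # If not, check the kubelet and the static pod containers
    sudo systemctl status kubelet
    sudo crictl ps | grep kube-apiserver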
  • Thanks for your comment @steve.decot. I was able to use your method to get the kubeadm init command to complete successfully. I think kube-proxy depends on something else starting up first, because your method worked for me when I omitted the kube-proxy addon and then waited a few moments before applying it. I think your error comes from not specifying the correct version. (I tried skipping the config file and passing the configuration explicitly as flags to the init command):

    kubeadm init --pod-network-cidr=10.10.0.0/16 --kubernetes-version=1.24.1 --control-plane-endpoint=k8scp:6443 --upload-certs --skip-phases=addon/kube-proxy | tee kubeadm-init.out

    After the first init ran, I then added the addon (and it worked):

    kubeadm init phase addon kube-proxy --pod-network-cidr=10.10.0.0/16 --kubernetes-version=1.24.1 --control-plane-endpoint=k8scp:6443
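
    To double-check the phase actually took effect, you can look for the kube-proxy DaemonSet and ConfigMap it creates:

    kubectl -n kube-system get daemonset kube-proxy
    kubectl -n kube-system get configmap kube-proxy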

    I'm still experiencing the underlying issue: the cluster isn't spinning up the system pods successfully, and they keep cycling between a Running state and a CrashLoopBackOff state. I suspect something is wrong with the pod networking, since almost all of the system pods are running on the same host IP (the same IP address I added to the hosts file with the k8scp alias). I should also mention that the control plane node wouldn't go into a Ready state due to some NoSchedule taints, and I was only able to get it working by removing them:

    kubectl taint nodes cp node-role.kubernetes.io/master- node-role.kubernetes.io/control-plane- node.kubernetes.io/not-ready-

    Here's my cluster trying to start up, with each pod's IP:

    NAMESPACE     NAME                                       READY   STATUS             RESTARTS         AGE     IP             NODE   NOMINATED NODE   READINESS GATES
    kube-system   calico-kube-controllers-55fc758c88-n4pbq   0/1     CrashLoopBackOff   4 (16s ago)      117s    10.10.242.78   cp     <none>           <none>
    kube-system   calico-node-l2c85                          1/1     Running            3 (82s ago)      117s    192.168.64.8   cp     <none>           <none>
    kube-system   coredns-6d4b75cb6d-56kzv                   1/1     Running            2 (96s ago)      4m47s   10.10.242.72   cp     <none>           <none>
    kube-system   coredns-6d4b75cb6d-qj4qs                   0/1     CrashLoopBackOff   3 (37s ago)      4m47s   10.10.242.77   cp     <none>           <none>
    kube-system   etcd-cp                                    1/1     Running            28 (3m7s ago)    5m36s   192.168.64.8   cp     <none>           <none>
    kube-system   kube-apiserver-cp                          1/1     Running            30 (5m5s ago)    5m37s   192.168.64.8   cp     <none>           <none>
    kube-system   kube-controller-manager-cp                 1/1     Running            42 (3m23s ago)   4m      192.168.64.8   cp     <none>           <none>
    kube-system   kube-proxy-k7rht                           0/1     Error              3 (108s ago)     4m47s   192.168.64.8   cp     <none>           <none>
    kube-system   kube-scheduler-cp                          0/1     CrashLoopBackOff   35 (41s ago)     3m55s   192.168.64.8   cp     <none>           <none>
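
    For pods stuck in CrashLoopBackOff like these, I've been pulling the previous container's logs and the pod events (the scheduler shown as an example; same pattern works for the others):

    # Logs from the last failed run (-p = previous container instance)
    kubectl -n kube-system logs kube-scheduler-cp -p
    # Events usually say why the container keeps restarting
    kubectl -n kube-system describe pod kube-scheduler-cp
    # The kubelet journal covers the static control-plane pods too
    sudo journalctl -u kubelet --no-pager | tail -50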
  • Hi @cbperkins.
    The issue came from my version of Ubuntu, the latest one.
    I retried with this version:

    lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 20.04.5 LTS
    Release:        20.04
    Codename:       focal

    And it works!

    I don't want you to go through what I went through.

    Here are all the commands to install your cluster ---> https://gitlab.com/steve.decot/k8s-install/-/blob/main/README.md

    I hope everything will be fine for you.

    Have a good night / day,

    Steve Decot

  • Hey @steve.decot, thanks for the tip! I was running Ubuntu 20.04.1, and that was indeed the problem. I upgraded to 20.04.5 as you suggested, and everything is ready and running now. Appreciate the help.
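
    (In case anyone else lands here on an older 20.04 image: point releases are just rolled-up updates, so a regular upgrade should bring you to the latest one.)

    sudo apt update && sudo apt full-upgrade -y
    lsb_release -a   # should now report 20.04.5 (or later)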

    Chris

  • Hi @steve.decot,

    The lab guide calls for Ubuntu 20.04 LTS, and the lab exercises have been compiled and tested on that OS version. Other OS versions may introduce dependency issues that have not yet been tested and resolved.

    Regards,
    -Chris
