Section 2.1.5 "cannot get resource" Error thrown during $ sudo kubeadm join...

When I attempt to join the minion to the cluster I am seeing:
[discovery] Trying to connect to API Server "xxx.xxx.xxx.xxx:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://xxx.xxx.xxx.xxx:6443"
[discovery] Requesting info from "https://xxx.xxx.xxx.xxx:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "xxx.xxx.xxx.xxx:6443"
[discovery] Successfully established connection with API Server "xxx.xxx.xxx.xxx:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:yr0p9a" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Comments
-
I partially fixed this by editing k8sSecond.sh and changing the versions from 1.12 to 1.13:
sudo apt-get install -y kubeadm=1.13.0-00 kubelet=1.13.0-00 kubectl=1.13.0-00
And then re-ran the join command.
Now I get:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
But when I do a kubectl get nodes I see that my master node is on 1.12 and my minion node is on 1.13.
Will look into modifying the k8sMaster.sh to migrate to 1.13 and start all over again with the lesson.
0 -
Hello,
To answer your second post first, you will need to use the same version of software on both nodes. Were you using 1.12.1 on both nodes before, when the join failed? If you add -v=10 to the join command you can see even more output, which may help find the error.
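For reference, a join command with extra verbosity would look roughly like this (the IP, token and hash are placeholders, use the values printed by your kubeadm init):
sudo kubeadm join xxx.xxx.xxx.xxx:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> -v=10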
Please show the kubeadm init command on the master node and the kubeadm join command on the worker node to help troubleshoot the issue. Also, any errors in the init or join output would help.
Regards,
0 -
I encountered a very similar issue on Tuesday afternoon, with 1.12.1 installed on both master and worker nodes. It is interesting: 1.13 came out on Monday, and on Monday the installation still worked fine for 1.12.1 nodes and I was able to join the cluster, yet by Tuesday it was broken.
My error output was showing kubelet-config-1.13 though.
Today, similar attempt errored out with the kubelet-config-1.12 in the same output format.
Things are being changed in K8s...
Will try again this weekend and see if it got resolved, or I will troubleshoot it.
-Chris
0 -
I fixed it by starting over using 1.13 for master and minion.
0 -
@jtronson ,
I hope the labs work as expected on 1.13. However, if you run into any issues and need to start over with 1.12.1, I posted a fix which resolves the "kubeadm join" permission error when getting the "kubelet-config-1.12" ConfigMap from the "kube-system" namespace.
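For context, that error means the bootstrap token user is not allowed to read that ConfigMap. Purely as an illustration of the kind of access that is missing (this is not necessarily how the repo below fixes it, the role name here is made up, and it assumes the ConfigMap actually exists on the master), something like the following could be run on the master:
kubectl -n kube-system create role kubelet-config-reader --verb=get --resource=configmaps --resource-name=kubelet-config-1.12     # allow reading just that ConfigMap
kubectl -n kube-system create rolebinding kubelet-config-reader --role=kubelet-config-reader --group=system:bootstrappers:kubeadm:default-node-token     # grant it to the default kubeadm bootstrap token group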
https://github.com/chris-pok/k8s-1.12.1.git
Good luck!
-Chris
0 -
There is a feature (which I just found) where the init process looks for and uses the newest version of software, regardless of what was installed. Once the 1.13 software was released, the control plane no longer matched. This issue would come up every time a new version of software is released.
The fix is to include the version in the command:
kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
The process would also work if all software were on 1.13, until 1.14 is available.
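As an illustration only (adjust the version to whatever the lab currently targets), you can also pin the packages on both nodes so apt does not move them to a newer release behind your back:
sudo apt-get install -y kubeadm=1.12.1-00 kubelet=1.12.1-00 kubectl=1.12.1-00
sudo apt-mark hold kubeadm kubelet kubectl     # keep apt from upgrading these packages
kubeadm version -o short     # confirm both nodes report the same version before init/join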
Regards,
0 -
I believe this error is due to the Lab being written poorly. I was able to fix it by realising that there is a hidden command in the steps:
The problem occurs because expectations are set earlier in the guide, where multi-line blue output is shown for the commands being run. In the particular steps shown in the image, it at first looks like a multi-line output containing the contents of a runnable bash script. However, it is actually a one-line output followed by ANOTHER COMMAND that you have to run, shown in the red box. It worked when I ran the command:
sudo apt-get update && sudo apt-get upgrade -y
Soooooooooooo not happy with this. Sooooooooooo very not happy. My first impressions on the quality of this course are 2/10.
0 -
I finally decided to use the suggested approach: I created 2 VMs on Google Cloud and configured the cluster. It works just fine, but after deploying the first pod (basic.yaml) I'm not able to connect to it, even after configuring the containerPort.
Doing a
kubectl get pod -o wide
I get:
NAME       READY   STATUS    RESTARTS   AGE   IP                NODE           NOMINATED NODE   READINESS GATES
basicpod   1/1     Running   0          36m   192.168.184.130   lft-minion-1   <none>           <none>
And doing a
kubectl describe pod basicpod
I get:
Containers:
  webcont:
    Container ID:   docker://f7acff6efc34df9cb328824f1f74c1a14596c127fd64d800658d63794145463f
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:5d32f60db294b5deb55d078cd4feb410ad88e6fe77500c87d3970eca97f54dba
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 20 Dec 2018 10:24:13 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9zqf4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-9zqf4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9zqf4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  35m   default-scheduler      Successfully assigned default/basicpod to lft-minion-1
  Normal  Pulling    35m   kubelet, lft-minion-1  pulling image "nginx"
  Normal  Pulled     35m   kubelet, lft-minion-1  Successfully pulled image "nginx"
  Normal  Created    35m   kubelet, lft-minion-1  Created container
  Normal  Started    35m   kubelet, lft-minion-1  Started container
Then, everything seems OK, but I can't connect to 192.168.184.130 when doing
curl http://192.168.184.130
0 -
Hi @guglielmino ,
There seems to be an issue with your node-to-node networking if you cannot curl from the master to the minion.
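A rough way to narrow it down (assuming the network plugin installed by the lab scripts, e.g. Calico):
kubectl get pods -n kube-system -o wide     # the network plugin pods should be Running on both nodes
curl http://192.168.184.130     # run this on lft-minion-1 itself; if it works locally, the problem is the routing between nodes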
-Chris
0 -
Hi @kozdog ,
In step 3 of the Lab, running cat against the k8sSecond.sh script provides an output of the entire script, all the commands that the script will run through in order to initialize Kubernetes on the second node. The lines below the cat line are just a snippet of that output.
In step 4, however, you are running the shell script, which then executes all the commands in it on your behalf, including the one highlighted above.
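In other words, roughly (add sudo if your copy of the script needs it):
cat k8sSecond.sh     # step 3: only prints the script, nothing is executed
bash k8sSecond.sh    # step 4: actually runs every command in the script, including the apt-get update/upgrade line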
Regards,
-Chris