[Lab 3.2][step 7] The cluster-info ConfigMap does not yet contain a JWS signature for token ID
Hi,
I recently started the LFS258 training and I am stuck at step 7 of lab 3.2.
I deployed my lab on VirtualBox and currently have two Ubuntu 18.04 VMs, each with two network adapters (Host-Only + NAT).
My cluster was deployed on the master node with the following configuration:
root@master:~# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.100
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16
The kubeadm init and apply commands completed successfully (you can find the init output in the attachment).
I completed the steps in 3.2 to create the CA cert hash and a new token:
student@master:~$ sudo kubeadm --v=5 token create --print-join-command
I0510 08:57:08.914031 27949 token.go:121] [token] validating mixed arguments
I0510 08:57:08.914123 27949 token.go:130] [token] getting Clientsets from kubeconfig file
I0510 08:57:08.914149 27949 cmdutil.go:79] Using kubeconfig file: /home/student/.kube/config
I0510 08:57:08.919229 27949 token.go:243] [token] loading configurations
I0510 08:57:08.920355 27949 interface.go:400] Looking for default routes with IPv4 addresses
I0510 08:57:08.921316 27949 interface.go:405] Default route transits interface "enp0s3"
I0510 08:57:08.922168 27949 interface.go:208] Interface enp0s3 is up
I0510 08:57:08.923092 27949 interface.go:256] Interface "enp0s3" has 2 addresses :[192.168.56.100/24 fe80::a00:27ff:fef8:907f/64].
I0510 08:57:08.923849 27949 interface.go:223] Checking addr 192.168.56.100/24.
I0510 08:57:08.924634 27949 interface.go:230] IP found 192.168.56.100
I0510 08:57:08.925321 27949 interface.go:262] Found valid IPv4 address 192.168.56.100 for interface "enp0s3".
I0510 08:57:08.925966 27949 interface.go:411] Found active IP 192.168.56.100
W0510 08:57:08.926707 27949 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0510 08:57:08.927506 27949 token.go:255] [token] creating token
kubeadm join k8smaster:6443 --token 8j77yo.vrv09tpb6wqpf2p9 --discovery-token-ca-cert-hash sha256:d21ca296b30091b304dfa03fb3b600e32eb67cef13c3f51badd835e25dfad1ba
student@master:~$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
8j77yo.vrv09tpb6wqpf2p9 23h 2020-05-11T08:57:08Z authentication,signing system:bootstrappers:kubeadm:default-node-token
student@master:~$
However, when I try to join the cluster from the worker node, I get the error below:
root@worker:/home/student# kubeadm join --v=5 k8smaster:6443 --token 8j77yo.vrv09tpb6wqpf2p9 --discovery-token-ca-cert-hash sha256:d21ca296b30091b304dfa03fb3b600e32eb67cef13c3f51badd835e25dfad1ba
W0510 08:59:12.753901 1121 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0510 08:59:12.754736 1121 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0510 08:59:12.755402 1121 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0510 08:59:12.756629 1121 preflight.go:90] [preflight] Running general checks
I0510 08:59:12.757284 1121 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0510 08:59:12.758224 1121 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0510 08:59:12.758814 1121 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0510 08:59:12.759434 1121 checks.go:102] validating the container runtime
I0510 08:59:12.854290 1121 checks.go:128] validating if the service is enabled and active
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0510 08:59:13.009580 1121 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0510 08:59:13.009770 1121 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
[...]
I0510 08:59:13.139333 1121 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0510 08:59:13.141594 1121 checks.go:618] validating kubelet version
I0510 08:59:13.237892 1121 checks.go:128] validating if the service is enabled and active
I0510 08:59:13.262185 1121 checks.go:201] validating availability of port 10250
I0510 08:59:13.262628 1121 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0510 08:59:13.262712 1121 checks.go:432] validating if the connectivity type is via proxy or direct
I0510 08:59:13.262745 1121 join.go:441] [preflight] Discovering cluster-info
I0510 08:59:13.262774 1121 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "k8smaster:6443"
I0510 08:59:13.277069 1121 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "8j77yo", will try again
I0510 08:59:13.283618 1121 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "8j77yo", will try again
^C
root@worker:/home/student#
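For context on what the worker is actually polling for: the signature is stored as a data key in the cluster-info ConfigMap (in the kube-public namespace), named after the public token ID, i.e. the part of the token before the dot. A minimal sketch of that mapping, assuming the "jws-kubeconfig-" key prefix used by the bootstrap-signer controller:

```python
# Sketch: map a kubeadm bootstrap token to the ConfigMap data key the
# joining worker polls for. The "jws-kubeconfig-" prefix is an assumption
# based on kubeadm's bootstrap-signer internals.

def jws_key_for_token(token: str) -> str:
    """A bootstrap token has the form <id>.<secret>; only the 6-char ID is
    public and names the signature entry in kube-public/cluster-info."""
    token_id, _, secret = token.partition(".")
    if len(token_id) != 6 or not secret:
        raise ValueError("expected token in the form <6-char-id>.<secret>")
    return f"jws-kubeconfig-{token_id}"

print(jws_key_for_token("8j77yo.vrv09tpb6wqpf2p9"))  # jws-kubeconfig-8j77yo
```

If that key is missing from the ConfigMap on the master, the controller-manager never signed the token, which matches the "does not yet contain a JWS signature" loop above.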
I cannot find any related information in journalctl; I only see these errors in a loop:
master:
May 10 09:20:46 master kubelet[26506]: E0510 09:20:46.920201 26506 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: proto: Lease: illegal tag -633754067 (wire type 29289705834)
May 10 09:20:53 master kubelet[26506]: E0510 09:20:53.929496 26506 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: proto: Lease: illegal tag -633754067 (wire type 29289705834)
May 10 09:21:00 master kubelet[26506]: E0510 09:21:00.746098 26506 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: proto: VolumeMount: wiretype end group for non-group
May 10 09:21:00 master kubelet[26506]: E0510 09:21:00.938380 26506 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: proto: Lease: illegal tag -633754067 (wire type 29289705834)
worker:
May 10 08:57:28 worker systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
May 10 08:57:28 worker systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Could you please help me to troubleshoot further ?
Comments
-
Hello,
You mention you have two network adapters. There are many possible reasons the join is not working. I would first try with only one adapter to see if it is a network configuration error.
I remember seeing the JWS token error long ago, version 1.6 or so, when the master node had too few resources. Does your master VM have enough resources to run all the pods? 2 CPUs and 8 GB of memory? Does your worker have the same?
Do all the pods on the master show a Ready status before you try to join the worker?
Does the top command show you have available CPU and memory? If you are not having resource issues, then I would next check:
When I see lease errors my first thought is a DHCP issue. If you hard-code the IP addresses, does the issue persist?
Another thing to check: do your VM IP addresses overlap the pod network of 192.168.0.0/16? They should not.
Does wireshark or tcpdump show the join leaving the worker and does it show it entering the master?
Have you configured each interface to be fully promiscuous? By default not all traffic is allowed between VMs when using VirtualBox.
Regards,
-
Thanks for your feedback and your advice, it was very helpful.
I checked all your comments one by one, and you were right about the VM IP overlap with the pod network.
master
- 8 GB RAM / 4 CPU
- Promiscuous Mode for both adapter: Allow All
worker
- 8 GB RAM / 2 CPU
- Promiscuous Mode for both adapter: Allow All
I also checked:
- available CPU / memory: everything fine
- I already configured static IPs for my VMs
- I could ssh from master to worker, and worker to master
- I could telnet k8smaster 6443 from worker
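The telnet check above can also be scripted; here is a small sketch of the same "can the worker reach the API server?" test using a plain TCP connection (host and port taken from this thread):

```python
import socket

# Sketch: TCP reachability check, equivalent to `telnet k8smaster 6443`.
# The hostname "k8smaster" and port 6443 come from this thread's setup.

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# From the worker, this should print True once the API server is up:
# print(can_reach("k8smaster", 6443))
```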
I tried to list pods and I found this error:
student@master:~$ kubectl get pods --all-namespaces
Error from server: proto: VolumeMount: wiretype end group for non-group
In order to fix the network overlap, I reset my cluster and changed the calico config to:
- name: IP_AUTODETECTION_METHOD
  value: "interface=enp0s3"
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
and kubeadm podSubnet config to:
networking:
  podSubnet: 10.244.0.0/16
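To double-check that the new pod CIDR no longer overlaps the host-only network, a quick sketch using Python's ipaddress module (subnets taken from this thread):

```python
import ipaddress

# Sketch: verify whether the VMs' host-only subnet overlaps a pod CIDR.

def overlaps(host_cidr: str, pod_cidr: str) -> bool:
    """True if the two networks share any addresses."""
    return ipaddress.ip_network(host_cidr).overlaps(ipaddress.ip_network(pod_cidr))

# The host-only network sits inside the default Calico pool:
print(overlaps("192.168.56.0/24", "192.168.0.0/16"))  # True  -> conflict
# The replacement pod CIDR avoids it:
print(overlaps("192.168.56.0/24", "10.244.0.0/16"))   # False -> OK
```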
After the kubeadm init command, I could see my pods:
student@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-p7c8h 0/1 Init:0/3 0 40s
kube-system etcd-master 1/1 Running 34 7m37s
kube-system kube-apiserver-master 1/1 Running 40 7m37s
kube-system kube-controller-manager-master 1/1 Running 2 7m37s
kube-system kube-proxy-wvtv5 1/1 Running 0 40s
kube-system kube-scheduler-master 1/1 Running 12 7m37s
And I joined the cluster from the worker node successfully.
Thanks a lot for your help!