[Lab 3.2][step 7] The cluster-info ConfigMap does not yet contain a JWS signature for token ID

Hi,

I recently started the LFS258 training and I am stuck at step 7 of lab 3.2.
I deployed my lab on VirtualBox and currently have two Ubuntu 18.04 VMs, each with two network adapters (Host-Only + NAT).
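
Each VM has a static IP on the Host-Only adapter, set up with netplan along these lines (reproduced from memory; enp0s3 carries the Host-Only address per the kubeadm logs below, the NAT interface name is a guess):

# /etc/netplan/01-netcfg.yaml on the master (sketch)
network:
  version: 2
  ethernets:
    enp0s3:                          # Host-Only adapter, static IP
      addresses: [192.168.56.100/24]
    enp0s8:                          # NAT adapter (name assumed), DHCP
      dhcp4: true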

My cluster was deployed on the master node with the following configuration:

root@master:~# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.100
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16
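
The k8smaster name used as controlPlaneEndpoint resolves via /etc/hosts on both nodes, roughly as follows (the alias follows the lab, the IP is the master's Host-Only address):

# /etc/hosts entry on master and worker
192.168.56.100  k8smaster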

The kubeadm init and kubectl apply commands completed successfully (the init output is attached).
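
For reference, those commands were roughly the following, as in the lab (reproduced from memory, so exact flags and the calico.yaml path may differ):

sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
kubectl apply -f calico.yaml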

I completed the steps in 3.2 to generate the CA certificate hash and a new token.
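
The discovery hash was generated with the usual openssl pipeline from the lab and the Kubernetes docs, something like:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'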

student@master:~$ sudo kubeadm --v=5 token create --print-join-command
I0510 08:57:08.914031 27949 token.go:121] [token] validating mixed arguments
I0510 08:57:08.914123 27949 token.go:130] [token] getting Clientsets from kubeconfig file
I0510 08:57:08.914149 27949 cmdutil.go:79] Using kubeconfig file: /home/student/.kube/config
I0510 08:57:08.919229 27949 token.go:243] [token] loading configurations
I0510 08:57:08.920355 27949 interface.go:400] Looking for default routes with IPv4 addresses
I0510 08:57:08.921316 27949 interface.go:405] Default route transits interface "enp0s3"
I0510 08:57:08.922168 27949 interface.go:208] Interface enp0s3 is up
I0510 08:57:08.923092 27949 interface.go:256] Interface "enp0s3" has 2 addresses :[192.168.56.100/24 fe80::a00:27ff:fef8:907f/64].
I0510 08:57:08.923849 27949 interface.go:223] Checking addr 192.168.56.100/24.
I0510 08:57:08.924634 27949 interface.go:230] IP found 192.168.56.100
I0510 08:57:08.925321 27949 interface.go:262] Found valid IPv4 address 192.168.56.100 for interface "enp0s3".
I0510 08:57:08.925966 27949 interface.go:411] Found active IP 192.168.56.100
W0510 08:57:08.926707 27949 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0510 08:57:08.927506 27949 token.go:255] [token] creating token
kubeadm join k8smaster:6443 --token 8j77yo.vrv09tpb6wqpf2p9 --discovery-token-ca-cert-hash sha256:d21ca296b30091b304dfa03fb3b600e32eb67cef13c3f51badd835e25dfad1ba

student@master:~$ kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION   EXTRA GROUPS
8j77yo.vrv09tpb6wqpf2p9   23h   2020-05-11T08:57:08Z   authentication,signing                 system:bootstrappers:kubeadm:default-node-token
student@master:~$

However, when I try to join the cluster from the worker node, I get the error below:

root@worker:/home/student# kubeadm join --v=5 k8smaster:6443 --token 8j77yo.vrv09tpb6wqpf2p9 --discovery-token-ca-cert-hash sha256:d21ca296b30091b304dfa03fb3b600e32eb67cef13c3f51badd835e25dfad1ba
W0510 08:59:12.753901 1121 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0510 08:59:12.754736 1121 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0510 08:59:12.755402 1121 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0510 08:59:12.756629 1121 preflight.go:90] [preflight] Running general checks
I0510 08:59:12.757284 1121 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0510 08:59:12.758224 1121 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0510 08:59:12.758814 1121 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0510 08:59:12.759434 1121 checks.go:102] validating the container runtime
I0510 08:59:12.854290 1121 checks.go:128] validating if the service is enabled and active
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0510 08:59:13.009580 1121 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0510 08:59:13.009770 1121 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
[...]
I0510 08:59:13.139333 1121 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0510 08:59:13.141594 1121 checks.go:618] validating kubelet version
I0510 08:59:13.237892 1121 checks.go:128] validating if the service is enabled and active
I0510 08:59:13.262185 1121 checks.go:201] validating availability of port 10250
I0510 08:59:13.262628 1121 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0510 08:59:13.262712 1121 checks.go:432] validating if the connectivity type is via proxy or direct
I0510 08:59:13.262745 1121 join.go:441] [preflight] Discovering cluster-info
I0510 08:59:13.262774 1121 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "k8smaster:6443"
I0510 08:59:13.277069 1121 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "8j77yo", will try again
I0510 08:59:13.283618 1121 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "8j77yo", will try again
^C
root@worker:/home/student#
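
For what it's worth, the ConfigMap the error refers to lives in the kube-public namespace on the master; it normally gains a jws-kubeconfig-<token-id> entry once the token is signed, and it can be inspected with something like:

kubectl get configmap cluster-info -n kube-public -o yaml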

I cannot find any related information in journalctl; I only see these errors repeating in a loop:

master:

May 10 09:20:46 master kubelet[26506]: E0510 09:20:46.920201 26506 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: proto: Lease: illegal tag -633754067 (wire type 29289705834)
May 10 09:20:53 master kubelet[26506]: E0510 09:20:53.929496 26506 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: proto: Lease: illegal tag -633754067 (wire type 29289705834)
May 10 09:21:00 master kubelet[26506]: E0510 09:21:00.746098 26506 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: proto: VolumeMount: wiretype end group for non-group
May 10 09:21:00 master kubelet[26506]: E0510 09:21:00.938380 26506 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: proto: Lease: illegal tag -633754067 (wire type 29289705834)

worker:

May 10 08:57:28 worker systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
May 10 08:57:28 worker systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.

Could you please help me troubleshoot further?

Comments

  • serewicz

    Hello,

    You mention you have two network adapters. There are many possible reasons the join is not working; I would first try with only one adapter to see if it is a network configuration error.
    I remember seeing the JWS token error long ago, around version 1.6, when the master node had too few resources. Does your master VM have enough resources to run all the pods, i.e. 2 CPUs and 8 GB of memory? Does your worker have the same?
    Do all the pods on the master show a Ready status before you try to join the worker?
    Does the top command show you have available CPU and memory?

    If resources are not the issue, then I would check the following:
    When I see lease errors my first thought is a DHCP issue. If you hard-code the IP addresses, does the issue persist?
    Another thing to check: do your VM IP addresses overlap the pod network of 192.168.0.0/16? They should not.
    Does Wireshark or tcpdump show the join leaving the worker, and does it show it arriving at the master?
    Have you configured each interface to be fully promiscuous? By default not all traffic is allowed between VMs when using VirtualBox. (A rough sketch of both checks is below.)
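
    For example, something along these lines (the interface and VM names are just placeholders for the sketch):

    # On the master, watch whether the join traffic arrives on the API server port
    # while kubeadm join runs on the worker:
    sudo tcpdump -ni enp0s3 'tcp port 6443'

    # Promiscuous mode can also be set per adapter from the VirtualBox CLI
    # (run on the host while the VM is powered off):
    VBoxManage modifyvm "master" --nicpromisc1 allow-all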

    Regards,

  • Chup4Chups

    Thanks for your feedback and advice, it was very helpful.
    I went through your points one by one, and you were right about the VM IPs overlapping the pod network.

    master

    • 8 GB RAM / 4 CPU
    • Promiscuous Mode for both adapters: Allow All

    worker

    • 8 GB RAM / 2 CPU
    • Promiscuous Mode for both adapters: Allow All

    I also checked:

    • available CPU / memory: everything fine
    • I already configured static IPs for my VMs
    • I could SSH from master to worker and from worker to master
    • I could telnet to k8smaster 6443 from the worker

    I tried to list the pods and found this error:

    student@master:~$ kubectl get pods --all-namespaces
    Error from server: proto: VolumeMount: wiretype end group for non-group

    To remove the network overlap, I reset my cluster and changed the Calico config to:

       - name: IP_AUTODETECTION_METHOD
         value: "interface=enp0s3"
       - name: CALICO_IPV4POOL_CIDR
         value: "10.244.0.0/16"
    

    and kubeadm podSubnet config to:

    networking:
      podSubnet: 10.244.0.0/16
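
    The reset and re-init were roughly the same commands as the first attempt (reproduced from memory, so file names may differ):

    sudo kubeadm reset -f
    sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
    # copy the new admin.conf into ~/.kube/config again, then re-apply the edited Calico manifest
    kubectl apply -f calico.yaml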
    

    After the kubeadm init command, I could see my pods:

    student@master:~$ kubectl get pods --all-namespaces
    NAMESPACE     NAME                             READY   STATUS     RESTARTS   AGE
    kube-system   calico-node-p7c8h                0/1     Init:0/3   0          40s
    kube-system   etcd-master                      1/1     Running    34         7m37s
    kube-system   kube-apiserver-master            1/1     Running    40         7m37s
    kube-system   kube-controller-manager-master   1/1     Running    2          7m37s
    kube-system   kube-proxy-wvtv5                 1/1     Running    0          40s
    kube-system   kube-scheduler-master            1/1     Running    12         7m37s

    And I was able to join the cluster from the worker node successfully.

    Thanks a lot for your help!
