
GCE - configmaps "kubelet-config-1.12" is forbidden

davidurs1 Posts: 1
edited December 2018 in LFD259 Class Forum

Hi there,

I'm trying to set up a simple cluster following the examples in Lab 2.1.

I have created 2 VM instances.

I ran the master script on one VM and can see the master node when running 'kubectl get node'.

I then ran k8sSecond.sh on the second VM and received the following log output:

[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "10.154.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.154.0.2:6443"
[discovery] Requesting info from "https://10.154.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.154.0.2:6443"
[discovery] Successfully established connection with API Server "10.154.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:q605qp" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
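
(Side note: the IPVS warning at the top just means those ip_vs kernel modules are not loaded; kube-proxy falls back to iptables mode, so it seems unrelated to the failure below. If the IPVS proxier were wanted, loading the listed modules would look roughly like this, assuming the stock Ubuntu kernel ships them:)

sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
lsmod | grep -E 'ip_vs|nf_conntrack'    # verify the modules are now loaded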

I added a firewall rule to accept all protocols/ports for my service account:

Name: kubeadmin | Type: Ingress | Targets: Apply to all | Filters: Service account: XX... | Protocols / ports: all | Action: Allow | Priority: 1

However, I cannot see the minion node on the first VM.

Running the get node command on the second VM returns:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Is there anything else I need to do?

Thanks

Comments

  • Hi, running kubectl is expected to work on the master node, but not on the second node, because that is how we are setting it up.
    The forbidden access issue happens because kubeadm init, for some unexpected reason, downloads and deploys the latest release, 1.13, even though we specifically downloaded 1.12 in a previous step, so the 1.12 join on the worker is not authorized to read the "kubelet-config-1.12" ConfigMap it is looking for.
    Try running kubeadm init again with the version pinned:
    kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
    this should fix the problem.
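    Once the pinned init finishes and kubectl is configured on the master, a quick sanity check (these commands are not part of the lab script) should show the expected versions:
    kubeadm version -o short     # version of the kubeadm binary itself
    kubectl get nodes            # the VERSION column should report v1.12.1 after the pinned init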
    Regards,
    -Chris

  • oliveriom Posts: 5
    edited December 2018

    I am running into the same issue on a freshly installed Ubuntu 16.04 on VirtualBox.

    Your k8sMaster.sh script starts Kubernetes version 1.13 and needs to be updated.
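
    A quick way to confirm which version the script brought up, assuming kubectl is already configured on the master:

    kubectl version --short     # the Server Version line reports v1.13.x when the init was not pinned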

  • fcioanca Posts: 1,887

    @oliveriom We are working on updating the labs to v1.13, but it does take time. In the meantime, please use the solution provided in the forum to stay on v1.12.1, and complete the labs as they are now.

  • Thanks for the answer @fcioanca. I managed to get it to work with the --kubernetes-version 1.12.1 option after a while. Please keep in mind that people like me, who have not worked with kubeadm before, are taking this course.

    Running it with that option initially gave me other errors (a bunch of "*.yaml already exists" errors). So, for anybody else running into this issue: after initially running the k8sMaster.sh that started 1.13, the following four lines fixed it for me:

    sudo kubeadm reset
    sudo kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  • ... forgot the fifth line: kubectl apply -f calico.yaml

  • @oliveriom
    These steps have been tested and were working properly up until the 1.13 release. Since then, the additional --kubernetes-version 1.12.1 option needs to be added when running kubeadm init. The reset cleared all of the configuration set up by the k8sMaster.sh script, and allowed the steps inside the script to be run again manually without producing the duplicate yaml file errors.
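    Putting those manual steps together, the recovery on the master looks roughly like this (assuming calico.yaml is still in your working directory from the earlier setup):
    sudo kubeadm reset
    sudo kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f calico.yaml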
    Regards,
    -Chris

  • Thank you for this @oliveriom. I was encountering an error after applying the kubeadm reset -> kubeadm init --kubernetes-version 1.12.1 steps:

    ronnie@ckad-1:~$ kubectl get nodes
    Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
    

    but it looks like all I had to do was manually run the remaining steps in k8sMaster.sh like you did (copying admin.conf and changing its ownership), and now it works.
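
    For anyone else hitting the same x509 error: the old $HOME/.kube/config still points at the certificates from the first init, so it has to be replaced with the freshly generated admin.conf, roughly like this (the rm just avoids the cp -i overwrite prompt):

    rm -f $HOME/.kube/config
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl get nodes     # should now reach the new API server without the certificate error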
