
Kubernetes Fundamentals: Lab 3.1. Install Kubernetes: Username/Password Authentication Failed

johngeorge142 Posts: 6
edited August 2022 in LFS258 Class Forum

When I try to run this command I get an error: wget https://training.linuxfoundation.org/cm/LFS258/LFS258_V2022-03-22_SOLUTIONS.tar.xz \ --user=xxxx --password=xxxx

--2022-08-01 17:10:13-- https://training.linuxfoundation.org/cm/LFS258/LFS258_V2022-03-22_SOLUTIONS.tar.xz
Resolving training.linuxfoundation.org (training.linuxfoundation.org)...
Connecting to training.linuxfoundation.org (training.linuxfoundation.org)||:443... connected.
HTTP request sent, awaiting response... 401 Restricted

Username/Password Authentication Failed.
--2022-08-01 17:10:13-- http:// --user=xxxx/
Resolving --user=xxxx ( --user=xxxx)... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ‘ --user=xxxx’

Best Answer


  • chrispokorni
    chrispokorni Posts: 2,220

    Hi @johngeorge142,

    Please remove the backslash from your command, running it as a single line: wget https://training.linuxfoundation.org/cm/LFS258/LFS258_V2022-03-22_SOLUTIONS.tar.xz --user=xxxx --password=xxxx. The stray backslash made wget treat " --user=xxxx" as a second URL, which is what the second error shows. Also pay close attention to the username and password, as they are case sensitive.

    In addition, please familiarize yourself with best practices around sharing passwords in public forums and their security implications. Knowing when it is acceptable to share sensitive information this way is critical for novices and experts alike in the IT field.


  • johngeorge142

    @chrispokorni there is no LFS258.pem file for me to chmod 400, so I cannot SSH in.

    [student@laptop ~]$ chmod 400 LFS258.pem
    [student@laptop ~]$ ssh -i LFS258.pem student@

  • chrispokorni
    chrispokorni Posts: 2,220

    Hi @johngeorge142,

    The private/public key pair needs to be created prior to accessing the VM, and the public key needs to be loaded on the VM while it is being provisioned.

    Chapter 1 includes video guides to help you provision your cloud VM instances and generate the desired key pairs.
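For reference, a minimal sketch of what that key setup can look like. The file name LFS258 is just an example here, and the exact GCP provisioning workflow is covered in the Chapter 1 videos:

```shell
# Generate a key pair locally (names are illustrative, not from the lab guide).
# -N '' sets an empty passphrase, -q suppresses the banner output.
ssh-keygen -t rsa -b 4096 -f LFS258 -N '' -q

# The private key must be readable only by you before ssh will accept it.
chmod 400 LFS258

# LFS258.pub is the public key to load onto the VM while it is provisioned;
# LFS258 is the private key you later pass to: ssh -i LFS258 student@<VM_IP>
ls LFS258 LFS258.pub
```

The .pem file mentioned in the lab is simply a private key like the one generated above; it only exists after you create it, which is why chmod could not find it.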


  • johngeorge142

    @chrispokorni Thanks for the response. I was able to set up the cp and worker nodes using GCP. Unfortunately, I was unable to find the LFS258.pem file when I extracted the tar file on my cp and worker nodes (step 1: [student@laptop ~]$ chmod 400 LFS258.pem):
    error: chmod: cannot access 'LFS258.pem': No such file or directory

    I was able to do most of Lab Exercise 3.1, except I could not move past step 16. I ran the following command and got this error:
    kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out # Save output for future review

    W0816 18:17:30.189136 14102 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "podSubnet"
    Found multiple CRI sockets, please use --cri-socket to select one: /run/containerd/containerd.sock, /var/run/crio/crio.sock
    To see the stack trace of this error execute with --v=5 or higher
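As a side note on the "unknown field podSubnet" warning in that output: it typically appears when podSubnet is placed at the top level of the file, whereas in a ClusterConfiguration it belongs under networking. A sketch of the expected shape (the CIDR shown is only an example; use the value from your lab guide):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.22.0       # example; match the version used in your lab
networking:
  podSubnet: 192.168.0.0/16     # example CIDR; take the value from the lab guide
```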

  • johngeorge142

    @chrispokorni I also get this error when I run other commands mainly this one: kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out

    Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock
    To see the stack trace of this error execute with --v=5 or higher

  • johngeorge142

    @chrispokorni I have also outlined the steps I took, from step 16 through step 21, for Lab 3.1:

    step 16: kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out (does not work)
    Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock
    To see the stack trace of this error execute with --v=5 or higher

    step 17: (does not work)
    student@cp:~/LFS258/SOLUTIONS/s_03$ mkdir -p $HOME/.kube
    student@cp:~/LFS258/SOLUTIONS/s_03$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
    student@cp:~/LFS258/SOLUTIONS/s_03$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    chown: cannot access '/home/student/.kube/config': No such file or directory
    student@cp:~/LFS258/SOLUTIONS/s_03$ less .kube/config
    .kube/config: No such file or directory

    step18: (does not work)
    student@cp:~/LFS258/SOLUTIONS/s_03$ sudo cp /root/calico.yaml .
    student@cp:~/LFS258/SOLUTIONS/s_03$ kubectl apply -f calico.yaml
    The connection to the server localhost:8080 was refused - did you specify the right host or port?

    Step 19: (works)
    student@cp:~/LFS258/SOLUTIONS/s_03$ sudo apt-get install bash-completion -y

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    bash-completion is already the newest version (1:2.10-1ubuntu1).
    The following packages were automatically installed and are no longer required:
    libatasmart4 libblockdev-fs2 libblockdev-loop2 libblockdev-part-err2 libblockdev-part2
    libblockdev-swap2 libblockdev-utils2 libblockdev2 libmbim-glib4 libmbim-proxy libmm-glib0
    libnspr4 libnss3 libnuma1 libparted-fs-resize0 libqmi-glib5 libqmi-proxy libudisks2-0
    usb-modeswitch usb-modeswitch-data
    Use 'sudo apt autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
    student@cp:~/LFS258/SOLUTIONS/s_03$ source <(kubectl completion bash)
    student@cp:~/LFS258/SOLUTIONS/s_03$ echo "source <(kubectl completion bash)" >> $HOME/.bashrc

    step 20: (tabs do not work)
    student@cp:~$ kubectl des n cp
    student@cp:~$ kubectl -n kube-s g po

    step 21: (works)
    student@cp:~/LFS258/SOLUTIONS/s_03$ sudo kubeadm config print init-defaults

    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      imagePullPolicy: IfNotPresent
      name: node
      taints: null
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: 1.22.0
    networking:
      dnsDomain: cluster.local
    scheduler: {}

  • chrispokorni
    chrispokorni Posts: 2,220

    Hi @johngeorge142,

    From the first error at step 16 it seems there are two container runtimes installed on your node: docker and cri-o. If you revisit step 6, it instructs you to install one or the other. With both runtimes present the installation fails, because kubelet does not have the configuration needed to select one of the two runtimes. The subsequent errors all build on top of this first one.

    One way to fix this is to remove one of the runtimes, but that approach is not guaranteed to work, as it may leave residual configuration behind.
    A cleaner approach is to provision a new VM and attempt the installation again, this time paying close attention to the instructions in the lab guide and installing only one of the two runtimes presented in step 6.
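If rebuilding is not immediately an option, kubeadm can also be told explicitly which socket to use, either via the --cri-socket flag named in the error message or in kubeadm-config.yaml. A sketch, assuming cri-o is the runtime being kept (pick the socket path for whichever runtime you actually want):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/crio/crio.sock   # socket of the one runtime to use
```

This only resolves the socket ambiguity; the leftover runtime should still be removed eventually.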

    Regards,

