Welcome to the Linux Foundation Forum!

Lab 3.1 Install Kubernetes - Registry looks like it is down

Hello, I am trying to set up the kubeadm control plane on a VM, not on GCE or AWS:

kubeadm init --config=kubeadm-crio.yaml | tee kubeadm-crio.out

I got this error:

kubeadm init -v=5 --config=kube/kubeadm-crio.yaml | tee kube/kubeadm-crio-init.out

I0129 16:42:46.439739    4510 initconfiguration.go:247] loading configuration from "kube/kubeadm-crio.yaml"
I0129 16:42:46.442130    4510 interface.go:431] Looking for default routes with IPv4 addresses
I0129 16:42:46.442187    4510 interface.go:436] Default route transits interface "enp1s0"
I0129 16:42:46.442297    4510 interface.go:208] Interface enp1s0 is up
I0129 16:42:46.442450    4510 interface.go:256] Interface "enp1s0" has 2 addresses :[192.168.122.10/24 fe80::5054:ff:fe24:504/64].
I0129 16:42:46.442569    4510 interface.go:223] Checking addr  192.168.122.10/24.
I0129 16:42:46.442731    4510 interface.go:230] IP found 192.168.122.10
I0129 16:42:46.442883    4510 interface.go:262] Found valid IPv4 address 192.168.122.10 for interface "enp1s0".
I0129 16:42:46.442969    4510 interface.go:442] Found active IP 192.168.122.10 
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
I0129 16:42:46.448842    4510 checks.go:577] validating Kubernetes and kubeadm version
I0129 16:42:46.448899    4510 checks.go:170] validating if the firewall is enabled and active
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0129 16:42:46.460936    4510 checks.go:205] validating availability of port 6443
I0129 16:42:46.462526    4510 checks.go:205] validating availability of port 10259
I0129 16:42:46.462705    4510 checks.go:205] validating availability of port 10257
I0129 16:42:46.462964    4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0129 16:42:46.463095    4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0129 16:42:46.463223    4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0129 16:42:46.463332    4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0129 16:42:46.463678    4510 checks.go:432] validating if the connectivity type is via proxy or direct
I0129 16:42:46.463712    4510 checks.go:471] validating http connectivity to first IP address in the CIDR
I0129 16:42:46.463745    4510 checks.go:471] validating http connectivity to first IP address in the CIDR
I0129 16:42:46.463771    4510 checks.go:106] validating the container runtime
I0129 16:42:46.478017    4510 checks.go:372] validating the presence of executable crictl
I0129 16:42:46.478074    4510 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0129 16:42:46.478115    4510 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0129 16:42:46.478132    4510 checks.go:649] validating whether swap is enabled or not
I0129 16:42:46.478150    4510 checks.go:372] validating the presence of executable conntrack
I0129 16:42:46.478160    4510 checks.go:372] validating the presence of executable ip
I0129 16:42:46.478166    4510 checks.go:372] validating the presence of executable iptables
I0129 16:42:46.478174    4510 checks.go:372] validating the presence of executable mount
I0129 16:42:46.478187    4510 checks.go:372] validating the presence of executable nsenter
I0129 16:42:46.478201    4510 checks.go:372] validating the presence of executable ebtables
I0129 16:42:46.478208    4510 checks.go:372] validating the presence of executable ethtool
I0129 16:42:46.478215    4510 checks.go:372] validating the presence of executable socat
I0129 16:42:46.478259    4510 checks.go:372] validating the presence of executable tc
I0129 16:42:46.478267    4510 checks.go:372] validating the presence of executable touch
I0129 16:42:46.478319    4510 checks.go:520] running all checks
I0129 16:42:46.488925    4510 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0129 16:42:46.489007    4510 checks.go:618] validating kubelet version
I0129 16:42:46.549359    4510 checks.go:132] validating if the "kubelet" service is enabled and active
I0129 16:42:46.561319    4510 checks.go:205] validating availability of port 10250
I0129 16:42:46.561517    4510 checks.go:205] validating availability of port 2379
I0129 16:42:46.561711    4510 checks.go:205] validating availability of port 2380
I0129 16:42:46.561891    4510 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0129 16:42:46.562145    4510 checks.go:838] using image pull policy: IfNotPresent
I0129 16:42:46.581090    4510 checks.go:847] image exists: k8s.gcr.io/kube-apiserver:v1.22.4
I0129 16:42:46.599228    4510 checks.go:847] image exists: k8s.gcr.io/kube-controller-manager:v1.22.4
I0129 16:42:46.616527    4510 checks.go:847] image exists: k8s.gcr.io/kube-scheduler:v1.22.4
I0129 16:42:46.632748    4510 checks.go:847] image exists: k8s.gcr.io/kube-proxy:v1.22.4
I0129 16:42:46.648435    4510 checks.go:847] image exists: k8s.gcr.io/pause:3.5
I0129 16:42:46.665430    4510 checks.go:847] image exists: k8s.gcr.io/etcd:3.5.0-0
I0129 16:42:46.682597    4510 checks.go:855] pulling: k8s.gcr.io/coredns:v1.8.4
[preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.8.4: output: time="2022-01-29T16:42:49-04:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = reading manifest v1.8.4 in k8s.gcr.io/coredns: manifest unknown: Failed to fetch \"v1.8.4\" from request \"/v2/coredns/manifests/v1.8.4\"."
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/lib64/go/1.17/src/runtime/proc.go:255
runtime.goexit
        /usr/lib64/go/1.17/src/runtime/asm_amd64.s:1581

It looks like coredns does not exist, so how can I list the images from k8s.gcr.io?
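One way to investigate this: as far as I can tell, starting with Kubernetes v1.21 the CoreDNS image is published under an extra path segment in the registry, so the flat reference `k8s.gcr.io/coredns:v1.8.4` in the error above no longer resolves, while `k8s.gcr.io/coredns/coredns:v1.8.4` does. A sketch for verifying this (the kubeadm and skopeo commands are left as comments since they require those tools and network access):

```shell
# List the images kubeadm expects for a given release, and the tags
# actually published for CoreDNS (commands commented out; they need
# kubeadm / skopeo installed and registry access):
#
#   kubeadm config images list --kubernetes-version v1.22.4
#   skopeo list-tags docker://k8s.gcr.io/coredns/coredns
#
# Since Kubernetes v1.21 the CoreDNS image lives under coredns/coredns,
# which is likely why the flat path in the error above fails:
COREDNS_IMAGE="k8s.gcr.io/coredns/coredns:v1.8.4"   # note the repeated segment
echo "$COREDNS_IMAGE"
```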

Comments

  • First, reset kubeadm, then create kubeadm-config.yaml, and then run the init command.
    sudo kubeadm reset
    nano kubeadm-config.yaml
    in kubeadm-config.yaml

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.21.1
    controlPlaneEndpoint: "k8scp:6443"
    networking:
      podSubnet: 192.168.0.0/16
    

    kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out

  • Hello, my issue is that the image coredns does not exist in the repository k8s.gcr.io.

  • Are you using docker or cri-o?

  • @alihasanahmedk said:

    Are you using docker or cri-o?

    cri-o

  • @devdorrejo Sorry man, I haven't used cri-o for this course.

  • Hi @devdorrejo,

    On a local VM I would ensure that my guest OS firewalls are disabled, and that the hypervisor is allowing all inbound traffic to my VM instances from all sources, all protocols, to all ports.

    For cri-o installation keep in mind that in step 5.(b).iv the variable is supposed to match your guest OS Ubuntu version.

    It also seems that you have deviated from the recommended installation, by installing Kubernetes v1.22.4. The recommended version to initialize the cluster is v1.21.1, while in an exercise in Chapter 4 you may find the cluster upgrade steps, from v1.21.1 to v1.22.1.

    Regards,
    -Chris
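Regarding the OS variable in step 5.(b).iv: a minimal sketch, assuming the repository's xUbuntu_<version> naming scheme, that derives the value from the guest's /etc/os-release instead of hard-coding it:

```shell
# Derive the CRI-O repository OS variable from the running guest.
# Assumes the devel:kubic:libcontainers naming scheme (xUbuntu_<ver>);
# adjust if your guest is not Ubuntu.
. /etc/os-release                        # provides VERSION_ID, e.g. 18.04
export OS="xUbuntu_${VERSION_ID:-unknown}"
echo "$OS"                               # e.g. xUbuntu_18.04
```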

  • devdorrejo (edited February 2022):

    @chrispokorni said:
    Hi @devdorrejo,

    On a local VM I would ensure that my guest OS firewalls are disabled, and that the hypervisor is allowing all inbound traffic to my VM instances from all sources, all protocols, to all ports.

    For cri-o installation keep in mind that in step 5.(b).iv the variable is supposed to match your guest OS Ubuntu version.

    It also seems that you have deviated from the recommended installation, by installing Kubernetes v1.22.4. The recommended version to initialize the cluster is v1.21.1, while in an exercise in Chapter 4 you may find the cluster upgrade steps, from v1.21.1 to v1.22.1.

    Regards,
    -Chris

    Thanks for the answer; I made the changes and have now progressed a little.

    But now I have the next issue:
    kubelet.service: https://pastebin.com/eRQXe0pn
    kubeadm-init.out: https://pastebin.com/ZZv6ekTZ

    It can't find the node itself.

    My steps:

    swapoff -av
    sed -e '/^[^#]/ s/\(^.*swap.*$\)/#\ \1/' -i /etc/fstab
    
    wget -c https://training.linuxfoundation.org/cm/LFS258/LFS258_V2021-09-20_SOLUTIONS.tar.xz --user=xxxxxx --password=xxxxxx -O - | tar -xJv
    
    modprobe br_netfilter && modprobe overlay
    
    cat >/etc/sysctl.d/99-kubernetes-cri.conf <<EOF
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sysctl --system
    
    export OS=xUbuntu_18.04
    export VER=1.21
    
    echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/ /" | tee -a /etc/apt/sources.list.d/cri-0.list && curl -L http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/Release.key | apt-key add -
    
    echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | tee -a /etc/apt/sources.list.d/libcontainers.list && curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
    
    apt update && apt install -y cri-o cri-o-runc
    
    systemctl daemon-reload && systemctl enable --now crio
    
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
    
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt update
    
    apt install -y kubeadm=1.21.1-00 kubelet=1.21.1-00 kubectl=1.21.1-00
    
    apt-mark hold kubelet kubeadm kubectl
    
    systemctl enable --now kubelet
    
    wget https://docs.projectcalico.org/manifests/calico.yaml
    
    cp /etc/hosts /etc/hosts.old
    
    cat >/etc/hosts <<EOF
    192.168.122.20 k8scp
    127.0.0.1 localhost
    EOF
    
    find /home -name kubeadm-crio.yaml -exec cp {} . \;
    
    sed -i 's/1.20.0/1.21.1/' kubeadm-crio.yaml
    
    kubeadm -v=5 init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out
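As a side note on the first step in the list above: the sed one-liner comments out any uncommented /etc/fstab line that mentions swap, so swap stays off after a reboot. A tiny self-contained demonstration on hypothetical sample lines (the device names are made up):

```shell
# Demonstrate the fstab edit on sample lines: only the uncommented
# line containing "swap" gets a leading "# "; the root entry is
# left untouched.
printf '%s\n' \
  '/dev/vda1 / ext4 defaults 0 1' \
  '/dev/vda2 none swap sw 0 0' \
  | sed -e '/^[^#]/ s/\(^.*swap.*$\)/#\ \1/'
```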
    
  • Hi @devdorrejo,

    There are many "connection refused" messages indicating that critical ports are still blocked. When provisioning your VMs please ensure that the hypervisor firewall rule allows traffic from all sources, to all ports, all protocols. Disable guest OS firewalls.
    In addition, assign VM IP addresses from a subnet that does not overlap the default Calico pod network 192.168.0.0/16 (or modify calico.yaml and kubeadm-crio.yaml to use a different pod subnet).

    Regards,
    -Chris

  • devdorrejo (edited February 2022):

    @chrispokorni said:
    Hi @devdorrejo,

    There are many "connection refused" messages indicating that critical ports are still blocked. When provisioning your VMs please ensure that the hypervisor firewall rule allows traffic from all sources, to all ports, all protocols. Disable guest OS firewalls.
    In addition, assign VM IP addresses from a subnet that does not overlap the default Calico pod network 192.168.0.0/16 (or modify calico.yaml and kubeadm-crio.yaml to use a different pod subnet).

    Regards,
    -Chris

    Hi Chris,

    The refused connection is to the VM itself; the machine is the one with IP 192.168.122.10, which is different from Calico's 192.168.0.0/16.

    I opened port 6443 on the system itself.

    This table was created by following the steps of the labs.

    VM iptables -L

    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination         
    KUBE-FIREWALL  all  --  anywhere             anywhere            
    
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination         
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination         
    KUBE-FIREWALL  all  --  anywhere             anywhere            
    
    Chain KUBE-FIREWALL (2 references)
    target     prot opt source               destination         
    DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
    DROP       all  -- !localhost/8          localhost/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
    
    Chain KUBE-KUBELET-CANARY (0 references)
    target     prot opt source               destination         
    
    

    Host (virt-manager) iptables:

    ACCEPT     tcp  --  anywhere             192.168.122.20       tcp dpt:sun-sr-https 6443
    ACCEPT     tcp  --  anywhere             192.168.122.20       tcp dpt:10250
    ACCEPT     tcp  --  anywhere             192.168.122.20       tcp dpt:http-alt 8080
    
  • chrispokorni (edited February 2022):

    Hi @devdorrejo,

    For the IP address overlap I would encourage you to explore resources that may clarify the network size notation associated with the Calico network plugin, to help you to avoid such overlaps when working with local Kubernetes deployments.

    It seems to me that the hypervisor allows TCP traffic to a small number of ports. In doing so, traffic of different protocols (such as UDP) to other ports that are required by Kubernetes, Calico, coreDNS, and other plugins/addons will not be allowed, hence impacting the required functionality for container orchestration.

    EDIT: The "Overview" section of Lab Exercise 3.1 outlines the networking requirements set at the cloud VPC level or the local hypervisor for Kubernetes Node VMs, such as:

    ... allows all traffic to all ports...

    Regards,
    -Chris
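To make the network-size notation concrete: a /16 spans 192.168.0.0 through 192.168.255.255, so it contains the entire 192.168.122.0/24 range the VMs here are using. A small POSIX-shell check using the addresses from this thread (the bit arithmetic is just a hand-rolled subnet test):

```shell
# Does the VM address 192.168.122.10 fall inside Calico's default
# pod network 192.168.0.0/16? Compare the two addresses under the
# /16 netmask.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

prefix=16
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))

if [ $(( $(ip_to_int 192.168.122.10) & mask )) -eq \
     $(( $(ip_to_int 192.168.0.0) & mask )) ]; then
  result=overlap
else
  result="no overlap"
fi
echo "$result"    # the VM subnet does collide with the pod network
```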
