Lab 3.1 Install Kubernetes - Registry looks like it is down
Hello, I am trying to set up the kubeadm control plane on a VM, not on GCE or AWS.
kubeadm init --config=kubeadm-crio.yaml | tee kubeadm-crio.out
I got this error when I ran it with verbose logging:
kubeadm init -v=5 --config=kube/kubeadm-crio.yaml | tee kube/kubeadm-crio-init.out
I0129 16:42:46.439739 4510 initconfiguration.go:247] loading configuration from "kube/kubeadm-crio.yaml"
I0129 16:42:46.442130 4510 interface.go:431] Looking for default routes with IPv4 addresses
I0129 16:42:46.442187 4510 interface.go:436] Default route transits interface "enp1s0"
I0129 16:42:46.442297 4510 interface.go:208] Interface enp1s0 is up
I0129 16:42:46.442450 4510 interface.go:256] Interface "enp1s0" has 2 addresses :[192.168.122.10/24 fe80::5054:ff:fe24:504/64].
I0129 16:42:46.442569 4510 interface.go:223] Checking addr 192.168.122.10/24.
I0129 16:42:46.442731 4510 interface.go:230] IP found 192.168.122.10
I0129 16:42:46.442883 4510 interface.go:262] Found valid IPv4 address 192.168.122.10 for interface "enp1s0".
I0129 16:42:46.442969 4510 interface.go:442] Found active IP 192.168.122.10
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
I0129 16:42:46.448842 4510 checks.go:577] validating Kubernetes and kubeadm version
I0129 16:42:46.448899 4510 checks.go:170] validating if the firewall is enabled and active
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0129 16:42:46.460936 4510 checks.go:205] validating availability of port 6443
I0129 16:42:46.462526 4510 checks.go:205] validating availability of port 10259
I0129 16:42:46.462705 4510 checks.go:205] validating availability of port 10257
I0129 16:42:46.462964 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0129 16:42:46.463095 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0129 16:42:46.463223 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0129 16:42:46.463332 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0129 16:42:46.463678 4510 checks.go:432] validating if the connectivity type is via proxy or direct
I0129 16:42:46.463712 4510 checks.go:471] validating http connectivity to first IP address in the CIDR
I0129 16:42:46.463745 4510 checks.go:471] validating http connectivity to first IP address in the CIDR
I0129 16:42:46.463771 4510 checks.go:106] validating the container runtime
I0129 16:42:46.478017 4510 checks.go:372] validating the presence of executable crictl
I0129 16:42:46.478074 4510 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0129 16:42:46.478115 4510 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0129 16:42:46.478132 4510 checks.go:649] validating whether swap is enabled or not
I0129 16:42:46.478150 4510 checks.go:372] validating the presence of executable conntrack
I0129 16:42:46.478160 4510 checks.go:372] validating the presence of executable ip
I0129 16:42:46.478166 4510 checks.go:372] validating the presence of executable iptables
I0129 16:42:46.478174 4510 checks.go:372] validating the presence of executable mount
I0129 16:42:46.478187 4510 checks.go:372] validating the presence of executable nsenter
I0129 16:42:46.478201 4510 checks.go:372] validating the presence of executable ebtables
I0129 16:42:46.478208 4510 checks.go:372] validating the presence of executable ethtool
I0129 16:42:46.478215 4510 checks.go:372] validating the presence of executable socat
I0129 16:42:46.478259 4510 checks.go:372] validating the presence of executable tc
I0129 16:42:46.478267 4510 checks.go:372] validating the presence of executable touch
I0129 16:42:46.478319 4510 checks.go:520] running all checks
I0129 16:42:46.488925 4510 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0129 16:42:46.489007 4510 checks.go:618] validating kubelet version
I0129 16:42:46.549359 4510 checks.go:132] validating if the "kubelet" service is enabled and active
I0129 16:42:46.561319 4510 checks.go:205] validating availability of port 10250
I0129 16:42:46.561517 4510 checks.go:205] validating availability of port 2379
I0129 16:42:46.561711 4510 checks.go:205] validating availability of port 2380
I0129 16:42:46.561891 4510 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0129 16:42:46.562145 4510 checks.go:838] using image pull policy: IfNotPresent
I0129 16:42:46.581090 4510 checks.go:847] image exists: k8s.gcr.io/kube-apiserver:v1.22.4
I0129 16:42:46.599228 4510 checks.go:847] image exists: k8s.gcr.io/kube-controller-manager:v1.22.4
I0129 16:42:46.616527 4510 checks.go:847] image exists: k8s.gcr.io/kube-scheduler:v1.22.4
I0129 16:42:46.632748 4510 checks.go:847] image exists: k8s.gcr.io/kube-proxy:v1.22.4
I0129 16:42:46.648435 4510 checks.go:847] image exists: k8s.gcr.io/pause:3.5
I0129 16:42:46.665430 4510 checks.go:847] image exists: k8s.gcr.io/etcd:3.5.0-0
I0129 16:42:46.682597 4510 checks.go:855] pulling: k8s.gcr.io/coredns:v1.8.4
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.8.4: output: time="2022-01-29T16:42:49-04:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = reading manifest v1.8.4 in k8s.gcr.io/coredns: manifest unknown: Failed to fetch \"v1.8.4\" from request \"/v2/coredns/manifests/v1.8.4\"."
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/lib64/go/1.17/src/runtime/proc.go:255
runtime.goexit
	/usr/lib64/go/1.17/src/runtime/asm_amd64.s:1581
It looks like the coredns image does not exist, so how can I list the images available from k8s.gcr.io?
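A quick way to check which images kubeadm actually expects is to ask kubeadm itself; a minimal sketch (the exact tags depend on your kubeadm version and config):

# List the images kubeadm would pull for a given release
kubeadm config images list --kubernetes-version v1.22.4
# Or derive the list from the same config file passed to init
kubeadm config images list --config=kube/kubeadm-crio.yaml
# With cri-o, crictl shows what is already present locally
sudo crictl images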
Comments
-
First, reset kubeadm, then create
kubeadm-config.yaml
then run the init command.
sudo kubeadm reset
nano kubeadm-config.yaml
in kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 192.168.0.0/16
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
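As the preflight output above suggests, the images can also be pulled ahead of init against the same config, so a registry problem surfaces before kubeadm changes anything on the node (a sketch using the config file created above):

sudo kubeadm config images pull --config=kubeadm-config.yaml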
-
Hello, my issue is that the coredns image does not exist in the k8s.gcr.io repository.
-
Are you using docker or cri-o?
-
@devdorrejo sorry man I haven't used cri-o for this course.
-
Hi @devdorrejo,
On a local VM I would ensure that my guest OS firewalls are disabled, and that the hypervisor is allowing all inbound traffic to my VM instances from all sources, all protocols, to all ports.
For cri-o installation keep in mind that in step 5.(b).iv the variable is supposed to match your guest OS Ubuntu version.
It also seems that you have deviated from the recommended installation, by installing Kubernetes v1.22.4. The recommended version to initialize the cluster is v1.21.1, while in an exercise in Chapter 4 you may find the cluster upgrade steps, from v1.21.1 to v1.22.1.
Regards,
-Chris
-
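If the guest firewall is left running instead of being disabled, a rough sketch of opening the ports flagged by the preflight checks, assuming firewalld as reported in the kubeadm output above:

# Simplest for the lab: turn the guest firewall off entirely
sudo systemctl disable --now firewalld
# Alternatively, open the control plane ports checked during preflight
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --reload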
@chrispokorni said:
Hi @devdorrejo,
On a local VM I would ensure that my guest OS firewalls are disabled, and that the hypervisor is allowing all inbound traffic to my VM instances from all sources, all protocols, to all ports.
For cri-o installation keep in mind that in step 5.(b).iv the variable is supposed to match your guest OS Ubuntu version.
It also seems that you have deviated from the recommended installation, by installing Kubernetes v1.22.4. The recommended version to initialize the cluster is v1.21.1, while in an exercise in Chapter 4 you may find the cluster upgrade steps, from v1.21.1 to v1.22.1.
Regards,
-Chris

Thanks for the answer, I made the changes and have now progressed a little.
But now I have the next issue:
kubelet.service: https://pastebin.com/eRQXe0pn
kubeadm-init.out: https://pastebin.com/ZZv6ekTZ
It can't find the node itself.
my steps:
swapoff -av
sed -e '/^[^#]/ s/\(^.*swap.*$\)/#\ \1/' -i /etc/fstab
wget -c https://training.linuxfoundation.org/cm/LFS258/LFS258_V2021-09-20_SOLUTIONS.tar.xz --user=xxxxxx --password=xxxxxx -O - | tar -xJv
modprobe br_netfilter && modprobe overlay
cat >/etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
export OS=xUbuntu_18.04
export VER=1.21
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/ /" | tee -a /etc/apt/sources.list.d/cri-0.list && curl -L http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/Release.key | apt-key add -
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | tee -a /etc/apt/sources.list.d/libcontainers.list && curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
apt update && apt install -y cri-o cri-o-runc
systemctl daemon-reload && systemctl enable --now crio
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt update
apt install -y kubeadm=1.21.1-00 kubelet=1.21.1-00 kubectl=1.21.1-00
apt-mark hold kubelet kubeadm kubectl
systemctl enable --now kubelet
wget https://docs.projectcalico.org/manifests/calico.yaml
cp /etc/hosts /etc/hosts.old
cat >/etc/hosts <<EOF
192.168.122.20 k8scp
127.0.0.1 localhost
EOF
find /home -name kubeadm-crio.yaml -exec cp {} . \;
sed -i 's/1.20.0/1.21.1/' kubeadm-crio.yaml
kubeadm -v=5 init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out
-
Hi @devdorrejo,
There are many "connection refused" messages indicating that critical ports are still blocked. When provisioning your VMs please ensure that the hypervisor firewall rule allows traffic from all sources, to all ports, all protocols. Disable guest OS firewalls.
In addition, assign VM IP addresses from a subnet that does not overlap the default Calico pod network 192.168.0.0/16 (or modify calico.yaml and kubeadm-crio.yaml to use a different pod subnet).
Regards,
-Chris
-
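A sketch of the pod subnet change suggested above, assuming kubeadm-crio.yaml still carries the default 192.168.0.0/16 podSubnet (as in the kubeadm-config.yaml shown earlier) and the stock calico.yaml; 10.244.0.0/16 is just an example range that does not contain the VM network 192.168.122.0/24:

# Point the kubeadm config at a non-overlapping pod CIDR
sed -i 's|192.168.0.0/16|10.244.0.0/16|g' kubeadm-crio.yaml
# In calico.yaml, uncomment CALICO_IPV4POOL_CIDR and set it to the same range:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"

Whichever range is chosen, podSubnet and CALICO_IPV4POOL_CIDR should match.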
@chrispokorni said:
Hi @devdorrejo,
There are many "connection refused" messages indicating that critical ports are still blocked. When provisioning your VMs please ensure that the hypervisor firewall rule allows traffic from all sources, to all ports, all protocols. Disable guest OS firewalls.
In addition, assign VM IP addresses from a subnet that does not overlap the default Calico pod network 192.168.0.0/16 (or modify calico.yaml and kubeadm-crio.yaml to use a different pod subnet).
Regards,
-Chris

Hi Chris,
The refused connections are with the VM itself; the machine is the one with IP 192.168.122.10, which is different from the 192.168.0.0/16 used by Calico.
I opened port 6443 on the system itself.
This iptables table was created by following the steps of the labs.
VM iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  --  !localhost/8          localhost/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination
Host (virt-manager) iptables:
ACCEPT     tcp  --  anywhere             192.168.122.20       tcp dpt:sun-sr-https 6443
ACCEPT     tcp  --  anywhere             192.168.122.20       tcp dpt:10250
ACCEPT     tcp  --  anywhere             192.168.122.20       tcp dpt:http-alt 8080
-
Hi @devdorrejo,
For the IP address overlap I would encourage you to explore resources that may clarify the network size notation associated with the Calico network plugin, to help you to avoid such overlaps when working with local Kubernetes deployments.
It seems to me that the hypervisor allows TCP traffic to a small number of ports. In doing so, traffic of different protocols (such as UDP) to other ports that are required by Kubernetes, Calico, coreDNS, and other plugins/addons will not be allowed, hence impacting the required functionality for container orchestration.
EDIT: The "Overview" section of Lab Exercise 3.1 outlines the networking requirements set at the cloud VPC level or the local hypervisor for Kubernetes Node VMs, such as:
... allows all traffic to all ports...
Regards,
-Chris
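Since these VMs run under virt-manager/libvirt, a rough illustration of the "all traffic, all ports" requirement on the host side; chain names and rule placement differ between libvirt versions, so treat this as a sketch rather than exact commands:

# Accept everything to and from the libvirt guest network instead of a
# handful of TCP ports; inserted at the top of FORWARD so it takes effect first
sudo iptables -I FORWARD 1 -s 192.168.122.0/24 -j ACCEPT
sudo iptables -I FORWARD 1 -d 192.168.122.0/24 -j ACCEPT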