
Lab 3.1 Install Kubernetes - Registry look like is down

Hello, I am trying to set up the kubeadm control plane on a VM, not on GCE or AWS.

kubeadm init --config=kubeadm-crio.yaml | tee kubeadm-crio.out

I got this error:

  kubeadm init -v=5 --config=kube/kubeadm-crio.yaml | tee kube/kubeadm-crio-init.out

  I0129 16:42:46.439739 4510 initconfiguration.go:247] loading configuration from "kube/kubeadm-crio.yaml"
  I0129 16:42:46.442130 4510 interface.go:431] Looking for default routes with IPv4 addresses
  I0129 16:42:46.442187 4510 interface.go:436] Default route transits interface "enp1s0"
  I0129 16:42:46.442297 4510 interface.go:208] Interface enp1s0 is up
  I0129 16:42:46.442450 4510 interface.go:256] Interface "enp1s0" has 2 addresses :[192.168.122.10/24 fe80::5054:ff:fe24:504/64].
  I0129 16:42:46.442569 4510 interface.go:223] Checking addr 192.168.122.10/24.
  I0129 16:42:46.442731 4510 interface.go:230] IP found 192.168.122.10
  I0129 16:42:46.442883 4510 interface.go:262] Found valid IPv4 address 192.168.122.10 for interface "enp1s0".
  I0129 16:42:46.442969 4510 interface.go:442] Found active IP 192.168.122.10
  [init] Using Kubernetes version: v1.22.4
  [preflight] Running pre-flight checks
  I0129 16:42:46.448842 4510 checks.go:577] validating Kubernetes and kubeadm version
  I0129 16:42:46.448899 4510 checks.go:170] validating if the firewall is enabled and active
  [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
  I0129 16:42:46.460936 4510 checks.go:205] validating availability of port 6443
  I0129 16:42:46.462526 4510 checks.go:205] validating availability of port 10259
  I0129 16:42:46.462705 4510 checks.go:205] validating availability of port 10257
  I0129 16:42:46.462964 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
  I0129 16:42:46.463095 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
  I0129 16:42:46.463223 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
  I0129 16:42:46.463332 4510 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
  I0129 16:42:46.463678 4510 checks.go:432] validating if the connectivity type is via proxy or direct
  I0129 16:42:46.463712 4510 checks.go:471] validating http connectivity to first IP address in the CIDR
  I0129 16:42:46.463745 4510 checks.go:471] validating http connectivity to first IP address in the CIDR
  I0129 16:42:46.463771 4510 checks.go:106] validating the container runtime
  I0129 16:42:46.478017 4510 checks.go:372] validating the presence of executable crictl
  I0129 16:42:46.478074 4510 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
  I0129 16:42:46.478115 4510 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
  I0129 16:42:46.478132 4510 checks.go:649] validating whether swap is enabled or not
  I0129 16:42:46.478150 4510 checks.go:372] validating the presence of executable conntrack
  I0129 16:42:46.478160 4510 checks.go:372] validating the presence of executable ip
  I0129 16:42:46.478166 4510 checks.go:372] validating the presence of executable iptables
  I0129 16:42:46.478174 4510 checks.go:372] validating the presence of executable mount
  I0129 16:42:46.478187 4510 checks.go:372] validating the presence of executable nsenter
  I0129 16:42:46.478201 4510 checks.go:372] validating the presence of executable ebtables
  I0129 16:42:46.478208 4510 checks.go:372] validating the presence of executable ethtool
  I0129 16:42:46.478215 4510 checks.go:372] validating the presence of executable socat
  I0129 16:42:46.478259 4510 checks.go:372] validating the presence of executable tc
  I0129 16:42:46.478267 4510 checks.go:372] validating the presence of executable touch
  I0129 16:42:46.478319 4510 checks.go:520] running all checks
  I0129 16:42:46.488925 4510 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
  I0129 16:42:46.489007 4510 checks.go:618] validating kubelet version
  I0129 16:42:46.549359 4510 checks.go:132] validating if the "kubelet" service is enabled and active
  I0129 16:42:46.561319 4510 checks.go:205] validating availability of port 10250
  I0129 16:42:46.561517 4510 checks.go:205] validating availability of port 2379
  I0129 16:42:46.561711 4510 checks.go:205] validating availability of port 2380
  I0129 16:42:46.561891 4510 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  I0129 16:42:46.562145 4510 checks.go:838] using image pull policy: IfNotPresent
  I0129 16:42:46.581090 4510 checks.go:847] image exists: k8s.gcr.io/kube-apiserver:v1.22.4
  I0129 16:42:46.599228 4510 checks.go:847] image exists: k8s.gcr.io/kube-controller-manager:v1.22.4
  I0129 16:42:46.616527 4510 checks.go:847] image exists: k8s.gcr.io/kube-scheduler:v1.22.4
  I0129 16:42:46.632748 4510 checks.go:847] image exists: k8s.gcr.io/kube-proxy:v1.22.4
  I0129 16:42:46.648435 4510 checks.go:847] image exists: k8s.gcr.io/pause:3.5
  I0129 16:42:46.665430 4510 checks.go:847] image exists: k8s.gcr.io/etcd:3.5.0-0
  I0129 16:42:46.682597 4510 checks.go:855] pulling: k8s.gcr.io/coredns:v1.8.4
  [preflight] Some fatal errors occurred:
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.8.4: output: time="2022-01-29T16:42:49-04:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = reading manifest v1.8.4 in k8s.gcr.io/coredns: manifest unknown: Failed to fetch \"v1.8.4\" from request \"/v2/coredns/manifests/v1.8.4\"."
  , error: exit status 1
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  error execution phase preflight
  k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
  k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
  k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
  k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
  k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
  k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
  k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
  k8s.io/kubernetes/cmd/kubeadm/app.Run
  /home/abuild/rpmbuild/BUILD/kubernetes-1.22.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
  main.main
  _output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
  runtime.main
  /usr/lib64/go/1.17/src/runtime/proc.go:255
  runtime.goexit
  /usr/lib64/go/1.17/src/runtime/asm_amd64.s:1581

It looks like the coredns image does not exist, so how can I list the images available on k8s.gcr.io?
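
A couple of ways to check what should be there (a sketch, assuming kubeadm and curl are available on the node; note that recent CoreDNS releases are published under the coredns/coredns path on k8s.gcr.io rather than plain coredns):

  # Show the images (and repository paths) kubeadm expects for this release
  kubeadm config images list --kubernetes-version v1.22.4

  # List the CoreDNS tags via the registry's Docker v2 API
  curl -sL https://k8s.gcr.io/v2/coredns/coredns/tags/list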


Comments

  • First, reset kubeadm, then create kubeadm-config.yaml, then run the init command:
    sudo kubeadm reset
    nano kubeadm-config.yaml
    In kubeadm-config.yaml:

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.21.1
    controlPlaneEndpoint: "k8scp:6443"
    networking:
      podSubnet: 192.168.0.0/16

    kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out

  • Hello, my issue is that the coredns image does not exist in the k8s.gcr.io repository.

  • Are you using docker or cri-o?

  • @alihasanahmedk said:

    Are you using docker or cri-o?

    cri-o

  • @devdorrejo sorry man I haven't used cri-o for this course.

  • Hi @devdorrejo,

    On a local VM I would ensure that my guest OS firewalls are disabled, and that the hypervisor is allowing all inbound traffic to my VM instances from all sources, all protocols, to all ports.
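
    (A minimal sketch of checking and disabling the guest OS firewall, assuming firewalld and/or ufw; the preflight warning in the log above showed firewalld active:)

    # firewalld (the kubeadm preflight warned that it is active)
    sudo systemctl disable --now firewalld

    # ufw, if that is the firewall in use on the guest
    sudo ufw status
    sudo ufw disable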

    For cri-o installation keep in mind that in step 5.(b).iv the variable is supposed to match your guest OS Ubuntu version.

    It also seems that you have deviated from the recommended installation, by installing Kubernetes v1.22.4. The recommended version to initialize the cluster is v1.21.1, while in an exercise in Chapter 4 you may find the cluster upgrade steps, from v1.21.1 to v1.22.1.

    Regards,
    -Chris

  • @chrispokorni said:
    Hi @devdorrejo,

    On a local VM I would ensure that my guest OS firewalls are disabled, and that the hypervisor is allowing all inbound traffic to my VM instances from all sources, all protocols, to all ports.

    For cri-o installation keep in mind that in step 5.(b).iv the variable is supposed to match your guest OS Ubuntu version.

    It also seems that you have deviated from the recommended installation, by installing Kubernetes v1.22.4. The recommended version to initialize the cluster is v1.21.1, while in an exercise in Chapter 4 you may find the cluster upgrade steps, from v1.21.1 to v1.22.1.

    Regards,
    -Chris

    Thanks for the answer. I made those changes and have progressed a little.

    But now I have the next issue:
    kubelet.service: https://pastebin.com/eRQXe0pn
    kubeadm-init.out: https://pastebin.com/ZZv6ekTZ

    It cannot find the node itself.

    My steps (a diagnostic sketch follows the list):

    swapoff -av
    sed -e '/^[^#]/ s/\(^.*swap.*$\)/#\ \1/' -i /etc/fstab

    wget -c https://training.linuxfoundation.org/cm/LFS258/LFS258_V2021-09-20_SOLUTIONS.tar.xz --user=xxxxxx --password=xxxxxx -O - | tar -xJv

    modprobe br_netfilter && modprobe overlay

    cat >/etc/sysctl.d/99-kubernetes-cri.conf <<EOF
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sysctl --system

    export OS=xUbuntu_18.04
    export VER=1.21

    echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/ /" | tee -a /etc/apt/sources.list.d/cri-0.list && curl -L http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VER/$OS/Release.key | apt-key add -

    echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | tee -a /etc/apt/sources.list.d/libcontainers.list && curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

    apt update && apt install -y cri-o cri-o-runc

    systemctl daemon-reload && systemctl enable --now crio

    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt update

    apt install -y kubeadm=1.21.1-00 kubelet=1.21.1-00 kubectl=1.21.1-00

    apt-mark hold kubelet kubeadm kubectl

    systemctl enable --now kubelet

    wget https://docs.projectcalico.org/manifests/calico.yaml

    cp /etc/hosts /etc/hosts.old

    cat >/etc/hosts <<EOF
    192.168.122.20 k8scp
    127.0.0.1 localhost
    EOF

    find /home -name kubeadm-crio.yaml -exec cp {} . \;

    sed -i 's/1.20.0/1.21.1/' kubeadm-crio.yaml

    kubeadm -v=5 init --config=kubeadm-crio.yaml --upload-certs | tee kubeadm-init.out
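
    (For the "node not found" symptom, a rough diagnostic sketch, assuming the node name kubeadm registers is the machine hostname and cri-o listens on its default socket:)

    # Confirm the hostname resolves to the address kubeadm is advertising
    hostname
    getent hosts "$(hostname)"

    # Watch kubelet logs while kubeadm init waits for the control plane
    journalctl -u kubelet -f

    # Confirm the cri-o socket that kubeadm/kubelet point at is responding
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock info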
  • Hi @devdorrejo,

    There are many "connection refused" messages indicating that critical ports are still blocked. When provisioning your VMs please ensure that the hypervisor firewall rule allows traffic from all sources, to all ports, all protocols. Disable guest OS firewalls.
    In addition, assign VM IP addresses from a subnet that does not overlap the default Calico pod network 192.168.0.0/16 (or modify calico.yaml and kubeadm-crio.yaml to use a different pod subnet).
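
    (A minimal sketch of that change, assuming the stock calico.yaml manifest and the course kubeadm-crio.yaml, with 10.244.0.0/16 as an example replacement range:)

    # Locate the two pod-subnet settings that must stay in sync
    grep -n 'podSubnet' kubeadm-crio.yaml
    grep -n -A1 'CALICO_IPV4POOL_CIDR' calico.yaml

    # In kubeadm-crio.yaml set, for example:  podSubnet: 10.244.0.0/16
    # In calico.yaml uncomment CALICO_IPV4POOL_CIDR and set its value to "10.244.0.0/16"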

    Regards,
    -Chris

  • @chrispokorni said:
    Hi @devdorrejo,

    There are many "connection refused" messages indicating that critical ports are still blocked. When provisioning your VMs please ensure that the hypervisor firewall rule allows traffic from all sources, to all ports, all protocols. Disable guest OS firewalls.
    In addition, assign VM IP addresses from a subnet that does not overlap the default Calico pod network 192.168.0.0/16 (or modify calico.yaml and kubeadm-crio.yaml to use a different pod subnet).

    Regards,
    -Chris

    Hi Chris,

    The refused connection is with the VM itself; the machine is the one with IP 192.168.122.10, which is different from Calico's 192.168.0.0/16.

    I opened port 6443, which is on the system itself.

    This table was created by following the steps of the labs (see the reachability check after the tables).

    VM iptables -L

    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    KUBE-FIREWALL all -- anywhere anywhere

    Chain FORWARD (policy ACCEPT)
    target prot opt source destination

    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination
    KUBE-FIREWALL all -- anywhere anywhere

    Chain KUBE-FIREWALL (2 references)
    target prot opt source destination
    DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
    DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

    Chain KUBE-KUBELET-CANARY (0 references)
    target prot opt source destination

    Host (virt-manager) iptables:

    ACCEPT tcp -- anywhere 192.168.122.20 tcp dpt:sun-sr-https 6443
    ACCEPT tcp -- anywhere 192.168.122.20 tcp dpt:10250
    ACCEPT tcp -- anywhere 192.168.122.20 tcp dpt:http-alt 8080
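
    (A quick reachability check from inside the VM, assuming ss and netcat are installed and k8scp is the control-plane name from /etc/hosts:)

    # Is the API server actually listening on 6443?
    sudo ss -tlnp | grep 6443

    # Does the control-plane endpoint answer on that port?
    nc -vz k8scp 6443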
  • Hi @devdorrejo,

    For the IP address overlap I would encourage you to explore resources that may clarify the network size notation associated with the Calico network plugin, to help you to avoid such overlaps when working with local Kubernetes deployments.
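
    (One way to see the overlap in question, using Python's standard ipaddress module from the shell: 192.168.0.0/16 spans 192.168.0.0 through 192.168.255.255, so it contains the libvirt default 192.168.122.0/24 where the VMs live.)

    python3 -c 'import ipaddress as i; print(i.ip_network("192.168.122.0/24").subnet_of(i.ip_network("192.168.0.0/16")))'
    # prints: True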

    It seems to me that the hypervisor only allows TCP traffic to a small number of ports. As a result, traffic using other protocols (such as UDP) and other ports required by Kubernetes, Calico, CoreDNS, and other plugins/addons is not allowed, which impacts the functionality required for container orchestration.

    EDIT: The "Overview" section of Lab Exercise 3.1 outlines the networking requirements set at the cloud VPC level or the local hypervisor for Kubernetes Node VMs, such as:

    ... allows all traffic to all ports...

    Regards,
    -Chris

