kubeadm upgrade apply - stuck

I am upgrading Kubernetes using kubeadm from v1.20.1 to v1.20.2.
Console output:
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
Another PuTTY session shows the following:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-744cfdf676-fgtgr 0/1 CrashLoopBackOff 28 141m
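For reference, these commands should surface why the Pod keeps crashing (a rough sketch; the Pod name is taken from the listing above):
kubectl -n kube-system describe pod calico-kube-controllers-744cfdf676-fgtgr
kubectl -n kube-system logs calico-kube-controllers-744cfdf676-fgtgr
kubectl -n kube-system logs calico-kube-controllers-744cfdf676-fgtgr --previous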
Any ideas?
Comments
-
Hello,
Our lab environment is not yet using v1.20.1 nor upgrading to v1.20.2. What happens when you follow the lab as written?
Are there any errors or indications in the output of kubectl -n kube-system logs calico-kube-controllers-?
Regards,
-
Thanks, Serewicz, for the response. I can see the error below in the log:
2021-01-16 21:27:03.488 [ERROR][1] client.go 261: Error getting cluster information config ClusterInformation="default" error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host
2021-01-16 21:27:03.488 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host
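Since the error points at the cluster service VIP (10.96.0.1:443), something like the following can confirm whether the node can reach the Kubernetes API service at all (a rough sketch, assuming the default service CIDR; the curl is expected to fail the same way if routing or kube-proxy is broken):
kubectl get svc kubernetes                                  # should list the 10.96.0.1 ClusterIP
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
curl -k https://10.96.0.1:443/version                       # run on the node hosting the calico Pod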
-
Hi @hello2sharad,
Did you try to delete the calico-kube-controller-... Pod to allow the Deployment controller to re-deploy it?
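For example, something along these lines should work (a rough sketch; deleting by label avoids needing the exact Pod name and assumes the standard k8s-app=calico-kube-controllers label from the Calico manifest):
kubectl -n kube-system delete pod -l k8s-app=calico-kube-controllers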
Regards,
-Chris
-
Hi,
I bumped into the same issue, and it seems to be a bug reported as well in kubeadm issue #2035.
Just to highlight, I am upgrading from v1.18.8 to v1.19.0, and I also reconciled with
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/
but the error persists.
baqai@ckan01:~/k8ssetup$ sudo kubeadm upgrade apply v1.19.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
[upgrade/versions] Cluster version: v1.19.7
[upgrade/versions] kubeadm version: v1.19.7
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
Static pod: kube-apiserver-ckan01 hash: a9e702f54ca7ad08e4f0aad39aedabc7
Static pod: kube-controller-manager-ckan01 hash: c5e60c4d599740592efddb382b4c673d
Static pod: kube-scheduler-ckan01 hash: 3167cdaddf6f15b096f2eb409185adc5
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version "3.4.13-0" is not newer than the currently installed "3.4.13-0". Skipping etcd upgrade
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests053835124"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ckan01 hash: a9e702f54ca7ad08e4f0aad39aedabc7
Static pod: kube-apiserver-ckan01 hash: 98610ddf262474b522fe96193a20c293
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ckan01 hash: c5e60c4d599740592efddb382b4c673d
Static pod: kube-controller-manager-ckan01 hash: ead3b8933eb874ce423dbc0be136df58
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ckan01 hash: 3167cdaddf6f15b096f2eb409185adc5
Static pod: kube-scheduler-ckan01 hash: 23d2ea3ba1efa3e09e8932161a572387
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
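One thing that stands out in the output above is that the apply targets v1.19.0 while the cluster already reports v1.19.7; running the plan step first shows what kubeadm considers the current and available versions (a rough sketch):
sudo kubeadm upgrade plan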
I also performed the following steps:
- Deleted the calico-kube-controllers Deployment
- Deleted coredns and re-enabled the add-on by issuing the following command:
baqai@ckan01:~/k8ssetup$ sudo kubeadm init phase addon coredns --config kubeadm-config.yaml
W0123 10:09:38.823436   14857 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[addons] Applied essential addon: CoreDNS
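Since the calico-kube-controllers Deployment was deleted, it also needs to be recreated; re-applying the Calico manifest used for the original install should bring it back (a rough sketch, assuming the cluster was set up with the stock calico.yaml):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml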
Details of the Environment
Operating System:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
kubeadm version:
baqai@ckan01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:21:39Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
kubectl version:
baqai@ckan01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
I'd appreciate some help.
-
Hello,
I note that you have versions 1.18.15 and 1.19.0 for kubectl (client and server) and 1.19.7 for kubeadm. This tends to be problematic, as the upgrade is designed to use matching versions. We lock our version of Kubernetes in the lab, so somewhere you have diverged from the lab.
What happens when you follow the lab and all the software is on 1.18.1 before you update to 1.19.0?
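For reference, the general flow is to upgrade kubeadm first, apply the control-plane upgrade, then bring kubelet and kubectl to the same release, roughly like this (a sketch for a Debian/Ubuntu node using the Kubernetes apt repository, not the exact lab steps; adjust the version string as needed):
sudo apt-get update
sudo apt-get install -y kubeadm=1.19.0-00
sudo kubeadm upgrade apply v1.19.0
sudo apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
sudo systemctl restart kubelet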
Regards,