Lab 4.1 Upgrade Kubernetes
I followed the lab steps, but when I run sudo kubeadm upgrade apply v1.19.0 (I also tried v1.18.10), it gets stuck with the following output:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.10"
[upgrade/versions] Cluster version: v1.18.1
[upgrade/versions] kubeadm version: v1.18.10
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.10"...
Static pod: kube-apiserver-master01 hash: 7207afcb0e8b032b8071dc06ba96adc7
Static pod: kube-controller-manager-master01 hash: a2e7dbae641996802ce46175f4f5c5dc
Static pod: kube-scheduler-master01 hash: 363a5bee1d59c51a98e345162db75755
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.10" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests459503285"
W1028 05:57:20.429155 31411 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-10-28-05-57-16/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master01 hash: 7207afcb0e8b032b8071dc06ba96adc7
Static pod: kube-apiserver-master01 hash: 7207afcb0e8b032b8071dc06ba96adc7
Static pod: kube-apiserver-master01 hash: 7207afcb0e8b032b8071dc06ba96adc7
Static pod: kube-apiserver-master01 hash: 7207afcb0e8b032b8071dc06ba96adc7
Static pod: kube-apiserver-master01 hash: dcb7a8a98edec0956dac28ac29367aba
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-10-28-05-57-16/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master01 hash: a2e7dbae641996802ce46175f4f5c5dc
Static pod: kube-controller-manager-master01 hash: e0aab2e1ec91acfcdd545297a1188de6
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-10-28-05-57-16/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master01 hash: 363a5bee1d59c51a98e345162db75755
Static pod: kube-scheduler-master01 hash: 305afebaf30deeacadb3276276308c75
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
And it never finishes.
I also ran kubectl get pods --all-namespaces.
My Calico controller pod's log shows the following output:
2020-10-28 06:07:36.238 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"kubernetes"}
W1028 06:07:36.241071 1 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-10-28 06:07:36.242 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-10-28 06:07:46.243 [ERROR][1] client.go 261: Error getting cluster information config ClusterInformation="default" error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded
2020-10-28 06:07:46.244 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded
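In case anyone wants to check the same thing, a rough sketch of how to pull that Calico controller log (the calico-kube-controllers deployment name is the usual one for the Calico controllers, and the pod name suffix is generated, so adjust both to your cluster):
kubectl get pods --all-namespaces
# locate the calico-kube-controllers pod in kube-system, then:
kubectl -n kube-system logs calico-kube-controllers-<pod-suffix>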
I hope you can help me, thanks.
Comments
-
I found the solution. It is related to having a single control-plane node: the upgrade can't initialize the CoreDNS pods. The issue is discussed on the kubeadm GitHub repo. Here is what I did:
- First, dump the current kubeadm configuration:
kubectl -n kube-system get cm kubeadm-config -o yaml > config.yaml
- Then remove some lines from config.yaml so that only the ClusterConfiguration remains, like this:
# config.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: master01:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
- Delete the coredns deployment:
kubectl delete deployments coredns -n kube-system
- Uncordon the control-plane node:
kubectl uncordon master01
- Remove the control-plane taint:
kubectl taint nodes --all node-role.kubernetes.io/master-
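As a quick sanity check at this point (a sketch; master01 is the lab's control-plane node name):
kubectl get nodes                                  # master01 should show Ready, not SchedulingDisabled
kubectl describe node master01 | grep -i taints    # the node-role.kubernetes.io/master taint should be gone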
- Run the following command to recreate the CoreDNS addon:
sudo kubeadm init phase addon coredns --config config.yaml
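It is worth confirming that CoreDNS actually came back before re-running the upgrade; assuming the standard k8s-app=kube-dns label on the CoreDNS pods, something like this should show them Running:
kubectl -n kube-system get pods -l k8s-app=kube-dns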
- Finally, run the upgrade again:
sudo kubeadm upgrade apply v1.19.3
- Then continue with the lab, upgrading kubelet and kubectl (see the sketch below).
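A rough sketch of that last step on an Ubuntu/Debian node (package handling and exact versions depend on your lab manual, so treat this as illustrative; match the version you applied with kubeadm):
sudo apt-get update
sudo apt-get install -y --allow-change-held-packages kubelet=1.19.3-00 kubectl=1.19.3-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet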
I hope this can help someone else.
-
Hi @fjgarciamacias,
Thank you for your post.
Keep in mind, however, that the lab exercises have not been tested against Kubernetes v1.19.3. The labs use v1.19.0 and are aligned with the current CKA exam environment.
Regards,
-Chris
-
Thank you, that saved me from losing a lot of time.