
kubeadm upgrade apply - stuck

I am upgrading Kubernetes with kubeadm from v1.20.1 to v1.20.2, and the upgrade is stuck. Console output:
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS

Meanwhile, another PuTTY session shows this Pod status:

NAME                                       READY   STATUS             RESTARTS   AGE
calico-kube-controllers-744cfdf676-fgtgr   0/1     CrashLoopBackOff   28         141m
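
For reference, a minimal way to dig into the crashing Pod (the Pod name comes from the listing above; --previous shows the logs from the prior restart):

kubectl -n kube-system describe pod calico-kube-controllers-744cfdf676-fgtgr
kubectl -n kube-system logs calico-kube-controllers-744cfdf676-fgtgr --previous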

Any ideas?


Comments

  • Hello,

Our lab environment is not yet using v1.20.1, nor upgrading to v1.20.2. What happens when you follow the lab as written?

    Are there any errors or indications in the output of kubectl -n kube-system logs calico-kube-controllers-?

    Regards,

  • Thanks, Serewicz, for the response. I can see the error below in the log:

    2021-01-16 21:27:03.488 [ERROR][1] client.go 261: Error getting cluster information config ClusterInformation="default" error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host
    2021-01-16 21:27:03.488 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host
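
    The address 10.96.0.1 is the ClusterIP of the default kubernetes Service, so this error means the Pod cannot reach the API server over the Service network. A minimal sketch of checks, assuming the default labels from the stock manifests:

    # Confirm the Service IP the Pod is failing to reach
    kubectl get svc kubernetes
    # kube-proxy programs the iptables/IPVS rules for Service IPs; make sure it runs on every node
    kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
    # Make sure the calico-node DaemonSet Pods are healthy
    kubectl -n kube-system get pods -l k8s-app=calico-node -o wide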

  • Hi @hello2sharad,

    Did you try to delete the calico-kube-controller-... Pod to allow the Deployment controller to re-deploy it?
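
    For example, a sketch (the label selector is an assumption based on the stock Calico manifest; the Deployment controller then re-creates the Pod automatically):

    kubectl -n kube-system delete pod -l k8s-app=calico-kube-controllers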

    Regards,
    -Chris

  • Hi,

    I bumped into the same issue, and it seems to be a bug, reported in kubeadm issue #2035 as well.

    Just to highlight, I am upgrading from v1.18.8 to v1.19.0. I also reconciled with https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/, but the error persists.

    baqai@ckan01:~/k8ssetup$ sudo kubeadm upgrade apply v1.19.0
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade/version] You have chosen to change the cluster version to "v1.19.0"
    [upgrade/versions] Cluster version: v1.19.7
    [upgrade/versions] kubeadm version: v1.19.7
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
    [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
    [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
    Static pod: kube-apiserver-ckan01 hash: a9e702f54ca7ad08e4f0aad39aedabc7
    Static pod: kube-controller-manager-ckan01 hash: c5e60c4d599740592efddb382b4c673d
    Static pod: kube-scheduler-ckan01 hash: 3167cdaddf6f15b096f2eb409185adc5
    [upgrade/etcd] Upgrading to TLS for etcd
    [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version "3.4.13-0" is not newer than the currently installed "3.4.13-0". Skipping etcd upgrade
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests053835124"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Renewing apiserver certificate
    [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
    [upgrade/staticpods] Renewing front-proxy-client certificate
    [upgrade/staticpods] Renewing apiserver-etcd-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-ckan01 hash: a9e702f54ca7ad08e4f0aad39aedabc7
    Static pod: kube-apiserver-ckan01 hash: 98610ddf262474b522fe96193a20c293
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Renewing controller-manager.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-ckan01 hash: c5e60c4d599740592efddb382b4c673d
    Static pod: kube-controller-manager-ckan01 hash: ead3b8933eb874ce423dbc0be136df58
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Renewing scheduler.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-ckan01 hash: 3167cdaddf6f15b096f2eb409185adc5
    Static pod: kube-scheduler-ckan01 hash: 23d2ea3ba1efa3e09e8932161a572387
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS

    Also performed the following steps:

    • Deleted the calico-kube-controllers Deployment
    • Deleted CoreDNS and re-enabled the add-on by issuing the following command (see the recovery sketch after this output):

    baqai@ckan01:~/k8ssetup$ sudo kubeadm init phase addon coredns --config kubeadm-config.yaml
    W0123 10:09:38.823436 14857 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [addons] Applied essential addon: CoreDNS
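
    Since the calico-kube-controllers Deployment was deleted outright, nothing will re-create it until the Calico manifest is re-applied. A minimal sketch, assuming the cluster was installed from the stock Calico manifest (the URL and version may differ from what the lab used):

    # Re-apply the Calico manifest to restore the deleted Deployment
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    # Then verify the add-ons come back up
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers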

    Details of the Environment
    Operating System:

    Distributor ID: Ubuntu
    Description:    Ubuntu 18.04.1 LTS
    Release:        18.04
    Codename:       bionic

    kubeadm Version:

    baqai@ckan01:~$ kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:21:39Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

    kubectl version:

    baqai@ckan01:~$ kubectl version
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

    I'd appreciate some help.

  • Hello,

    I note that you have kubectl at v1.18.15 (client) against a v1.19.0 server, and kubeadm at v1.19.7. This tends to be problematic, as the upgrade process is designed to work with matching versions. We lock the version of Kubernetes in the lab, so somewhere you have diverged from the lab.

    What happens when you follow the lab and all of the software is on v1.18.1 before you upgrade to v1.19.0?
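
    A minimal sketch of bringing the tools back to matching versions on Ubuntu, assuming the apt packages the lab uses (the exact package version strings are an assumption; adjust them to the lab's):

    # Unpin the packages, install matching versions, then pin them again
    sudo apt-mark unhold kubeadm kubelet kubectl
    sudo apt-get update
    sudo apt-get install -y kubeadm=1.19.0-00 kubelet=1.19.0-00 kubectl=1.19.0-00
    sudo apt-mark hold kubeadm kubelet kubectl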

    Regards,
