Lab 4.1 Kubeadm frozen upgrade

After 40 minutes the process was still stuck at this point; here is the log:

  student@master:~$ sudo kubeadm upgrade apply v1.19.0
  [upgrade/config] Making sure the configuration is correct:
  [upgrade/config] Reading configuration from the cluster...
  [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  [preflight] Running pre-flight checks.
  [upgrade] Running cluster health checks
  [upgrade/version] You have chosen to change the cluster version to "v1.19.0"
  [upgrade/versions] Cluster version: v1.18.1
  [upgrade/versions] kubeadm version: v1.19.0
  [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
  [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
  [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
  [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
  [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
  Static pod: kube-apiserver-master hash: 2ba71dfff518c3f3b54a5b4c091ef562
  Static pod: kube-controller-manager-master hash: a2e7dbae641996802ce46175f4f5c5dc
  Static pod: kube-scheduler-master hash: 363a5bee1d59c51a98e345162db75755
  [upgrade/etcd] Upgrading to TLS for etcd
  Static pod: etcd-master hash: b7c02e796a9da1140040a08e1817d263
  [upgrade/staticpods] Preparing for "etcd" upgrade
  [upgrade/staticpods] Renewing etcd-server certificate
  [upgrade/staticpods] Renewing etcd-peer certificate
  [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-09-46-04/etcd.yaml"
  [upgrade/staticpods] Waiting for the kubelet to restart the component
  [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  Static pod: etcd-master hash: b7c02e796a9da1140040a08e1817d263
  Static pod: etcd-master hash: 3c881c8c0b94104fff8a19ab6d6dfce3
  [apiclient] Found 1 Pods for label selector component=etcd
  [upgrade/staticpods] Component "etcd" upgraded successfully!
  [upgrade/etcd] Waiting for etcd to become available
  [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests947290782"
  [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
  [upgrade/staticpods] Renewing apiserver certificate
  [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
  [upgrade/staticpods] Renewing front-proxy-client certificate
  [upgrade/staticpods] Renewing apiserver-etcd-client certificate
  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-09-46-04/kube-apiserver.yaml"
  [upgrade/staticpods] Waiting for the kubelet to restart the component
  [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  Static pod: kube-apiserver-master hash: 2ba71dfff518c3f3b54a5b4c091ef562
  Static pod: kube-apiserver-master hash: 2ba71dfff518c3f3b54a5b4c091ef562
  Static pod: kube-apiserver-master hash: 33b81d16e67eaec4d6672d3c096e4260
  [apiclient] Found 1 Pods for label selector component=kube-apiserver
  [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
  [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
  [upgrade/staticpods] Renewing controller-manager.conf certificate
  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-09-46-04/kube-controller-manager.yaml"
  [upgrade/staticpods] Waiting for the kubelet to restart the component
  [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  Static pod: kube-controller-manager-master hash: a2e7dbae641996802ce46175f4f5c5dc
  Static pod: kube-controller-manager-master hash: ead3b8933eb874ce423dbc0be136df58
  [apiclient] Found 1 Pods for label selector component=kube-controller-manager
  [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
  [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
  [upgrade/staticpods] Renewing scheduler.conf certificate
  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-09-46-04/kube-scheduler.yaml"
  [upgrade/staticpods] Waiting for the kubelet to restart the component
  [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  Static pod: kube-scheduler-master hash: 363a5bee1d59c51a98e345162db75755
  Static pod: kube-scheduler-master hash: 23d2ea3ba1efa3e09e8932161a572387
  [apiclient] Found 1 Pods for label selector component=kube-scheduler
  [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [addons] Applied essential addon: CoreDNS
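
For anyone hitting the same hang: the output above stops right after the CoreDNS addon is applied, where a normal run would continue with the kube-proxy addon and then print the upgrade success message. While it sits there, a few read-only checks from the control plane node can help narrow things down (illustrative commands only):

  kubectl -n kube-system get pods -o wide
  kubectl -n kube-system rollout status deployment coredns
  sudo journalctl -u kubelet --since "1 hour ago" | tail -n 50

The first two show whether the CoreDNS and kube-proxy pods are healthy; the kubelet log often shows why a static pod or addon rollout is stuck.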


Comments

  • Hi,

    I bumped into the same issue; it appears to be a known bug, reported as kubeadm issue #2035.

    Just to highlight, I am upgrading from v1.18.8 to v1.19.0. I also cross-checked with https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/, but the error persists.

    baqai@ckan01:~/k8ssetup$ sudo kubeadm upgrade apply v1.19.0
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade/version] You have chosen to change the cluster version to "v1.19.0"
    [upgrade/versions] Cluster version: v1.19.7
    [upgrade/versions] kubeadm version: v1.19.7
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
    [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
    [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
    Static pod: kube-apiserver-ckan01 hash: a9e702f54ca7ad08e4f0aad39aedabc7
    Static pod: kube-controller-manager-ckan01 hash: c5e60c4d599740592efddb382b4c673d
    Static pod: kube-scheduler-ckan01 hash: 3167cdaddf6f15b096f2eb409185adc5
    [upgrade/etcd] Upgrading to TLS for etcd
    [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version "3.4.13-0" is not newer than the currently installed "3.4.13-0". Skipping etcd upgrade
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests053835124"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Renewing apiserver certificate
    [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
    [upgrade/staticpods] Renewing front-proxy-client certificate
    [upgrade/staticpods] Renewing apiserver-etcd-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-ckan01 hash: a9e702f54ca7ad08e4f0aad39aedabc7
    Static pod: kube-apiserver-ckan01 hash: 98610ddf262474b522fe96193a20c293
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Renewing controller-manager.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-ckan01 hash: c5e60c4d599740592efddb382b4c673d
    Static pod: kube-controller-manager-ckan01 hash: ead3b8933eb874ce423dbc0be136df58
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Renewing scheduler.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-23-10-19-02/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-ckan01 hash: 3167cdaddf6f15b096f2eb409185adc5
    Static pod: kube-scheduler-ckan01 hash: 23d2ea3ba1efa3e09e8932161a572387
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS

    I also performed the following steps:

    • Deleted the calico-kube-controllers deployment
    • Deleted CoreDNS and re-enabled the add-on by issuing the following command (the full sequence is sketched below):
    aqai@ckan01:~/k8ssetup$ sudo kubeadm init phase addon coredns --config kubeadm-config.yaml
    W0123 10:09:38.823436 14857 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [addons] Applied essential addon: CoreDNS
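
    For completeness, the delete-and-recreate steps described above would look roughly like this (a sketch; the deployment names are the kubeadm and Calico defaults seen elsewhere in this thread):

    kubectl -n kube-system delete deployment calico-kube-controllers
    kubectl -n kube-system delete deployment coredns
    sudo kubeadm init phase addon coredns --config kubeadm-config.yaml

    Deleting the Deployments removes their pods; the init phase command then re-creates the CoreDNS Deployment from the kubeadm configuration.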

    Details of the Environment
    Operating System:

    Distributor ID: Ubuntu
    Description: Ubuntu 18.04.1 LTS
    Release: 18.04
    Codename: bionic

    kubeadm Version:

    baqai@ckan01:~$ kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:21:39Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

    kubectl version:

    baqai@ckan01:~$ kubectl version
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

    I'd appreciate some help.

  • Hello,
    I think this post may be in more than one place. You are using 1.19.7, 1.18.15, and 1.19.0 all at the same time. This would not be the case had you followed the lab. What happens when you follow the lab as written?
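
    As a quick check before retrying, the following shows which versions are actually in play on the control plane node (illustrative commands; the package query assumes a Debian/Ubuntu setup):

    kubeadm version -o short
    kubectl version --short
    kubectl get nodes
    apt-cache policy kubeadm kubelet kubectl

    The kubeadm binary, the kubelet on each node, and the server version reported by kubectl should line up with what the current lab step expects before running kubeadm upgrade apply.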

  • Hello,
    Please help with Lab 4.1 task 5 (tasks 3 and 4 are OK).
    kubectl -n kube-system exec -it etcd-master -- sh -c "ETCDCTL_API=3 \
    ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt \
    ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key etcdctl --endpoints=https://127.0.0.1:2379
    Nothing happens; I cannot see anything in the table format.

    I can save the etcd snapshot ("Snapshot saved at /var/lib/etcd/snapshot.db"), so that step is OK.

    Thank you

  • I've got an issue upgrading my cluster -> task 7.

    alexey@master:~$ sudo kubeadm upgrade plan
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [upgrade/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version >= 1.19.0. Current version: v1.18.0
    To see the stack trace of this error execute with --v=5 or higher

    Thank you

  • Hello,

    If some earlier commands are not working, you may have other issues with your configuration, which could prevent the upgrade from running as expected.

    It looks like you are missing the actual subcommand when you ran etcdctl; I don't see -w table endpoint status --cluster at the end.
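
    Putting the pieces together, the full command should look roughly like this (reconstructed from the snippet above plus the missing subcommand; check the lab guide for the exact text):

    kubectl -n kube-system exec -it etcd-master -- sh -c \
      "ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
      ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt \
      ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key \
      etcdctl --endpoints=https://127.0.0.1:2379 -w table endpoint status --cluster"

    Without the trailing endpoint status --cluster (and a closing quote), there is no query for etcdctl to run, so no table appears.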

    Let's start with some basics: what are you using for your exercise environment? Which OS version are you running, and how many CPUs and how much memory do your nodes have?

    What version did you install to begin with? Was it 1.18.0 for all systems when you used kubeadm init?

    If you run kubectl get pod --all-namespaces do all the pods show as having all containers running properly?
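
    One more data point worth collecting for the kubeadm upgrade plan FATAL above: that message generally appears when the installed kubeadm package has moved more than one minor version ahead of the running control plane, and kubeadm only supports upgrading from the immediately preceding minor release. To see what the package manager actually installed (illustrative, Debian/Ubuntu):

    dpkg -l kubeadm kubelet kubectl | grep '^ii'
    apt-cache madison kubeadm | head

    If kubeadm is already at 1.20.x while the nodes report v1.18.x, step through a 1.19.x release first and then move on to 1.20.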

    Regards,

  • alexey@master:~$ cat /etc/os-release
    NAME="Ubuntu"
    VERSION="18.04.5 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.5 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic

    UBUNTU_CODENAME=bionic

    alexey@master:~$ free -h
                  total        used        free      shared  buff/cache   available
    Mem:           3.9G        1.0G        397M        1.4M        2.5G        2.9G
    Swap:            0B          0B          0B

    alexey@master:~$ cat /proc/cpuinfo | grep processor
    processor : 0
    processor : 1

    All nodes have the same OS (Ubuntu 18.04), CPUs (2), and RAM (4 GB).

    ===========
    alexey@master:~$ kubectl get pod --all-namespaces
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    default       nginx-6d48c9bcb8-ml9mw                     1/1     Running   0          6h34m
    kube-system   calico-kube-controllers-7dbc97f587-rkqf4   1/1     Running   0          93m
    kube-system   calico-node-cxsqq                          1/1     Running   0          6d22h
    kube-system   calico-node-gnp2l                          1/1     Running   0          6d22h
    kube-system   calico-node-nqqfs                          1/1     Running   0          6d22h
    kube-system   coredns-66bff467f8-fphbs                   1/1     Running   0          93m
    kube-system   coredns-66bff467f8-vxgvh                   1/1     Running   0          93m
    kube-system   etcd-master                                1/1     Running   0          6d22h
    kube-system   kube-apiserver-master                      1/1     Running   0          6d22h
    kube-system   kube-controller-manager-master             1/1     Running   0          6d22h
    kube-system   kube-proxy-4fddg                           1/1     Running   0          6d22h
    kube-system   kube-proxy-m55vm                           1/1     Running   0          6d22h
    kube-system   kube-proxy-n9dxr                           1/1     Running   0          6d22h
    kube-system   kube-scheduler-master                      1/1     Running   0          6d22h

    alexey@master:~$ kubectl get node
    NAME      STATUS                     ROLES    AGE     VERSION
    master    Ready,SchedulingDisabled   master   6d22h   v1.18.1
    worker1   Ready                               6d22h   v1.18.1
    worker2   Ready                               6d22h   v1.1

    alexey@master:~/backup$ cat kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.18.0
    controlPlaneEndpoint: "k8smaster:6443"
    networking:
      podSubnet: 192.168.0.0/16

    ====

    The labs were performed step by step, and everything was OK for chapters 3 and 4. I got the issue when upgrading my cluster -> task 7.

    Yes, I installed 1.18.0.

  • Hello,
    It seems the lab guide was changed recently. I installed 1.18 per the lab guide several weeks ago.
    I can see that version 1.19 must now be installed before upgrading to 1.20.1. I re-did the lab step by step and it is OK now; the upgrade to 1.20.1 worked.

    I don't understand the idea behind holding and unholding the packages so many times, but it's OK (there is a note on this below).

    Thank you!
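
    On the hold/unhold question: apt-mark hold keeps a routine apt-get upgrade (or unattended upgrades) from pulling kubeadm, kubelet, and kubectl past the version a given lab step expects, since the control plane has to be moved one minor release at a time. The per-step pattern is roughly this (a sketch; take the exact package versions from the lab):

    sudo apt-mark unhold kubeadm kubelet kubectl
    sudo apt-get update
    sudo apt-get install -y kubeadm=1.19.0-00
    sudo kubeadm upgrade apply v1.19.0          # 'sudo kubeadm upgrade node' on workers
    sudo apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
    sudo apt-mark hold kubeadm kubelet kubectl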

  • How can I take a backup for the ETCD from the jumbo box?

  • Hello,

    I'm unsure of what you are asking. I think you are asking about restoration, but what do you mean "jumbo box"?
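
    If it is about restoring the snapshot taken earlier in the lab, the general shape with etcdctl is something like this (a sketch; the snapshot path is the one used earlier in this thread, and the data directory is only an example):

    ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd/snapshot.db \
      --data-dir /var/lib/etcd-from-backup

    After the restore, the etcd static pod manifest has to be pointed at the new data directory.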

    Regards,
