
Lab 4 Step 8 - sudo kubeadm upgrade plan

Hi,

I'm trying to follow the lab guides but got stuck with the following command:

sudo kubeadm upgrade plan

I run the labs on VMware Workstation Pro 15.5
Ubuntu 24.04 LTS
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.1", GitCommit:"6911225c3f747e1cd9d109c305436d08b668f086", GitTreeState:"clean", BuildDate:"2024-05-14T10:49:05Z", GoVersion:"go1.22.2", Compiler:"gc", Platform:"linux/amd64"}

I get the following error message:

sudo kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade/health] FATAL: [preflight] Some fatal errors occurred:
[ERROR CreateJob]: Job "upgrade-health-check-q7cx8" in the namespace "kube-system" did not complete in 15s: no condition of type Complete
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To see the stack trace of this error execute with --v=5 or higher

Output of the same command with --v=5:

sudo kubeadm upgrade plan --v=5
I0814 12:47:56.580912 9320 plan.go:102] [upgrade/plan] verifying health of cluster
I0814 12:47:56.580955 9320 plan.go:103] [upgrade/plan] retrieving configuration from cluster
I0814 12:47:56.581923 9320 common.go:94] running preflight checks
[preflight] Running pre-flight checks.
I0814 12:47:56.581953 9320 preflight.go:77] validating if there are any unsupported CoreDNS plugins in the Corefile
I0814 12:47:56.593216 9320 preflight.go:109] validating if migration can be done for the current CoreDNS release.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0814 12:47:56.600546 9320 kubeproxy.go:55] attempting to download the KubeProxyConfiguration from ConfigMap "kube-proxy"
I0814 12:47:56.603909 9320 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
[upgrade] Running cluster health checks
I0814 12:47:56.626933 9320 health.go:171] Creating a Job with the prefix "upgrade-health-check" in the namespace "kube-system"
I0814 12:47:56.637635 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:47:57.638610 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:47:58.638698 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:47:59.638441 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:00.639103 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:01.639249 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:02.639949 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:03.639091 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:04.639446 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:05.639505 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:06.638590 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:07.638890 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:08.638243 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:09.638625 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:10.638625 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
I0814 12:48:11.638099 9320 health.go:202] Job "upgrade-health-check-2blxh" in the namespace "kube-system" is not yet complete, retrying
[preflight] Some fatal errors occurred:
[ERROR CreateJob]: Job "upgrade-health-check-2blxh" in the namespace "kube-system" did not complete in 15s: no condition of type Complete
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
[upgrade/health] FATAL
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:137
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:104
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.newCmdPlan.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:78
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:271
runtime.goexit
runtime/asm_amd64.s:1695
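
Presumably the stuck Job and its pod can be inspected with something like this while the check is running (kubeadm appears to clean the Job up again afterwards, and the random suffix differs on every run):

kubectl -n kube-system get jobs,pods | grep upgrade-health-check      # is the pod stuck Pending?
kubectl -n kube-system describe pod <upgrade-health-check pod name>   # scheduling or image-pull problems
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp | tail -n 20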

Any suggestions or help is highly appreciated.

Answers

  • chrispokorni

    Hi @patrickpk8s,

    It seems you are booting the VMs with an OS release that has not been tested yet. Please use the recommended release, Ubuntu 20.04 LTS. Most learners who tried Ubuntu 24.04 LTS and encountered issues were eventually successful in booting their VMs with Ubuntu 20.04 LTS.

    Otherwise, please ensure the hypervisor allows all inbound traffic to your VMs (all protocols, from all sources, to all destination ports) and that they have the necessary resources (2 CPU cores, 8 GB RAM, 20+ GB disk, a single bridged network adapter, and IP addresses NOT from the 192.168.0.0/16 subnet). If your host is behind a proxy, that may also affect your outcome.
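
    A few quick checks on each VM can confirm most of this (standard Linux tools, so the exact output will vary with your setup):

    nproc              # should report at least 2 CPU cores
    free -h            # should report around 8 GB of RAM
    df -h /            # should show 20+ GB of disk
    ip -4 addr show    # node IPs should NOT be in 192.168.0.0/16
    swapon --show      # should print nothing; the kubelet requires swap to be off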

    Regards,
    -Chris

  • Hi @chrispokorni,

    Resources are sufficient, traffic is not blocked, and there are no proxies.

    I will go and set up a new environment with Ubuntu 20.04 LTS, even though I think it would be a better learning experience to troubleshoot this.

    Thanks,
    Patrick

  • Hello! I had the same problem on Ubuntu 24.04. The issue was that the Job was not schedulable because the worker node was not ready. You can check the Jobs with "kubectl get pods --all-namespaces" to see if their pods are Pending, and then check whether the worker node is Ready with "kubectl get nodes". After a restart, the worker node had swap activated again, and the kubelet service would not start because of this.
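
    Roughly, the fix on the worker node would look like this (the sed line assumes a standard swap entry in /etc/fstab, so check the file before editing it):

    swapon --show                               # confirm swap is active
    sudo swapoff -a                             # turn swap off now
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab  # keep it off across reboots
    sudo systemctl restart kubelet              # restart the kubelet
    kubectl get nodes                           # the node should report Ready again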
