
What is this error related to?


I am configuring the cp node using GCP, and on step 3.1.23, when running the following command:
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out

I get this error:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

What have I done wrong?

Best Answers

  • chrispokorni
    Answer ✓

    Hi @angel.olivares,

    Please revisit step 10 of Lab exercise 3.1 and verify the content of the /etc/sysctl.d/kubernetes.conf file matches the content of the grayed text box. Ensure there are no empty spaces preceding either of the net.... lines. If any corrections are necessary, run step 11 as well.
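
    For reference, the file from that step usually contains the settings below (this is a paraphrase from memory, so verify against the grayed text box in your copy of the lab), and step 11 simply re-applies them:

    # Expected content of /etc/sysctl.d/kubernetes.conf, each line starting in column one:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    # Step 11: re-read all sysctl configuration files.
    sudo sysctl --system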

    Regards,
    -Chris

  • angel.olivares
    Answer ✓

    Thanks Chris, super fast and effective reply. Kind regards

Answers

  • angel.olivares

    Hi, this time I'm trying the setup from a VM running in VirtualBox. The error is the same, and the solution doesn't work this time.

    When running the command with --v=5 I get:

    root@cp:~# kubeadm init --config=kubeadm-config.yaml --upload-certs --v=5 | tee kubeadm-init.out
    I0228 04:29:50.787363 2296 initconfiguration.go:255] loading configuration from "kubeadm-config.yaml"
    W0228 04:29:50.787843 2296 initconfiguration.go:306] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"ClusterConfiguration"}: strict decoding error: unknown field "KubernetesVersion"
    I0228 04:29:50.788082 2296 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
    I0228 04:29:50.790127 2296 interface.go:432] Looking for default routes with IPv4 addresses
    I0228 04:29:50.790145 2296 interface.go:437] Default route transits interface "enp0s3"
    I0228 04:29:50.790927 2296 interface.go:209] Interface enp0s3 is up
    I0228 04:29:50.790997 2296 interface.go:257] Interface "enp0s3" has 3 addresses :[192.168.1.139/24 2a0c:5a81:206:9d00:a00:27ff:fee2:a39b/64 fe80::a00:27ff:fee2:a39b/64].
    I0228 04:29:50.791028 2296 interface.go:224] Checking addr 192.168.1.139/24.
    I0228 04:29:50.791037 2296 interface.go:231] IP found 192.168.1.139
    I0228 04:29:50.791050 2296 interface.go:263] Found valid IPv4 address 192.168.1.139 for interface "enp0s3".
    I0228 04:29:50.791062 2296 interface.go:443] Found active IP 192.168.1.139
    I0228 04:29:50.791091 2296 kubelet.go:214] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
    I0228 04:29:50.795347 2296 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
    I0228 04:29:51.168290 2296 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.24
    I0228 04:29:51.168558 2296 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.24.txt
    I0228 04:29:51.509438 2296 checks.go:570] validating Kubernetes and kubeadm version
    I0228 04:29:51.509497 2296 checks.go:170] validating if the firewall is enabled and active
    [init] Using Kubernetes version: v1.24.10
    [preflight] Running pre-flight checks
    I0228 04:29:51.523120 2296 checks.go:205] validating availability of port 6443
    I0228 04:29:51.523283 2296 checks.go:205] validating availability of port 10259
    I0228 04:29:51.523336 2296 checks.go:205] validating availability of port 10257
    I0228 04:29:51.523394 2296 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
    I0228 04:29:51.523408 2296 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
    I0228 04:29:51.523421 2296 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
    I0228 04:29:51.523430 2296 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
    I0228 04:29:51.523438 2296 checks.go:432] validating if the connectivity type is via proxy or direct
    I0228 04:29:51.523457 2296 checks.go:471] validating http connectivity to first IP address in the CIDR
    I0228 04:29:51.523478 2296 checks.go:471] validating http connectivity to first IP address in the CIDR
    I0228 04:29:51.523506 2296 checks.go:106] validating the container runtime
    I0228 04:29:51.553234 2296 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
    I0228 04:29:51.553297 2296 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
    I0228 04:29:51.553346 2296 checks.go:646] validating whether swap is enabled or not
    I0228 04:29:51.553387 2296 checks.go:372] validating the presence of executable crictl
    I0228 04:29:51.553425 2296 checks.go:372] validating the presence of executable conntrack
    I0228 04:29:51.553457 2296 checks.go:372] validating the presence of executable ip
    I0228 04:29:51.553491 2296 checks.go:372] validating the presence of executable iptables
    I0228 04:29:51.553529 2296 checks.go:372] validating the presence of executable mount
    I0228 04:29:51.553553 2296 checks.go:372] validating the presence of executable nsenter
    I0228 04:29:51.553578 2296 checks.go:372] validating the presence of executable ebtables
    I0228 04:29:51.553608 2296 checks.go:372] validating the presence of executable ethtool
    I0228 04:29:51.553635 2296 checks.go:372] validating the presence of executable socat
    I0228 04:29:51.553650 2296 checks.go:372] validating the presence of executable tc
    I0228 04:29:51.553667 2296 checks.go:372] validating the presence of executable touch
    I0228 04:29:51.553689 2296 checks.go:518] running all checks
    I0228 04:29:51.578406 2296 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
    I0228 04:29:51.578596 2296 checks.go:612] validating kubelet version
    I0228 04:29:51.644772 2296 checks.go:132] validating if the "kubelet" service is enabled and active
    I0228 04:29:51.659118 2296 checks.go:205] validating availability of port 10250
    I0228 04:29:51.659388 2296 checks.go:205] validating availability of port 2379
    I0228 04:29:51.659655 2296 checks.go:205] validating availability of port 2380
    I0228 04:29:51.659961 2296 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
    [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
    error execution phase preflight
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
    k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    cmd/kubeadm/app/cmd/init.go:153
    k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:856
    k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:974
    k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:902
    k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
    main.main
    cmd/kubeadm/kubeadm.go:25
    runtime.main
    /usr/local/go/src/runtime/proc.go:250
    runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1571

  • angel.olivares

    Hi, I solved it by running steps 9 to 11 again.
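
    For anyone else hitting this on a VirtualBox VM, those steps boil down to roughly the commands below (paraphrased from memory, so check your copy of the exercise). The missing /proc/sys/net/bridge/bridge-nf-call-iptables file only appears once the br_netfilter module is loaded:

    # Load the kernel modules the container runtime and kube-proxy rely on.
    sudo modprobe overlay
    sudo modprobe br_netfilter

    # Recreate /etc/sysctl.d/kubernetes.conf with the step 10 settings
    # (each line must start in column one, with no leading spaces).
    printf '%s\n' \
      'net.bridge.bridge-nf-call-ip6tables = 1' \
      'net.bridge.bridge-nf-call-iptables = 1' \
      'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/kubernetes.conf

    # Re-apply every sysctl configuration file.
    sudo sysctl --system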

  • john.steber

    Having issues when launching mine with
    kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
    I had the same issue you did, but resolved it with the other steps.
    Now my process runs, but doesn't set up fully.
    Running in Azure.

    root@SteberK8Test:/home/john# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
    W0906 04:15:40.646616 3182 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"ClusterConfiguration"}: strict decoding error: unknown field "podSubnet"
    [init] Using Kubernetes version: v1.26.1
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8scp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local steberk8test] and IPs [10.96.0.1 xx.xx.x.xx]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost steberk8test] and IPs [xx.xx.x.xx 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost steberk8test] and IPs [xx.xx.x.xx 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher
    timed out waiting for the condition

    This error is likely caused by:

    • The kubelet is not running
    • The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

    • 'systemctl status kubelet'
    • 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    Here is one example how you may list all running Kubernetes containers by using crictl:

    • 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
      Once you have found the failing container, you can inspect its logs with:

    • 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
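
    If anyone else gets stuck at the same wait-control-plane timeout, the checks kubeadm lists above can be run in one pass, for example (socket path as used in this lab's containerd setup):

    # Is the kubelet running, and what does it complain about?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # Swap must stay off for the kubelet (one of the preflight checks).
    swapon --show

    # List the control-plane containers started by containerd, then inspect the failing one's logs.
    sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause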

  • john.steber

    Disregard, got mine working lol!
