Welcome to the Linux Foundation Forum!

podCIDR error when creating Flannel network

Posts: 33
edited November 2022 in LFS258 Class Forum

Hi,

I'm building a cluster following Kelsey Hightower's "Kubernetes the Hard Way", with some customizations: I'm running it locally on KVM, and it's not HA. I intend it to be one control-plane node and two worker nodes.

I'm using this manifest for Flannel:
https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

I've provisioned the control-plane node and one worker node. When I create the Flannel overlay network, I get an error about podCIDR and the Flannel DaemonSet does not come up.

Running: kubectl -n kube-flannel logs kube-flannel-ds-ws8lj
I get:

  Error registering network: failed to acquire lease: node "c2-worker1" pod cidr not assigned

Because it's a cluster (not standalone mode), I'm not configuring podCIDR in the kubelet on the worker node:

  --pod-cidr string
  The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.

https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

I can work around it by manually patching podCIDR onto the node:

  kubectl patch nodes c2-worker1 --patch '{"spec": {"podCIDR":"10.244.0.0/16"}}'

The Flannel overlay network then comes up as expected.

What's happening here?

Best Answer

  • Posts: 33
    Answer ✓

    I solved it.

    I needed to pass --allocate-node-cidrs=true to the kube-controller-manager.

    I'm running it as a systemd service. The start of the unit file looks like this:

    [Service]
    ExecStart=/usr/bin/kube-controller-manager \
      --cluster-cidr=10.244.0.0/16 \
      --allocate-node-cidrs=true \

    Now each node is assigned its own /24 and everything works as expected, without the manual patch.

    kubectl describe nodes | grep -i cidr

    PodCIDR:  10.244.0.0/24
    PodCIDRs: 10.244.0.0/24
    PodCIDR:  10.244.1.0/24
    PodCIDRs: 10.244.1.0/24
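The per-node /24s above come from the controller-manager carving consecutive subnets out of the cluster CIDR (the /24 size is the default node CIDR mask, tunable with --node-cidr-mask-size). A rough sketch of that carving, assuming two workers (c2-worker2 is a made-up name for the second node):

```shell
# Sketch: with --cluster-cidr=10.244.0.0/16 and the default /24 node
# mask, each new node is handed the next free /24 in sequence
# (10.244.0.0/24, 10.244.1.0/24, ...).
cluster_prefix="10.244"   # first two octets of the /16 cluster CIDR
node_index=0
for node in c2-worker1 c2-worker2; do
  echo "${node}: ${cluster_prefix}.${node_index}.0/24"
  node_index=$((node_index + 1))
done
```

This mirrors the PodCIDR values shown by kubectl describe nodes above.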

Answers

  • Posts: 1

    Can you please mention the file name to edit?

  • Posts: 33
    edited March 2023

    @nuthan That would be the systemd service file for kube-controller-manager.
    It should be somewhere under /etc/systemd/system/

    Note that kube-controller-manager runs as a static pod instead if you created your cluster with kubeadm.
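For the kubeadm case, the flags live in the static pod manifest rather than a systemd unit. A sketch of the relevant part, assuming kubeadm's default manifest path (kubeadm normally sets --allocate-node-cidrs=true itself when you pass --pod-network-cidr to kubeadm init, so you usually don't need to edit this by hand):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm default path)
# The kubelet restarts the static pod automatically when this file changes.
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
```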
