podCIDR error when creating Flannel network

pnts · Posts: 19
edited November 10 in LFS258 Class Forum

Hi,

I'm creating a cluster using Kelsey Hightower's "Kubernetes the Hard Way", with some customization: I'm doing it locally using KVM, and it's not HA. I intend it to be one control-plane node and two worker nodes.

I'm using this manifest for Flannel:
https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

I've provisioned the control-plane node and a worker node. When I apply the Flannel manifest to create the overlay network, I get an error about podCIDR and the kube-flannel DaemonSet does not come up.

Running: kubectl -n kube-flannel logs kube-flannel-ds-ws8lj
I get:

Error registering network: failed to acquire lease: node "c2-worker1" pod cidr not assigned

Because it's a cluster (not standalone), I'm not configuring --pod-cidr in the kubelet on the worker node. From the kubelet reference:

--pod-cidr string
The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.

https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

As a workaround, I can manually add podCIDR to the node by applying a patch:
kubectl patch nodes c2-worker1 --patch '{"spec": {"podCIDR":"10.244.0.0/16"}}'
Then, Flannel overlay network comes up as expected.

What's happening here?

Answers

  • pnts

    I solved it.

    I needed to pass --allocate-node-cidrs=true to the kube-controller-manager.

    I'm running it as a systemd service. The start of the unit file looks like this:

    [Service]
    ExecStart=/usr/bin/kube-controller-manager \
      --cluster-cidr=10.244.0.0/16 \
      --allocate-node-cidrs=true \
    

    Now each node is assigned a /24, and it works as expected without the manual patch.

    kubectl describe nodes | grep -i cidr

    PodCIDR:                      10.244.0.0/24
    PodCIDRs:                     10.244.0.0/24
    PodCIDR:                      10.244.1.0/24
    PodCIDRs:                     10.244.1.0/24
    
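The /24s come from the controller-manager's node CIDR allocator: it splits --cluster-cidr into per-node subnets of size --node-cidr-mask-size, which defaults to 24 for IPv4. A quick sketch of that split (an illustration using Python's ipaddress module, not the controller's actual code):

```shell
# Illustration: split the 10.244.0.0/16 cluster CIDR into the per-node
# /24s the controller-manager hands out (24 is the IPv4 default for
# --node-cidr-mask-size).
python3 - <<'EOF'
import ipaddress
cluster = ipaddress.ip_network("10.244.0.0/16")
subnets = list(cluster.subnets(new_prefix=24))
print(subnets[0])  # 10.244.0.0/24 -> first node registered
print(subnets[1])  # 10.244.1.0/24 -> second node registered
EOF
```

This matches the PodCIDR values shown by kubectl describe nodes above: the first node to register gets 10.244.0.0/24, the second gets 10.244.1.0/24.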
