
(Linkerd lab 7.2 issue) Pods in worker node unable to access Cluster IP for kube-api in CP

Installing Linkerd fails if I have the worker node online; if I take it offline and install Linkerd, the pods deploy to the master node and all of them start.

After some testing, the problem appears to be that any pod started on the worker node cannot reach the ClusterIP of the kube-api service.

I tested this by creating a deployment with 3 replicas and then using telnet: telnetting the kube-api
ClusterIP on port 443 from a pod scheduled on the worker node fails, while the same test from a pod on the CP establishes a connection.

It's worth noting that I can telnet to other ClusterIPs from pods on the worker node.
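Roughly what I ran to reproduce it (the deployment name is a placeholder; any image that ships a telnet client, e.g. busybox, works):

ckanode1:~$ kubectl create deployment nettest --image=busybox --replicas=3 -- sleep 3600
ckanode1:~$ kubectl get pods -o wide    # check which replicas landed on the worker node
ckanode1:~$ kubectl exec -it <nettest-pod-on-worker> -- telnet 10.96.0.1 443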

Service IPs

ckanode2:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    126d
nginx        ClusterIP   10.108.223.73   <none>        443/TCP    118d
registry     ClusterIP   10.98.29.147    <none>        5000/TCP   118d

Calico Block Affinities
ckanode1:~/app2$ kubectl describe blockaffinities.crd.projectcalico.org | egrep "Name:|Cidr|Node"
Name: ckanode2-192-168-85-192-26
Cidr: 192.168.85.192/26
Node: ckanode2
Name: cp-192-168-242-64-26
Cidr: 192.168.242.64/26
Node: cp

Linkerd Pod Errors

ckanode2:~$ kubectl logs -f linkerd-identity-68f44dcc9b-pbz69 -n linkerd identity
time="2022-07-03T08:55:31Z" level=info msg="running version stable-2.11.2"
time="2022-07-03T08:56:01Z" level=fatal msg="Failed to initialize identity service: Post \"https://10.96.0.1:443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews\": dial tcp 10.96.0.1:443: i/o timeout"

ckanode2:~$ kubectl logs -f -n linkerd linkerd-proxy-injector-5998bcd56-dh6h9 -c linkerd-proxy
time="2022-07-03T08:57:39Z" level=info msg="Found pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"
time="2022-07-03T08:57:39Z" level=info msg="Found pre-existing CSR: /var/run/linkerd/identity/end-entity/csr.der"
[ 0.000486s] INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[ 0.000726s] INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[ 0.000736s] INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[ 0.000738s] INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[ 0.000741s] INFO ThreadId(01) linkerd2_proxy: Tap DISABLED
[ 0.000742s] INFO ThreadId(01) linkerd2_proxy: Local identity is linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local
[ 0.000744s] INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[ 0.000748s] INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[ 0.001833s] WARN ThreadId(01) policy:watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_app_core::control: Failed to resolve control-plane component error=no record found for name: linkerd-policy.linkerd.svc.cluster.local. type: SRV class: IN

Comments

  • chrispokorni Posts: 1,549

    Hi @blambo10,

    What type of infrastructure are you using for your cluster, and how did you provision and bootstrap the cluster? Have you noticed such behaviors in earlier lab exercises?

    Regards,
    -Chris

  • blambo10 Posts: 5

    Hi Chris,

    I'm using the Virtual Machine approach, running two Ubuntu VMs within VirtualBox on a Windows PC.
    I have configured them both to use the bridged adapter and configured the boxes appropriately as per the guide.

    I leveraged the scripts in the guide to bootstrap the CP and then the worker.

    I don't recall hitting any issues like this during the other lab exercises; for the most part the pods work on either node, except, it seems, when an attempt is made to call the API service from within a pod on the worker node.

    Thanks,
    Bryce

  • chrispokorni Posts: 1,549

    Hi @blambo10,

    VirtualBox VMs often cause networking issues when the default DHCP configuration is used.

    Is the promiscuous mode set to allow all on each VM's bridged adapter?

    What are the IP addresses of the VMs/nodes? This can be retrieved with kubectl get nodes -o wide

    What pod subnet is defined by the Calico network plugin? This can be retrieved from the calico.yaml file and from the kubeadm.yaml file.
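    For example, something along these lines (the VM name "ckanode2" is just an assumption; use whatever your VMs are called in VirtualBox):

    # On the Windows host: set promiscuous mode to allow-all on the bridged adapter (NIC 1 here; VM must be powered off)
    VBoxManage modifyvm "ckanode2" --nicpromisc1 allow-all

    # Node IPs:
    kubectl get nodes -o wide

    # Pod subnet as kubeadm and Calico see it:
    kubectl -n kube-system get cm kubeadm-config -o yaml | grep -E 'podSubnet|serviceSubnet'
    grep -A1 CALICO_IPV4POOL_CIDR calico.yaml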

    Regards,
    -Chris

  • blambo10 Posts: 5
    edited July 7

    Hi @chrispokorni,

    Both VMs have their IPs configured statically.

    Promiscuous Mode

    Both VMs have their bridged adapters configured with promiscuous mode set to Allow All.

    Node IPs

    [email protected]:~$ kubectl get nodes -o wide

    NAME       STATUS   ROLES                  AGE    VERSION   INTERNAL-IP     CONTAINER-RUNTIME
    ckanode2   Ready    <none>                 129d   v1.23.1   192.168.1.241   cri-o://1.23.1
    cp         Ready    control-plane,master   130d   v1.23.4   192.168.1.237   cri-o://1.23.1
    

    kubeadm-config

    ckanode1:~/debugging$ cat kubeadm-config.yaml

    apiVersion: v1
    data:
      ClusterConfiguration: |
        apiServer:
          extraArgs:
            authorization-mode: Node,RBAC
          timeoutForControlPlane: 4m0s
        apiVersion: kubeadm.k8s.io/v1beta3
        certificatesDir: /etc/kubernetes/pki
        clusterName: kubernetes
        controllerManager: {}
        dns: {}
        etcd:
          local:
            dataDir: /var/lib/etcd
        imageRepository: k8s.gcr.io
        kind: ClusterConfiguration
        kubernetesVersion: v1.23.1
        networking:
          dnsDomain: cluster.local
          podSubnet: 192.168.0.0/16
          serviceSubnet: 10.96.0.0/12
        scheduler: {}
    kind: ConfigMap
    metadata:
      creationTimestamp: "2022-02-27T03:30:39Z"
      name: kubeadm-config
      namespace: kube-system
      resourceVersion: "208"
      uid: aa597570-d12c-4907-9412-80004bfe04db
    

    calico.yaml

    I'm not entirely sure what "calico.yaml" is, as Calico was installed via the scripts provided.

    It creates a lot of CRDs; any guidance on specifics here would be appreciated.

    ckanode1:~/debugging$ kubectl get crd | grep calico

    bgpconfigurations.crd.projectcalico.org               2022-02-27T03:30:51Z
    bgppeers.crd.projectcalico.org                        2022-02-27T03:30:51Z
    blockaffinities.crd.projectcalico.org                 2022-02-27T03:30:51Z
    caliconodestatuses.crd.projectcalico.org              2022-02-27T03:30:51Z
    clusterinformations.crd.projectcalico.org             2022-02-27T03:30:51Z
    felixconfigurations.crd.projectcalico.org             2022-02-27T03:30:51Z
    globalnetworkpolicies.crd.projectcalico.org           2022-02-27T03:30:51Z
    globalnetworksets.crd.projectcalico.org               2022-02-27T03:30:51Z
    hostendpoints.crd.projectcalico.org                   2022-02-27T03:30:51Z
    ipamblocks.crd.projectcalico.org                      2022-02-27T03:30:51Z
    ipamconfigs.crd.projectcalico.org                     2022-02-27T03:30:51Z
    ipamhandles.crd.projectcalico.org                     2022-02-27T03:30:51Z
    ippools.crd.projectcalico.org                         2022-02-27T03:30:51Z
    ipreservations.crd.projectcalico.org                  2022-02-27T03:30:51Z
    kubecontrollersconfigurations.crd.projectcalico.org   2022-02-27T03:30:51Z
    networkpolicies.crd.projectcalico.org                 2022-02-27T03:30:51Z
    networksets.crd.projectcalico.org                     2022-02-27T03:30:51Z
    

    Thanks,
    Bryce

  • chrispokorni Posts: 1,549

    Hi @blambo10,

    Thank you for all the detailed outputs. There are several aspects of the cluster that are of concern.

    Mismatched Kubernetes versions between the two nodes (unless you are going through the cluster upgrade process, and the worker node upgrade is to follow...)

    If Calico is used with its default pod network, matching the podSubnet 192.168.0.0/16 from the kubeadm-config.yaml file, then the pod network overlaps the VM/node IP addresses 192.168.1.x. In such cases, routing within the cluster will be impacted.
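    A quick way to see the overlap side by side (assuming kubectl access; the grep pattern is just illustrative):

    kubectl get nodes -o wide | awk '{print $1, $6}'                        # node names and INTERNAL-IPs (192.168.1.x)
    kubectl -n kube-system get cm kubeadm-config -o yaml | grep podSubnet   # 192.168.0.0/16, which contains 192.168.1.x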

    The recommendation is to bootstrap a new cluster and ensure the hypervisor assigned IP addresses of your VMs/nodes do not overlap the default pod network managed by the Calico network plugin. Also, ensure all nodes run the same Kubernetes version.

    Regards,
    -Chris

  • blambo10 Posts: 5

    Hi @chrispokorni,

    Now that you've pointed it out, it's quite obvious.

    ckanode1:~/app2$ kubectl describe blockaffinities.crd.projectcalico.org | egrep "Name:|Cidr|Node"

    Name: ckanode2-192-168-85-192-26
    Cidr: 192.168.85.192/26
    Node: ckanode2
    Name: cp-192-168-242-64-26
    Cidr: 192.168.242.64/26
    Node: cp
    

    While checking earlier I must not have been paying attention, and completely missed the /16 mask; a second pair of eyes always helps.

    I was going to ask if there is a way to modify the Calico IP pools without recreating the cluster,
    but per the comments found in the installation YAML on https://projectcalico.docs.tigera.io/networking/change-block-size it seems not:

    # Configures Calico networking.
    calicoNetwork:
      # Note: The ipPools section cannot be modified post-install.
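    For reference, the live pool can still be inspected through the ippools CRD (the pool name below is Calico's default for manifest installs; adjust if yours differs):

    ckanode1:~$ kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml | grep -i cidr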
    

    Thanks for your help; it's been greatly appreciated.

    Thanks,
    Bryce

  • chrispokorni Posts: 1,549
    edited July 11

    Hi @blambo10,

    I would follow the Note you quoted from the Calico documentation, and build a new cluster with the desired subnets. There may be articles or blog posts outlining quick and dirty ways to update properties of a live cluster, but chances are that down the road they will come back to haunt you.

    Regards,
    -Chris

  • blambo10 Posts: 5

    Hi @chrispokorni,

    Yes, I avoided trying to fix the broken cluster and simply rebuilt it.

    I ended up downloading the calico.yaml and updating the "calico-node" env vars,
    as well as updating the kubeadm.yaml pod subnet, changing it from 192.168.0.0/16 to 172.16.0.0/16.

    kubeadm.yaml

    kind: ClusterConfiguration
    kubernetesVersion: 1.24.2
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 172.16.0.0/16
    

    calico.yaml (calico-node) container env

    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/16"
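    For completeness, a rebuild with these files typically goes something like this (file paths are assumptions; the course scripts may wrap these steps):

    ckad-cp:~$ sudo kubeadm init --config=kubeadm.yaml
    ckad-cp:~$ kubectl apply -f calico.yaml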
    

    I assume the Calico config overrides the podSubnet set in kubeadm,
    though I just set both to be safe.

    My problem with Linkerd has now been resolved.
    Below are the Linkerd pods running on the worker node:

    ckad-cp:~$  kubectl get pods -n linkerd -o wide
    NAME                                      READY   STATUS    RESTARTS   AGE   IP              NODE                          
    linkerd-destination-5c5686697d-rn2pc      4/4     Running   0          16m   172.16.108.68   ckad-worker.thelabshack.com   
    linkerd-identity-778bd797b9-8fc6h         2/2     Running   0          16m   172.16.108.67   ckad-worker.thelabshack.com  
    linkerd-proxy-injector-84bcc448cd-tlzts   2/2     Running   0          16m   172.16.108.69   ckad-worker.thelabshack.com   
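    As a sanity check (same query as before), the block affinities should now show Cidr values inside 172.16.0.0/16 instead of overlapping the node IPs:

    ckad-cp:~$ kubectl describe blockaffinities.crd.projectcalico.org | egrep "Name:|Cidr|Node"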
    

    Thanks,
    Bryce

  • chrispokorni Posts: 1,549

    Hi @blambo10,

    Though I just set both to be safe.

    Setting both is the recommended course of action :wink:

    Glad the service mesh is running now!

    Regards,
    -Chris
