
3.2.1 easyregistry.yaml pod created in worker node not cp node

docklander Posts: 6
edited July 13 in LFD259 Class Forum

Hi guys,
I'm up to 3.2.1.
Now, whenever I run kubectl create [....sample.yaml....]

unlike the lab documentation, all pods get created on the 'Worker' node, and as a result I fail to curl the registry or service.

here is the cmd
[email protected]:~/app1$ kubectl create -f easyregistry.yaml
service/nginx created
service/registry created
deployment.apps/nginx created
persistentvolumeclaim/nginx-claim0 created
deployment.apps/registry created
persistentvolumeclaim/registry-claim0 created
persistentvolume/vol1 created
persistentvolume/vol2 created
[email protected]:~/app1$ kubectl get pods -o wide
nginx-688c5c8689-8lr72 1/1 Running 0 17s lfclass-aks-worker
registry-7f9b448c88-frzmn 1/1 Running 0 17s lfclass-aks-worker

See the NODE column: both pods landed on the lfclass-aks-worker node, but I want them created on the cp node.

What needs to be done in the easyregistry.yaml file so it targets the cp node instead of the worker?

And is there any global setting I can apply to avoid this situation going forward? I've still got heaps of labs to go :smile:

By the way, I have also done the taint removal from 2.2.15; the command below returns nothing, so no taints remain:
[email protected]:~/app1$ kubectl describe nodes | grep -i taint
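For reference, the taint removal from 2.2.15 looks something like this (a sketch; the exact taint key depends on your Kubernetes version):

```shell
# Allow regular pods on the control plane by removing its taint.
# Newer releases use the key node-role.kubernetes.io/control-plane;
# older releases used node-role.kubernetes.io/master.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

# Verify: no output from the grep means no taints remain
kubectl describe nodes | grep -i taint
```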


  • chrispokorni
    chrispokorni Posts: 1,552
    edited July 13

    Hi @docklander,

    The nginx and registry pods can be scheduled on any desired node by adding the nodeName property to each Deployment definition - see a sample in the documentation.
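As a rough sketch, the addition looks like this (a fragment only; the node name lfclass-aks-cp is an assumption, so substitute the name reported by kubectl get nodes):

```yaml
# Fragment of a Deployment from easyregistry.yaml;
# only the nodeName line is new
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    spec:
      nodeName: lfclass-aks-cp   # assumed cp node name; check `kubectl get nodes`
      containers:
      - name: nginx
        image: nginx
```

Note that nodeName bypasses the scheduler entirely, so the Pod will only ever run on that one node.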

EDIT: The failed curl indicates that the networking between your cluster nodes may be misconfigured. In a healthy cluster, one should be able to curl a Service IP address from either one of the nodes (control plane or worker), regardless of where the Pod gets scheduled. After completing the repo configuration from Lab exercise 3.2, the container runtimes on both nodes are expected to be able to communicate with the registry, regardless of the node where it is running.
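A sketch of that check (the 5000 port and /v2/ path follow the usual registry setup in the lab; substitute the ClusterIP your cluster reports):

```shell
# Look up the registry Service's ClusterIP
kubectl get svc registry

# From either node, the registry API root should respond
# (an empty JSON object {} indicates the registry is reachable)
curl http://<ClusterIP>:5000/v2/
```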


  • docklander
    docklander Posts: 6
    edited August 9

[SOLUTION] Anyone using an Azure Linux VM may hit networking challenges.
This is what I did, and it fixed my issue.

When you run k8scp.sh, install canal.yaml instead of calico.yaml, as below:

    Line 98 # kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml -> comment this
    Line 99 kubectl apply -f https://projectcalico.docs.tigera.io/manifests/canal.yaml -> use this

This lets the CP node communicate with the WORKER node without issue. This was working as of July 2022.
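To verify the swap took effect after the script runs, you can check for canal pods in kube-system (a quick sanity check, not from the lab itself):

```shell
# The CNI pods should now be canal-* rather than calico-*,
# with one Running canal pod per node
kubectl get pods -n kube-system -o wide | grep canal
```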

