Lab 3.2 Step 20 CURL not working

Hi all,

I've been following all the steps of lab 3.2. I used the edited-localregistry.yaml to deploy the local registry.

Everything seems to be running without any errors, but when I try to curl my registry on port 5000, the connection times out.

I'm currently doing the labs on two VirtualBox VMs running Ubuntu 20.04 LTS. The firewall is disabled. The same curl did work against the docker-compose.yaml setup.

  mark@master:~$ kubectl get pods,svc,pvc,pv,deploy
  NAME                           READY   STATUS    RESTARTS   AGE
  pod/nginx-b68dd9f75-tsvnz      1/1     Running   0          27m
  pod/registry-6b5bb79c4-xq8sp   1/1     Running   0          27m

  NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
  service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    5h30m
  service/nginx        ClusterIP   10.102.40.15   <none>        443/TCP    27m
  service/registry     ClusterIP   10.102.44.4    <none>        5000/TCP   27m

  NAME                                    STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  persistentvolumeclaim/nginx-claim0      Bound    task-pv-volume   200Mi      RWO                           27m
  persistentvolumeclaim/registry-claim0   Bound    registryvm       200Mi      RWO                           27m

  NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
  persistentvolume/registryvm       200Mi      RWO            Retain           Bound    default/registry-claim0                           27m
  persistentvolume/task-pv-volume   200Mi      RWO            Retain           Bound    default/nginx-claim0                              27m

  NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/nginx      1/1     1            1           27m
  deployment.apps/registry   1/1     1            1           27m
  mark@master:~$ curl http://10.102.44.4:5000/v2/
  curl: (28) Failed to connect to 10.102.44.4 port 5000: Connection timed out
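
In case it helps with diagnosis, here is a generic way to confirm the Service is actually backed by the Pod (a sketch; the tester Pod name and busybox image are just examples, not from the lab):

  kubectl get endpoints registry   # should list the registry Pod's IP and port 5000
  # run a throwaway Pod and issue the same request from inside the cluster
  kubectl run tester --rm -it --restart=Never --image=busybox -- \
    wget -qO- http://10.102.44.4:5000/v2/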

Am I missing something?

Thanks in advance for the help!


Comments

  • Hi @mark.hendriks,

    Since you are running on VBox, would you be able to provide the output of kubectl get pod -A -o wide?

    Regards,
    -Chris

  • Hi @chrispokorni,

    Thanks for your reply.

    This is the output you requested:

    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE     NOMINATED NODE   READINESS GATES
    default       nginx-b68dd9f75-tsvnz                      1/1     Running   0          88m     192.168.171.72    worker   <none>           <none>
    default       registry-6b5bb79c4-xq8sp                   1/1     Running   0          88m     192.168.171.71    worker   <none>           <none>
    kube-system   calico-kube-controllers-69496d8b75-whp7d   1/1     Running   3          6h31m   192.168.219.75    master   <none>           <none>
    kube-system   calico-node-5njrd                          1/1     Running   3          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   calico-node-7mkm2                          1/1     Running   2          6h31m   192.168.178.202   worker   <none>           <none>
    kube-system   coredns-74ff55c5b-8x58m                    1/1     Running   3          6h31m   192.168.219.76    master   <none>           <none>
    kube-system   coredns-74ff55c5b-cv6gq                    1/1     Running   3          6h31m   192.168.219.74    master   <none>           <none>
    kube-system   etcd-master                                1/1     Running   3          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-apiserver-master                      1/1     Running   5          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-controller-manager-master             1/1     Running   5          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-proxy-2ncsc                           1/1     Running   3          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-proxy-rtcsx                           1/1     Running   2          6h31m   192.168.178.202   worker   <none>           <none>
    kube-system   kube-scheduler-master                      1/1     Running   4          6h31m   192.168.178.201   master   <none>           <none>

    Thanks,
    Mark

  • Right after posting my last reply, I rebooted my cluster, and all of a sudden it works.

    Still wondering why it didn't work before.

  • Thank you for the detailed output, Mark. As I suspected, your VM IP addresses overlap the Pod IP subnet managed by the Calico CNI plugin: the nodes sit at 192.168.178.201/.202, inside Calico's default 192.168.0.0/16 Pod pool. This overlap causes DNS and routing issues between Nodes and Pods in your cluster.
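
    The overlap is visible right in your output; a quick generic check (just a sketch):

      kubectl get nodes -o wide                   # InternalIP column: 192.168.178.x
      grep -A1 CALICO_IPV4POOL_CIDR calico.yaml   # Pod pool CIDR (192.168.0.0/16 default; may be commented out)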

    I would recommend rebuilding your cluster and making sure either that the VMs do not use addresses from 192.168.0.0/16 (by changing the hypervisor's network configuration), or that you reconfigure the calico.yaml file and the kubeadm init command found in the k8sMaster.sh script to use a different private IP subnet for your Pods; a rough sketch follows.
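
    (10.244.0.0/16 below is only an illustration; any private range that does not collide with your VM network would do.)

      # point the Pod pool in the Calico manifest at the new subnet before applying it
      sed -i 's|192.168.0.0/16|10.244.0.0/16|g' calico.yaml
      # initialize the control plane with the matching Pod CIDR
      sudo kubeadm init --pod-network-cidr=10.244.0.0/16
      # then set up kubeconfig and apply the manifest, as the lab script does
      kubectl apply -f calico.yaml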

    Regards,
    -Chris

  • I just finished rebuilding my cluster. I went with Ubuntu 18.04 LTS this time around, since the k8sMaster.sh script mentions that version.

    I changed the IP range to 192.10.0.0/16 in both the k8sMaster.sh and the calico.yaml files.

    Everything works just fine now.

    Thanks again for the assistance Chris!

    Kind regards,
    Mark

