
Lab 3.2 Step 20 CURL not working

Hi all,

I've been following all the steps of lab 3.2. I used the edited-localregistry.yaml to deploy the local registry.

Everything seems to be running without errors, but when I curl my registry on port 5000, I get a connection timeout.

I'm currently doing the labs on two VirtualBox VMs running Ubuntu 20.04 LTS. The firewall is disabled. The same curl did work against the docker-compose.yaml setup.

$ kubectl get pods,svc,pvc,pv,deploy
NAME                           READY   STATUS    RESTARTS   AGE
pod/nginx-b68dd9f75-tsvnz      1/1     Running   0          27m
pod/registry-6b5bb79c4-xq8sp   1/1     Running   0          27m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    5h30m
service/nginx        ClusterIP   10.102.40.15   <none>        443/TCP    27m
service/registry     ClusterIP   10.102.44.4    <none>        5000/TCP   27m

NAME                                    STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginx-claim0      Bound    task-pv-volume   200Mi      RWO                           27m
persistentvolumeclaim/registry-claim0   Bound    registryvm       200Mi      RWO                           27m

NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
persistentvolume/registryvm       200Mi      RWO            Retain           Bound    default/registry-claim0                           27m
persistentvolume/task-pv-volume   200Mi      RWO            Retain           Bound    default/nginx-claim0                              27m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx      1/1     1            1           27m
deployment.apps/registry   1/1     1            1           27m
$ curl http://10.102.44.4:5000/v2/
curl: (28) Failed to connect to 10.102.44.4 port 5000: Connection timed out

Am I missing something?

Thanks in advance for the help!

Comments

  • chrispokorni

    Hi @mark.hendriks,

Since you are running on VirtualBox, would you be able to provide the output of kubectl get pod -A -o wide?

    Regards,
    -Chris

  • Hi @chrispokorni,

    Thanks for your reply.

    This is the output you requested:

    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE     NOMINATED NODE   READINESS GATES
    default       nginx-b68dd9f75-tsvnz                      1/1     Running   0          88m     192.168.171.72    worker   <none>           <none>
    default       registry-6b5bb79c4-xq8sp                   1/1     Running   0          88m     192.168.171.71    worker   <none>           <none>
    kube-system   calico-kube-controllers-69496d8b75-whp7d   1/1     Running   3          6h31m   192.168.219.75    master   <none>           <none>
    kube-system   calico-node-5njrd                          1/1     Running   3          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   calico-node-7mkm2                          1/1     Running   2          6h31m   192.168.178.202   worker   <none>           <none>
    kube-system   coredns-74ff55c5b-8x58m                    1/1     Running   3          6h31m   192.168.219.76    master   <none>           <none>
    kube-system   coredns-74ff55c5b-cv6gq                    1/1     Running   3          6h31m   192.168.219.74    master   <none>           <none>
    kube-system   etcd-master                                1/1     Running   3          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-apiserver-master                      1/1     Running   5          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-controller-manager-master             1/1     Running   5          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-proxy-2ncsc                           1/1     Running   3          6h31m   192.168.178.201   master   <none>           <none>
    kube-system   kube-proxy-rtcsx                           1/1     Running   2          6h31m   192.168.178.202   worker   <none>           <none>
    kube-system   kube-scheduler-master                      1/1     Running   4          6h31m   192.168.178.201   master   <none>           <none>
    

    Thanks,
    Mark

Right after posting my last reply, I rebooted my cluster, and all of a sudden it works.

    Still wondering why it didn't work before.

  • chrispokorni

    Thank you for the detailed output, Mark. As I suspected, your VM IP addresses overlap the Pod IP subnet managed by the Calico CNI plugin. This overlap causes DNS and routing issues between Nodes and Pods in your cluster.
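
    A quick way to confirm this kind of overlap is to test whether the node IPs fall inside Calico's default Pod CIDR (192.168.0.0/16). A minimal sketch in plain shell, using the node IPs from the output above:

    ```shell
    # Convert a dotted-quad IPv4 address to a 32-bit integer
    ip_to_int() {
      local IFS=.
      set -- $1
      echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    }

    # Succeed if the address falls inside the CIDR block
    in_cidr() {
      local ip net bits mask
      ip=$(ip_to_int "$1")
      net=$(ip_to_int "${2%/*}")
      bits=${2#*/}
      mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
      [ $(( ip & mask )) -eq $(( net & mask )) ]
    }

    POD_CIDR="192.168.0.0/16"   # Calico's default Pod subnet
    for node_ip in 192.168.178.201 192.168.178.202; do
      if in_cidr "$node_ip" "$POD_CIDR"; then
        echo "$node_ip overlaps $POD_CIDR"
      else
        echo "$node_ip is outside $POD_CIDR"
      fi
    done
    ```

    Both node addresses land inside the Pod range, which is exactly the conflict described above.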

I would recommend rebuilding your cluster and ensuring either that the VMs do not use IP addresses from 192.168.0.0/16 (by adjusting the hypervisor's network configuration), or that you reconfigure the calico.yaml file and the kubeadm init command found in the k8sMaster.sh script to use a different private IP subnet for your Pods.
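
    For the second option, the change amounts to pointing both files at the same new Pod CIDR. A rough sketch against scratch copies (the file contents and the 10.244.0.0/16 subnet here are illustrative assumptions, not taken from the lab materials):

    ```shell
    # Example only: work on scratch copies so the real lab files stay untouched.
    NEW_CIDR="10.244.0.0/16"   # any private range that does not overlap the VM network

    # calico.yaml typically carries Calico's default pool; swap in the new subnet.
    printf '            value: "192.168.0.0/16"\n' > calico-demo.yaml
    sed -i "s|192.168.0.0/16|${NEW_CIDR}|" calico-demo.yaml

    # k8sMaster.sh passes the same range to kubeadm init; keep the two in sync.
    printf 'kubeadm init --pod-network-cidr=192.168.0.0/16\n' > k8sMaster-demo.sh
    sed -i "s|--pod-network-cidr=[^ ]*|--pod-network-cidr=${NEW_CIDR}|" k8sMaster-demo.sh

    cat calico-demo.yaml k8sMaster-demo.sh
    ```

    The key point is that whatever range you choose must match in both places, or Calico and kubeadm will disagree about the Pod network.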

    Regards,
    -Chris

  • I just finished rebuilding my cluster. I went with Ubuntu 18.04 LTS this time around, since the k8sMaster.sh mentioned that version.

    I changed the IP range to 192.10.0.0/16 in both the k8sMaster.sh and the calico.yaml files.

    Everything works just fine now.

    Thanks again for the assistance Chris!

    Kind regards,
    Mark
