
Exercise 3.2: Configure A Local Docker Repo - Unable to Push to Local Docker Repo

Hi all,
My Kubernetes cluster is set up behind a proxy, and I have had to create NO_PROXY entries for the cluster nodes and services in the /etc/systemd/system/docker.service.d/http-proxy.conf file.

Environment="NO_PROXY="localhost,127.0.0.1,10.100.113.117.....
where "10.100.113.117" is the service/registry ClusterIP

Everything looks good with the local Docker repo setup in Exercise 3.2 up to the point of pushing the Docker image to the local repo.
The push fails with the error: Client.Timeout exceeded while awaiting headers

root@kubemaster:~# sudo docker push 10.100.113.117:5000/tagtest
The push refers to repository [10.100.113.117:5000/tagtest]
Get http://10.100.113.117:5000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

The registry and nginx pods are running, as shown below:
root@kubemaster:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-f875c68bb-mtxhv 1/1 Running 0 3h20m
registry-7ccd695dc7-4d8px 1/1 Running 0 3h20m

  • curl http://10.100.113.117:5000/v2/

    {}root@kubemaster

  • sudo vim /etc/docker/daemon.json
    { "insecure-registries":["10.100.113.117:5000"] }
    (restart and verification commands are sketched after this list)

  • sudo systemctl status docker.service | grep Active

    Active: active (running) since Sun 2020-05-10 13:27:36 UTC; 15min ago
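
As a sanity check for the daemon.json step listed above: Docker only reads /etc/docker/daemon.json at startup, so the daemon must be restarted after the edit. Something along these lines (using the ClusterIP from this thread) confirms the registry was registered as insecure:

sudo systemctl restart docker
sudo docker info | grep -A 3 "Insecure Registries"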

Has anybody experienced a similar issue with pushing Docker images to a local repo? Can anyone please help with this?

Answers

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @eromskiee,

    While NO_PROXY is a common fix when running behind a proxy, there may be other settings specific to your environment that should also be examined. Your hypervisor, cloud provider, operating system, and possibly other factors play a key role in figuring out what causes the timeout. I would start by looking at firewalls, both at the infrastructure level and at the OS level, as they are most often the reason users experience timeouts (a few example checks follow this reply).

    Regards,
    -Chris
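
As a starting point for the firewall checks suggested above, a few commands that are often useful on Ubuntu-based nodes (run them on each node; rules and tools vary by environment, and the ClusterIP is the one from this thread):

sudo ufw status verbose
sudo iptables -L -n | grep -E "DROP|REJECT"
nc -vz 10.100.113.117 5000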

  • I have the same issue, but my question is: how can I access a ClusterIP service from a node (not from a Pod) without creating a proxy? Shouldn't it need to be a NodePort-type service to be accessible from the nodes?
    In Exercise 3.2: Configure A Local Repo there is no mention of a proxy.

  • serewicz
    serewicz Posts: 1,000

    Hello Simone,

    Calico allows the nodes to contact a ClusterIP. Assuming there is no misconfiguration in the firewall or networking, each node should be able to connect to each ClusterIP. The ClusterIP is the persistent IP within the cluster; a NodePort would be used for traffic coming from outside the cluster.

    Regards,
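
To illustrate the difference, a rough kubectl sketch (the service names below are made up for the example; the lab's registry deployment normally already has a ClusterIP service):

kubectl expose deployment registry --port=5000 --type=ClusterIP --name=registry-clusterip
kubectl expose deployment registry --port=5000 --type=NodePort --name=registry-nodeport
kubectl get svc registry-clusterip registry-nodeport

The ClusterIP service gets a stable virtual IP reachable from pods and, with a working CNI such as Calico, from the nodes themselves; the NodePort service is additionally exposed on every node's IP at a high port for traffic coming from outside the cluster.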

  • SimoneZennaro
    SimoneZennaro Posts: 5
    edited November 2021

    Thanks @serewicz for clarifying! You're right: from the worker node where the service is running I am able to reach the ClusterIP service; it's only from the control plane that I can't.

  • serewicz
    serewicz Posts: 1,000

    Hello,

    Please share the output of kubectl get pods --all-namespaces, so we can see if Calico is having any issues. Also, are you sure there is no firewall between the nodes? If you are using GCE, ensure that the VPC allows ALL traffic.

    Regards,
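
One quick way to narrow this down while gathering that output is to repeat the same check from the control plane and from the worker (the service name "registry" is assumed to match the lab; substitute the ClusterIP your cluster reports):

kubectl get svc registry
curl --max-time 5 http://<cluster-ip>:5000/v2/
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide

A healthy registry answers {} within the timeout; the last command shows whether the calico-node pod on each node is Running and Ready.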

  • SimoneZennaro
    SimoneZennaro Posts: 5
    edited November 2021

    Hi, this is the output:
    root@ip-172-31-2-67:~# k get pods -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    default busybox 1/1 Running 1 (22h ago) 22h
    default dep1-5749cf8c94-7b29l 1/1 Running 0 39h
    default dep1-5749cf8c94-gwjl9 1/1 Running 0 39h
    default dep1-5749cf8c94-kq4xp 1/1 Running 0 21h
    default dep1-5749cf8c94-wsv4k 1/1 Running 0 39h
    default nginx-b68dd9f75-9fs86 1/1 Running 0 22h
    default probe-pod 1/1 Running 19 (22m ago) 19h
    default registry-6b5bb79c4-4gkzk 1/1 Running 0 22h
    default sleep3for5--1-ls62m 0/1 Completed 0 18h
    default sleep3for5--1-r887m 0/1 Completed 0 18h
    default test 1/1 Running 0 40h
    kube-system calico-kube-controllers-5d995d45d6-m6n2h 1/1 Running 0 9d
    kube-system calico-node-xd6cg 1/1 Running 1 41h
    kube-system calico-node-zrs64 1/1 Running 0 9d
    kube-system coredns-78fcd69978-m4ztt 1/1 Running 0 9d
    kube-system coredns-78fcd69978-qqgzg 1/1 Running 0 9d
    kube-system etcd-cp 1/1 Running 0 9d
    kube-system kube-apiserver-cp 1/1 Running 0 9d
    kube-system kube-controller-manager-cp 1/1 Running 0 9d
    kube-system kube-proxy-5d68h 1/1 Running 0 9d
    kube-system kube-proxy-j5s66 1/1 Running 2 9d
    kube-system kube-scheduler-cp 1/1 Running 0 9d

    About the network: the instances (AWS EC2) are in the same security group, and I allowed all traffic (from their private IPs) between them.

  • chrispokorni
    chrispokorni Posts: 2,155
    edited November 2021

    Hi @SimoneZennaro,

    I see two issues here:

    the instances (AWS EC2) are in the same security group

    Since SG names are unique, it seems that each instance is in its own SG, and you have 2 separate SGs defined. I would correct this first.

    allowed all traffic

    It seems that only TCP traffic is allowed, while all other protocols will be blocked. The earlier recommendation to allow ALL traffic implied all protocols from all sources. Kubernetes and its plugins may use protocols other than TCP to manage the cluster.

    The course includes a demo video guiding the setup process for AWS EC2 instances, including VPC and SG configuration. I would recommend watching the video, as it may clarify the overall setup.

    Regards,
    -Chris
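
For reference, a self-referencing "all traffic" rule of that kind could be sketched with the AWS CLI roughly as follows (the security group ID is a placeholder, not one from this thread; -1 is the EC2 notation for all protocols):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol -1 --source-group sg-0123456789abcdef0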

  • Thanks @chrispokorni, changing "All TCP" to "All Traffic" made it work!
