Issue with worker node on Lab 3.2 Step 30 (worker pull from registry)

Hi,
I have a problem in Step 30 of Lab 3.2.
Context: I have two EC2 instances with TCP port 5000 opened in their security groups.
worker node (cannot pull from local registry):
ubuntu@ip-172-31-16-147:~/LFD259/SOLUTIONS/s_02$ cat /etc/docker/daemon.json
{ "insecure-registries":["10.111.241.52:5000"] }
ubuntu@ip-172-31-16-147:~/LFD259/SOLUTIONS/s_02$ k get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    24h
nginx        ClusterIP   10.107.231.29   <none>        443/TCP    61m
registry     ClusterIP   10.111.241.52   <none>        5000/TCP   61m
ubuntu@ip-172-31-16-147:~/LFD259/SOLUTIONS/s_02$ docker pull 10.111.241.52:5000/simpleapp
Using default tag: latest
Error response from daemon: Get http://10.111.241.52:5000/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
ubuntu@ip-172-31-16-147:~/LFD259/SOLUTIONS/s_02$ curl http://10.111.241.52:5000/v2/
^C
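(A side note on that last curl: giving it a timeout avoids having to Ctrl-C, e.g.

curl --max-time 5 http://10.111.241.52:5000/v2/

On the worker this should simply time out, while on the master it returns {} as shown below.)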
master node (works fine):
ubuntu@ip-172-31-17-134:~/LFD259/SOLUTIONS/s_02$ k get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    24h
nginx        ClusterIP   10.107.231.29   <none>        443/TCP    58m
registry     ClusterIP   10.111.241.52   <none>        5000/TCP   58m
ubuntu@ip-172-31-17-134:~/LFD259/SOLUTIONS/s_02$ bat /etc/docker/daemon.json
{ "insecure-registries":["10.111.241.52:5000"] }
ubuntu@ip-172-31-17-134:~/LFD259/SOLUTIONS/s_02$ curl http://10.111.241.52:5000/v2/
{}
ubuntu@ip-172-31-17-134:~/LFD259/SOLUTIONS/s_02$ sudo docker pull 10.111.241.52:5000/tagtest
Using default tag: latest
latest: Pulling from tagtest
Digest: sha256:134c7fe821b9d359490cd009ce7ca322453f4f2d018623f849e580a89a685e5d
Status: Image is up to date for 10.111.241.52:5000/tagtest:latest
Any ideas? Thank you!
Comments
-
Typically, docker commands run as root. Also, I am not sure what the ownership of your daemon.json file is; normally it should be root.
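For example, a quick check of the file's ownership:

ls -l /etc/docker/daemon.json   # owner and group should normally be root root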
I see an interesting setup there with kubectl running on both nodes.
As far as curl goes on the worker, there may still be an SG issue.
Regards,
-Chris
-
I've added my user to the docker group, so no need for sudo, I guess. If I run docker info, I can see that the daemon.json config has been picked up after restarting the docker service on both the worker and master nodes.
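For anyone following along, the usual commands for this look roughly like (a sketch; registry IP as in this thread):

sudo usermod -aG docker $USER                  # add the current user to the docker group (re-login to take effect)
sudo systemctl restart docker                  # pick up /etc/docker/daemon.json
docker info | grep -A1 'Insecure Registries'   # should list 10.111.241.52:5000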
Yeah, I've installed kubectl on the worker node too, to troubleshoot, but the problem was there before.
Curl does not work from the worker to the master using that k8s cluster inner IP (10.111.241.52), but ping works with the master node IP (the private IP of the EC2 instances within the same VPC). I've opened all TCP ports in the SG with the same result. So... not really confident it's an SG issue.
Some colleagues suggested that maybe the registry service should be of type NodePort instead of ClusterIP, in order to publish the address and port to the worker node. Actually, I changed it to NodePort and it worked, but I'm not sure that was the right approach, since the lab is pretty clear about using the k8s inner IP from the worker node (type ClusterIP).
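For reference, the NodePort change amounts to something like this (a sketch; service name as in the output above):

kubectl patch svc registry -p '{"spec":{"type":"NodePort"}}'
kubectl get svc registry    # shows the node port mapped to 5000, e.g. 5000:3XXXX/TCP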
Any other ideas? Do you need more info from my side?
Many thanks for the help.
Regards,
Rubén
-
Hello,
I see from your context that you mention opening TCP port 5000 in your security group. Could you please try the lab again, this time with the firewall fully removed as suggested in the lab setup guide and video, and see if the issue persists?
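On the Ubuntu instances that usually means something like (a sketch, assuming ufw is the host firewall):

sudo ufw status     # check whether a host firewall is active
sudo ufw disable    # remove it entirely, per the setup guide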
Kind regards,
-
Hi @rcougil,
Is your SG overall restrictive, with individual rules to allow traffic to specific ports? An SG configured to allow all ingress traffic may be the solution in this case. For an extra level of security, a custom VPC may be suitable for the all-open SG, with the Kubernetes cluster nodes deployed in the custom VPC.
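With the AWS CLI, an all-open ingress rule could look like this (a sketch; sg-xxxxxxxx is a placeholder for your security group ID):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol all --cidr 0.0.0.0/0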
Regards,
-Chris
-
Hello again,
Please reference the docker-compose.yaml file in the lab. You will note that there is an nginx registry running on port 443. If you add this port to your network security settings, does the pull command work? Also, when you run the sudo docker pull command, if you include a -vvv
Hello, I had a similar problem; maybe my fix could also solve your case.
As the lab uses the Calico network plugin, TCP port 179 should be open in every node's iptables. A good way to verify whether this is your case is via
sudo calicoctl node status
You can get calicoctl from
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.11.1/calicoctl
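The downloaded binary then needs to be made executable and put on the PATH, e.g.:

chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/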
If you see anything different from Established in the Info column, it means that the connection among your nodes is not fully established.
In my case, TCP port 179 was not accepting incoming connections on the worker machine. I fixed it in that machine's iptables with
sudo iptables -I INPUT 5 -i eth0 -p tcp --dport 179 -m state --state NEW,ESTABLISHED -j ACCEPT
and everything started working properly, with calicoctl now showing Established.
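One caveat worth adding: a rule inserted this way does not survive a reboot by itself; on Ubuntu it can be saved with something like (assuming the iptables-persistent package):

sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save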