Lab 3.1: Creating an insecure registry on k8s
I'm curious why we went with a ClusterIP service for the insecure registry, since it is only reachable from inside the cluster itself, instead of a NodePort. My problem is that I copied the k8s configuration file from the master node onto my local MacBook Pro, which allows me to manage the cluster from my laptop instead of from the master node. Typically, in a development environment, you are not developing on the master node or any of the k8s nodes to begin with.
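(For anyone who wants the same setup, a minimal sketch of how I did it; the user, host, and paths are assumptions and will differ on your cluster:)
# on the laptop: copy the admin kubeconfig from the master node
scp user@master-node:~/.kube/config ~/.kube/config-lab
# point kubectl at it for this shell session
export KUBECONFIG=~/.kube/config-lab
kubectl get nodes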
Comments
-
Hi,
You are right, the ClusterIP would be accessible only from inside the cluster, which I believe was the idea behind this exercise. For access from outside the cluster (e.g., from the host), you would need a NodePort type service.
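For reference, a minimal sketch of such a NodePort service; the name, selector label, and nodePort value are assumptions and will differ from the lab's generated manifests:
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  type: NodePort             # reachable on <any-node-IP>:nodePort from outside the cluster
  selector:
    app: registry            # assumed label on the registry pod
  ports:
  - port: 5000               # ClusterIP port
    targetPort: 5000         # container port
    nodePort: 30500          # assumed value; must fall in the default 30000-32767 range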
-Chris
-
As you said MrM, there are many options and ways to access the cluster. In this case, it was only done this way as one of several choices. In later labs you'll make use of NodePort and LoadBalancer, and learn the advantages and disadvantages of each.
Kind regards,
-
Honestly, I don't know if we should have even used Kompose to convert the docker-compose deployment to k8s. I think I would have benefited more from understanding how to deploy the registry from scratch than from using that method. Perhaps the author was looking for an easy way to do this, but as a student I want to learn how to do it, not just do it the easy way. I feel like some of these labs were not well thought out, or were done too hastily.
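For anyone who prefers the from-scratch route, a minimal sketch of a registry Deployment; the names and labels are assumptions, and a matching Service plus persistent storage are still needed on top of this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2        # the stock Docker Distribution registry image
        ports:
        - containerPort: 5000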
-
I'm in the same boat. These labs seem inconsistent when I try to execute them. I ended up using Minikube to get a single-node cluster going (for 2.1) because the Ubuntu shell scripts didn't work properly (I used a VM). Right now (on 3.1) I'm getting connection issues with my cluster, so I'm just pushing forward, even though I'm not getting the expected results in the exercises.
-
Hi,
I know how frustrating it is when your own results differ from the ones presented in the lab manual. All the labs have been beta tested, and all commands and outputs were reproduced several times. For consistency, however, each lab was completed on Google Compute Engine instances inside VPCs.
Can you provide some details about your setup? We may be able to figure out what causes the errors mentioned above. The error outputs may also help.
Are your VMs in the cloud, or local VBox/VMware?
If your infrastructure is good, then the next likely culprit is YAML indentation. If the whitespace is not correct, YAML files will cause a lot of headaches.
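To illustrate how little it takes, a made-up example: the two snippets below differ by a single space, and only the first one parses:
# correct: image is aligned with name, part of the same list item
spec:
  containers:
  - name: registry
    image: registry:2
# broken: one extra space before image causes a YAML parse error
spec:
  containers:
  - name: registry
     image: registry:2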
Regards,
-Chris
-
As was mentioned earlier, the local registry service is tied to a ClusterIP, not a NodePort. Therefore, page 14 of the lab guide is incorrect when it states to configure the minion to connect to the ClusterIP of the master and pull simpleapp. Please provide the correct procedure for connecting the minion to the master's registry.
-
Hi,
You are correct about the registry being tied to the master's ClusterIP. Consequently, the minion will use that same ClusterIP to connect to the master's registry, just as presented in the lab manual.
Are you getting any errors at this step? Can you provide some output to help identify what causes the error, if any?
A few troubleshooting steps would be to check whether the firewalls are disabled on both nodes and whether all traffic is allowed between them. Are the VMs local or in the cloud?
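A sketch of those checks on Ubuntu nodes, assuming ufw and Docker are in use; the registry address is an assumption, substitute your service's ClusterIP:
# on each node: check/disable the host firewall (lab setting only)
sudo ufw status
sudo ufw disable
# each node's Docker daemon must also trust the insecure registry
cat /etc/docker/daemon.json
# expected to contain something like: { "insecure-registries": ["10.110.186.162:5000"] }
sudo systemctl restart docker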
Thanks,
-Chris
-
I created a NodePort service to access my registry running on a VirtualBox multi-node deployment with Calico, but I'm curious how this should work on GKE without NodePort. If ClusterIPs are only addressable within the cluster network, where does the author of Lab 3.1 want us to run curl http://10.110.186.162:5000/v2/ from? Exec into a pod and run it from there? Or does GKE do something funky with routes/bridges to allow access from the nodes straight into the cluster-network IP range?
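If running it from inside the cluster is the intent, one way to do it is a throwaway pod; the image choice is an assumption:
# start a one-off pod with curl, run the check against the ClusterIP, and clean up afterwards
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl http://10.110.186.162:5000/v2/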
-
I suspect this issue is pod-network-type dependent. Calico doesn't create a route/bridge from the node network to the cluster network, but I'm told that Flannel (and maybe some others) do.
I suggest that the instructions at the top of Lab 2.1, page 3, which say:
You should now deploy a pod network to the cluster.
Run kubectl apply -f [podnetwork].yaml with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
should be changed to specify a suitable network for completing the rest of the labs.
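For example, to deploy Calico (the manifest URL below is the commonly documented one and may have moved since this was written):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml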
-
Hello,
If you were using Google Kubernetes Engine (GKE), which is a Kubernetes environment deployed and controlled by Google, then this would be difficult, as they also control the network. With Google Compute Engine (GCE), which we use for the labs, it is just a bunch of nodes and you control the entire Kubernetes cluster. As such, you have access to the master and can choose whichever network you would like. We use Calico in the GCE lab environment.
Regards,
-
For the local registry lab, I am getting an error after the kompose step:
curl http://10.106.87.30:5000/v2/
{"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
With docker-compose up it works fine, but after running it in K8s it doesn't work.
I would also like to know whether I can continue with the course without the local registry lab setup.
-
Hi @githingeorge ,
I have seen similar errors at this step when the nodes were not part of a VPC network on GCE. Are you using cloud VMs? Do you have a VPC network created?
Regards,
-Chris
-
Yes, I am using Google Cloud VM instances, but this is not because of the VPC network: I get the same error if I try it from the same node/instance. I have tried running curl http://127.0.0.1:5000/v2/ from within the registry container, and that also gives me the same error.
-
@githingeorge
Do you have a VPC network, and are your nodes inside that network?
-
It's the default VPC network, and the firewall is open for HTTP on all ports on the worker node instance.
Another thing I noticed: in all the lab exercises the pods are placed on the master node, but for me they were always on the worker nodes. So initially I, and other people taking the lab, ran into curl commands failing when executed from the master node, because the pods are on the worker nodes. I had to change the firewall rules on the worker nodes to make my curl commands from the master node work. The lab PDF should mention this in Lab 1 itself; I think only Lab 3 even mentions anything related to network/firewall.
-
If you have opened up all the ports between nodes, then you may have a taint on your master node which is causing the non-system pods to run only on the worker. Please ensure you have opened up all the ports, not just port 80, as there are other ports in use by Kubernetes.
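A sketch of how to check for and, if desired for the lab, remove such a taint; the node name is an assumption, and on clusters of this vintage the key was typically node-role.kubernetes.io/master:
# look for taints on the master node
kubectl describe node master | grep -i taint
# remove the default master taint so regular pods can schedule there (lab setting only)
kubectl taint nodes master node-role.kubernetes.io/master-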
Regards,
-
Hi @githingeorge ,
Working out of the default VPC caused me some issues on this lab, but when I created a custom VPC with a firewall rule opening all ports, all protocols, from all sources, I was able to complete this lab and move on to the next.
Lab 2.1 lists all the requirements for working on GCE VMs in its Overview section, before the installation scripts are run.
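For reference, a sketch of that setup with gcloud; the names are assumptions, and opening everything to all sources is only sensible for a disposable lab:
# create a custom VPC and a wide-open firewall rule for the lab
gcloud compute networks create lab-vpc --subnet-mode=auto
gcloud compute firewall-rules create lab-allow-all \
  --network=lab-vpc --allow=tcp,udp,icmp --source-ranges=0.0.0.0/0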
Regards,
-Chris
-
Yes, I created a new network with everything open and added new instances to it. Now it's working.
Thanks, guys.
-
Hi @TITYKOUKI ,
Your question was addressed earlier in this discussion: <...> there are many options and ways to access the cluster. In this case it was only done this way as one of several choices. In later labs you'll make use of NodePort and LoadBalancer, and learn the advantages and disadvantages of each <...>
Regards,
-Chris