Lab 8.1 / 8.2 Questions
Hi,
In Lab 8.1, using nginx-one.yaml, I create a deployment in the accounting namespace.
The deployment is created successfully, exposed in step 10, and then recreated in step 13 to expose port 80.
joaocfernandes@master-node:~$ kubectl --namespace=accounting get ep nginx-one
NAME        ENDPOINTS                       AGE
nginx-one   10.244.1.42:80,10.244.1.43:80   17m
joaocfernandes@master-node:~$ kubectl get service --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
accounting    nginx-one    ClusterIP   10.97.119.130   <none>        80/TCP          18m
default       kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP         17d
kube-system   kube-dns     ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   17d
joaocfernandes@master-node:~$ kubectl -n accounting describe service nginx-one
Name:              nginx-one
Namespace:         accounting
Labels:            system=secondary
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP:                10.97.119.130
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.42:80,10.244.1.43:80
Session Affinity:  None
Events:            <none>
First Question:
In step 13 we are asked to recreate the deployment to expose port 80. So, to prove this worked, should I curl the ClusterIP (10.97.119.130:80) or the pods directly (10.244.1.42:80 / 10.244.1.43:80)?
Second question:
Before step 13 I first exposed port 8080 and then exposed port 80. Is it necessary to expose the deployment again in order to update the exposed port? (I had to in my case, or else I did something wrong.)
Comments
Hi,
1 - After you expose port 80 (exposing the deployment creates a service), you should get successful responses when curling both the endpoint IPs on port 80 and the ClusterIP on port 80.
2 - After changing the port number in the YAML file, you have to delete and re-create the deployment, and delete and re-create the service, in order to expose the new port.
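For illustration, a rough sketch of that flow using the names and addresses already shown in this thread (the exact flags, and the ClusterIP/endpoint addresses, will differ on your cluster):
kubectl -n accounting delete service nginx-one
kubectl -n accounting delete deployment nginx-one
# edit nginx-one.yaml so the containerPort is 80, then re-create and re-expose
# (add -n accounting if the namespace is not set inside the YAML)
kubectl create -f nginx-one.yaml
kubectl -n accounting expose deployment nginx-one --port=80
# both the ClusterIP and the pod endpoints should now answer on port 80
kubectl -n accounting get ep nginx-one
curl http://10.97.119.130:80
curl http://10.244.1.42:80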
Hi Chris,
Thanks for your response, it was very helpful.
Best Regards,
Lab 8.1
Step 12 shows using curl to access the newly created service.
Close examination of the commands, IPs, and outputs reveals that in the example the pods are running on a server called lfs458-worker while the curl command is run on lfs458-node-1a0a (the master). However, in my setup (GCP and Ubuntu 18) I'm unable to connect from the master and am ONLY able to connect from the node actually running the pod. The expose command listed doesn't specify a service type, so it should be the default ClusterIP, which should be available internally, and that should (I think) include accessing the service from ANY node in the cluster, including the master.
Google searching the issue I found this:
https://github.com/kubernetes/kubernetes/issues/52783
Is this a bug? Should the 'clusterIP' services be available from ANY node in the cluster or not?
I'm not sure whether my setup is wrong, my service is wrong, or this is a bug.
Totally stuck on this. I tried NodePort and LoadBalancer, and I'm unable to connect to nginx from outside the cluster or from any node other than the one running the pod.
Hello,
If you can connect from the node where the pod is running but not from the other node, you may be encountering a firewall in whatever virtualization tool you are using. Opening all traffic between the nodes, which is done differently in VirtualBox, AWS, and GCE, should fix this issue.
You could use tcpdump on the interfaces to watch the curl request leave the master node and then never arrive on the worker node. That would indicate the issue is between the nodes rather than with Kubernetes itself.
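For example, while you run the curl from the master you could watch both nodes with something along these lines (ens4 is only a typical GCE interface name; substitute your own interface and the port you are curling):
sudo tcpdump -n -i ens4 tcp port 80
If the request is visible leaving the master but never shows up on the worker, the problem is in the network or firewall between the nodes, not in Kubernetes.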
Regards,
Same problem here; I solved it by allowing ALL traffic between the cluster nodes. In GCP this is a possible solution (a sketch of the gcloud commands follows the list):
- Tag all the hosts of the cluster with the same network tag, say "k8s".
- Add a firewall rule to the network to allow all the traffic from hosts tagged "k8s" to hosts tagged "k8s".
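A rough sketch of the gcloud commands (instance names and zone are placeholders; see the GCP firewall documentation for the details):
gcloud compute instances add-tags master-node --tags=k8s --zone=us-east1-b
gcloud compute instances add-tags worker-node --tags=k8s --zone=us-east1-b
gcloud compute firewall-rules create k8s-allow-internal --network=default --allow=tcp,udp,icmp --source-tags=k8s --target-tags=k8s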
Fabio
The problem with this response is:
ONE: I don't know how to "add a firewall rule to allow all traffic from hosts tagged k8s to hosts tagged k8s".
TWO: it doesn't get to the root of the issue. Would this be the "production fix"? Is this the solution I should be learning in this course?
I'm looking to this course to teach me the CORRECT way to use and administer Kubernetes, not just "how to get it working"; I could have googled that for free (or just used minikube). I'm expecting experts to weigh in on this and determine why it's not working, given that I have configured the lab using the same versions, systems, OSes, and cloud provider that the course designers used. I'm expecting the experts to attempt these steps, see whether they work, and if not, explain why and provide some insight.
Given that I set up my lab the SAME way as described in the beginning of the course ... WHY would I have a "firewall issue" that the course designers didn't have?
Mr. Koontz,
There are many possible configurations, both inside GCE and in the operating system. For example, you are using Ubuntu 18, which may have different firewall considerations than Ubuntu 16.04.
To answer your question:
"Finally, your comment doesn't address the issues I'm having with the 8.2 lab, in which the NodePort service SHOULD allow access from outside the cluster; allowing traffic between the cluster nodes doesn't seem like something I would expect to fix that issue."
Yes, if you configure a NodePort you should be able to access the web server through the service at <PublicIP>:<HighPort> on either node in the cluster. If this is not working for you, I would make sure that:
1) The web server is running.
2) The service is using port 80 on the pod (ensure it was changed back from 8080, where nothing was listening).
3) Verify the high port in use by the NodePort.
4) Test using the <ens interface>:HighPort of the node, from within the node, and then from the other node. If it stops working as soon as you leave the node, it is a firewall issue with Ubuntu 18 or GCE.
Some information that could be useful when working with GCE and firewalls: https://cloud.google.com/vpc/docs/firewalls (and several YouTube videos).
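As a rough check sequence, assuming the deployment and service are still named nginx-one in the accounting namespace as earlier in this thread (the node IP and high port are placeholders):
kubectl -n accounting get deployments,pods          # 1) is the web server running?
kubectl -n accounting describe service nginx-one    # 2) Port/TargetPort should be 80  3) note the NodePort value
curl http://<node-IP>:<NodePort>                    # 4) first from the same node, then from the other node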
Regards,
I understand, but did you read this in exercise 3.3 step 20?
...If the curl command times out the pod may be running on the other node. Run the same command on that node and it should work.
You can accept this explanation, or you can decide there is a problem and solve it by allowing traffic between the hosts. Then we can talk about security and WHICH traffic we should allow; that's a good point, but honestly I'll keep that for chapter 16, since at the moment I'm only at chapter 9.
Started with a clean cluster:
william_j_koontz@kube01:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-zshkd 1/1 Running 1 4h
kube-system calico-kube-controllers-74b888b647-lsqpr 1/1 Running 1 4h
kube-system calico-node-rh8mc 2/2 Running 3 3h
kube-system calico-node-smp4z 2/2 Running 3 4h
kube-system coredns-78fcdf6894-9xhkg 1/1 Running 1 4h
kube-system coredns-78fcdf6894-jbj9z 1/1 Running 1 4h
kube-system etcd-kube01 1/1 Running 1 4h
kube-system kube-apiserver-kube01 1/1 Running 1 4h
kube-system kube-controller-manager-kube01 1/1 Running 1 4h
kube-system kube-proxy-7qqbv 1/1 Running 1 4h
kube-system kube-proxy-96ksg 1/1 Running 1 3h
kube-system kube-scheduler-kube01 1/1 Running 1 4h
william_j_koontz@kube01:~$
Run an echo server:
william_j_koontz@kube01:~$ kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080 --replicas=2
deployment.apps/echoserver created
william_j_koontz@kube01:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
echoserver 2 2 2 2 10s
william_j_koontz@kube01:~$
Verify it is running:
william_j_koontz@kube01:~$ kubectl get po
NAME READY STATUS RESTARTS AGE
echoserver-5668d55678-9bpzx 1/1 Running 0 28s
echoserver-5668d55678-bpnfc 1/1 Running 0 28s
Expose with NodePort:
william_j_koontz@kube01:~$ kubectl expose deployment echoserver --type=NodePort
service/echoserver exposed
william_j_koontz@kube01:~$
Get the port:
william_j_koontz@kube01:~$ kubectl describe services/echoserver
Name: echoserver
Namespace: default
Labels: run=echoserver
Annotations: <none>
Selector: run=echoserver
Type: NodePort
IP: 10.100.176.5
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31130/TCP
Endpoints: 192.168.146.7:8080,192.168.197.204:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
william_j_koontz@kube01:~$
Test it... notice that I tested several times; you can see the "^C" characters where I had to Ctrl-C when it timed out, but it also worked several times, both with "localhost" and with the node IP.
william_j_koontz@kube01:~$ curl http://localhost:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ kubectl cluster-info
Kubernetes master is running at https://10.142.0.2:6443
KubeDNS is running at https://10.142.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
^C
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver-5668d55678-9bpzx 1/1 Running 0 4m
echoserver-5668d55678-bpnfc 1/1 Running 0 4m
william_j_koontz@kube01:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver-5668d55678-9bpzx 1/1 Running 0 4m
echoserver-5668d55678-bpnfc 1/1 Running 0 4m
william_j_koontz@kube01:~$ curl http://localhost:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$ curl http://localhost:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$ curl http://localhost:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ curl http://localhost:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ curl http://localhost:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ curl http://localhost:31130
^C
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$
william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
^C
william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
^C
william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.142.0.2:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=10.142.0.2:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
CLIENT VALUES:
client_address=10.142.0.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.142.0.2:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=10.142.0.2:31130
user-agent=curl/7.58.0
BODY:
-no body in request-
william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
^C
william_j_koontz@kube01:~$
Hi,
Kubernetes relies on (but does not manage) a good working infrastructure, including the networking between nodes. The infrastructure vendors Google Cloud, AWS, and Oracle VirtualBox each have documentation on how to set up compute engines/compute instances/VMs and how to create basic firewall rules.
The solution posted by @Armox176 can be implemented by simply researching such online documentation.
I find the extra troubleshooting required by some of the labs to be a good learning tool that prepares me for a real-world Dev/Test/QA scenario. After the issues are fixed and I have a working cluster, I can really say that my setup is production ready.
Regards,
-Chris
Could it be because Calico requires AMD64 and I'm using Intel processors?
Calico and all or most of the pod network overlays say they require AMD64:
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/requirements#kernel-dependencies
But all the GCP regions only offer Intel processors. Could this be an issue?
I can't find any info on the net about why these network overlays require amd...
Hi, when you create your VM instance on GCP and select the boot disk, you will notice that the Ubuntu images are all built for AMD64. AMD64 (also known as x86_64) is the 64-bit instruction set implemented by both AMD and Intel processors, so an Intel CPU is not a problem here.
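If in doubt, you can confirm this on the node itself; on a typical GCP Ubuntu image both commands should report the 64-bit x86 architecture:
uname -m                    # typically prints x86_64
dpkg --print-architecture   # typically prints amd64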
Regards,
-Chris
NO.
Regards,
Fabio
Hello,
There are a couple of things that I do not understand from Lab 8.1:
1) When calling curl on port 80 we get a response from nginx, even though that port is not exposed in the service or in the container.
Is this normal? I would have thought the deployment/service would act as a kind of firewall, in which only the ports actually declared are open.
2) When doing an nslookup from a busybox pod running on the cluster I am not getting the response I expected:
kubectl exec -it busybox-dns -- nslookup nginx-one
Server: 10.96.0.10
Address:    10.96.0.10:53
** server can't find nginx-one: NXDOMAIN
*** Can't find nginx-one: No answer
Is there some misconfiguration on my cluster?
Thank you in advance for your help
Hi,
The role of a service is not to block or allow traffic. A service is an abstraction mechanism that exposes a set of pods. So although only port 8080 is published by the service, port 80 is not blocked, which is why you still get a response when curling port 80.
There is a DNS troubleshooting guide that may help:
https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
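One other thing worth checking, assuming the nginx-one service still lives in the accounting namespace as earlier in this thread: an unqualified lookup from a busybox pod in the default namespace only resolves services in its own namespace, so NXDOMAIN is expected there. A namespace-qualified lookup should succeed, for example:
kubectl exec -it busybox-dns -- nslookup nginx-one.accounting.svc.cluster.local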
Regards,
-Chris