
Lab 8.1 / 8.2 Questions

Joaocfernandes
Joaocfernandes Posts: 5
edited March 2018 in LFS258 Class Forum

Hi,



In Lab 8.1, using nginx-one.yaml, I am creating a deployment in the accounting namespace.

The deployment is created successfully, then exposed in step 10, and in step 13 re-created to expose port 80.
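For context, the sequence corresponds roughly to commands like the following (a sketch; the YAML comes from the lab, and the exact flags in the lab manual may differ):

# create the namespace and the deployment described in the lab's YAML
kubectl create namespace accounting
kubectl create -f nginx-one.yaml

# expose the deployment as a ClusterIP service (step 10)
kubectl -n accounting expose deployment nginx-one --port=80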


joaocfernandes@master-node:~$ kubectl --namespace=accounting get ep nginx-one
NAME        ENDPOINTS                       AGE
nginx-one   10.244.1.42:80,10.244.1.43:80   17m
joaocfernandes@master-node:~$ kubectl get service --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
accounting    nginx-one    ClusterIP   10.97.119.130   <none>        80/TCP          18m
default       kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP         17d
kube-system   kube-dns     ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   17d
joaocfernandes@master-node:~$ kubectl -n accounting describe service nginx-one
Name:              nginx-one
Namespace:         accounting
Labels:            system=secondary
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP:                10.97.119.130
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.42:80,10.244.1.43:80
Session Affinity:  None
Events:            <none>

 

First Question:

In step 13 we are asked to re-create the deployment to expose port 80. To verify this, should I curl the service at 10.97.119.130:80, or the pods directly at 10.244.1.42:80 / 10.244.1.43:80?

Second question:

Prior to step 13 I first exposed port 8080, then port 80. Is it necessary to expose the deployment again in order to update the exposed port? (I had to in my case, or else I did something wrong.)

 

Comments

  • chrispokorni
    chrispokorni Posts: 2,144
    edited March 2018

    Hi, 

    1 - After you expose port 80 (by exposing the deployment, i.e. creating a service), you should get successful responses when curling both the endpoint IPs on port 80 and the cluster IP on port 80.
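    Using the IPs from your output above, both of these should answer (substitute your own cluster IP and endpoint IPs):

    # service (cluster) IP
    curl http://10.97.119.130:80
    # pod endpoints directly
    curl http://10.244.1.42:80
    curl http://10.244.1.43:80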

    2 - After changing the port number in the YAML file, you would have to delete and re-create the deployment, and then delete and re-create the service, in order to expose the new port.
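    A rough sketch of that sequence, using the nginx-one names from earlier in this thread (your file and flags may differ):

    # remove the old service and deployment
    kubectl -n accounting delete service nginx-one
    kubectl -n accounting delete deployment nginx-one

    # edit nginx-one.yaml so the container port is 80, then re-create
    kubectl create -f nginx-one.yaml

    # expose the deployment again so the service picks up the new port
    kubectl -n accounting expose deployment nginx-one --port=80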

  • Joaocfernandes
    Joaocfernandes Posts: 5
    edited March 2018

    Hi Chris,

    Thanks for your response, it was very helpful.

    Best Regards,

  • Lab 8.1

    Step 12 shows using curl to access the newly created service.

    Close examination of the commands, IPs, and outputs of the previous steps reveals that, in the example, the pods are running on a server called lfs458-worker while the curl command is run on lfs458-node-1a0a (the master).

    However, in my setup (using GCP and Ubuntu 18) I'm unable to connect from the master; I'm ONLY able to connect from the node actually running the pod. The expose command listed doesn't specify the service type, so it should default to ClusterIP, which should be available 'internally', and that should (I think) include accessing the service from ANY node in the cluster, including the master.
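    For reference, the check I'm doing is roughly this (the service name comes from the lab; the cluster IP placeholder is whatever your own service reports):

    # get the service's cluster IP and port
    kubectl -n accounting get svc nginx-one

    # run the same request from the master and then from the worker;
    # for me it only succeeds on the node that hosts the pod
    curl http://<CLUSTER-IP>:80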

    Google searching the issue I found this:

    https://github.com/kubernetes/kubernetes/issues/52783

    Is this a bug? Should ClusterIP services be available from ANY node in the cluster or not?

    I can't tell whether I'm having an issue because my setup is wrong, my service is wrong, or this is a bug.

  • Totally stuck on this; I tried NodePort and LoadBalancer, and I'm unable to connect to nginx from outside the cluster or from any node other than the one running the pod.

     

  • serewicz
    serewicz Posts: 1,000

    Hello,

    If you find you can connect from the node where the pod is running, but not from the other node, you may be encountering a firewall in whatever virtualization tool you are using. Opening all traffic between the nodes, which is done differently in VirtualBox, AWS, and GCE, should fix this issue.

    You could use tcpdump on the interfaces to see the curl request leaving the master node and then not see the traffic arrive on the worker node. This would indicate the issue is between the nodes, rather than with Kubernetes itself.
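    Something along these lines, assuming ens4 is the primary interface and 31130 is the NodePort in use (adjust both to your environment):

    # on the master, watch the request leave
    sudo tcpdump -ni ens4 tcp port 31130
    # on the worker, check whether the packets ever arrive
    sudo tcpdump -ni ens4 tcp port 31130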

    Regards,

  • Same problem here; I solved it by allowing ALL the traffic between the cluster nodes. In GCP this is a possible solution (example gcloud commands after the list):

    1. Tag all the hosts of the cluster with the same network tag, say "k8s".
    2. Add a firewall rule to the network to allow all the traffic from hosts tagged "k8s" to hosts tagged "k8s".
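    Roughly, something like this (the instance names, rule name, and network are placeholders for your own setup):

    # tag both instances
    gcloud compute instances add-tags master-node --tags=k8s
    gcloud compute instances add-tags worker-node --tags=k8s

    # allow all traffic from k8s-tagged hosts to k8s-tagged hosts
    gcloud compute firewall-rules create k8s-allow-internal \
        --network=default --allow=tcp,udp,icmp \
        --source-tags=k8s --target-tags=k8s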

    Fabio

  • The problem with this response is: 

    ONE: I don't know how to "add a firewall rule to allow all traffic from hosts tagged k8s to hosts tagged k8s".

    TWO: it doesn't get to the root of the issue.  Would this be the "production fix"?  Is this the solution I should be learning in this course? 



    I'm looking to this course to help teach me the CORRECT way to use and administer Kubernetes, not just "how to get it working"; I could have googled that for free (or just used minikube).

    I'm expecting experts to weigh in on this and determine why it's not working, given that I have configured the lab using the same versions, systems, OSes, and cloud provider that the course designers used. I'm expecting an expert to attempt these steps, see if they work, and if not, explain why and provide some insight.



    Given that I set up my lab the SAME way as described in the beginning of the course ... WHY would I have a "firewall issue" that the course designers didn't have?

  • serewicz
    serewicz Posts: 1,000

    Mr. Koontz,

    There are many possible configurations, both inside GCE and in the operating system. For example, you are using Ubuntu 18, which may have different firewall considerations than Ubuntu 16.04.
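    As a quick sanity check on the OS side, you can see whether Ubuntu's own firewall is in play on each node (just a starting point, not a full answer):

    # is the Ubuntu firewall active, and with which rules?
    sudo ufw status verbose

    # any iptables rules beyond what kube-proxy manages?
    sudo iptables -L -n | head -40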

    To answer your question:


    "Finally your comment doesn't address the issues I'm having with the 8.2 lab, in which the NodePort service SHOULD allow access from outside the cluster; allowing traffic between the cluster nodes doesn't seem to be something I would expect to fix that issue."

    Yes, if you configure a NodePort you should be able to access the web server, through the service, at <PublicIP>:<HighPort> on either node in the cluster. If this is not working for you, I would make sure of the following (example commands after the list):

    1) the web server is running.

    2) Then make sure the service is using port 80 on the pod (ensure it was changed back from 8080, where nothing was listening)

    3) Verify the high port in use by the NodePort

    4) Test using the <ens interface>:HighPort of the node, from within that node, then from the other node. If it stops working as soon as you leave the node, it is a firewall issue with Ubuntu 18 or GCE.
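    A sketch of those checks, using the nginx-one service from the lab (substitute your own service name, node IP, and the high port reported on your cluster):

    # 1) the web server pods are running
    kubectl get pods -o wide

    # 2) and 3) the target port and the allocated high port
    kubectl describe service nginx-one

    # 4) the high port against the node's ens interface IP, first from that node, then from the other
    curl http://<node-IP>:<HighPort>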

    Some information which could be useful when working with GCE and firewalls: https://cloud.google.com/vpc/docs/firewalls, plus several YouTube videos.

    Regards,

  • I understand, but did you read this in exercise 3.3 step 20?


    ...If the curl command times out the pod may be running on the other node. Run the same command on that node and it should work.

    You can accept this explanation, or you can decide there is a problem and solve it by allowing traffic between the hosts. Then we can talk about security and WHICH traffic we should allow - that's a good point, but honestly... I'll keep that for chapter 16; at the moment I'm only at 9.

  • williamkoontz
    williamkoontz Posts: 11
    edited August 2018

    Started with a clean cluster:

     

    william_j_koontz@kube01:~$ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
    kube-system   calico-etcd-zshkd                          1/1       Running   1          4h
    kube-system   calico-kube-controllers-74b888b647-lsqpr   1/1       Running   1          4h
    kube-system   calico-node-rh8mc                          2/2       Running   3          3h
    kube-system   calico-node-smp4z                          2/2       Running   3          4h
    kube-system   coredns-78fcdf6894-9xhkg                   1/1       Running   1          4h
    kube-system   coredns-78fcdf6894-jbj9z                   1/1       Running   1          4h
    kube-system   etcd-kube01                                1/1       Running   1          4h
    kube-system   kube-apiserver-kube01                      1/1       Running   1          4h
    kube-system   kube-controller-manager-kube01             1/1       Running   1          4h
    kube-system   kube-proxy-7qqbv                           1/1       Running   1          4h
    kube-system   kube-proxy-96ksg                           1/1       Running   1          3h
    kube-system   kube-scheduler-kube01                      1/1       Running   1          4h
    william_j_koontz@kube01:~$

     

    Run an echo server:

    william_j_koontz@kube01:~$ kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080 --replicas=2
    deployment.apps/echoserver created
    william_j_koontz@kube01:~$ kubectl get deployments
    NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    echoserver   2         2         2            2           10s
    william_j_koontz@kube01:~$



    Verify it is running:

    william_j_koontz@kube01:~$ kubectl get po
    NAME                          READY     STATUS    RESTARTS   AGE
    echoserver-5668d55678-9bpzx   1/1       Running   0          28s
    echoserver-5668d55678-bpnfc   1/1       Running   0          28s



    Expose with NodePort:

    william_j_koontz@kube01:~$ kubectl expose deployment echoserver --type=NodePort
    service/echoserver exposed
    william_j_koontz@kube01:~$



    Get the port:

    william_j_koontz@kube01:~$ kubectl describe services/echoserver
    Name:                     echoserver
    Namespace:                default
    Labels:                   run=echoserver
    Annotations:              <none>
    Selector:                 run=echoserver
    Type:                     NodePort
    IP:                       10.100.176.5
    Port:                     <unset>  8080/TCP
    TargetPort:               8080/TCP
    NodePort:                 <unset>  31130/TCP
    Endpoints:                192.168.146.7:8080,192.168.197.204:8080
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>
    william_j_koontz@kube01:~$



    Test it... notice that I tested several times; you can see the "^C" where I had to hit Ctrl-C when it timed out, but it also worked several times, both with "localhost" and with the node IP.

     

    william_j_koontz@kube01:~$ curl http://localhost:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://localhost:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=localhost:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ kubectl cluster-info
    Kubernetes master is running at https://10.142.0.2:6443
    KubeDNS is running at https://10.142.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    william_j_koontz@kube01:~$

    william_j_koontz@kube01:~$
    william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
    ^C
    william_j_koontz@kube01:~$
    william_j_koontz@kube01:~$ curl http://localhost:31130

    ^C
    william_j_koontz@kube01:~$ curl http://localhost:31130
    ^C
    william_j_koontz@kube01:~$ kubectl get pods
    NAME                          READY     STATUS    RESTARTS   AGE
    echoserver-5668d55678-9bpzx   1/1       Running   0          4m
    echoserver-5668d55678-bpnfc   1/1       Running   0          4m
    william_j_koontz@kube01:~$ kubectl get pods
    NAME                          READY     STATUS    RESTARTS   AGE
    echoserver-5668d55678-9bpzx   1/1       Running   0          4m
    echoserver-5668d55678-bpnfc   1/1       Running   0          4m
    william_j_koontz@kube01:~$ curl http://localhost:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://localhost:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=localhost:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$
    william_j_koontz@kube01:~$
    william_j_koontz@kube01:~$ curl http://localhost:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://localhost:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=localhost:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ curl http://localhost:31130

    ^C
    william_j_koontz@kube01:~$ curl http://localhost:31130
    ^C
    william_j_koontz@kube01:~$ curl http://localhost:31130
    ^C
    william_j_koontz@kube01:~$ curl http://localhost:31130
    ^C
    william_j_koontz@kube01:~$ curl http://localhost:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://localhost:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=localhost:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ curl http://localhost:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://localhost:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=localhost:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ curl http://localhost:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://localhost:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=localhost:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ curl http://localhost:31130
    ^C
    william_j_koontz@kube01:~$
    william_j_koontz@kube01:~$
    william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
    ^C
    william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
    ^C
    william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://10.142.0.2:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=10.142.0.2:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
    CLIENT VALUES:
    client_address=10.142.0.2
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://10.142.0.2:8080/

    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001

    HEADERS RECEIVED:
    accept=*/*
    host=10.142.0.2:31130
    user-agent=curl/7.58.0
    BODY:
    -no body in request-william_j_koontz@kube01:~$ curl http://10.142.0.2:31130
    ^C
    william_j_koontz@kube01:~$




  • chrispokorni
    chrispokorni Posts: 2,144

    Hi, 

    Kubernetes relies on (but does not manage) a good working infrastructure, including the networking between nodes. The infrastructure vendors Google Cloud, AWS, and Oracle VirtualBox each have documentation on how to set up compute engines/compute instances/VMs and how to create basic firewall rules. 

    The solution posted by @Armox176 can be implemented by simply researching such online documentation.

    I find the extra troubleshooting required by some of the labs to be a good learning tool which prepares me for a real-world Dev/Test/QA scenario. After the issues are fixed and I have a working cluster, then I can really say that my setup is Production ready. 

    Regards, 

    -Chris

  • Could it be because Calico requires AMD64 and I'm using Intel processors?

    Calico and all (or most) of the pod network overlays say they require AMD64:

    https://docs.projectcalico.org/v3.1/getting-started/kubernetes/requirements#kernel-dependencies

    But all the GCP regions only offer Intel processors.  Could this be an issue?

    I can't find any info on the net about why these network overlays require amd64...

  • chrispokorni
    chrispokorni Posts: 2,144
    edited August 2018

    Hi, when you create your VM instance on GCP and select the boot disk, you will notice that the Ubuntu images are all amd64 builds.
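    (amd64 is simply the common name for the 64-bit x86 architecture, which Intel CPUs implement as well, so Calico's requirement is met on GCP's Intel machines. If you want to confirm on a node:)

    # both report the 64-bit x86 architecture on these images
    uname -m                    # x86_64
    dpkg --print-architecture   # amd64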

    Regards,

    -Chris

  • NO.

    Regards,

    Fabio

  • Hello,

    There are a couple of things that I do not understand from Lab 8.1:

    1) When calling curl on port 80 we get a response from nginx, even though that port is not exposed in the service or in the container.

    Is this normal? I would have thought the deployment/service would act as a kind of firewall, where only the ports actually declared are open.

    2) When doing an nslookup from a busybox pod running on the cluster, I am not getting the response I expected:

    kubectl exec -it busybox-dns -- nslookup nginx-one

    Server: 10.96.0.10
    Address: 10.96.0.10:53

    ** server can't find nginx-one: NXDOMAIN

    *** Can't find nginx-one: No answer

    Is there some misconfiguration on my cluster?

    Thank you in advance for your help

  • Hi,
    The role of a service is not to block or allow traffic. A service is an abstraction mechanism that allows external exposure for a set of pods. This means that although port 8080 is exposed, port 80 is not blocked - therefore you are able to get a response when curling port 80.

    There is a dns troubleshooting guide below, which may help:
    https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
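    One other thing worth checking: if your busybox-dns pod is in the default namespace while the nginx-one service is in accounting, the bare service name will not resolve; a namespace-qualified lookup may behave differently (a sketch, assuming those namespaces):

    # qualify the service name with its namespace
    kubectl exec -it busybox-dns -- nslookup nginx-one.accounting
    # or use the fully qualified form
    kubectl exec -it busybox-dns -- nslookup nginx-one.accounting.svc.cluster.local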

    Regards,
    -Chris
