Lab_8

Hello Team,
1- Working on LFS258 Lab_8.2: it says that in a previous exercise we deployed a LoadBalancer. However, reviewing my deployment from Lab_8.1, it shows nginx-one as ClusterIP; I used the example yaml file from the beginning of Lab_8.1. Please help me find out what I missed in this lab setup :-)
2- Regarding the same Lab_8.1: curl to the service IP or endpoint IP works on port 80 from the worker node, where the service is deployed, but it is not reachable from the master node. With this yaml deployment it should be accessible from both the master and worker nodes. Any tips on how to troubleshoot this would be greatly appreciated.

Thank you,

Comments

  • serewicz Posts: 1,000

    Hello,

    Could you please paste the commands you have run? As you review them, are there any places where your edits may not match the book? Without an error, or commands with output, it is difficult to troubleshoot.

    As for access from the other node, are you sure you have removed all firewalls between the nodes? Are you running the nodes on GCE? Do you allow all traffic on all ports in the VPC?

    Regards,

  • sashko Posts: 8

    Hello, and thank you for the quick reply.
    1- I completely understand that it's impossible to troubleshoot like that :-). I have not hit any errors with the deployment. I ran the deployment following the steps in Lab_8.1.pdf; these are the major commands:
        kubectl create -f nginx-one.yaml
        kubectl create ns accounting
        kubectl label nodes lfs258-worker system=secondOne
        kubectl -n accounting expose deployment nginx-one
    My confusion is that I expected this to be a LoadBalancer. It seems I have to explicitly specify type=LoadBalancer, either in the yaml or in the expose command; otherwise the default is ClusterIP.

    2- I'm deploying in AKS. I made sure there are no firewall restrictions; everything is wide open. Apparently it is not, so I'm going to double-check that. The error I'm getting from the master while connecting to the service that runs on the worker is a timeout:
    curl: (7) Failed to connect to 10.110.243.210 port 80: Connection timed out
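
    In case it helps, these are the checks I plan to run from the master (a rough sketch; the namespace and service name follow the lab commands above):

        # confirm the service and its endpoints exist
        kubectl -n accounting get svc,ep nginx-one
        # confirm kube-proxy and the network plugin pods are healthy
        kubectl -n kube-system get pods
        # check for a local firewall on each node
        sudo ufw status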

    Thank you,

  • serewicz Posts: 1,000

    Hello,

    Yes, without declaring the type when exposing a resource, the default is ClusterIP.
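
    For example, here is a sketch of setting the type explicitly (the deployment and namespace names follow your commands above):

        kubectl -n accounting expose deployment nginx-one --port=80 --type=LoadBalancer

    The same result comes from setting type: LoadBalancer under spec in the Service yaml.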

    Are you referring to Azure Kubernetes Service? The Azure products are known to have firewall and other networking issues with Kubernetes. Also, you may have issues with Microsoft running the network, as AKS is a fully managed service, not just compute resources like GCE or AWS, so you may not be able to actually control it or the other resources yourself.

    My suggestion would be to use a different service, not Azure.

    Regards,

  • sashko Posts: 8

    Thank you for the quick reply.
    To be more accurate, I'm using just VMs on Azure; it's not AKS, my bad. I installed Kubernetes on regular Ubuntu VMs following the lab assignment from this training.
    Declaring the type makes sense; I'm going to test that. In that case, I guess Lab_8.1 should be updated to explicitly point out exposing the service as a LoadBalancer, or maybe the example yaml should include this type.
    Thanks again. I'll continue looking at the VM networking side of things, then.

  • serewicz Posts: 1,000

    Hello,

    Even without using AKS, just Azure, I have heard of ongoing problems with Kubernetes and networking. I have not worked with it myself, but I have had lots of requests asking me to figure out why the same steps that work with GCE, AWS, VirtualBox, VMware, and bare metal don't work with Azure.

    To avoid lengthy and possibly fruitless troubleshooting, you may consider using some other provider.

    Regards,

  • sashko Posts: 8

    Thank you for the quick reply. The network portion is clear.
    Regarding the LoadBalancer type in Lab_8.1: is the step specifying the exact type missing from the lab pdf, or do I have to revisit my setup?

    Regards,

  • serewicz Posts: 1,000

    Hello,

    Yes, in a previous lab, lab 3, we deployed a LoadBalancer. In 8.1 we deploy a ClusterIP. In 8.2 we deploy a NodePort. You seem to be seeing the expected output as the lab describes.
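
    If you want to verify which type each exercise created, listing the services shows it in the TYPE column, for example:

        kubectl get svc --all-namespaces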

    Regards,

  • sashko Posts: 8

    Thanks again. I'll go back and review the previous labs' configurations one more time.
    Regards.

  • Hello,
    Could you please provide more details on how to correctly enable running kubectl commands on the worker node? Should we just copy .kube/config over from the master to the worker node?
    Thank you.

  • Hi @sashko,
    The .kube/config file is for the control plane only, so in this case only for the master node. kubectl should be issued from the master node, and it has effect over the entire cluster - master and worker(s).
    If you are using the Calico network plugin for pod networking, there have been issues reported earlier with running a Kubernetes cluster on Azure VMs with Calico.
    You can find more details on the Calico documentation page.
    Regards,
    -Chris

  • Hi @chrispokorni, @serewicz,
    I'm facing this issue:
    on Lab 8.1, I can't curl the local IP (192.xxx) from the master, but it is accessible from the worker.
    Kindly give some advice.

  • Another thing: I cannot access nginx using the node public IP, but I can access it using the worker public IP.
    Is anything wrong?

  • Hi @nkristianto,
    Similar issues have been discussed and resolved in earlier posts.
    It seems that the networking between your nodes is not properly set up, and traffic to some ports may be blocked.
    There are a few reasons why this happens. There may be a firewall running on your Ubuntu instances (ufw, apparmor, ...), or there may be a firewall between the nodes at your infrastructure level. The fixes are to disable any local firewall on the Ubuntu instances, and to create a custom VPC network with a custom firewall rule open to all traffic: all ports, all protocols, all sources/destinations, and to add your VM instances to that VPC.
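
    On GCP, for example, a rule along these lines opens everything up (the network and rule names here are just placeholders):

        gcloud compute networks create lab-vpc --subnet-mode=auto
        gcloud compute firewall-rules create lab-allow-all --network=lab-vpc --allow=all --source-ranges=0.0.0.0/0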
    Regards,
    -Chris

  • Hi @chrispokorni,
    Thanks for the response. For this, I'm using GCP.

    I will try your suggestion of opening up all traffic: all ports, all protocols, all sources/destinations, and adding my VM instances to that VPC. I'll get back with the result.
    Regards
    nkristianto

  • Hi @chrispokorni,

    I implemented firewall rules to allow all traffic on all ports and disabled the firewall on the local Ubuntu instances, but I still cannot curl from the master.
    Do you have any suggestions?

    Thanks.
    Regards,
    nkristianto

  • Never mind, Chris,
    I made a mistake.
    Everything is working fine now after allowing all IP ranges with all ports and all protocols.
    Thanks...

    Regards,
    nkristianto

  • Hello @chrispokorni,
    I'm not sure I fully understand your reply to my question regarding running kubectl commands on worker nodes. My question is basically: what needs to be done in order to be able to run kubectl commands from a worker node, or for instance from just a regular PC, to interact with the cluster?
    Thank you,

  • Hi @sashko, in order to run kubectl from a remote workstation to manage your cluster, install kubectl on the remote workstation and copy the .kube/config file to it. If kubectl does not work after these 2 steps, then you may need to edit the remote .kube/config file to include the master node's private or public IP, and you may also need to re-set the certs in your remote .kube/config file - these last 2 steps may have to be performed depending on your current setup.
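
    A rough sketch of those first two steps (the user and hostname are placeholders, and the install assumes kubectl is available from your configured package repositories):

        # on the remote workstation
        sudo apt-get install -y kubectl
        mkdir -p ~/.kube
        scp student@master-node:~/.kube/config ~/.kube/config
        kubectl get nodes   # should now list your cluster nodes

    If the server: address in the copied file is not reachable from the workstation, edit it to point at the master node's reachable IP.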
    Regards,
    -Chris

  • Thank you Chris! Really appreciate your help with this.
