
Getting started with labs

allensiho
allensiho Posts: 9
edited December 2019 in LFD259 Class Forum

Apologies for my ignorance.

I think I have followed the instructions in the Video from the resource GoogleFreeTierUse.

But after doing that and creating a Google Kubernetes cluster, I am trying to install the tar file resources to do the labs:

wget https://training.linuxfoundation.org/cm/LFD259/LFD259 V2019-11-05 SOLUTIONS.tar.bz2 --user=xxxxxxx --password=xxxxxx

I get the response

Resolving training.linuxfoundation.org (training.linuxfoundation.org)... 151.101.1.5, 151.101.65.5, 151.101.129.5, ...
Connecting to training.linuxfoundation.org (training.linuxfoundation.org)|151.101.1.5|:443... connected.
HTTP request sent, awaiting response... 401 Restricted
Authentication selected: Basic realm="Linux Training"
Connecting to training.linuxfoundation.org (training.linuxfoundation.org)|151.101.1.5|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2019-12-23 11:22:09 ERROR 404: Not Found.
--2019-12-23 11:22:09-- http://v2019-11-05/
Resolving v2019-11-05 (v2019-11-05)... failed: Name or service not known.
wget: unable to resolve host address ‘v2019-11-05’
--2019-12-23 11:22:09-- http://solutions.tar.bz2/
Resolving solutions.tar.bz2 (solutions.tar.bz2)... failed: Name or service not known.
wget: unable to resolve host address ‘solutions.tar.bz2’

Is there another way to get those tar files into the Google Kubernetes Engine to do the labs?

Comments

  • fcioanca
    fcioanca Posts: 1,886

If you copy the wget command from the PDF and then paste it into your terminal, some of the underscores are not pasted. We suggest you type the command out or add the missing underscores manually (see the example below). This has been discussed in previous posts as well.
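
    For reference, once the underscores are restored the command should look roughly like this (substitute your own credentials; the file name is reconstructed from the command quoted above):

    wget https://training.linuxfoundation.org/cm/LFD259/LFD259_V2019-11-05_SOLUTIONS.tar.bz2 --user=xxxxxxx --password=xxxxxx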

  • Hi @allensiho,

Please read the Overview section carefully. There is a note right after the wget command addressing the issue with the underscores, along with the solution.

    Regards,

    -Chris

Thanks for that. Just getting started. I'm using Google Cloud. Am I meant to delete the instance groups once I finish practising the labs, to prevent further billing? We have limited free credits.

Alternatively, instead of creating a new instance group, could we just use the Google Kubernetes Engine?

    Or maybe that is not a good idea?

  • I'm getting

    unable to recognize "calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused

    when I try to run the k8sMaster.sh file on one of the VM instances.
    This is too frustrating
    Is it possible to do the labs locally instead?

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @allensiho,

    I would recommend stopping your GCE instances instead of deleting them, to prevent the loss of all your work.
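
    If you use the gcloud CLI, stopping and restarting instances looks roughly like this (the instance names and zone below are placeholders for your own):

    gcloud compute instances stop master worker --zone=us-central1-a
    gcloud compute instances start master worker --zone=us-central1-a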

    The course presents the vendor-neutral Open Source Kubernetes project, and the lab exercises have been designed to explore features in such a vendor-neutral environment. GKE is a distribution of Kubernetes designed to work specifically on the Google Cloud Platform. The level of customization of GKE may change the behavior of some exercises, and in some cases, it may prevent some steps from running at all.

    The timeout you notice is consistent with networking issues between your cluster nodes, which have been addressed several times in the Forum. There may be GCP infrastructure firewall rules blocking traffic to some ports. My recommendation is to create a custom VPC network with a firewall rule to allow ALL ingress traffic (from all sources, to all ports, all protocols) and place your cluster nodes inside that custom VPC network.
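
    A rough sketch of that setup with the gcloud CLI (the network name, rule name, and protocol list below are only examples; the Cloud Console also offers an "Allow all" option when you create the rule):

    # create a custom VPC network with automatically created subnets
    gcloud compute networks create lab-network --subnet-mode=auto

    # allow ingress from any source, on all ports of the listed protocols
    gcloud compute firewall-rules create lab-allow-all \
        --network=lab-network --direction=INGRESS \
        --source-ranges=0.0.0.0/0 \
        --allow=tcp,udp,icmp,sctp,esp,ah

    # then create the cluster node VMs with --network=lab-network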

If you choose to run the labs locally, the same networking requirements will apply, but node sizing for CPU and memory may become an issue (revisit the Overview section of Lab 2 for sizing requirements), together with the IP subnets of the hypervisor software, which at times may overlap with the Calico network and cause DNS-related problems.
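
    One quick sanity check if you do go the local route (assuming the lab scripts deploy Calico with its default 192.168.0.0/16 pod network): verify that your hypervisor's VM subnets do not fall inside that range.

    # on each VM, look at the subnets already in use
    ip addr show
    ip route show
    # if they overlap 192.168.0.0/16, change the hypervisor network
    # or initialize the cluster with a different --pod-network-cidr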

    Regards,
    -Chris

Thanks... I thought I had checked all the firewall rules to allow everything, but I will go back and check again more carefully.

For now, every time I create a Kubernetes cluster on Google Cloud it uses the default network, so at this stage I still do not know how to use a custom network instead.

I think I still have firewall issues, as shown below, and I do not know why.

I create Pods and try to ping their IP addresses, but get no response.

Not sure if anyone can offer any more clues.
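
    For reference, this is roughly the sequence I am running (the deployment name and image are just what I happened to pick):

    kubectl create deployment nginx --image=nginx
    kubectl get pods -o wide        # the IP column shows each Pod IP
    ping <pod-ip-from-the-previous-command>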


  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @allensiho,

    It seems from your screenshot that you are using GKE - Google Kubernetes Engine. As mentioned earlier, GKE is a distribution of Kubernetes designed to work specifically on the Google Cloud Platform. The level of customization of GKE may change the behavior of some exercises, and in some cases, it may prevent some steps from running at all. In order to understand the configuration behind GKE infrastructure (including networking), and the behavior of a GKE cluster in general (including Services), I suggest consulting the GKE documentation from the Google Cloud Platform.

    The lab exercises of this course have not been designed or tested on GKE.

    The screenshot of the firewall rules indicates that you have multiple GKE-default-created rules associated with the labs-example network in addition to your custom lab-firewall rule. I am not sure whether you are allowed to remove or disable them since they are part of Google's managed Kubernetes offering.

If all else fails, you can always follow the lab exercise instructions and bootstrap your Kubernetes cluster on GCE VM instances ;)

    Good luck!
    -Chris

  • @allensiho

If you're following the lab, you don't need (and don't want) to create a GKE cluster. Instead, what you should do is create two VM instances in GCE, a master and a worker (you can leave them in the default network).

Basically, when you set up a cluster on GKE, you don't have to execute the setup_(master|worker) scripts, as GKE handles that for you. By setting up the nodes yourself in GCE, you also get a better understanding of what's happening under the hood.

    1. For compatibility's sake, use an Ubuntu 18.04 image for both VMs (a gcloud sketch follows this list).
    2. On the master node/VM, execute k8sMaster.sh.
    3. Note the command line in the output ("kubeadm join ..."); you will have to execute that command on the worker node.
    4. Execute k8sWorker.sh on the worker node.
    5. Execute the command you previously saved ("kubeadm join ...") on the worker node.
    6. Now you're on track, but you still need to add a firewall rule to allow traffic between the VMs, as explained here: https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/gce
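
    A rough gcloud sketch for step 1 (the zone, machine type, and disk size are assumptions, so check them against the Lab 2 sizing requirements):

    gcloud compute instances create master worker \
        --zone=us-central1-a \
        --machine-type=n1-standard-2 \
        --image-family=ubuntu-1804-lts \
        --image-project=ubuntu-os-cloud \
        --boot-disk-size=20GB

    The join command noted in step 3 will look something like this, with real values in place of the placeholders:

    sudo kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>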

    To create the firewall rule, just use the Google Cloud Console instead of creating it programmatically (see the screenshot below).

    Don't forget to read the k8s* setup scripts to see what's actually installed.
