Can I use local VMs instead of cloud resources?

I would prefer to do the labs with Ubuntu VMs running locally under Windows 11 rather than paying for cloud resources. Will this cause issues with the exams? Does it matter how I spin up my VMs, as long as they work?

Answers

  • chrispokorni Posts: 2,301

    Hi @trisct,

    There are no exams in this class. There are only guided lab exercises you will complete at your own pace.

    You can complete the labs on local Ubuntu 20.04 LTS VMs, provided your host system can supply each VM with 2 CPU cores, 8 GB RAM, and a 20 GB virtual disk, plus a single bridged network interface per VM (so the VMs can communicate with each other, be accessed by the host system, and reach the internet when necessary). You will also need to configure your hypervisor to allow all ingress traffic to the VMs - from all sources, over all protocols, to all destination ports.

    Also, ensure that the VMs' private IP addresses do not overlap the 10.0.0.0/8 Pod network or the 10.96.0.0/12 Service network.
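
    For a quick sanity check, you can run the following inside each VM (standard Linux utilities; the expected values match the requirements above):

    $ nproc              # expect 2 or more CPU cores
    $ free -h            # expect roughly 8 GB of total memory
    $ df -h /            # expect roughly 20 GB on the root disk
    $ ip -4 addr show    # the VM IP should not fall inside 10.0.0.0/8 or 10.96.0.0/12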

    Regards,
    -Chris

  • trisct Posts: 18

    Doesn't the certification step count as an exam? That is what I was asking about. I use 10.5.65.x for the IPs, and bridged mode, so there should be no additional firewalls.

  • trisct Posts: 18

    The Hyper-V firewall is not applied in bridged mode - a different interface is used if you want filtering.

  • chrispokorni Posts: 2,301

    Hi @trisct,

    The exam is offered in a hosted environment. Kubernetes features covered by the course material should work in a similar fashion both on cloud VM instances and local VMs.

    The 10.5.x.x IP of the VM overlaps the 10.0.0.0/8 Pod network, so I would expect some routing issues in your cluster.
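
    If you want to see the conflict on a node, the routing table is the place to look - a minimal check, assuming Cilium has already installed its routes:

    $ ip route | grep '^10\.'    # Cilium's 10.x.x.x Pod routes will collide with a 10.5.65.0/24 VM route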

    Regards,
    -Chris

  • trisct Posts: 18

    I would have the same issue with any 10.x.x.x network then - does the Pod network reserve that entire range?

  • trisct Posts: 18

    I can create a NAT network adapter for 192.168.0.0 and try using that.

  • trisct Posts: 18

    It would be nice if the course included a tutorial on creating local VMs that don't cost money. Most people have the resources available to run a couple of VMs, so asking people to spend extra money on cloud resources seems unnecessary... I would think a VM tutorial based on Windows 10/11 could be assembled.

  • trisct Posts: 18

    I rebuilt my nodes from scratch, giving them static IP addresses in the 192.168 range. They can all talk to each other and the Internet.
    However, my kubectl get node command returns NotReady...
    tim@master:~$ kubectl get node
    NAME     STATUS     ROLES           AGE   VERSION
    master   NotReady   control-plane   21m   v1.29.1
    worker   NotReady   <none>          9s    v1.29.1

    Do you have a suggestion for where to look? The nodes took a while to become Ready before, but they did eventually change. This time something isn't right.

  • trisct Posts: 18

    Is there something wrong with using 192.168.0.0/24 as a network address? I have swap turned off on my VMs (commented out in /etc/fstab), but kubelet still won't start. It says the CNI plugin is not ready.
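
    For anyone hitting the same thing, these are the standard commands I am using to check the state:

    $ swapon --show                       # prints nothing when swap is fully off
    $ sudo systemctl status kubelet       # shows whether kubelet is running
    $ sudo journalctl -u kubelet | tail   # the CNI-not-ready message shows up in these logs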

  • trisct Posts: 18

    Is the output below normal for Cilium?

    cilium-linux-amd64.tar.gz: OK
    cilium

    Installing Cilium, this may take a bit...

    strconv.ParseUint: parsing "": invalid syntax

    Cilium install finished. Continuing with script.

  • trisct Posts: 18

    Can you just tell me what IP address range is safe? This worked when I used 10.5.65.0/24.

  • trisct Posts: 18

    Cilium is not installing properly. I will try a different IP segment... although it should work fine with 192.168.0.0.

  • chrispokorni Posts: 2,301

    Hi @trisct,

    Cilium is installed with the Pod CIDR set to the default 10.0.0.0/8. While everything may look OK temporarily even if there is an overlap (such as VM IPs from 10.5.x.x/24), in time, as IP addresses are assigned to Pods and added to iptables, they may cause routing issues in the cluster.
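
    You can confirm which Pod CIDR your Cilium installation is actually using by inspecting its configuration - a minimal check with the cilium CLI, assuming the default cluster-pool IPAM mode:

    $ cilium config view | grep cluster-pool    # cluster-pool-ipv4-cidr defaults to 10.0.0.0/8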

    If the scripts k8scp.sh and k8sWorker.sh have not been altered in any way, the 192.168.0.0/24 or /16 network for the VMs should be fine with a bridged adapter - that setup seems to have worked best with other hypervisors. In other cases, all ingress traffic to the VMs needed to be explicitly enabled from the hypervisor; otherwise the hypervisor blocked critical protocols and destination ports, preventing Kubernetes and its plugins from initializing properly.
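
    A quick way to verify that nothing is being blocked between the VMs is to probe the well-known Kubernetes ports with netcat - a sketch, with placeholder IPs you would substitute:

    $ nc -zv <control-plane-ip> 6443    # API server
    $ nc -zv <any-node-ip> 10250        # kubelet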

    Did you attempt to reboot your VMs? Any luck just doing that?

    Regards,
    -Chris

  • trisct Posts: 18

    The outer network is not 192.168.x.x, so it has to be a NAT adapter, not bridged. Getting to the internet was not a problem, so I don't know why Cilium is having issues. Reboots did not help. I went back to a simple bridge and it all works. I'll take my chances with the simple bridge; I cannot seem to make the NAT setup work, and somehow it breaks the Cilium install.

  • trisct Posts: 18

    Maybe I can change the Pod CIDR to 10.0.0.0/16 somehow.

  • trisct Posts: 18

    cilium config set cluster-pool-ipv4-cidr 10.25.0.0/16

    After a reboot it seems to be working just fine; at least the parameter is persistent and nothing complains.

    This should make Cilium use 10.25.x.x as the base for the Pod address pools. Each node is handed a slice of that range (the default per-node pool is a /24).

    This should work better.
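
    A full reboot works, but restarting just the Cilium agents should also apply the change - a standard kubectl rollout, assuming the default DaemonSet name in kube-system:

    $ kubectl -n kube-system rollout restart daemonset/cilium
    $ cilium status --wait    # wait until the agents report OK again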

  • trisct Posts: 18

    Cilium only seems to exist on the control plane node, so the worker doesn't need this.

  • trisct Posts: 18

    Both cilium status and kubectl get node now report Ready status.

  • trisct Posts: 18

    The fact that I had not started any Pod collections yet probably made this easier. It seems like Cilium would restart existing Pods if there were any, but it is simpler this way.

  • chrispokorni Posts: 2,301

    Hi @trisct,

    I am glad your cluster is now operational.

    Cilium is initialized on the control plane, but it then deploys its own controller pods on each node of the cluster; that includes the workers and, eventually, any additional control plane nodes in HA clusters.
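
    You can watch this happen by listing the Cilium agent pods across the nodes - assuming the default k8s-app=cilium label on the agent DaemonSet:

    $ kubectl -n kube-system get pods -l k8s-app=cilium -o wide    # one agent pod per node, workers included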

    Selecting the network size with /24, /16, or /8 is up to you. The smaller /24 should work just fine for a learning cluster.
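
    With the cluster-pool IPAM mode, the overall Pod pool and the per-node slice are tuned separately - example values only, using the same cilium config mechanism as above:

    $ cilium config set cluster-pool-ipv4-cidr 10.25.0.0/16    # overall Pod address pool
    $ cilium config set cluster-pool-ipv4-mask-size 24         # size of each node's slice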

    It is expected that any existing Pods will be terminated when the Pod network is updated, so that the Pod subnets can be redistributed to the nodes and new IP addresses assigned to the Pods. This is considered a disruptive change for the cluster as a whole.

    Regards,
    -Chris
