Can I use local VMs instead of cloud resources?

I would prefer to do the labs with Ubuntu VMs running locally under Windows 11 rather than paying for cloud resources. Will this cause issues with the exams? Does it matter how I spin up my VMs, so long as they work?

Answers

  • chrispokorni
    chrispokorni Posts: 2,331

    Hi @trisct,

    There are no exams in this class. There are only guided lab exercises you will complete at your own pace.

    You can complete the labs on local Ubuntu 20.04 LTS VMs, provided your host system can give each VM 2 CPU cores, 8 GB RAM, and a 20 GB virtual disk, plus a single bridged network interface per VM (so the VMs can communicate with each other, be reached by the host system, and access the internet when necessary). You also need to configure your hypervisor to allow all ingress traffic to the VMs: from all sources, all protocols, to all destination ports.

    Also, ensure that the VMs' private IP addresses do not overlap 10.0.0.0/8 or 10.96.0.0/12 (the default Pod and Service networks, respectively).
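
    For reference, you can confirm each VM meets these requirements from inside the guest with standard Linux tools (nothing course-specific):

    nproc              # CPU cores available to the VM (2 recommended)
    free -h            # total RAM (8 GB recommended)
    df -h /            # root disk size (20 GB virtual disk recommended)
    ip -4 addr show    # the VM's address should fall outside 10.0.0.0/8 and 10.96.0.0/12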

    Regards,
    -Chris

  • trisct
    trisct Posts: 22

    Doesn't the certification step count as an exam? That is what I was asking about. I use 10.5.65.x for the VM IPs and bridged mode, so there should be no additional firewalls.

  • trisct
    trisct Posts: 22

    The Hyper-V firewall is not applied in bridged mode; a different interface is used if you want filtering.

  • chrispokorni
    chrispokorni Posts: 2,331

    Hi @trisct,

    The exam is offered in a hosted environment. Kubernetes features covered by the course material should work in a similar fashion both on cloud VM instances and local VMs.

    The 10.5.x.x IP of the VM overlaps the 10.0.0.0/8 Pod network, so I would expect some routing issues in your cluster.

    Regards,
    -Chris

  • trisct
    trisct Posts: 22

    I would have the same issue with any 10.x.x.x network then - does the Pod network reserve that entire range?

  • trisct
    trisct Posts: 22

    I can create a NAT network adapter for 192.168.0.0 and try using that.

  • trisct
    trisct Posts: 22

    It would be nice if the course included a tutorial on creating local VMs that didn't cost money. Most people have the resources available to run a couple of VMs, so asking people to spend extra money on cloud resources seems unnecessary... I would think a VM tutorial based on Windows 10/11 could be assembled.

  • trisct
    trisct Posts: 22

    I rebuilt my nodes from scratch, giving them static IP addresses in the 192.168 range. They can all talk to each other and to the Internet.
    However, my kubectl get node command shows the nodes as NotReady...

    tim@master:~$ kubectl get node
    NAME     STATUS     ROLES           AGE   VERSION
    master   NotReady   control-plane   21m   v1.29.1
    worker   NotReady   <none>          9s    v1.29.1

    Do you have a suggestion for what to look into? The nodes took a while to become Ready before, but they did eventually change. This time something isn't right.
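
    A few commands that usually reveal why a node stays NotReady (standard kubectl and systemd tooling; the node name is just the one from the output above):

    kubectl describe node master | grep -A 8 Conditions   # the Ready condition carries a reason message
    kubectl get pods -n kube-system -o wide               # is the CNI pod running on every node?
    sudo journalctl -u kubelet --no-pager | tail -n 50    # kubelet errors, e.g. a CNI plugin that is not initialized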

  • trisct
    trisct Posts: 22

    Is there something wrong with using 192.168.0.0/24 as a network address? I have swap turned off on my VMs (commented out in fstab), but kubelet still won't start; it says the cni plugin is not ready.
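
    For reference, a quick way to double-check the usual suspects here (standard tools; the cilium pod naming is the default one):

    swapon --show                                     # should print nothing if swap is really off
    sudo systemctl status kubelet                     # is the kubelet service actually running?
    kubectl get pods -n kube-system | grep -i cilium  # "cni plugin not ready" usually means the CNI pods are not up yet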

  • trisct
    trisct Posts: 22

    Is the output below normal for Cilium?

    cilium-linux-amd64.tar.gz: OK
    cilium

    Installing Cilium, this may take a bit...

    strconv.ParseUint: parsing "": invalid syntax

    Cilium install finished. Continuing with script.
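
    For what it's worth, whether the agent actually came up after that message can be checked with the standard Cilium CLI and kubectl:

    cilium status --wait                               # waits until Cilium reports OK, or times out
    kubectl -n kube-system get pods -l k8s-app=cilium  # the agent DaemonSet should have one pod per node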

  • trisct
    trisct Posts: 22

    Can you just tell me what IP address range is safe? This worked when I used 10.5.65.0/24

  • trisct
    trisct Posts: 22

    Cilium is not installing properly. I will try a different IP segment... although it should work fine with 192.168.0.0.

  • chrispokorni
    chrispokorni Posts: 2,331

    Hi @trisct,

    Cilium is installed with the Pod CIDR set to the default 10.0.0.0/8. While everything may look OK temporarily when there is an overlap (such as VM IPs from 10.5.x.x/24), in time, as IP addresses are assigned to Pods and added to iptables, the overlap may cause routing issues in the cluster.

    If the scripts k8scp.sh and k8sWorker.sh have not been altered in any way, the 192.168.0.0/24 or /16 network for the VMs should be fine with a bridged adapter, which seems to have worked best with other hypervisors. In other cases, all ingress traffic to the VMs needed to be explicitly enabled in the hypervisor, which otherwise blocked critical protocols and destination ports and prevented Kubernetes and its plugins from initializing properly.
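
    A quick way to see which Pod CIDR Cilium is actually using (the Cilium CLI reads the cilium-config ConfigMap in kube-system; this assumes the default cluster-pool IPAM mode):

    cilium config view | grep cluster-pool-ipv4-cidr
    kubectl -n kube-system get configmap cilium-config -o yaml | grep ipv4-cidr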

    Did you attempt to reboot your VMs? Any luck just doing that?

    Regards,
    -Chris

  • trisct
    trisct Posts: 22

    The outer network is not 192.168.x.x, so it has to be a NAT adapter, not bridged. Getting to the Internet was not a problem, so I don't know why Cilium is having issues. Reboots did not help. I went back to a simple bridge and it all works. I'll take my chances with the simple bridge; I cannot seem to make the NAT setup work, and somehow it breaks the Cilium install.

  • trisct
    trisct Posts: 22

    Maybe I can change the pod CIDR to 10.0.0.0/16 somehow

  • trisct
    trisct Posts: 22

    cilium config set cluster-pool-ipv4-cidr 10.25.0.0/16

    After a reboot it seems to be working just fine; at least the parameter is permanent and nothing complains.

    This should make Cilium use 10.25.x.x as the base for the Pod address pools. The default per-node pool size is /24.

    This should work better
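
    For the record, the change can also be verified and pushed to the agents without a full reboot (the rollout restart below is the standard kubectl way to recycle a DaemonSet; the reboot achieves the same result):

    cilium config view | grep cluster-pool-ipv4-cidr        # should now show 10.25.0.0/16
    kubectl -n kube-system rollout restart daemonset/cilium # restart the agents so they pick up the new pool
    cilium status --wait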

  • trisct
    trisct Posts: 22

    Cilium only seems to exist on the control plane node, so the worker doesn't need this.

  • trisct
    trisct Posts: 22

    Both cilium status and kubectl get node now report Ready.

  • trisct
    trisct Posts: 22

    The fact that I had not started any Pod collections yet probably made this easier. It seems like Cilium would have restarted existing Pods if there had been any, but it was simpler this way.

  • chrispokorni
    chrispokorni Posts: 2,331

    Hi @trisct,

    I am glad your cluster is now operational.

    Cilium is initialized on the control plane, but it eventually deploys its own controller pods on each node of the cluster; that includes the workers and, in HA clusters, any additional control plane nodes.
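
    For example, listing the Cilium pods with their node placement makes this visible (one agent pod per node, plus the operator, all in kube-system):

    kubectl -n kube-system get pods -o wide | grep cilium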

    Selecting the network size with /24, /16, or /8 is up to you. The smaller /24 should work just fine for a learning cluster.

    It is expected that any existing Pods will be terminated when the Pod network is updated, so that the Pod subnets can be redistributed to the nodes and new IP addresses assigned to the Pods. This is considered a disruptive change for the cluster as a whole.

    Regards,
    -Chris

  • trisct
    trisct Posts: 22

    The default installation of kubectl on Ubuntu 20.04.6 seems to be 1.29, not 1.31. Even doing an upgrade does not reinstall kubectl or kubeadm.

    Are you using a newer Ubuntu now?

  • trisct
    trisct Posts: 22

    In other words do I need to start over?

  • trisct
    trisct Posts: 22

    I tried upgrading, but Ubuntu says 1.29.9-1.1 is the latest available version.

  • chrispokorni
    chrispokorni Posts: 2,331

    Hi @trisct,

    The course aimed at Kubernetes Developers does not cover the Kubernetes cluster upgrade process, which is typically performed by a Cluster Administrator.
    I would highly encourage you to start with two clean VMs running the recommended guest OS distribution/release and to install the recommended versions of the Kubernetes components from their associated repositories, as done in the shell scripts located in the Solutions tarball. The Kubernetes source file for the apt package manager needs to be updated with the correct Kubernetes minor version; otherwise it will miss earlier or more recent releases.
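
    As an illustration (the file path and version below are the typical pkgs.k8s.io setup, not necessarily what the course scripts use), the apt repository definition pins the minor version, so moving from 1.29 to 1.31 means editing that line and refreshing the package index:

    # /etc/apt/sources.list.d/kubernetes.list
    deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /

    sudo apt-get update
    sudo apt-cache madison kubeadm   # should now list 1.31.x package versions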

    Regards,
    -Chris

  • trisct
    trisct Posts: 22

    OK, I have things lined up better now. The overall confusion comes from the fact that the class materials (supplied as a tarball) are out of date, and the supplied install scripts are not correct. The lecture steps do not clearly indicate that people should edit/fix the installer scripts, either. A small note there would save a lot of effort, since a bad install basically has to be solved by reinstalling the whole OS.
