
Local setup using Vagrant

yorgji Posts: 11
edited February 20 in LFD259 Class Forum

After a fair amount of trial and error, I managed to get a properly configured cluster running with Vagrant.


Vagrantfile

Vagrant.configure("2") do |config|
  # Define common settings for both VMs
  config.vm.box = "ubuntu/focal64"  # Ubuntu 20.04
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "8192"
    vb.cpus = 2
  end

  # Control Plane Node
  config.vm.define "cp-node" do |cp|
    cp.vm.hostname = "cp-node"
    cp.vm.network "private_network", type: "static", ip: "192.168.56.10"
    cp.vm.provision "shell", inline: <<-SHELL
      sudo swapoff -a
      sudo sed -i '/ swap / s/^/#/' /etc/fstab
      sudo systemctl disable --now ufw
      sudo systemctl stop apparmor
      sudo systemctl disable --now apparmor
    SHELL
  end

  # Worker Node
  config.vm.define "worker-node" do |worker|
    worker.vm.hostname = "worker-node"
    worker.vm.network "private_network", type: "static", ip: "192.168.56.11"
    worker.vm.provision "shell", inline: <<-SHELL
      sudo swapoff -a
      sudo sed -i '/ swap / s/^/#/' /etc/fstab
      sudo systemctl disable --now ufw
      sudo systemctl stop apparmor
      sudo systemctl disable --now apparmor
    SHELL
  end
end
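The provisioning steps above turn swap off for the running session (swapoff -a) and comment out the swap entry in /etc/fstab so it stays off across reboots. The effect of that sed command can be sketched on a sample file; the fstab contents below are hypothetical, chosen only to mimic a typical Ubuntu swap entry:

```shell
# Sample fstab with a swap line (hypothetical contents)
cat > /tmp/fstab-sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same sed as the provisioner: prefix '#' on any line containing " swap "
sed -i '/ swap / s/^/#/' /tmp/fstab-sample
cat /tmp/fstab-sample
```

The swap line comes back commented out (`#/swap.img none swap sw 0 0`) while the root filesystem entry is untouched, which is exactly what kubeadm's preflight checks want to see after a reboot.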


  1. Start the machines:
vagrant up
  2. Populate the /etc/hosts file of each node with the IP addresses of both nodes:
echo -e "192.168.56.10 cp-node\n192.168.56.11 worker-node" | sudo tee -a /etc/hosts
  3. Download the resources from the cm.lf.training site (LFD259_V2024-09-20_SOLUTIONS) on both nodes.

  4. On cp-node, adjust the kubeadm init and cilium install commands in the k8scp.sh script, then execute it:
cp $(find $HOME -name k8scp.sh) . && sed -i 's|sudo kubeadm init --kubernetes-version=1.31.1 --pod-network-cidr=10.0.0.0/8|sudo kubeadm init --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.200.0.0/16|' k8scp.sh && sed -i 's|cilium install --version 1.16.1*|cilium install --version 1.16.1 --set ipam.operator.clusterPoolIPv4PodCIDRList=10.200.0.0/16|' k8scp.sh && bash k8scp.sh | tee $HOME/cp.out
  5. On worker-node, copy the k8sWorker.sh script and execute it:
cp $(find $HOME -name k8sWorker.sh) . && bash k8sWorker.sh | tee worker.out

  6. On worker-node, run the kubeadm join command (printed at the end of cp.out) as per the instructions. And that should be it.
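The two sed edits applied to k8scp.sh on cp-node can be sanity-checked on a throwaway file before touching the real script. The sample file below is an assumption: it mimics only the two lines the sed commands target, not the full contents of k8scp.sh:

```shell
# Sample mimicking the two relevant lines of k8scp.sh (assumed contents)
cat > /tmp/k8scp-sample.sh <<'EOF'
sudo kubeadm init --kubernetes-version=1.31.1 --pod-network-cidr=10.0.0.0/8
cilium install --version 1.16.1
EOF

# Edit 1: advertise the API server on the private_network IP and use a
# pod CIDR that does not overlap the host-only subnet
sed -i 's|sudo kubeadm init --kubernetes-version=1.31.1 --pod-network-cidr=10.0.0.0/8|sudo kubeadm init --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.200.0.0/16|' /tmp/k8scp-sample.sh

# Edit 2: tell Cilium's IPAM operator to allocate pod IPs from the same CIDR
sed -i 's|cilium install --version 1.16.1*|cilium install --version 1.16.1 --set ipam.operator.clusterPoolIPv4PodCIDRList=10.200.0.0/16|' /tmp/k8scp-sample.sh

cat /tmp/k8scp-sample.sh
```

If both lines come back rewritten (advertise address present, cluster-pool CIDR present, old 10.0.0.0/8 CIDR gone), the same sed commands are safe to run against the real k8scp.sh.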

I hope it helps
