
Local setup using Vagrant


After some trial and error, I managed to bring up a properly configured Kubernetes cluster using Vagrant.


Vagrantfile

  Vagrant.configure("2") do |config|
    # Common settings for both VMs
    config.vm.box = "ubuntu/focal64" # Ubuntu 20.04
    config.vm.provider "virtualbox" do |vb|
      vb.memory = "8192"
      vb.cpus = 2
    end

    # Control plane node
    config.vm.define "cp-node" do |cp|
      cp.vm.hostname = "cp-node"
      cp.vm.network "private_network", type: "static", ip: "192.168.56.10"
      cp.vm.provision "shell", inline: <<-SHELL
        sudo swapoff -a
        sudo sed -i '/ swap / s/^/#/' /etc/fstab
        sudo systemctl disable --now ufw
        sudo systemctl stop apparmor
        sudo systemctl disable --now apparmor
      SHELL
    end

    # Worker node
    config.vm.define "worker-node" do |worker|
      worker.vm.hostname = "worker-node"
      worker.vm.network "private_network", type: "static", ip: "192.168.56.11"
      worker.vm.provision "shell", inline: <<-SHELL
        sudo swapoff -a
        sudo sed -i '/ swap / s/^/#/' /etc/fstab
        sudo systemctl disable --now ufw
        sudo systemctl stop apparmor
        sudo systemctl disable --now apparmor
      SHELL
    end
  end
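Before moving on, it is worth confirming that the provisioning took effect on both VMs. Nothing here is course-specific; these are standard Vagrant and systemd commands (swapon prints nothing when swap is fully off, and apparmor should report inactive):

     # Confirm both VMs are running
     vagrant status

     # On each node, swap should be empty and apparmor inactive
     vagrant ssh cp-node -c "swapon --show; systemctl is-active apparmor"
     vagrant ssh worker-node -c "swapon --show; systemctl is-active apparmor"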

  1. Start the machines:

     vagrant up

  2. Populate the /etc/hosts file of each node with the IP addresses of both nodes:

     echo -e "192.168.56.10 cp-node\n192.168.56.11 worker-node" | sudo tee -a /etc/hosts

  3. Download the course resources (LFD259_V2024-09-20_SOLUTIONS) from the cm.lf.training site onto both nodes; a sketch for unpacking them follows below.
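A minimal sketch for unpacking the solutions on each node; the archive file name is an assumption based on the LFD259_V2024-09-20_SOLUTIONS label above, so adjust it to match the file actually downloaded from cm.lf.training:

     # Assumed archive name; replace with the file you actually downloaded
     tar -xvf LFD259_V2024-09-20_SOLUTIONS.tar.xz
     # The setup scripts land somewhere under $HOME, which is why the
     # next steps locate them with find
     find $HOME -name 'k8s*.sh'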

  4. On cp-node, modify the k8scp.sh script and execute it. The first sed pins the API server to the private-network IP and changes the pod network CIDR; the second aligns Cilium's pod CIDR with it (see the sketch after step 5 for the resulting lines):

     cp $(find $HOME -name k8scp.sh) .
     sed -i 's|sudo kubeadm init --kubernetes-version=1.31.1 --pod-network-cidr=10.0.0.0/8|sudo kubeadm init --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.200.0.0/16|' k8scp.sh
     sed -i 's|cilium install --version 1.16.1*|cilium install --version 1.16.1 --set ipam.operator.clusterPoolIPv4PodCIDRList=10.200.0.0/16|' k8scp.sh
     bash k8scp.sh | tee $HOME/cp.out

  5. On worker-node, copy the k8sWorker.sh script and execute it:

     cp $(find $HOME -name k8sWorker.sh) .
     bash k8sWorker.sh | tee worker.out
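For reference, after the two sed edits the relevant lines in k8scp.sh should read as follows (these are exactly the replacement strings from step 4; the rest of the script is unchanged course material):

     # kubeadm advertises on the Vagrant private network instead of the default NAT interface
     sudo kubeadm init --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.200.0.0/16

     # Cilium's cluster pool CIDR now matches the kubeadm pod network
     cilium install --version 1.16.1 --set ipam.operator.clusterPoolIPv4PodCIDRList=10.200.0.0/16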

  6. On worker-node, run the kubeadm join command as per the instructions (it is printed near the end of cp.out on cp-node), and that should be it; see the sketch below for verification.
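If the join command is no longer on screen, it can be regenerated on cp-node; these are standard kubeadm and kubectl commands, with the token and hash shown as placeholders:

     # On cp-node: print a fresh join command
     sudo kubeadm token create --print-join-command

     # On worker-node: run the printed command, which looks like
     # sudo kubeadm join 192.168.56.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

     # Back on cp-node: after a minute or two both nodes should report Ready
     kubectl get nodes -o wide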

I hope this helps.
