
Creating VM nodes on QEMU/KVM with Ubuntu

headkaze Posts: 15
edited August 2022 in LFS258 Class Forum

If you have a decent multi-core CPU with enough RAM running Ubuntu, there's no reason not to use it for this course instead of a cloud service.

You should already have QEMU/KVM set up and an SSH key created using ssh-keygen -t rsa.
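If you haven't done this yet, a minimal sketch for Ubuntu 22.04 (package names may differ slightly on other releases):

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst
sudo usermod -aG libvirt $USER   # allow your user to manage VMs; log out and back in
ssh-keygen -t rsa                # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub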

The first thing you need to do is make sure you have a virtual bridge. On Ubuntu 22.04 you should already have one called virbr0.

$ ip link show type bridge
$ ip addr show virbr0
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:c3:12:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
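If virbr0 doesn't exist, libvirt's default network probably isn't running; a minimal sketch, assuming the stock default network libvirt ships with:

$ virsh net-list --all
$ sudo virsh net-start default       # creates virbr0
$ sudo virsh net-autostart default   # bring it up on every boot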

In the output above you can see the bridge's subnet is 192.168.122.0/24. So let's add some host names to our /etc/hosts:

192.168.122.191 mycluster-cp1
192.168.122.192 mycluster-cp2
192.168.122.193 mycluster-cp3
192.168.122.194 mycluster-w1
192.168.122.195 mycluster-w2
192.168.122.196 mycluster-w3
192.168.122.197 mycluster-w4
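If you'd rather not edit the file by hand, the same seven entries can be appended in one shot:

sudo tee -a /etc/hosts <<'EOF'
192.168.122.191 mycluster-cp1
192.168.122.192 mycluster-cp2
192.168.122.193 mycluster-cp3
192.168.122.194 mycluster-w1
192.168.122.195 mycluster-w2
192.168.122.196 mycluster-w3
192.168.122.197 mycluster-w4
EOF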

Create a file called user-data.yml and add the following:

#cloud-config
autoinstall:
  version: 1
  interactive-sections: []
  ssh:
    install-server: true
    allow-pw: true
    authorized-keys:
      - [SSH_RSA]
  user-data:
    disable_root: false
  identity:
    hostname: ubuntu-server
    username: ubuntu
    password: [ROOT_PASSWORD]
  early-commands: []
  late-commands:
    - swapoff -a
    - sed -i '/swap/ s/^\(.*\)$/#\1/g' /target/etc/fstab
    - echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
    - chmod 440 /target/etc/sudoers.d/ubuntu
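Whenever you edit this file, a quick sanity check that it still parses as YAML (assumes the python3-yaml package is installed):

python3 -c 'import yaml; yaml.safe_load(open("user-data.yml"))' && echo OK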

Paste your SSH public key in place of [SSH_RSA]:

cat ~/.ssh/id_rsa.pub

Change the user@host comment at the end of the key to ubuntu@ubuntu-server.
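If you prefer to script the substitution, here's a sketch that rewrites the comment and splices the key into user-data.yml in one go (safe because base64 keys never contain the '|' or '&' characters that are special to sed):

PUBKEY=$(awk '{print $1" "$2" ubuntu@ubuntu-server"}' ~/.ssh/id_rsa.pub)
sed -i "s|\[SSH_RSA\]|$PUBKEY|" user-data.yml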

To generate the password hash, run the following and paste its output into the [ROOT_PASSWORD] section:

mkpasswd --method=SHA-512
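On Ubuntu, mkpasswd is provided by the whois package; the SHA-512 output is a crypt string beginning with $6$, and you paste in the whole string:

sudo apt install whois   # provides mkpasswd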

Now run ubuntu-autoinstall-generator.sh to generate the Ubuntu 22.04 ISO:

./ubuntu-autoinstall-generator.sh -a -u user-data.yml -d ubuntu-autoinstall.iso
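If you don't have the generator script yet, the flags used here (-a all-in-one, -u user-data, -d destination) match covertsh's ubuntu-autoinstall-generator on GitHub; a sketch for fetching it, assuming that's the script in question:

git clone https://github.com/covertsh/ubuntu-autoinstall-generator.git
cp ubuntu-autoinstall-generator/ubuntu-autoinstall-generator.sh .
chmod +x ubuntu-autoinstall-generator.sh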

Now we use this virt-install.sh bash script to generate our VMs:

#!/bin/bash
# Usage: ./virt-install.sh HOSTNAME CLIENTIP [VCPUS] [MEMORY_MB] [DISKSIZE_GB]

HOSTNAME=$1
CLIENTIP=$2
VCPUS=${3:-2}                      # default: 2 vCPUs
MEMORY=${4:-2048}                  # default: 2048 MB of RAM
DISKSIZE=${5:-30}                  # default: 30 GB disk
DISKPATH=./$HOSTNAME.rawdisk
MACADDR=RANDOM                     # let virt-install generate a MAC address
DEVICE=enp3s0                      # device name used in the ip= kernel argument (see below)
AUTOCONF=off                       # static addressing, no DHCP autoconfiguration
BRIDGE=virbr0
SERVERIP=                          # NFS server field of ip=; intentionally empty
DNS0IP=192.168.122.1
DNS1IP=                            # no secondary DNS; intentionally empty
GATEWAYIP=192.168.122.1
NETMASK=255.255.255.0
LOCATION=./ubuntu-autoinstall.iso

sudo virt-install \
--connect=qemu:///system \
--name $HOSTNAME \
--memory $MEMORY \
--vcpus $VCPUS \
--bridge=$BRIDGE \
--mac=$MACADDR \
--autostart \
--check-cpu \
--os-type=linux \
--force \
--graphics none \
--virt-type kvm \
--os-variant=ubuntu22.04 \
--location $LOCATION,initrd=casper/initrd,kernel=casper/vmlinuz \
--disk path=$DISKPATH,format=raw,cache=none,bus=virtio,size=$DISKSIZE \
--debug \
--noautoconsole \
--wait=-1 \
--extra-args="ip=$CLIENTIP:$SERVERIP:$GATEWAYIP:$NETMASK:$HOSTNAME:$DEVICE:$AUTOCONF:$DNS0IP:$DNS1IP console=ttyS0 quiet autoinstall ds=nocloud;s=/cdrom/nocloud/"

Change the value of DEVICE to match your Ethernet device:

$ hostname -I
192.168.0.141
$ ifconfig | grep 192.168.0.141 -a -3
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.141  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::5a5:e6a5:9aee:9e43  prefixlen 64  scopeid 0x20<link>
        ether a8:a1:59:40:ca:d4  txqueuelen 1000  (Ethernet)
        RX packets 45472938  bytes 16695796477 (16.6 GB)

So we should set:

DEVICE=enp3s0
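If ifconfig isn't installed (it lives in the net-tools package), the iproute2 equivalent gives the same mapping:

$ ip -br addr show | grep 192.168.0.141
enp3s0           UP             192.168.0.141/24 fe80::5a5:e6a5:9aee:9e43/64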

Now we're finally ready to generate our VMs!

#!/bin/bash
SCRIPT=./virt-install.sh

$SCRIPT mycluster-cp1 192.168.122.191 2 2048 30
$SCRIPT mycluster-cp2 192.168.122.192 2 2048 30
$SCRIPT mycluster-cp3 192.168.122.193 2 2048 30
$SCRIPT mycluster-w1 192.168.122.194 1 2048 30
$SCRIPT mycluster-w2 192.168.122.195 1 2048 30
$SCRIPT mycluster-w3 192.168.122.196 1 2048 30
$SCRIPT mycluster-w4 192.168.122.197 1 2048 30

If you only want two control planes and two workers, for example, comment out the mycluster-cp3, mycluster-w3 and mycluster-w4 lines by prefixing them with a # character.
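The installs run unattended in the background (--noautoconsole), so you can check their state or attach to a console with virsh (press Ctrl+] to detach):

$ virsh --connect qemu:///system list --all
$ virsh --connect qemu:///system console mycluster-cp1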

Once you have your VMs created, you should SSH into each one, set the correct host name, and edit the /etc/hosts file to add the same hosts as we did before.

$ sudo nano /etc/hostname
mycluster-x
$ sudo nano /etc/hosts
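If you'd rather do the hostname step from the host machine, a sketch using hostnamectl over SSH (assumes all seven nodes from the list above):

for h in mycluster-cp1 mycluster-cp2 mycluster-cp3 \
         mycluster-w1 mycluster-w2 mycluster-w3 mycluster-w4; do
  ssh ubuntu@$h "sudo hostnamectl set-hostname $h"
done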

Comments

  • headkaze Posts: 15
    edited August 2022

    I just committed a k8s-cluster repo which contains Ansible scripts for automating the creation of a k8s cluster on your VMs.

    It should be as simple as editing hosts.ini:

    [control_plane]
    mycluster-cp1 ansible_host=192.168.122.191
    mycluster-cp2 ansible_host=192.168.122.192
    mycluster-cp3 ansible_host=192.168.122.193
    
    [workers]
    mycluster-w1 ansible_host=192.168.122.194
    mycluster-w2 ansible_host=192.168.122.195
    mycluster-w3 ansible_host=192.168.122.196
    
    [all:vars]
    ansible_python_interpreter=/usr/bin/python3
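
    Before running the playbooks you can confirm that Ansible can reach every node; the built-in ping module checks SSH connectivity and the Python interpreter:

    ansible -i hosts.ini all -m ping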
    

    Then run the install.sh script:

    #!/bin/bash
    set -e   # abort if any playbook fails
    ansible-playbook -i hosts.ini ./init.yml
    ansible-playbook -i hosts.ini ./kube-dependencies.yml
    ansible-playbook -i hosts.ini ./control-planes.yml
    ansible-playbook -i hosts.ini ./workers.yml
    

    By default it will install the containerd CRI, but it also includes scripts for installing CRI-O and Docker.

  • headkaze Posts: 15
    edited March 2023

    The repo has been updated to use Vagrant instead of cloud-config.

    k8s-cluster
