Can I use local VMs instead of cloud resources?
I would prefer to do the labs with Ubuntu VMs running locally under Windows 11 rather than paying for cloud resources. Will this cause issues with the exams? Does it matter how I spin up my VMs, as long as they work?
Answers
-
Hi @trisct,
There are no exams in this class. There are only guided lab exercises you will complete at your own pace.
You can complete the labs on local Ubuntu 20.04 LTS VMs, assuming your host system can provide the necessary resources per VM: 2 CPU cores, 8 GB RAM, a 20 GB virtual disk, and a single bridged network interface (so the VMs can communicate with each other, be reached by the host system, and access the internet when necessary). You also need to configure your hypervisor to allow all ingress traffic to the VMs (from all sources, all protocols, to all destination ports).
Also, ensure that the VMs' private IP addresses do not overlap the 10.0.0.0/8 and 10.96.0.0/12 ranges (the default Pod and Service networks, respectively).
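For example, a quick sanity check along these lines, run inside each Ubuntu guest, can confirm a VM meets those requirements and that its address stays out of the reserved ranges (just a rough sketch, not part of the lab scripts; replace the placeholder with another VM's address):
nproc                                  # expect at least 2 cores
free -h | grep Mem                     # expect about 8 GB of RAM
df -h /                                # expect roughly 20 GB for the root disk
ip -4 -o addr show scope global        # the VM address should NOT fall inside 10.0.0.0/8 or 10.96.0.0/12
ping -c 2 <ip-of-another-vm>           # VMs must be able to reach each other over the bridge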
Regards,
-Chris
0 -
Doesn't the certification step count as an exam? That is what I was asking about. I use 10.5.65.x for the VM IPs, and bridged mode, so there should be no additional firewalls.
0 -
The Hyper-V firewall is not applied in bridged mode - there is a different interface used if you want filtering.
0 -
Hi @trisct,
The exam is offered in a hosted environment. Kubernetes features covered by the course material should work in a similar fashion both on cloud VM instances and local VMs.
The 10.5.x.x IPs of the VMs overlap the 10.0.0.0/8 Pod network, so I would expect some routing issues in your cluster.
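If you want to confirm which Pod network the cluster is actually using, something along these lines should show it (assuming the default cluster-pool IPAM mode the lab scripts rely on; the key name may differ for other IPAM modes):
cilium config view | grep cluster-pool-ipv4-cidr    # Pod network configured for Cilium
ip -4 -o addr show scope global                     # compare against the VM's own address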
Regards,
-Chris
0 -
I would have the same issue with any 10.x.x.x network - does the Pod network reserve that entire range?
0 -
I can create a NAT network adapter for 192.168.0.0 and try using that.
0 -
It would be nice if the course included a tutorial on creating local VMs that doesn't cost money. Most people have the resources available to run a couple of VMs, so asking people to spend extra money on cloud resources seems unnecessary... I would think a Windows VM tutorial based on 10/11 could be assembled.
1 -
I rebuilt my nodes from scratch, giving them static IP addresses in the 192.168 range. They can all talk to each other and the Internet.
However, my kubectl get node command returns NotReady...
tim@master:~$ kubectl get node
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   21m   v1.29.1
worker   NotReady   <none>          9s    v1.29.1
Do you have a suggestion for what to look into? The nodes took a while to become Ready before, but they did eventually change. This time something isn't right.
0 -
Is there something wrong with using 192.168.0.0/24 as a network address? I have swap turned off on my VMs (commented out in fstab), but kubelet still won't start. It says the CNI plugin is not ready.
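For reference, these are the kinds of checks I am running on the NotReady nodes (generic kubeadm-style debugging, nothing specific to the course scripts):
kubectl describe node worker | grep -A6 Conditions             # shows the reason a node reports NotReady
kubectl -n kube-system get pods -o wide                        # are the CNI/Cilium pods running on every node?
sudo journalctl -u kubelet --since "15 min ago" | tail -n 50   # kubelet errors such as "cni plugin not initialized"
ls /etc/cni/net.d/                                             # the CNI config only appears once the plugin is installed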
0 -
Is the output below normal for Cilium?
cilium-linux-amd64.tar.gz: OK
cilium
Installing Cilium, this may take a bit...
strconv.ParseUint: parsing "": invalid syntax
Cilium install finished. Continuing with script.
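For what it's worth, here is roughly how I am checking whether the install actually succeeded, independent of the script output (assumes the cilium CLI that was just extracted is on the PATH):
cilium status --wait                                  # waits until Cilium reports readiness, or times out
kubectl -n kube-system get pods -l k8s-app=cilium     # the agent pods should reach Running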
0 -
Can you just tell me what IP address range is safe? This worked when I used 10.5.65.0/24.
0 -
Cilium is not installing properly. I will try a different IP segment... although it should work fine with 192.168.0.0.
0 -
Hi @trisct,
Cilium is installed with the Pod CIDR set to the default 10.0.0.0/8. While everything may look OK temporarily if there is an overlap (such as VM IPs from 10.5.x.x/24), in time, as IP addresses are assigned to Pods and added to iptables, they may cause routing issues in the cluster.
If the scripts k8scp.sh and k8sWorker.sh have not been altered in any way, a 192.168.0.0/24 or /16 network for the VMs should be fine with a bridged adapter - that has worked best with other hypervisors. In other cases, all ingress traffic to the VMs needed to be explicitly enabled from the hypervisor; otherwise it blocked critical protocols and destination ports, preventing Kubernetes and its plugins from initializing properly.
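As a quick way to confirm the hypervisor is not filtering traffic between the VMs, you could probe the API server port from the worker, for example (a rough check only, not the full list of ports Kubernetes and Cilium need; replace the placeholder with your control plane VM's address):
ping -c 2 <control-plane-ip>            # basic connectivity between the VMs
nc -zv <control-plane-ip> 6443          # the kube-apiserver port must be reachable from the worker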
Did you attempt to reboot your VMs? Any luck just doing that?
Regards,
-Chris
0 -
The outer network is not 192.168.x.x, so it has to be a NAT adapter, not bridged. Getting to the internet was not a problem, so I don't know why Cilium is having issues. Reboots did not help. I went back to a simple bridge and it all works. I'll take my chances with a simple bridge; I cannot seem to make the NAT setup work, and somehow that breaks the Cilium install.
0 -
Maybe I can change the Pod CIDR to 10.0.0.0/16 somehow.
0 -
cilium config set cluster-pool-ipv4-cidr 10.25.0.0/16
After a reboot it seems to be working just fine; at least the parameter is persistent and nothing complains.
This should make Cilium use 10.25.x.x as the base for the Pod address pools. The default per-node pool is a /24.
This should work better.
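A rough way to confirm the change actually took effect (assuming the cilium CLI and kubectl are available on the control plane):
cilium config view | grep cluster-pool-ipv4-cidr    # should now show 10.25.0.0/16
kubectl -n kube-system rollout restart ds/cilium    # picks up the new config if a full VM reboot was not done
kubectl get pods -A -o wide                         # newly created Pods should receive 10.25.x.x addresses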
0 -
Cilium only seems to exist on the control plane node, so the worker doesn't need this.
0 -
Both cilium status and kubectl get node now report a Ready status.
0 -
The fact that I had not started any Pod workloads yet probably made this easier. It seems like Cilium would restart existing Pods if there were any, but it is simpler this way.
0 -
Hi @trisct,
I am glad your cluster is now operational.
Cilium is initialized on the control plane, but it eventually deploys its own agent Pods on each node of the cluster; that includes the workers and, for HA clusters, any additional control plane nodes.
Selecting the network size with /24, /16, or /8 is up to you. The smaller /24 should work just fine for a learning cluster.
It is expected that any existing Pods are terminated when the Pod network is updated, so that Pod subnets can be redistributed to nodes and new IP addresses assigned to the Pods. This is considered a disruptive change for the cluster as a whole.
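If you want to see that for yourself, the Cilium Pods can be listed per node with something like this (assuming the standard k8s-app=cilium label used by the default install):
kubectl -n kube-system get pods -l k8s-app=cilium -o wide    # one agent Pod per node, including the worker
kubectl -n kube-system get deploy cilium-operator            # the operator runs as a separate Deployment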
Regards,
-Chris