LAB 3.2 v09.05 dead loop on virtual device cilium-vxlan, fix it urgently!
Adding one node works fine.
When the second node is joined with the exact same command, the control plane goes into a loop and the console shows:
"dead loop on virtual device cilium-vxlan, fix it urgently!"
It may allow the second node to be added, then fails.
I lost several hours of my training day trying to make this work, without success.
How can I fix it?
Comments
Even when adding just one node, the message says it was added, but its status stays NotReady; after a while, k8scp goes down.
kubectl describe node node02
Name: node02
Roles:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 12 Sep 2023 03:40:34 +0000
Taints: node.kubernetes.io/not-ready:NoExecute
node.cilium.io/agent-not-ready:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: node02
AcquireTime:
RenewTime: Tue, 12 Sep 2023 03:41:55 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Tue, 12 Sep 2023 03:41:04 +0000   Tue, 12 Sep 2023 03:40:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Tue, 12 Sep 2023 03:41:04 +0000   Tue, 12 Sep 2023 03:40:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Tue, 12 Sep 2023 03:41:04 +0000   Tue, 12 Sep 2023 03:40:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            False   Tue, 12 Sep 2023 03:41:04 +0000   Tue, 12 Sep 2023 03:40:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.1.22
Hostname: node02
Capacity:
cpu: 2
ephemeral-storage: 64188044Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1964840Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 59155701253
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1862440Ki
pods: 110
System Info:
Machine ID: fd76f417257946eca2e98aab8cc4434f
System UUID: 16f1ff4c-f455-fc43-a6da-13a2eb9f2b63
Boot ID: c5425f60-2b97-4f41-9a7c-227d09add390
Kernel Version: 5.4.0-150-generic
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.22
Kubelet Version: v1.27.1
Kube-Proxy Version: v1.27.1
PodCIDR: 192.168.1.0/24
PodCIDRs: 192.168.1.0/24
Non-terminated Pods: (3 in total)
Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------    ----                               ------------  ----------  ---------------  -------------  ---
kube-system  cilium-operator-788c7d7585-rfdt6   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4h49m
kube-system  cilium-xv4t2                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         84s
kube-system  kube-proxy-7x7bl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 0 (0%)
memory 100Mi (5%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 62s kube-proxy
Normal RegisteredNode 84s node-controller Node node02 event: Registered Node node02 in Controller
Normal NodeHasSufficientMemory 84s (x5 over 86s) kubelet Node node02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 84s (x5 over 86s) kubelet Node node02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 84s (x5 over 86s) kubelet Node node02 status is now: NodeHasSufficientPID
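The Ready=False condition above points straight at the CNI layer ("cni plugin not initialized"). A minimal way to check whether the Cilium agent on the new node is actually coming up - a sketch, assuming the default labels used by the lab's Cilium manifest (k8s-app=cilium for the agent, name=cilium-operator for the operator):

kubectl -n kube-system get pods -o wide -l k8s-app=cilium      # is a cilium agent pod scheduled on node02, and is it Running?
kubectl -n kube-system logs ds/cilium --tail=20                # recent agent logs often show CIDR or datapath errors
kubectl -n kube-system get pods -l name=cilium-operator        # the operator allocates per-node pod CIDRs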
Hi @porrascarlos80,
Please provide details about your environment: the cloud provider or hypervisor used to provision the VMs, the guest OS release/version, VM CPU, RAM, and disk, how many network interfaces per VM (private/public, bridged/NAT), the private subnet range for the VMs, and whether all ingress traffic is allowed (from all sources, to all destination ports, all protocols).
This may help us to reproduce the behavior reported above.
Regards,
-Chris
The problem appears if I follow the instructions in the lab guide, Lab 3.1 step 23,
V 2023-09-05,
applying the cilium yaml.
As a workaround, I joined the master and two nodes first,
then did the installation using this method: https://docs.cilium.io/en/stable/installation/k8s-install-kubeadm/ and now the nodes and master are in Ready state with no errors. All pods are up and running!
This is how my hosts file looks:
192.168.1.20 k8scp
192.168.1.21 node01
192.168.1.22 node02
127.0.0.1 localhost
127.0.1.1 master01
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
I used this guide for troubleshooting the NotReady state:
https://komodor.com/learn/how-to-fix-kubernetes-node-not-ready-error/
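For anyone trying the same workaround, a quick way to confirm the cluster actually converged afterwards (the last line only applies if the optional cilium CLI is installed; the kubectl checks are enough otherwise):

kubectl get nodes -o wide            # all nodes should report Ready
kubectl -n kube-system get pods      # cilium, cilium-operator, kube-proxy and coredns should all be Running
cilium status --wait                 # optional: health summary from the cilium CLI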
Hi @porrascarlos80,
Thank you for the details provided above. While they do not answer the earlier questions, they provide enough information about your cluster in general.
The installation method from docs.cilium.io installs Cilium differently than the course lab guide intends. It implements the Pod network and uses guest OS components differently, so some later exercises may behave differently as a result.
However, based on the hosts file entries provided, make sure that k8scp is an alias of the control plane node, and not the actual hostname of the control plane node.
The IP addresses of the node VMs are from the 192.168.1.0/24 subnet. This subnet overlaps with the Pod network implemented by the Cilium network plugin (192.168.0.0/16). Such overlaps should be avoided: the nodes network (aaa.bbb.ccc.ddd), the Pods network (192.168.0.0/16), and the Services network (10.96.0.0/12) should be distinct. Because of this overlap, the installation method from the lab guide did not complete successfully on your cluster.
If you are using a local hypervisor, managing the DHCP server is pretty straight forward, and all inbound traffic can be easily allowed from the hypervisor's settings.
Regards,
-Chris
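A quick way to read back all three ranges from a running cluster and spot an overlap - a sketch, assuming the default object names in a kubeadm/Cilium setup (the cilium-config ConfigMap and the component=kube-apiserver label):

kubectl get nodes -o wide                                                             # node/VM IP addresses
kubectl -n kube-system get cm cilium-config -o yaml | grep cluster-pool-ipv4-cidr     # Pod network used by Cilium
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-cluster-ip-range   # Services network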
@chrispokorni said:
However, based on the hosts file entries provided, make sure that k8scp is an alias of the control plane node, and not the actual hostname of the control plane node. The IP addresses of the node VMs are from the 192.168.1.0 subnet. This subnet overlaps with the Pod network implemented by the cilium network plugin 192.168.0.0/16. Such overlaps should be avoided. The nodes network (aaa.bbb.ccc.ddd), the Pods network (192.168.0.0/16), and the Services network (10.96.0.0/12) should be distinct. Because of this overlap the installation method from the lab guide did not complete successfully on your cluster.
-Chris

I'd recommend updating the text in Lab Guide 3.x to explicitly state the above cilium yaml edits.
I ran into the same time-waster when I originally ran section 3. Although it was just a matter of reading the logs, then reading the yaml and making the edits to ensure each subnet was different, it's something that brand-new readers might be overwhelmed by.
Thanks.
This exact issue got me too.
k8scp must point to the IP address of the control plane (first) node. In my case it was on eth0, which was 192.168.1.225.
This will clash with Cilium's subnet, so you have to change cluster-pool-ipv4-cidr in the cilium yaml to "192.169.0.0/16" and podSubnet in kubeadm-config.yaml to 192.169.0.0/16.
I would have loved to have these notes in the lab, as I wasted a bit of time on this too.
Hi @mxsxs2,
Please keep in mind that 192.169.0.0/16 is not a private CIDR. The pod network should be private.
Regards,
-Chris
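For reference, the RFC 1918 private ranges are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. A private, non-overlapping pair of edits along the lines discussed above might look like this (10.10.0.0/16 is only an example value, matching what other posters in this thread used):

# kubeadm-config.yaml (networking section)
networking:
  podSubnet: 10.10.0.0/16

# cilium-cni.yaml (ConfigMap entry)
cluster-pool-ipv4-cidr: "10.10.0.0/16"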
I changed values and parameters in the file kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 10.10.0.0/16       <-- changed
  serviceSubnet: 10.96.0.0/12   <-- added

I also followed this link, mainly the install with Helm:
https://docs.cilium.io/en/stable/installation/k8s-install-kubeadm/

Install Helm:
https://helm.sh/es/docs/intro/install/

Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/

Deploy the Cilium release via Helm:
helm install cilium cilium/cilium --version 1.14.4 --namespace kube-system

With this it worked for me!
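Note that a Helm install like the one above uses the chart's default cluster-pool Pod CIDR unless it is overridden; if your node network overlaps that default, the CIDR can be set at install time. A hedged sketch using the Cilium Helm chart's IPAM values (adapt the range to your own choice):

helm install cilium cilium/cilium --version 1.14.4 --namespace kube-system \
  --set ipam.mode=cluster-pool \
  --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.10.0.0/16}'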
I don't get why it is so hard to provide a Vagrant image with the correct networking set up, or at least a script to fix the VM fiddling. It took me days (I am not a sysadmin) to fix the lab setup.
I paid good money for this course and honestly would like a refund. I have spent a few hours working through the first few chapter sections and a couple of days sorting through the bugs and reading forum posts about all sorts of issues with a supposed step-by-step setup of a k8s cluster. I can't even finish chapter 3. This is painstaking and frustrating.
You can use this forum to ask course-related questions, especially when you need assistance with lab exercises. The forums are moderated by course instructors and they will work with you to understand your lab environment setup and what may cause issues, and then provide guidance on how to move forward.
Regards,
Flavia
The Linux Foundation Training Team
I got the same issue, as my home lab DHCP is configured to provide 192.168.1.0/24.
To fix it and set the 10.10.0.0/16 CIDR for Cilium, simply run this command and continue the guide:

sed -i 's#cluster-pool-ipv4-cidr: "192.168.0.0/16"#cluster-pool-ipv4-cidr: "10.10.0.0/16"#g' $(find $HOME -name cilium-cni.yaml)

It would indeed be nice if the course specified the CIDR used by Cilium in the config file and gave the necessary instructions for changing it.
Best
Denis
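To confirm the substitution took effect before applying the manifest (same file-lookup assumption as the sed command above):

grep -n 'cluster-pool-ipv4-cidr' $(find $HOME -name cilium-cni.yaml)
# expected to show: cluster-pool-ipv4-cidr: "10.10.0.0/16"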
Hi!
I got the same issue after applying the cilium yaml file from the documentation provided. The control-plane machine crashes after kubectl apply -f cilium-cni.yaml and throws the error:
dead loop on virtual device cilium-vxlan, fix it urgently!
After a machine restart, the error comes back within 1-2 minutes. I spent a lot of time troubleshooting this and tried other fixes, but the issue persists. Did anyone find a fix for this issue?

Regards,
Silviu
Hi @silviukucsvan,
What type of infrastructure is hosting your cluster (local hypervisor or cloud)? What are the IP addresses of your VMs? What is the guest OS running the VMs? Are any firewalls active to filter/block ingress traffic to the VMs?
Regards,
-Chris
Hi @chrispokorni,
I am using a local hypervisor (VMware Fusion): 2 VMs with the following IPs:
cp: 192.168.1.20
worker: 192.168.1.21
The OS running is Ubuntu Server 23.04.
Both machines have no active firewall.

I reinstalled everything from scratch and again got:
Dead loop on virtual device cilium_vxlan, fix it urgently!
I tried the command mentioned by denismcx, but it had no effect. The loop appears after I join the worker to the cp using the kubeadm join command. I left everything as it is in the documentation.
Hi @silviukucsvan,
There are several issues that prevent you from moving forward. So far they are the guest OS release and the VMs' IP addresses.
The recommended OS is still Ubuntu 20.04 LTS. The more recent 22, 23, 24 releases introduce some dependency issues that have not yet been addressed.
Most local hypervisors use a 192.168.0.0/x subnet for the VMs' private IP addresses. This eventually overlaps with the default Pod subnet defined by the Cilium CNI plugin, 192.168.0.0/16. Below you will find instructions on how to avoid this overlap.
Start by provisioning two new VMs. Make sure you enable a single bridged network interface per VM, and all the ingress traffic is allowed to the VM by the hypervisor - that is all protocols, to all port destinations from all sources. IPs from the 192.168.0.0/x subnet are OK. The OS should be Ubuntu 20.04 LTS (server or desktop).
When you get to step 19 of lab exercise 3.1, make sure that the additional /etc/hosts entry is the control plane node private IP and the k8scp alias:

...
192.168.x.x k8scp
...

At step 20, edit your /root/kubeadm-config.yaml manifest with a new Pod subnet that does not overlap the VM IPs:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.29.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 10.200.0.0/16

Then init the cluster (step 21) and prepare the ~/.kube/config file (step 22).

Before applying the cilium-cni.yaml manifest (step 23), edit the file on line 198 with the desired Pod CIDR, to match the one supplied in the kubeadm-config.yaml manifest earlier:

...
cluster-pool-ipv4-cidr: "10.200.0.0/16"
...
From here on you can follow the lab guide as is.
Regards,
-Chris
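Once the steps above are done, a short sanity check that the chosen Pod CIDR really made it into the cluster - a sketch, assuming the CiliumNode CRD installed by the Cilium manifest:

kubectl get nodes -o wide                                   # nodes Ready, InternalIP from the VM subnet
kubectl -n kube-system get pods -l k8s-app=cilium -o wide   # one Running cilium agent per node
kubectl get ciliumnodes -o yaml | grep -A2 podCIDRs         # per-node Pod CIDRs carved out of 10.200.0.0/16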
Hi @chrispokorni,
I followed your instructions and everything works perfectly now.
Thank you for your help!
I understand the purpose of having us research and troubleshoot, but I have spent a lot of time fixing things that could have been delivered or highlighted with a simple note or warning. I am still on chapter 3 after several days of trying to figure out small issues/bugs.
Hi @chrispokorni!
I am struggling with something I think is related to this, maybe you can help me too?
I created my cp VM (Ubuntu 24.04, as the PDF from Lab 3.1 specifies) with a bridged connection to my locally connected network, to be able to ssh directly from my workstation. I configured my local network CIDR to be 10.0.0.0/24 and configured my (physical) local router so that its DHCP server always assigns the VM (identified by a virtual MAC address that I can set in VirtualBox) the IP 10.0.0.10.
With this setup I was able to ssh into the cp VM without a problem and configure everything the pdf specifies, up until the cilium configuration.
I am able to create all the cilium resources that are specified in the SOLUTIONS file cilium-cni.yaml. I set:

cluster-pool-ipv4-cidr: "192.168.0.0/16"

(and also the mask) as specified in the kubeadm-config.yaml file that I also applied from the SOLUTIONS folder (this is why I changed my local physical network CIDR), and as far as I can see in the logs of the cilium operator pod, this configuration gets applied:

time="2025-01-03T22:42:57Z" level=info msg="Initializing IPAM" mode=cluster-pool subsys=cilium-operator-generic
time="2025-01-03T22:42:57Z" level=info msg="Starting ClusterPool IP allocator" ipv4CIDRs="[192.168.0.0/16]" ipv6CIDRs="[]" subsys=ipam-allocator-clusterpool
time="2025-01-03T22:42:57Z" level=info msg="Managing Cilium Node Taints or Setting Cilium Is Up Condition for Kubernetes Nodes" k8sNamespace=kube-system label-selector="k8s-app=cilium" remove-cilium-node-taints=true set-cilium-is-up-condition=true set-cilium-node-taints=true subsys=cilium-operator-generic
time="2025-01-03T22:42:57Z" level=info msg="LB-IPAM initializing" subsys=lbipam
time="2025-01-03T22:42:57Z" level=debug msg="Settling pool conflicts" subsys=lbipam   # what might these conflicts be?
time="2025-01-03T22:42:57Z" level=info msg="Starting to synchronize CiliumNode custom resources" subsys=cilium-operator-generic
time="2025-01-03T22:42:57Z" level=info msg="Starting to garbage collect stale CiliumNode custom resources" subsys=watchers

But as soon as the pods are running, my ssh sessions break.
When I log into the VM via the VirtualBox window, I can see that some cilium-specific virtual interfaces are created in the cp VM, which I imagine exist to send traffic to a virtual network (pods?), and one of them gets an IP from my local physical network 10.0.0.0/24.
This wouldn't be a problem since I don't use that IP, but the problem is that the kernel's routing table also gets modified to send all 10.0.0.0/24 network traffic to this virtual interface, cilium_host, instead of the default interface enp0s3.

I have been reading about this cilium_host@cilium_net interface, but my k8s level is beginner (configuring the 1st lab of this course) and I can't find anything. I am lost and need some guidance, please.
Thanks in advance.
Br, Virginia
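To see the behaviour described above for yourself (interface and subnet names taken from this post), on the cp VM:

ip addr show cilium_host     # the address Cilium assigned to the host-side interface
ip route                     # look for routes covering 10.0.0.0/24 that point at cilium_host instead of enp0s3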
Hi @chrispokorni! I have good news.
When I deployed the cilium resources yesterday, the first time I did it with the default CIDR that was set in the cilium-cni.yaml file, 10.0.0.0/8, and it behaved as I described before. Then I found this thread and deleted all the resources in the cilium-cni.yaml file by issuing:

kubectl delete -f /home/student/LFS258/SOLUTIONS/s_03/cilium-cni.yaml

which completed with a successful output.
I waited for the virtual interfaces to be deleted as well on the cp VM, but that didn't happen, so I had to reboot (which I guess should never have to happen in the real world). After the reboot the virtual interfaces disappeared.
After the reboot of the cp VM I added my custom CIDR to cilium-cni.yaml and created all the resources again from the edited file, but I kept encountering the same behavior.
I did this more than once and then decided to investigate, which ended with me adding my comment to this thread.
Today in the shower I thought: well, if I deleted all the cilium resources from the cluster and those interfaces were kept anyway, then the cilium resources must be writing some configuration to the cp VM OS that is not restored when the cilium resources are deleted from the cluster. To confirm this, I created a new cp VM and added my custom CIDR to the cilium-cni.yaml file before creating the resources for the first time.
And voilà, it worked as expected. So 'something' is being created and not being updated or deleted in the cp VM when editing or deleting the cilium resources.
I took a look at /opt/cni, but I see only binaries and no configuration files. Under /etc I also can't find any hardcoded IP segment or interface name (quick search). I guess this might be a bug?
Or is it getting hardcoded in the etcd container of the cluster and not updated afterwards? That is what etcd is for, isn't it?
Thanks again, and I am looking forward to your input.
Br, Virginia
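For what it's worth, the leftover interfaces can usually be removed without re-provisioning the VM - a sketch, assuming the default Cilium interface names, run on the node after deleting the Cilium resources:

sudo ip link delete cilium_vxlan     # VXLAN overlay device
sudo ip link delete cilium_host      # veth pair; deleting it should also remove its peer cilium_net
sudo ip link show | grep -i cilium   # confirm nothing cilium-related is left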
Hi @virginiatorres,
One of the most sensitive aspects of Kubernetes is the application/pod-to-pod network layer. There are many components that collectively are responsible for setting up the network and for routing packets between pods. There is the kube-proxy node agent, the cluster internal dns component, the cni plugin (cilium in our case), and the iptables utility.
During the init phase, the bootstrapping tool (kubeadm) sets up part of the required configuration for networking, while launching the CNI plugin (cilium) sets up the network itself. A key requirement when building out a Kubernetes cluster is to avoid overlapping the three networks the cluster uses to conduct its networking operations - the hosts/nodes network, the service network (10.96.0.0/12), and the pod network. They should be distinct, without overlapping.

On your first attempt, due to a typo that somehow resurfaced in the cilium-cni.yaml manifest, the CNI plugin installed a pod network of 10.0.0.0/8, which overlapped your own hosts network 10.0.0.0/24. An extensive amount of IP-related data ends up in iptables, and with overlapping IP addresses it is easy for host components and/or Kubernetes components and plugins to misinterpret sources and destinations in the cluster.

On your second attempt, fixing the cilium-cni.yaml manifest before launching the plugin ensured that the pod network CIDR matched the value from the kubeadm-config.yaml manifest - 192.168.0.0/16.

At this point you managed to avoid overlapping networks, and as a result in iptables you will see entries with IP addresses from the three distinct networks - the 192.... IP addresses assigned to pods, the 10.0... IP addresses pre-assigned to your VMs and inherited by cluster-critical components, and the service cluster IP addresses from the 10.96.../12 network.
Hope this (somewhat) clarifies what is happening in your cluster.
Regards,
-Chris
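One way to see those three address families side by side on a node (a rough sketch; the exact ranges are the ones configured for your own cluster):

sudo iptables-save | grep '192\.168\.' | head -5    # sample entries from the Pod network (192.168.0.0/16)
sudo iptables-save | grep '10\.96\.'   | head -5    # sample entries from the Service network (10.96.0.0/12)
ip route                                            # node network routes plus the cilium_host route for the Pod CIDR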
Hi @chrispokorni, thank you for your insights. What you clarified is what I learned the 'hard way', I guess.
What I maybe didn't explain clearly in my second entry is that I tried to fix it many times by only deleting the resources created by the cilium plugin and then re-creating them with the correct CIDR specified in cilium-cni.yaml, but somehow deleting those resources didn't bring the VM back to its previous state: the interfaces created by the plugin weren't deleted, and when I re-created the resources, they came back with the first (wrong) configuration that had been applied, even though the correct CIDR was already saved in the file. I could only get it working when I re-created the VM completely from scratch, which I think shouldn't happen in real life. Is that the expected behavior of the plugin? I suspect it is not doing a correct cleanup when it gets deleted.
Br, Virginia
Hi @virginiatorres,
Have you tried resetting the cluster instead of provisioning a new VM? The reset removes most critical configuration settings from the VM, allowing you to reuse it to initialize a new cluster, without the need to re-provision the VM.
After removing the cni plugin you could try to reset the worker(s) and the control plane node, then perform a new init followed by a new join. The sequence of steps would be (see the consolidated command sketch below):
1 - remove the cni plugin with kubectl delete -f ...
2 - delete the worker node(s) from the cluster with kubectl delete node worker-node-name
3 - reset the control plane node with sudo kubeadm reset
4 - ensure the kubeadm-config.yaml and cilium-cni.yaml manifests are updated (if needed) with the desired pod network CIDR (it seems 192.168.0.0/16 worked for your cluster)
5 - initialize the control plane with the full sudo kubeadm init ... command. Ensure all flags and options are supplied as presented in the lab guide.
6 - copy the newly created admin.conf manifest over .kube/config
7 - launch the cni plugin again with kubectl apply -f ...
8 - retrieve the new join command from the control plane node
9 - join the cluster from the worker node with the full sudo kubeadm join ... command. Ensure all flags are supplied as presented in the lab guide, and the options are updated with the new values.

Regards,
-Chris
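Pulling the nine steps above together, the sequence might look roughly like this. A sketch only: file paths, node names, and the exact init/join flags are the ones from your lab guide and your own cluster, not fixed values.

# on the control plane node
kubectl delete -f cilium-cni.yaml                        # 1 - remove the CNI plugin
kubectl delete node node02                               # 2 - remove the worker(s); use your worker node name
sudo kubeadm reset                                       # 3 - reset the control plane
# 4 - edit kubeadm-config.yaml and cilium-cni.yaml so both use the same non-overlapping Pod CIDR
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out   # 5 - flags as per the lab guide
sudo cp /etc/kubernetes/admin.conf ~/.kube/config        # 6 - then chown it to your user, as in the guide
kubectl apply -f cilium-cni.yaml                         # 7 - relaunch the CNI plugin
sudo kubeadm token create --print-join-command           # 8 - prints the new join command

# on the worker node: reset it first, then paste the join command printed above
sudo kubeadm reset
sudo kubeadm join k8scp:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # 9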