Exercise 11.2: Ingress Controller - getting error 503 Service Temporarily Unavailable
Step 10:
curl -H "Host: www.external.com" http://10.97.232.130
Output:
503 Service Temporarily Unavailable
503 Service Temporarily Unavailable
Expected output:
<!DOCTYPE html>
... Welcome to nginx! ...
I am not sure what the problem is.
kubectl get pods | grep ingress
myingress-ingress-nginx-controller-gmzmv 1/1 Running 0 33m
myingress-ingress-nginx-controller-q5jjk 1/1 Running 0 33m
myingress-ingress-nginx-controller-xxcq5 0/1 Evicted 0 69s
Not sure why I have an Evicted pod. I have two worker nodes. I keep deleting the pod, but it seems to keep spawning a new one.
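For anyone hitting the same 503: it generally means the ingress controller could not reach a healthy backend, so a few illustrative checks help narrow it down (the web-one name is assumed from this lab; adjust to your own resources):
# Confirm the Ingress object exists and lists the expected host and backend
kubectl get ingress
kubectl describe ingress <ingress-name>
# Confirm the backend Service actually has endpoints
kubectl get svc,endpoints web-one
kubectl get pods -o wide | grep web-one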
Answers
-
Hi @bdkdavid,
Did you manage to successfully complete Step 1 of Lab Exercise 11.2?
If you have two worker nodes, is the control plane node still tainted with the default master taint? It may be possible that the DaemonSet controller attempts to run an ingress pod on the control plane node as well.
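A quick way to verify whether the default taint is still present (illustrative commands, not from the lab guide; substitute your node name):
# Show taints on a specific node
kubectl describe node <control-plane-node> | grep -i taint
# Or list taints across all nodes
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'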
What are the states of your nodes in kubectl get nodes? What are the events showing for the evicted pod in the kubectl describe pod <pod-name> command?
Regards,
-Chris
My control plane is still tainted. I added a second worker node.
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubemaster03-containerd-runc Ready control-plane,master 16d v1.23.6 192.168.100.146 <none> Ubuntu 20.04.4 LTS 5.4.0-110-generic containerd://1.6.4
kubeworker03-containerd-runc Ready <none> 16d v1.23.6 192.168.100.147 <none> Ubuntu 20.04.4 LTS 5.4.0-110-generic containerd://1.6.4
kubeworker04-containerd Ready <none> 5d17h v1.23.6 192.168.100.148 <none> Ubuntu 20.04.4 LTS 5.4.0-110-generic containerd://1.6.4
kubectl describe pod myingress-ingress-nginx-controller-nrcgx
Name: myingress-ingress-nginx-controller-nrcgx
Namespace: default
Priority: 0
Node: kubemaster03-containerd-runc/
Start Time: Tue, 17 May 2022 17:37:07 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=myingress
app.kubernetes.io/name=ingress-nginx
controller-revision-hash=75b8d7d4fb
pod-template-generation=2
Annotations: linkerd.io/inject: ingress
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
IP:
IPs:
Controlled By: DaemonSet/myingress-ingress-nginx-controller
Containers:
controller:
Image: k8s.gcr.io/ingress-nginx/controller:v1.2.0@sha256:d8196e3bc1e72547c5dec66d6556c0ff92a23f6d0919b206be170bc90d5f9185
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/myingress-ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/myingress-ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: myingress-ingress-nginx-controller-nrcgx (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7bnl (ro)
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: myingress-ingress-nginx-admission
Optional: false
kube-api-access-q7bnl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Evicted 2m10s kubelet The node had condition: [DiskPressure].
Normal Scheduled 2m10s default-scheduler Successfully assigned default/myingress-ingress-nginx-controller-nrcgx to kubemaster03-containerd-runc
I do have a question about DaemonSets: do they also apply to control plane nodes?
I left mine tainted because, in a production environment, you would not use them for regular containers.
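For context on the DaemonSet question: a DaemonSet pod only lands on a tainted control plane node if its pod template tolerates that taint. A hedged sketch of what such a toleration looks like in a DaemonSet spec (whether the lab's ingress chart includes it depends on the chart values):
spec:
  template:
    spec:
      tolerations:
      # kubeadm applies one or both of these taints, depending on the Kubernetes version
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule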
kubectl get pods
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-5dzdp 1/2 CrashLoopBackOff 604 (18s ago) 2d11h
myingress-ingress-nginx-controller-jtzd7 1/1 Running 8 (24m ago) 2d11h
myingress-ingress-nginx-controller-w77h4 0/2 Evicted 0 10m
web-one-7fb5455897-cndkh 1/1 Running 1 (29m ago) 26h
web-one-7fb5455897-nq4bh 1/1 Running 1 (29m ago) 26h
web-two-6565978c8b-xhhqh 1/1 Running 1 (29m ago) 26h
web-two-6565978c8b-xl8g5 1/1 Running 1 (29m ago) 26h
When are your office hours?
Hi @bdkdavid,
The DiskPressure node condition tells us that the kubemaster03-containerd-runc node may be low on disk space. Provisioning the control plane node with additional disk space should help.
I am noticing that your nodes have IP addresses from the 192.168.100.x subnet. What pod network is your CNI network plugin using? An overlap between the nodes' IP subnet and the pods' IP subnet will also cause issues with your cluster.
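A couple of illustrative commands to confirm both points (not part of the lab steps):
# Check the DiskPressure condition the kubelet reports for the control plane node
kubectl get node kubemaster03-containerd-runc -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}'
# Compare the node addresses with the pod CIDRs assigned to each node
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'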
I am also noticing the CrashLoopBackOff on the first myingress-ingress-nginx-controller pod, which may be caused by linkerd inject. What is the version of linkerd?
You will find the office hours schedule and access information in the logistics course of your boot camp.
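A common way to dig into the CrashLoopBackOff while you check the version (illustrative; the pod name is taken from the output above, and linkerd-proxy is the sidecar container name linkerd injects):
# See which container is restarting and the recent events
kubectl describe pod myingress-ingress-nginx-controller-5dzdp
# Logs from the previously crashed container instances
kubectl logs myingress-ingress-nginx-controller-5dzdp -c controller --previous
kubectl logs myingress-ingress-nginx-controller-5dzdp -c linkerd-proxy --previous
linkerd version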
Regards,
-Chris
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
192.168.0.0/24 192.168.1.0/24 192.168.2.0/24
kubectl cluster-info dump | grep -m 1 cluster-cidr
"--cluster-cidr=192.168.0.0/16"
linkerd version
Client version: stable-2.11.2
Server version: stable-2.11.2
Hi @bdkdavid,
From your latest comment, it seems that the nodes' and pods' IP subnets overlap, which should be avoided in a cluster.
The linkerd inject issue can be fixed for the ingress controller by running step 11.2.11 with two additional options (not needed if downgrading to an earlier linkerd version such as 2.9 or 2.10):
kubectl get ds myingress-ingress-nginx-controller -o yaml | linkerd inject --ingress --skip-inbound-ports 443 --skip-outbound-ports 443 - | kubectl apply -f -
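After re-running the injection, a couple of illustrative follow-up checks (linkerd check is part of the linkerd CLI already in use here):
kubectl get pods | grep nginx
# Verify the injected data-plane proxies are healthy
linkerd check --proxy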
Regards,
-Chris
I ran the above linkerd command; my output is:
kubectl get pods|grep nginx
myingress-ingress-nginx-controller-682jj 2/2 Running 0 2m49s
myingress-ingress-nginx-controller-9zzs9 0/2 Evicted 0 53s
myingress-ingress-nginx-controller-cljc2 2/2 Running 0 2m20s
Also, a two-part question:
What do you recommend about the conflicting networks?
And when we are installing Calico in Chapter 3, what changes do you recommend be made? Even though they seem to be on different subnets, I want to rebuild my cluster better. What do you recommend?
Hi @bdkdavid,
In order to prevent the overlap of the pod network 192.168.0.0/16 with the node/VM network 192.168.100.x/y, there are two solutions (I personally would implement option 1):
1 - Keep the hypervisor-managed network at 192.168.100.x (or a similar private network), and un-comment the - name: CALICO_IPV4POOL_CIDR and value: "..../16" lines in the calico.yaml file, while updating the value to "10.200.0.0/16" (in Step 12 of Lab 3.1). In addition, the kubeadm-config.yaml file needs to reflect the same pod network on its last line, podSubnet: 10.200.0.0/16 (Step 15.b of Lab 3.1). Both edits are sketched further below.
2 - Keep calico.yaml and kubeadm-config.yaml as presented in the lab guide, but change the DHCP server configuration on your hypervisor to use a distinct private network, 10.200.0.x/y (or a similar private subnet).
Also, make sure you do not overlap with the default Services network managed by the control plane, 10.96.0.0/12 (with IP addresses ranging from 10.96.0.0 to 10.111.255.254).
I agree that the nodes and pods seem to be on different subnets, but eventually iptables will store routes to ranges/subnets instead of individual IP addresses, without being able to tell whether a particular IP in a range belongs to a node or to a pod.
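For reference, hedged sketches of the two edits from option 1 (field values other than the pod subnet are placeholders; match them to your own environment and lab files):
# calico.yaml - un-comment and update the IPv4 pool CIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "10.200.0.0/16"
# kubeadm-config.yaml - the podSubnet on the last line must carry the matching pod network
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.23.6              # example; use your installed version
controlPlaneEndpoint: "k8scp:6443"     # example; use your control plane endpoint
networking:
  podSubnet: 10.200.0.0/16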
The Running 2/2 confirms that the linkerd inject worked this time on two of the three DaemonSet replicas, but the Evicted DS replica is still related to insufficient disk space on the control plane node.
As you rebuild your cluster, the control plane VM should be provisioned with more disk space than it currently has.
Regards,
-Chris
df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 10255636 9680432 34532 100% /
sudo -i
lvm
lvm> lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from 10.00 GiB (2560 extents) to 18.22 GiB (4665 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
lvm> exit
Exiting.
resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 4776960 (4k) blocks long.
df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 391M 2.8M 388M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 18G 9.3G 7.8G 55% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/loop0 68M 68M 0 100% /snap/lxd/21835
/dev/loop3 68M 68M 0 100% /snap/lxd/22753
/dev/loop5 62M 62M 0 100% /snap/core20/1434
/dev/loop4 45M 45M 0 100% /snap/snapd/15534
/dev/loop2 44M 44M 0 100% /snap/snapd/14978
/dev/loop1 62M 62M 0 100% /snap/core20/1328
/dev/sda2 1.8G 209M 1.5G 13% /boot
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/57dbd0edabbd010744c8fb71f9652b92d302b57d0999136ab46fb6968a257df0/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/6eed231afa228174899702ef210281be517c762e291ad4e842fed4aa65c0c4b2/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/e9189ca3e5244c7f93e3f0778fe240c4c6189a4e5b616bae2b28f7ef69533129/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/4d87b10d891d67a6b1f8937d97ddd97190bec80bf217b775faad8febdaa2429f/shm
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/e9189ca3e5244c7f93e3f0778fe240c4c6189a4e5b616bae2b28f7ef69533129/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/4d87b10d891d67a6b1f8937d97ddd97190bec80bf217b775faad8febdaa2429f/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/57dbd0edabbd010744c8fb71f9652b92d302b57d0999136ab46fb6968a257df0/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/6eed231afa228174899702ef210281be517c762e291ad4e842fed4aa65c0c4b2/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/2e891151c0613eade5cac5c82b488ec2477314cb5822de7610252e4dca6b3a57/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/7f7e10f375f5101988c8062e62aa0ebf7180d2d3ad8449bddcedb2d3cf22b466/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/316cebd8c5f1d7090d9618fe56cf01b03784bd372302ce030c27df1a36a3e59d/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/7e533f3237d511e5b8ac1f8976bb0035d8812910060b48634efd506cbdc8b481/rootfs
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/d3ae0de6-7380-4bb5-bf2e-b90de44d7b50/volumes/kubernetes.io~projected/kube-api-access-4fgxn
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/26fd1e057847b09ebc82f8cf4a267390a43828234a6a5361bb50b04043cbc109/shm
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/26fd1e057847b09ebc82f8cf4a267390a43828234a6a5361bb50b04043cbc109/rootfs
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/12c871df-3ded-419b-91bf-4c3e539ead2c/volumes/kubernetes.io~projected/kube-api-access-lxv99
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/0dda6999-f6b1-4312-babc-c624600616d2/volumes/kubernetes.io~projected/kube-api-access-khpgq
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/411377f55b8ded85ae1de7e661746a2ab5b81bb8e72df6bca4093e0aa6dcb4d3/shm
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/411377f55b8ded85ae1de7e661746a2ab5b81bb8e72df6bca4093e0aa6dcb4d3/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/8c4af21475246f9cfe5f0e7246ad7dab44fafc459dc59aee434f2b250a5620b4/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/db4736fbdee967f65f932b81173af512e6239ce9cfb9f9f911e0406185062244/rootfs
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/c0608e61902b62ca8058cc3c5e527c0025786d8d5b2fc52fdaf729179a90d60b/shm
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/c0608e61902b62ca8058cc3c5e527c0025786d8d5b2fc52fdaf729179a90d60b/rootfs
overlay 18G 9.3G 7.8G 55% /run/containerd/io.containerd.runtime.v2.task/k8s.io/83e5a388bc823dd00b4c15ac0fd5ffa194888011e942881ee40d864f0153a4ac/rootfs
tmpfs 391M 0 391M 0% /run/user/1000
reboot
kubectl get pods | grep nginx
myingress-ingress-nginx-controller-682jj 2/2 Running 0 23h
myingress-ingress-nginx-controller-cljc2 2/2 Running 0 23h
myingress-ingress-nginx-controller-fg7gd 2/2 Running 1 (17h ago) 17h
So, to keep my current configuration, I need to edit the two files from option 1 above:
calico.yaml
kubeadm-config.yaml
How do I reapply each? They seem to be two different processes: the kubeadm-config.yaml would be kubectl edit. Is there a calico *ctl? What would that command look like?
Also, I partitioned my hard drive with 20 GB, but it seems to only see 10 GB.
Can you clarify, please?
This does not make any real sense to me!
df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 10255636 9680432 34532 100% /
Hi @bdkdavid,
There is a calicoctl CLI tool, but it is not necessary for the edits I suggested earlier. Simply use vim, nano, or any editor to update the calico.yaml file, and then kubeadm-config.yaml.
They cannot be re-applied on an existing cluster; the process needs a new cluster bootstrapping, as in a new kubeadm init followed by the necessary kubeadm join commands for the worker nodes.
From an earlier output I see the 18.2 GB lv and then the 10 GB lv. What happened in between? Is the hypervisor "aware" of such a dynamic resize? If the VM was provisioned with 10 GB, then that's all it will be aware of, unless the hypervisor allows the VM disk to be resized as well.
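A quick way to see each layer of the storage stack and spot where the extra 10 GB stops being visible (illustrative commands; the PV device is typically /dev/sda3 on a default Ubuntu LVM install, so confirm with pvs first):
# Disk and partition sizes as seen inside the VM
lsblk
# LVM physical volume, volume group, and logical volume sizes
sudo pvs
sudo vgs
sudo lvs
# If the partition is larger than the PV, grow the PV before running lvextend
# sudo pvresize /dev/sda3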
Regards,
-Chris
In regards to method one of changing the Pod network below:
1 - Keep the hypervisor managed network to 192.168.100.x (or similar private network), and un-comment - name: CALICO_IPV4POOL_CIDR and the value: "..../16" lines from the calico.yaml file, while updating the value to "10.200.0.0/16" (in Step 12 of Lab 3.1). In addition, the kubeadm-config.yaml file needs to reflect the same pod network on the last line podSubnet: 10.200.0.0/16 (step 15.b of Lab 3.1).
Once I edit these files, does this happen automatically, or do I have to run a command?
How long does it take to transition to the new network?
Hi @bdkdavid,
While there are cases where users attempted to modify/replace the existing pod CIDR of clusters of various Kubernetes distributions, I would recommend you rebuild the cluster, as mentioned earlier:
the process needs a new cluster bootstrapping, as in a new kubeadm init followed by the necessary kubeadm join commands for the worker nodes.
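A rough, hedged outline of that rebuild sequence (the config file and the join parameters are placeholders you obtain from your own cluster):
# On every node, tear down the old cluster state
sudo kubeadm reset -f
# On the control plane, bootstrap with the kubeadm-config.yaml carrying the new podSubnet
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs
# Apply the CNI plugin with the edited pod CIDR
kubectl apply -f calico.yaml
# On each worker, run the join command printed by kubeadm init, e.g.:
# sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>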
Regards,
-Chris