Lab 2.3 Unable to Curl
Hello Experts,
I am trying to do Lab 2.3
ubuntu@k8instance1:~/LFD259/SOLUTIONS/s_02$ kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
basicpod   1/1     Running   0          5m23s   192.168.1.3   k8instance2
When I try to curl the pod:
curl http://192.168.1.3
I get no output; the connection times out.
Comments
-
Hi,
Curl timing out from the master node to the worker node is an indication of a networking issue between your nodes. There may be an infrastructure firewall blocking traffic to some ports, or blocking some protocols. Or you may have firewalls running in the VMs.
Please make sure all traffic is allowed if you have a custom VPC network and Firewall rule at infra level, and disable all firewalls at the VMs' OS level.
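On Ubuntu instances the VM-side part typically amounts to something like the following (a sketch; RPM-based distros use firewalld instead of ufw):

```shell
# Disable the host firewall on each node (Ubuntu enables ufw by default).
sudo ufw disable
sudo ufw status            # should report the firewall as inactive

# Verify no leftover iptables rules are dropping traffic between the nodes.
sudo iptables -L -n | head
```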
Regards,
-Chris
-
Which particular ports do I have to check? Also, when I look at the IP address of my minion/worker node, it is not 192.168.1.3, which is what I get when I do kubectl get pod -o wide. I am running this in a cloud. All outgoing firewall ports are open.
-
Create a custom VPC network for the purpose of this course, attach an all-open firewall rule (all ports, all protocols, all sources/destinations), and spin up your VM instances in this VPC.
192.168.1.3 is not a node IP, it is the IP of your pod, and it is probably running on the worker node.
Check any possible firewalls in your VMs as well.
Regards,
-Chris
-
Hello. I have all protocols and ports open on my VCN/VPC. The pod is running on my worker node. All firewalls are open as well. How do I troubleshoot this?
-
Hi @stilwalli,
If the new custom VPC has a custom firewall rule open to all traffic (all ports, all protocols, all sources/destinations) and your instances have disabled/inactive firewalls, then a good way to troubleshoot the networking between your nodes is to use netcat (nc) and Wireshark on your nodes to determine where your traffic is blocked.
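A minimal netcat sketch for this (assuming nc is installed on both nodes; port 9999 is just an arbitrary test port):

```shell
# On the worker node: listen on a test port.
nc -l 9999

# On the control-plane node: probe the worker's node IP over TCP.
nc -zv <worker-node-ip> 9999

# Repeat over UDP, since cloud firewalls often treat TCP and UDP differently:
nc -u -l 9999                    # worker node
nc -zuv <worker-node-ip> 9999    # control-plane node
```

If the TCP probe succeeds but the UDP probe does not (or vice versa), that narrows the firewall rule at fault.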
Providing a snippet of your outputs may also help.
Regards,
-Chris
-
I have the same issue on Azure VMs.
Azure Network Security Groups are good - all traffic inbound/outbound is allowed.
I guess it's probably something with the Calico setup.
I found a link here to the Azure-VNet CNI plugin, but after it was installed, networking between nodes became completely broken (pods couldn't be scheduled on the workers). So I reverted the Azure-VNet plugin on my nodes.
Here is what I have on the master node:
sa@ub16:~$ ifconfig
cali02b0884454b Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
          RX packets:1461 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1492 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:115291 (115.2 KB)  TX bytes:549569 (549.5 KB)

califb8a5d80f8f Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
          RX packets:1450 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1488 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:114409 (114.4 KB)  TX bytes:546997 (546.9 KB)

docker0   Link encap:Ethernet  HWaddr 02:42:f1:7e:0e:78
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a2:3b:d7
          inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20d:3aff:fea2:3bd7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30029 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39385 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7448880 (7.4 MB)  TX bytes:24883847 (24.8 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:219221 errors:0 dropped:0 overruns:0 frame:0
          TX packets:219221 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:62199452 (62.1 MB)  TX bytes:62199452 (62.1 MB)

tunl0     Link encap:IPIP Tunnel  HWaddr
          inet addr:192.168.0.1  Mask:255.255.255.255
          UP RUNNING NOARP  MTU:1440  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:300 (300.0 B)
and this is what I have on the worker node:
sa@ub16-02:~$ ifconfig
cali3fc1b4ac805 Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1647 (1.6 KB)  TX bytes:1939 (1.9 KB)

docker0   Link encap:Ethernet  HWaddr 02:42:65:65:7e:18
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a0:a0:f7
          inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20d:3aff:fea0:a0f7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38007 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39018 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:19645066 (19.6 MB)  TX bytes:6811361 (6.8 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:6294 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6294 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:453032 (453.0 KB)  TX bytes:453032 (453.0 KB)

tunl0     Link encap:IPIP Tunnel  HWaddr
          inet addr:192.168.1.1  Mask:255.255.255.255
          UP RUNNING NOARP  MTU:1440  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
I'm not sure if this is good or not, as my networking knowledge is, to say the least, poor.
Please let me know what I need to check or configure.
Any help is appreciated. This is actually blocking my labs (I will try to continue working on the labs on the master node only for now...).
-
Hi @vasyhin,
There is an earlier entry in this forum where @crixo posted a solution to the Calico issue with Kubernetes on Azure.
Check it out and see if it helps: https://forum.linuxfoundation.org/discussion/855882/labs-on-azure#latest
Regards,
-Chris
-
If you happen to be on GCP: when I got to the curl step in Exercise 2.3, it did not work in my default VPC with default rules.
Thankfully, Project Calico has documented what solved it for me.
Just follow the instructions up to the point of "1.2 Setting up GCE networking", then try the curl command again. It worked for me.
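For reference, the relevant part of that Calico setup boils down to allowing Calico's IPIP-encapsulated traffic (IP protocol 4) between the instances, which the default GCE rules do not cover. A hedged sketch of the kind of rule involved (the rule name and source range here are illustrative; verify the exact values against the Calico docs for your setup):

```shell
# Allow IPIP traffic (IP protocol number 4) between instances in the VPC.
# "calico-ipip" and the source range are example values; adjust to your network.
gcloud compute firewall-rules create calico-ipip \
    --network default \
    --allow 4 \
    --source-ranges 10.128.0.0/9
```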
-
Because of known issues with the default VPC and default firewall rules, GCE requirements for a custom VPC and firewall rule have been included in Exercise 2.1.
-
Eventually I was able to set up Azure VMs to work with node-to-pod networking (across different nodes).
The fix was option #2 from https://docs.projectcalico.org/v3.6/reference/public-cloud/azure#about-calico-on-azure: use Flannel for networking instead of Calico.
Simply follow those instructions. Technically you just need to run kubectl apply -f canal.yaml
- and that's all: networking works.
Master node ifconfig:
root@ub16:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:44:ae:5c:e7
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a2:3b:d7
          inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20d:3aff:fea2:3bd7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:217050 errors:0 dropped:0 overruns:0 frame:0
          TX packets:125007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:79910881 (79.9 MB)  TX bytes:59916636 (59.9 MB)

flannel.1 Link encap:Ethernet  HWaddr ee:6a:57:01:1c:81
          inet addr:192.168.0.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::ec6a:57ff:fe01:1c81/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:13 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2238 (2.2 KB)  TX bytes:894 (894.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:538480 errors:0 dropped:0 overruns:0 frame:0
          TX packets:538480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:149692520 (149.6 MB)  TX bytes:149692520 (149.6 MB)

tunl0     Link encap:IPIP Tunnel  HWaddr
          inet addr:192.168.0.1  Mask:255.255.255.255
          UP RUNNING NOARP  MTU:1440  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:600 (600.0 B)
Master node route -n:
root@ub16:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
168.63.129.16   10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
169.254.169.254 10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 *
192.168.1.0     192.168.1.0     255.255.255.0   UG    0      0        0 flannel.1
Worker node ifconfig:
root@ub16-02:~# ifconfig
cali3fc1b4ac805 Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2546 (2.5 KB)  TX bytes:2264 (2.2 KB)

cali840fd25a13b Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
          RX packets:1605 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1662 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:126497 (126.4 KB)  TX bytes:605941 (605.9 KB)

calif0fcaea2af1 Link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1
          RX packets:1599 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1678 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:126294 (126.2 KB)  TX bytes:606640 (606.6 KB)

docker0   Link encap:Ethernet  HWaddr 02:42:f7:28:60:10
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0d:3a:a0:a0:f7
          inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20d:3aff:fea0:a0f7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:118396 errors:0 dropped:0 overruns:0 frame:0
          TX packets:87652 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113745169 (113.7 MB)  TX bytes:11960941 (11.9 MB)

flannel.1 Link encap:Ethernet  HWaddr ce:ef:74:d4:d7:06
          inet addr:192.168.1.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::ccef:74ff:fed4:d706/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:13 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:894 (894.0 B)  TX bytes:2238 (2.2 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2233 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2233 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:160422 (160.4 KB)  TX bytes:160422 (160.4 KB)

tunl0     Link encap:IPIP Tunnel  HWaddr
          inet addr:192.168.1.1  Mask:255.255.255.255
          UP RUNNING NOARP  MTU:1440  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Worker node route -n:
root@ub16-02:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
168.63.129.16   10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
169.254.169.254 10.0.0.1        255.255.255.255 UGH   0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     192.168.0.0     255.255.255.0   UG    0      0        0 flannel.1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 *
192.168.1.2     0.0.0.0         255.255.255.255 UH    0      0        0 cali840fd25a13b
192.168.1.3     0.0.0.0         255.255.255.255 UH    0      0        0 calif0fcaea2af1
192.168.1.4     0.0.0.0         255.255.255.255 UH    0      0        0 cali3fc1b4ac805
-
As has been mentioned, we do not test or configure Azure as a lab system.
-
I ran into this issue on 12-14-2022, following https://training.linuxfoundation.org/cm/LFD259/LabSetup-GCE.mp4 for GCE to create the firewall rule after the VPC was up. I couldn't get it to work the first time and scratched out a pretty good amount of hair... =/
I decided to trash the VMs, the VPC, and all those temporary back-and-forth firewall rules in GCE. I started from scratch and got it to work this time.
So, the trick was to include the firewall rule option during the VPC creation itself.
In the Firewall rules section, look for a rule named "allow-custom" and EDIT it to include both the current subnet and 0.0.0.0/0, and make sure it is set to "Allow all".
Nothing needs to be changed inside the VMs; I didn't have to mess with iptables or UFW at all. It just works out of the box.
I am now a happy camper!
Have fun and cheers!
-
I suffered the same (curl hanging) in 2.3 on AWS.
It turned out my AWS subnet had a security group which allowed TCP traffic, but no UDP traffic. Thus Kubernetes operations worked fine (including the worker node registering itself with the control plane, pods getting started, etc.), but "application layer" connectivity didn't. I can only guess that Cilium uses UDP under the hood. Remedy: allow all traffic in the subnet, both TCP and UDP.
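For anyone scripting this fix, an illustrative AWS CLI call that opens all protocols (including UDP) within the subnet; the security group ID and CIDR below are placeholders, not values from this thread:

```shell
# Allow all traffic (IpProtocol=-1 means every protocol) from the lab subnet.
# sg-0123456789abcdef0 and 10.0.0.0/24 are placeholder values; substitute yours.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=10.0.0.0/24}]'
```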
-
Hi @grzegon,
The Overview section of Lab 2.1 does recommend "no firewall" while the AWS demo video from the introductory chapter presents a SG configuration suitable for the lab environment.
Regards,
-Chris
-
@grzegon said:
I suffered the same (curl hanging) in 2.3 on AWS.
It turned out my AWS subnet had a security group which allowed TCP traffic, but no UDP traffic. Thus Kubernetes operations worked fine (including the worker node registering itself with the control plane, pods getting started, etc.), but "application layer" connectivity didn't. I can only guess that Cilium uses UDP under the hood. Remedy: allow all traffic in the subnet, both TCP and UDP.
This guy right here is a lifesaver. I was almost losing my mind because of this.
-
@grzegon said:
I suffered the same (curl hanging) in 2.3 on AWS.
It turned out my AWS subnet had a security group which allowed TCP traffic, but no UDP traffic. Thus Kubernetes operations worked fine (including the worker node registering itself with the control plane, pods getting started, etc.), but "application layer" connectivity didn't. I can only guess that Cilium uses UDP under the hood. Remedy: allow all traffic in the subnet, both TCP and UDP.
Great catch! The official Cilium documentation confirms your theory, btw: https://docs.cilium.io/en/stable/operations/system_requirements/#firewall-rules
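If you would rather open only what is needed instead of all traffic, a quick probe sketch for the node-to-node ports that page discusses (the specific port numbers below are from memory of the Cilium docs for its default VXLAN mode; verify them against the linked page for your Cilium version):

```shell
# Ports commonly required between nodes for Cilium (confirm against the docs):
#   8472/udp - VXLAN overlay traffic (default tunneling mode)
#   4240/tcp - cluster health checks
# Probe from one node toward another (replace <worker-ip>):
nc -zv  <worker-ip> 4240
nc -zuv <worker-ip> 8472
```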
-
Hi @onur.zengin,
In the "Course Introduction" chapter of the course, the video titled "IMPORTANT: Using AWS to Set Up the Lab Environment" includes clear instructions for the SG configuration. The SG configuration section starts at timestamp 2:38.
Regards,
-Chris