Adapting the kernel to the present day

Look at what's happening now. Kernel engineers and leading developers have suddenly realized that integrating neural networks into the kernel is inevitable. But as it turns out, the kernel is completely unprepared for this. Why?

Because any proposal on the forums to start using GPUs for some kernel tasks was immediately criticized harshly, blocked, or ignored. People said: "this is not a kernel task," "do it in userspace," "don't complicate things."

This is just one example. But almost every reasonable proposal in recent years made the same point: we need to gradually adapt the kernel to the tasks of the present day. The kernel is falling too far behind.

Had this approach been taken 5-10 years ago, there would have been time to find proper solutions, and today's rush would not be necessary. Nothing good comes from rushing: in a rush, you create quick fixes just to tick a box.

A concrete example: working with GPUs. If the community had started gradually, even in a small way, moving some small kernel tasks to GPUs, it would already have run into all the hidden problems:

  • Memory synchronization between the CPU and the GPU.

  • The lack of proper infrastructure for floating-point operations in the kernel (the kernel avoids floating point like the plague; see the sketch after this list).

  • Security and isolation for code executing on an accelerator.
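
To make the floating-point point concrete, here is a minimal sketch of what FP math in kernel space currently involves on x86. The module name fp_demo and the toy loop are invented for illustration, and the exact Makefile flags are an assumption; the real pieces are kernel_fpu_begin()/kernel_fpu_end() from <asm/fpu/api.h>, which are required because the kernel does not save FPU/SIMD state for kernel code by default.

    /*
     * Hypothetical demo module (fp_demo): floating-point work in kernel
     * space on x86. The kernel tree is built with FP/SIMD code generation
     * disabled, so a module like this would also need something like
     *   CFLAGS_fp_demo.o += -msse -msse2
     * in its Makefile just to compile the float arithmetic.
     */
    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <asm/fpu/api.h>

    static int __init fp_demo_init(void)
    {
            float acc;
            int scaled;
            int i;

            /*
             * Kernel code may not touch the FPU until it explicitly claims
             * it; kernel_fpu_begin() also disables preemption, so nothing
             * in this region is allowed to sleep.
             */
            kernel_fpu_begin();
            acc = 0.0f;
            for (i = 0; i < 16; i++)
                    acc += 0.5f * i;        /* toy computation */
            scaled = (int)(acc * 100.0f);   /* convert before releasing the FPU */
            kernel_fpu_end();

            /* printk has no %f, another small symptom of the same gap */
            pr_info("fp_demo: result x100 = %d\n", scaled);
            return 0;
    }

    static void __exit fp_demo_exit(void)
    {
    }

    module_init(fp_demo_init);
    module_exit(fp_demo_exit);
    MODULE_LICENSE("GPL");

Even this toy needs its own compiler flags and a non-sleeping critical section, because the rest of the kernel is deliberately built without floating point; and there is no kernel-internal equivalent at all for handing that loop to a GPU.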

Had these problems been uncovered back then, the community could have calmly, without panic, assessed their real scale and started solving them gradually, over the years: developing stable APIs, building up expertise.

It's like doing the splits. If you work up to it smoothly, stretching every day, you'll succeed. If you try to do it abruptly, all at once, you are guaranteed to tear your ligaments. Now the Linux kernel is being forced to "do the splits" abruptly, without any warm-up. The outcome is predictable: architectural injuries, clunky solutions, explosive complexity.

Hence the main questions:

Blindness or principle? Does the Linux community not see strategic goals at all? Is everything driven by the principle of "just avoid extra effort right now," and then let the chips fall where they may? Or is this a conscious, yet mistaken, philosophy?

Who's at the wheel? Can the same people who for decades refused to "lay the groundwork" now competently guide the kernel through this painful transition? Or do they need to radically reconsider their route and methods?

What will happen? In the next 2-3 years, will we see elegant, well-thought-out solutions for AI and accelerators in the kernel? Or will it be a dump of monolithic vendor drivers and quick fixes that will hinder development for another 20 years?

Answers

  • In other words, instead of prematurely burying the kernel's founders, you'd do better to deal with the real issues.
