Difficulties of system tuning

The lesson alludes to the fact that memory tuning is complex at best. Given the overwhelming number of available parameters, I'd suggest that, barring malfunctioning hardware or software, load balancing should be used as a short-term tool to solve performance issues.
In other words, restrict application usage until more or better hardware and software can be thrown at the problem(s).
Trying to manually tune today's operating systems is akin to a pilot thinking she can manage an A380 jumbo better than the aircraft's automated systems can.
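
To give a concrete sense of the scale involved, here is a rough sketch (assuming a Linux system with /proc mounted, and written in Python purely for illustration) that simply enumerates the vm.* sysctl knobs the running kernel exposes. On a recent kernel it prints several dozen entries, and that is before one even looks at the scheduler, networking, or filesystem knobs.

    #!/usr/bin/env python3
    # Illustrative sketch: enumerate the vm.* sysctl parameters exposed by the
    # running kernel under /proc/sys/vm. A few entries (write-only triggers such
    # as compact_memory) may not be readable, so read errors are skipped.
    from pathlib import Path

    vm_dir = Path("/proc/sys/vm")
    names = sorted(p.name for p in vm_dir.iterdir() if p.is_file())
    print(f"{len(names)} vm.* parameters on this kernel")

    for name in names:
        try:
            value = (vm_dir / name).read_text().strip()
        except OSError:
            value = "<not readable>"
        print(f"vm.{name} = {value}")

Every one of those parameters interacts with the others, which is exactly why hand-tuning them all is so hard.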

Perhaps time can be better spent trying to solve the Andries Brouwer conundrum :) !

Thoughts, anyone?

Comments

  • coop Posts: 915

    Some manual tuning will always be necessary. Operating systems, compilers, libraries, etc. are all written by human beings and need to be continually revised and tuned. I can remember cases where, no matter how hard I tried to customize and tune an application, I could not wring any more out of it than the system and compiler could. But I can also remember cases where I had to override those very same factors to increase performance, improve throughput, use less energy, etc. There is always a tension between the two. Plus, the OS is always designed for a more generic workload than a specific system or application may need.

  • cfuchs Posts: 15 (edited January 2022)

    Reminds me of comparing C++ to Java 😉 But consider that Linux runs in the cloud as well as in embedded systems, on virtual hardware as well as on bare metal. I think your point is fully valid for virtual systems; on resource-restricted systems, some system engineer is probably happy to have these tuning opportunities.
