
Kernel Caching problem

We have developed an application in the MATLAB environment to apply statistical analysis to large (i.e. 200+ GB) data sets. This application is a freely distributed research tool in the neuroscience field.

We have encountered a problem with the Linux (and Mac) kernel's page-caching behavior. In the MATLAB environment, prior to allocating memory in variable space, the application determines whether there is sufficient contiguous available memory, and returns an error if not. To be safe, only free memory is counted; memory held by the kernel page cache, though reclaimable, is disregarded.
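
For illustration, the two figures can be read from /proc/meminfo on Linux: MemFree excludes the page cache, while MemAvailable (present on kernels 3.14 and later) estimates how much could actually be allocated once reclaimable cache is given back. The sketch below only reads those counters; it is not what MATLAB does internally.

    /* Minimal sketch: compare "free" (cache excluded) with "available"
     * (reclaimable cache counted) as reported by /proc/meminfo. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long kb;

        if (!f)
            return 1;
        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "MemFree: %ld kB", &kb) == 1)
                printf("MemFree      (page cache not counted):    %ld kB\n", kb);
            else if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
                printf("MemAvailable (reclaimable cache counted): %ld kB\n", kb);
        }
        fclose(f);
        return 0;
    }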

During several phases of the processing, data is read from and written to disk multiple times. This grows the kernel page cache by roughly 500 MB per operation on average, and quickly reaches 20+ GB of cached data on my test workstation, at which point the application returns an 'out of memory' error (even though the cached memory is reclaimable).

While I have worked around this in the past using various cache-clearing operations (such as echo 3 | sudo tee /proc/sys/vm/drop_caches), this is not an entirely desirable option for distribution, for several reasons:

1) not all potential users will be on the sudoers list

2) not all distributions will allow a sudo user permission for this particular operation

3) even where sudo works, some operations may take as long as an hour to complete, so the researcher would have to be on hand to re-enter the password each time. As a full analysis may take 16-20 hours, this is not a feasible option.

What I think is needed is a way to turn off or otherwise limit kernel caching on an as-needed basis. So far I have found no system-wide option for this other than rebuilding the kernel, an option most labs will not have available.
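
(The nearest thing to a per-file control appears to be the posix_fadvise() hint, which asks the kernel to drop its cached pages for a single file and needs no root privileges. It would, however, have to be called from inside MATLAB's own load/save path rather than from our code, so the following is only a rough sketch of what such a change might look like; the file name is purely illustrative.)

    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical helper: flush, then hint the kernel to drop the cached
     * pages belonging to one file.  posix_fadvise() is advisory only, but
     * it does not require special privileges. */
    static int drop_file_cache(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        fdatasync(fd);                            /* write out any dirty pages first */
        int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        close(fd);
        return rc;
    }

    int main(void)
    {
        /* e.g. after an intermediate result file has been written and read back */
        return drop_file_cache("intermediate_results.mat") == 0 ? 0 : 1;
    }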

I will also be discussing this issue with the MATLAB developer team, though a proper resolution for public distribution may require action on both ends (kernel and MATLAB), as well as ours.

The manual released to the researcher community discusses this issue in depth and offers some temporary mitigations, but these are quite inadequate for a full multi-pass analysis. Additionally, while neuroscience researchers are brilliant in their own fields, some aspects of computing simply fall outside their expertise.

Comments

  • Hi,

    How do you handle your file writes? Have you considered opening your output files with O_DIRECT? It bypasses the Linux buffering/page cache and transfers data directly between your user-space buffers and the device.
  • Tarn
    Hi Stefan

    Nice tip!

    Unfortunately, all of the I/O for the data files is handled by MATLAB's internals (via the save/load commands). While I could rewrite the C and/or Java code for file I/O of this nature, it would not really be worthwhile, as individual users would have to compile it themselves, and not all labs allow users to do that. Additionally, MATLAB's internal format uses compression and versioning that differ between MATLAB versions and environments.

    I checked the settings and preferences within MATLAB to see if this option might already be available, but found nothing that would allow it. I will pass the suggestion on to the MATLAB developers to allow an O_DIRECT option in save/load, assuming there is no specific reason they are not using it.
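
    For anyone curious what that suggestion amounts to at the C level, a rough sketch follows (a hypothetical example, not MATLAB's actual save path; with O_DIRECT the buffer, the file offset and the transfer size must all be aligned, typically to 512 bytes or 4 KiB, and the output file name here is made up):

        /* Sketch of a direct (uncached) write: data goes to the device
         * without being staged in the kernel page cache. */
        #define _GNU_SOURCE                              /* exposes O_DIRECT on Linux */
        #include <fcntl.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t align = 4096, size = 1 << 20;   /* 1 MiB, a multiple of 4 KiB */
            void *buf;

            if (posix_memalign(&buf, align, size) != 0)  /* O_DIRECT needs an aligned buffer */
                return 1;
            memset(buf, 0, size);                        /* stand-in for real result data */

            int fd = open("results.bin", O_WRONLY | O_CREAT | O_DIRECT, 0644);
            if (fd < 0)
                return 1;
            if (write(fd, buf, size) != (ssize_t)size)   /* lands on disk, not in the cache */
                return 1;
            close(fd);
            free(buf);
            return 0;
        }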
