Welcome to the Linux Foundation Forum!

Linux hosts on VMware and disk subsystem timeouts.

tuxmania (Posts: 19)

This is perhaps a tough one to answer, but it nags me sometimes.

When you have a Linux guest on a VMware host where the disk subsystem is under heavy load and takes too long to answer, the Linux guest often remounts the disk read-only.

My ventures into the various documentation, support sites and the interweb haven't given me a better solution than to throw more hardware at the problem.

I use "tune2fs -e continue /dev/sda" to avoid having the disk remounted read-only in a VMware environment, but I suspect it will then also continue on failures other than timeouts.

Is it possible to tune the timeout for disks, or is that something that's hardcoded in the drivers?
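For what it's worth, on 2.6 kernels the per-device SCSI command timeout is exposed in sysfs and can be changed at runtime. A minimal sketch (the helper names and the 180-second value are just illustrative; whether this covers the particular timeout that bites you is another question):

```shell
#!/bin/sh
# Inspect and raise the SCSI command timeout for a disk via sysfs.
# SYSFS_ROOT is parameterized only so the sketch is easy to dry-run
# against a fake tree; on a real system it is /sys/block, and
# writing the value needs root.
SYSFS_ROOT="${SYSFS_ROOT:-/sys/block}"

show_timeout() {
    # Print the current command timeout (in seconds) for a device.
    cat "$SYSFS_ROOT/$1/device/timeout"
}

set_timeout() {
    # Set the command timeout (in seconds) for a device.
    echo "$2" > "$SYSFS_ROOT/$1/device/timeout"
}
```

Usage would be e.g. `set_timeout sda 180`. Note that, as the sysfs name suggests, this governs SCSI command timeouts only; it does not stop the journal layer from remounting the filesystem read-only once an I/O error has already been reported to it.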

Comments

  • tuxmania (Posts: 19)
    I can't say I'm sure, but I think the value in /sys/block/$i/device/timeout affects command timeouts only; other kinds of timeouts aren't handled by it. The defaults I have seen have been 60 seconds with udev and 30 without, but the timeouts that trigger the remounts have been much shorter than that.

    The only thing that has worked for me has been to use tune2fs, but that doesn't work for e.g. NSS filesystems on Novell Open Enterprise Server in a VMware guest.
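    The "60 seconds with udev" default mentioned above comes from distro udev rules that write the sysfs timeout attribute when a device is added, and a persistent bump can be done the same way. This is only a sketch (the filename and the 180-second value are made up, and the exact match keys vary between udev versions and distributions):

    ```
    # /etc/udev/rules.d/99-scsi-timeout.rules  (hypothetical filename)
    # For plain SCSI disks (type 0), raise the command timeout to 180s.
    ACTION=="add|change", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="180"
    ```

    The rule applies to devices as they appear, so it survives reboots and hot-adds, unlike a one-off echo into sysfs.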
  • I would be curious to see what the host's storage is up to. Usually in a virtual environment the kernel remounts the filesystem read-only to protect itself when errors start showing up. You see this when the SCSI bus is having trouble for whatever reason: feedback comes back to the kernel, and if it does not know how to handle the errors, it protects itself.

    Error handling has been tweaked in most kernels newer than 2.6.16. However, if you are still having trouble, you can look at barrier support. A write barrier is a mechanism the filesystem uses to force its journal writes out to the storage device in order, before dependent writes are issued, so the journal stays consistent if something goes wrong mid-write.

    One of Novell's TIDs (Technical Information Documents) explains it pretty well:
    When a kernel update (as discussed above) is not an option, the problem can also be worked around by explicitly disabling barrier support for the affected filesystems, e.g. by specifying
    barrier=0
    in /etc/fstab's mount options field for the affected filesystems.

    Error handling code in the ext3 filesystem is not properly handling the case where a device has stopped accepting barrier requests, which can happen with software RAID devices, LVM devices, device-mapper devices, and with third-party multipathing software like EMC PowerPath.
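    For reference, the TID's workaround ends up looking like this in /etc/fstab (the device and mount point here are placeholders):

    ```
    # /etc/fstab: barrier=0 disables write barriers for this ext3 filesystem
    /dev/sda2   /data   ext3   defaults,barrier=0   1 2
    ```

    Keep the trade-off in mind: without barriers the journal is less protected against corruption on a power loss or crash, so this is a workaround rather than a free win. The option can usually also be toggled on a live system with `mount -o remount,barrier=0 /data`.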
  • @tuxmania said:
    This is perhaps a tough one to answer, but it nags me sometimes.

    When you have a Linux guest on a VMware host where the disk subsystem is under heavy load and takes too long to answer, the Linux guest often remounts the disk read-only.

    My ventures into the various documentation, support sites and the interweb haven't given me a better solution than to throw more hardware at the problem.

    I use "tune2fs -e continue /dev/sda" to avoid having the disk remounted read-only in a VMware environment, but I suspect it will then also continue on failures other than timeouts.

    Is it possible to tune the timeout for disks, or is that something that's hardcoded in the drivers?

    That's a good question. I'll have to do some research... OK, this is from an entry I found in the VMware user forums that might help.
