
NFSv4 over 10Gbps copper vs local disk

I'm considering buying our company some 10Gbps copper Ethernet equipment to provide a dedicated network for our NFSv4 server/clients.

We have fairly modern equipment (HP DL380 G7 server, DL360 G5 clients) with 10/15krpm SAS disks. I'm wondering what I should expect when comparing local disk performance to performance over a 10Gbps NFS network share.

Does anyone have experience with this equipment? Is there a reasonable way to know what to expect without having to buy the (very) expensive switch and I/O cards? I believe the disks' SAS links top out at 6Gbps. Are there other bottlenecks I should consider, or is it reasonable to expect performance over NFS to be about as good as local disk performance?
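
One rough baseline I can get without buying anything is to time a large sequential read off the local SAS array and compare it to the 10GbE line rate (roughly 1,250 MB/s). A minimal Python sketch along those lines (the file path is just a placeholder, and the test file should be larger than RAM so the page cache doesn't inflate the number):

#!/usr/bin/env python3
# Time a big sequential read from local disk and compare the result to the
# ~1.25 GB/s line rate of 10GbE. TEST_FILE is a hypothetical path; point it
# at a file on the SAS array that is larger than RAM so caching doesn't
# skew the measurement.
import time

TEST_FILE = "/srv/testfile"    # placeholder path on the local array
CHUNK = 4 * 1024 * 1024        # 4 MiB per read

read_bytes = 0
start = time.monotonic()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        read_bytes += len(buf)
elapsed = time.monotonic() - start

print(f"{read_bytes / 1e6:.0f} MB in {elapsed:.1f} s "
      f"-> {read_bytes / elapsed / 1e6:.0f} MB/s")
print(f"10GbE line rate for comparison: {10e9 / 8 / 1e6:.0f} MB/s")

If the local figure comes out well below ~1,200 MB/s, the disks or controller, rather than the 10GbE link, would be the first bottleneck.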

Thanks.

Comments

  • mfillpot Posts: 2,177
    I have no experience with the listed hardware, but when you put disk access on the network you have to consider every bottleneck and potential point of failure. Once the drives are shared over the network, you become vulnerable to loss of service from a network failure, which may or may not be acceptable for the server.

    The other thing to consider is what other traffic may be flowing through the routers and eating into throughput to the disks; if the servers sit on a reasonably busy network, that contention can seriously degrade performance.
  • ben Posts: 134
    What about the network switches in the middle, between your SAN and clients?
    I've had very poor performance with a Sun (now Oracle) SAN and local clients because of a messy, unconfigured switch. What do you have in the middle?
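
    For what it's worth, one quick host-side sanity check is to dump what each NIC actually negotiated. A rough Python sketch (it only reads standard sysfs attributes; the switch side still has to be verified on the switch itself):

    #!/usr/bin/env python3
    # Print negotiated speed, duplex and MTU for every interface. A port stuck
    # at 1000 Mb/s or half duplex, or an MTU that doesn't match the switch, is
    # exactly the kind of thing that quietly ruins NFS throughput.
    from pathlib import Path

    def read_attr(iface, attr):
        try:
            return (iface / attr).read_text().strip()
        except OSError:
            return "n/a"   # e.g. speed is unreadable while the link is down

    for iface in sorted(Path("/sys/class/net").iterdir()):
        if iface.name == "lo":
            continue
        print(f"{iface.name}: speed={read_attr(iface, 'speed')} Mb/s "
              f"duplex={read_attr(iface, 'duplex')} mtu={read_attr(iface, 'mtu')}")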

    Ben
  • tsit Posts: 2
    There will be no router, just a switch connecting the various servers. The network will be dedicated to NFS, so other than competing NFS clients, there will be no network traffic to worry about.

    Ben, thanks for mentioning your experience with the Sun equipment and the bit about your unconfigured switch. I'm the type who would probably put the switch in place without configuring it (since it's such a simple network), so I'll definitely take a look at the switch's config.
  • woboyle Posts: 501
    No matter how fast the network, it is unlikely to compete well with fast storage systems and discs connected via SATA/SAS/FC. Several factors come into play.

    1. Raw disc throughput.
    2. Disc controller I/O capabilities

    These two above affect NFS as well as local file systems since the data has to (usually) reside on a disc somewhere, unless you are using RAM discs...

    3. Network protocol overhead - NFS over TCP has to acknowledge the data it sends, and dropped packets have to be retransmitted.
    4. Contributing to that is packet size - do your NICs and switch support jumbo frames, or are you limited to the standard 1500-byte Ethernet MTU?
    5. Are you running your NFS mounts in sync mode (each write waits until commit-to-disc is assured)? For good performance over a high-speed network, you probably want to mount your shares in async mode (a quick way to check what your mounts have actually negotiated is sketched after this list).
    6. Is this a read-mostly environment, or write-intensive?
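
    As a rough way to check points 4 and 5 on a client, something like this Python sketch works (it only reads /proc/mounts and sysfs, so it's read-only and distribution-agnostic):

    #!/usr/bin/env python3
    # Show the options each NFS mount actually negotiated (rsize/wsize, proto,
    # vers, sync/hard/soft; note that "sync" only appears when it is set, so
    # its absence means the default async behaviour) and each interface's MTU,
    # so you can see whether jumbo frames are really in effect.
    from pathlib import Path

    WANTED = {"rsize", "wsize", "proto", "vers", "sync", "hard", "soft"}

    print("NFS mounts:")
    for line in Path("/proc/mounts").read_text().splitlines():
        device, mountpoint, fstype, options = line.split()[:4]
        if fstype in ("nfs", "nfs4"):
            shown = [o for o in options.split(",") if o.split("=")[0] in WANTED]
            print(f"  {device} on {mountpoint}: {','.join(shown)}")

    print("Interface MTUs:")
    for iface in sorted(Path("/sys/class/net").iterdir()):
        if iface.name != "lo":
            print(f"  {iface.name}: mtu={(iface / 'mtu').read_text().strip()}")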

    In any case, this is going to be an expensive proposition, so if you can, you might want to test a similar configuration at an HP capacity planning center. They can set up an environment much like yours, since you are using mostly HP gear, and you can go there and test with your own software, data, and system loads. They (used to) have centers in several locations in the US. I've used the ones in Paramus, New Jersey and in Cupertino, California, though I have to admit the last time I did was about 9-10 years ago. An eon in Internet Time! :-)
