Welcome to the Linux Foundation Forum!
NFSv4 over 10Gbps copper vs local disk
tsit
Posts: 2
in Networking
I'm considering buying our company some 10Gbps copper Ethernet equipment to provide a dedicated network for our NFSv4 server/clients.
We have fairly modern equipment (HP DL380 G7 server, DL360 G5 clients) with 10k/15k rpm SAS disks. I'm wondering what I should expect when comparing local disk performance to performance over a 10Gbps NFS network share.
Does anyone have experience with this equipment? Is there a reasonable way to know what to expect without having to buy the (very) expensive switch and I/O cards? I believe the disks' SAS links top out at 6Gbps. Are there other bottlenecks I should consider, or is it reasonable to expect performance over NFS to be about as good as local disk performance?
Thanks.
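For a rough sense of the ceilings involved before buying anything, here is a back-of-the-envelope comparison (a sketch only: the coding-efficiency and per-disk figures are general rules of thumb, not measurements of this hardware):

```python
# Back-of-the-envelope link ceilings (assumption: ideal links, no protocol overhead).
# SAS 6 Gbit/s uses 8b/10b line coding, so only 80% of the line rate carries data.
# For 10GbE, 10 Gbit/s is already the data rate at the MAC layer, so use it as-is;
# TCP/IP and NFS overhead still come off that in practice.

def usable_mb_per_s(line_rate_gbit, efficiency):
    """Convert a raw line rate in Gbit/s to usable payload in MB/s."""
    return line_rate_gbit * 1e9 * efficiency / 8 / 1e6

print(f"SAS 6G lane ceiling: {usable_mb_per_s(6, 0.8):.0f} MB/s")    # ~600 MB/s
print(f"10GbE link ceiling:  {usable_mb_per_s(10, 1.0):.0f} MB/s")   # ~1250 MB/s

# A single 10k/15k rpm SAS disk typically sustains on the order of 150-200 MB/s
# sequential, far below either ceiling, so for sequential workloads the disks
# (or the RAID controller), not the 10GbE link, are the likely bottleneck.
```

In other words, the question is less "is 10GbE fast enough?" and more "how many spindles can the server keep busy at once?"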
Comments
I have no experience with the listed hardware, but when dealing with network disk usage you must fully consider all bottlenecks and potential points of failure. Once the drives are shared over the network, a network failure also means loss of storage service, which may or may not be acceptable for the server.
The other thing to consider is what other traffic may be flowing through the routers and eating into disk throughput: if the servers sit on a reasonably busy network, competing traffic can seriously degrade performance.
What about the network switches between your SAN and your clients?
I've had very poor performance with a Sun (now Oracle) SAN and local clients due to a messy, unconfigured switch. What do you have in the middle?
Ben
There will be no router, just a switch connecting the various servers. The network will be dedicated to NFS, so apart from competing NFS clients, there will be no other traffic to worry about.
Ben, thanks for sharing your experience with the Sun equipment and the unconfigured switch. I'm the type who would probably put the switch in place without configuring it (since it's such a simple network), so I'll definitely take a look at the switch's config.
No matter how fast the network, it is not likely to compete well with fast storage systems and locally attached SATA/SAS/FC discs. Several factors come into play.
1. Raw disc throughput.
2. Disc controller I/O capabilities
These two above affect NFS as well as local file systems since the data has to (usually) reside on a disc somewhere, unless you are using RAM discs...
3. Network packet overhead - every packet sent requires an ack return. Dropped packets require resends.
4. Contributing to that is packet size - does your NFS implementation(s) support jumbo packets, or are they limited to the 1500 byte (more or less) standard TCP/IP and Ethernet packet sizes?
5. Are you running your NFS mounts in sync mode (each write waits until commit-to-disc is assured)? For good performance over a high-speed network, you probably want to mount your shares in async mode.
6. Is this a read-mostly environment, or write-intensive?
In any case, this is going to be an expensive proposition, so if you can, you might want to test a similar configuration at an HP capacity planning center. Since you are using mostly HP gear, they can set up an environment much like yours, and you can go there and test with your own software, data, and system loads. They (used to) have centers in several US locations; I've used the ones in Paramus, New Jersey and in Cupertino, California, though I have to admit the last time was about 9-10 years ago. An eon in Internet Time! :-)
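To put a number on points 3 and 4 above, here is a rough payload-efficiency calculation for standard vs jumbo frames (a sketch; the header sizes assume plain TCP/IPv4 over Ethernet with no options):

```python
# Fraction of wire bandwidth that is actual payload, for a given MTU.
# Assumes IPv4 (20 B) + TCP (20 B) headers inside the MTU, plus 38 B of
# Ethernet framing per packet (14 B header + 4 B FCS + 8 B preamble
# + 12 B inter-frame gap) on the wire.

ETH_OVERHEAD = 38
IP_TCP_HEADERS = 40

def payload_efficiency(mtu):
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} of wire bandwidth is payload")

# MTU 1500 yields ~94.9% payload, MTU 9000 ~99.1%. The few percent matter
# less than the packet count: jumbo frames mean ~6x fewer packets, which
# cuts per-packet CPU, interrupt, and ACK overhead accordingly.
```

The practical takeaway is that jumbo frames (if the NICs, switch, and NFS stack all support them) help mostly by reducing packet-handling overhead rather than by reclaiming header bytes.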