I spent the past few days wondering what my first post for SNIA on Ethernet Storage should be about, and finally arrived at this, even though it’s old news…
A couple of months ago, Ben Hacker posted a great entry in The Server Room section of Intel’s communities site about Intel and Microsoft’s performance benchmarking work on iSCSI over 10Gb Ethernet. The post is entitled “1,000,000 IOPS with iSCSI – That’s Not a Typo…” Check it out; it’s very enlightening.
Ben’s post in turn pointed to a joint Microsoft/Intel webcast that goes into detail about the results. Again, it’s good information.
The thing that struck me here was not so much that you can get wire-speed 10Gb/s iSCSI performance today, or that large virtual server environments can require those performance levels. What struck me was that conventional wisdom keeps being challenged by commodity Ethernet advances.
Eight years ago, most people believed that special NICs would be required to overcome the overhead imposed by TCP/IP and iSCSI over Gigabit Ethernet. That turned out to be wrong. Standard on-board Ethernet ports and software iSCSI initiators worked just great. The conventional wisdom today is that special NICs and drivers are required to overcome the overhead imposed by TCP/IP and iSCSI over 10 Gigabit Ethernet. That’s also wrong. The Intel/Microsoft benchmark used standard on-board ports and software iSCSI initiators.
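To make “software-only” concrete, here’s a minimal sketch of the kind of measurement involved; it’s my illustration, not the harness Intel and Microsoft actually used. It pushes bulk data over a plain TCP socket using the stock kernel stack. The address and port are placeholders, and it assumes something on the far end is reading and discarding the data:

```python
import socket
import time

# Placeholder target (TEST-NET-1 address); point this at any listener
# that drains the connection.
TARGET = ("192.0.2.10", 5001)
CHUNK = b"\x00" * (1 << 20)  # 1 MiB writes to amortize per-syscall cost
DURATION = 10                # seconds to run the measurement

sock = socket.create_connection(TARGET)
sent = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    sock.sendall(CHUNK)      # plain software TCP, no offload required
    sent += len(CHUNK)
sock.close()

print(f"~{sent * 8 / DURATION / 1e9:.2f} Gb/s over an unmodified TCP stack")
```

Nothing exotic there: large sequential writes keep the per-syscall overhead small, and the ordinary kernel TCP path does the rest. That’s essentially the path a software iSCSI initiator rides on.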
We seem to keep forgetting that Moore’s Law continually delivers more CPU cycles than we expect (exponentials are funny like that), and that the huge commodity R&D infrastructure around Ethernet keeps finding new ways to improve network performance.
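The compounding is easy to underestimate, so here’s the back-of-the-envelope version, assuming the commonly quoted two-year doubling period (the real cadence varies):

```python
# Rough Moore's Law compounding: one doubling every two years (assumed).
years = 8                 # the Gigabit-to-10-Gigabit gap discussed above
doubling_period = 2       # years per doubling
factor = 2 ** (years / doubling_period)
print(f"~{factor:.0f}x the transistor budget after {years} years")  # ~16x
```

Sixteen-fold is a lot of headroom for a protocol stack to grow into.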
Any bets on whether we’ll need special NICs to support iSCSI over 40Gb Ethernet on commodity servers when that emerges as a mainstream host port? I think the answer is obvious…
Tags: iSCSI, performance, 10Gb Ethernet, virtual server environments