Thursday, July 29, 2010

I/O Virtualization vs. Fibre Channel over Ethernet

by David Gross

Somehow vendors always want to converge networks just when IT managers are finding new applications for existing technologies. This results in new products banging against unintended uses for established technologies. In turn, vendors create new “convergence” tools to pull everything back together, which then bump up against the IT managers' new applications, and the cycle repeats. This has been a pattern in data networking for at least the last 15 years, where vendor visions of one big happy network have kept colliding with the operating reality of a network that keeps diverging and splitting into new forms and functions. If reality had played out like vendor PowerPoints of years past, we'd all have iSCSI SANs incorporated into IPv6-based LANs, and Fibre Channel and InfiniBand would be heading to the history books with FDDI and ATM.

Like previous attempts to force networks together, current attempts to do so require expensive hardware. As I pointed out a couple of weeks ago, Fibre Channel over Ethernet looks good when modeled in a PowerPoint network diagram, but not so great when modeled in an Excel cost analysis, with its CNAs still topping $1,500, or about 3x the cost of 10 Gigabit Ethernet server NICs. But FCoE is not the only way to glue disparate networks together; I/O Virtualization can achieve the same thing by using an InfiniBand director to capture all the Ethernet and Fibre Channel traffic.
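To put that adapter gap in per-bit terms, here is a quick back-of-the-envelope sketch. The ~$1,500 and ~$500 figures are the rough street prices cited above, not quotes, and both assume 10 Gbps ports.

    # Rough per-port economics of an FCoE CNA vs. a plain 10 GbE NIC,
    # using the approximate prices cited above (assumptions, not quotes).
    CNA_PRICE_PER_PORT = 1500   # FCoE converged network adapter, 10 Gbps
    NIC_PRICE_PER_PORT = 500    # standard 10 Gigabit Ethernet NIC
    PORT_RATE_GBPS = 10

    cna_per_gbps = CNA_PRICE_PER_PORT / PORT_RATE_GBPS   # $150 per Gbps
    nic_per_gbps = NIC_PRICE_PER_PORT / PORT_RATE_GBPS   # $50 per Gbps

    print(f"CNA: ${cna_per_gbps:.0f}/Gbps, NIC: ${nic_per_gbps:.0f}/Gbps, "
          f"premium: {CNA_PRICE_PER_PORT / NIC_PRICE_PER_PORT:.1f}x")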

I/O Virtualization can offer a lower cost per bit at the network level than FCoE, because it can run at up to 20 Gbps. However, I/O Virtualization directors are more expensive than standard InfiniBand directors: Xsigo's VP780, sold through Dell, goes for around $1,000 per DDR port, while a Voltaire (VOLT) 4036E Grid Director costs about $350 per port, and those ports run at the faster QDR rate. But this could change quickly once the technology matures.
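The per-Gbps math at the director level looks like this. The line rates below are the nominal InfiniBand signaling rates (DDR = 20 Gbps, QDR = 40 Gbps); usable data rates are lower after 8b/10b encoding, but the comparison comes out the same either way, and the per-port prices are the rough figures above.

    # Director-level cost per Gbps, using the approximate per-port prices
    # cited above and nominal InfiniBand signaling rates.
    directors = {
        "Xsigo VP780 (I/O virtualization, DDR)": (1000, 20),
        "Voltaire 4036E (standard IB, QDR)":     (350, 40),
    }

    for name, (price_per_port, rate_gbps) in directors.items():
        print(f"{name}: ${price_per_port / rate_gbps:.2f} per Gbps")
    # Roughly $50/Gbps for the I/O virtualization director vs. under
    # $10/Gbps for the plain QDR InfiniBand director.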

One major configuration difference between standard InfiniBand and I/O Virtualization is that a typical InfiniBand director, such as the Voltaire 4036E, bridges to Ethernet in the switch, while Xsigo consolidates the I/O back at the server NIC. The additional HCA in the standard InfiniBand configuration costs about $300 per port at DDR rates and $600 per port at QDR rates. A 10 Gigabit Ethernet server NIC costs around $500 today, and that price is dropping, although there is some variability based on which 10 Gigabit port type is chosen – CX4, 10GBASE-T, 10GBASE-SR, etc. Either way, while it saves space over multiple adapters, the I/O Virtualization card still needs to cost under $800 at 10 Gigabit to match the cost of buying separate InfiniBand and Ethernet cards. Moreover, the business case depends heavily on which variant of 10 Gigabit Ethernet is in place: a server with multiple 10GBASE-SR ports offers a lot more opportunity for a lower-cost alternative than one with multiple 10GBASE-CX4 ports.
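Here is a minimal break-even sketch for the consolidated adapter, assuming the per-port prices cited above: a DDR HCA at ~$300 (or QDR at ~$600) plus a 10 GbE NIC at ~$500. The actual NIC price swings with the port type (CX4, 10GBASE-T, 10GBASE-SR), so treat these as placeholders.

    # Break-even price for a consolidated I/O virtualization card vs.
    # buying a separate InfiniBand HCA and 10 GbE NIC (rough prices above).
    HCA_PRICE = {"DDR": 300, "QDR": 600}   # standard InfiniBand HCA, per port
    NIC_10GBE_PRICE = 500                  # 10 Gigabit Ethernet NIC, per port

    for rate, hca_price in HCA_PRICE.items():
        breakeven = hca_price + NIC_10GBE_PRICE
        print(f"{rate}: consolidated card must come in under ${breakeven} "
              f"per server to beat separate adapters")
    # DDR: under $800, QDR: under $1,100 - before counting any savings
    # from fewer slots, cables, and switch ports.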

I/O Virtualization can eliminate the practice of dedicating one NIC or HBA to each virtual machine in a virtualized environment. However, the two major buyers of InfiniBand products, supercomputing centers and financial traders, have done little with server virtualization, and therefore don't stand to benefit greatly from I/O Virtualization.

While there is growing interest in I/O Virtualization, it runs the risk of bumping into some of the cost challenges that have slowed Fibre Channel over Ethernet. Moreover, industries like supercomputing and financial trading are sticking mostly to diverged hardware to obtain the best price/performance. Nonetheless, I/O Virtualization could offer an opportunity to bridge networks at the NIC level instead of at the switch, while still getting some of the price/performance benefits of InfiniBand.
