By David Gross
I/O Virtualization vendor Xsigo recently announced that it will release 40 Gigabit QDR InfiniBand Directors later this year. This is not only a big upgrade from the company's existing 10 Gigabit products; it also opens a significant bandwidth gap with the Fibre Channel-over-Ethernet products it is competing against.
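To put rough numbers on that gap, here is a back-of-the-envelope sketch (mine, not Xsigo's): QDR InfiniBand signals at 40 Gbit/s but carries 8b/10b line encoding, while 10 Gigabit Ethernet uses the more efficient 64b/66b encoding, so the usable data rates work out as follows.

```python
def effective_gbps(signaling_gbps, payload_bits, coded_bits):
    """Usable data rate after line-encoding overhead."""
    return signaling_gbps * payload_bits / coded_bits

# QDR InfiniBand: 40 Gbit/s signaling, 8b/10b encoding
qdr_ib = effective_gbps(40, 8, 10)
# 10 Gigabit Ethernet: 10 Gbit/s signaling, 64b/66b encoding
ten_gbe = effective_gbps(10, 64, 66)

print(f"QDR InfiniBand: {qdr_ib:.1f} Gbit/s usable")   # 32.0 Gbit/s
print(f"10 GbE:         {ten_gbe:.1f} Gbit/s usable")  # ~9.7 Gbit/s
```

Even after encoding overhead, the InfiniBand link delivers roughly three times the usable bandwidth of a single 10 Gigabit Ethernet port.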
The business justification for these devices is linked to two developments: server virtualization and multi-protocol networking. With server virtualization pushing up bits transmitted and received per server, there is a greater need to bring more capacity directly to the server, and with some of those bits going to storage and others to the LAN, there is also a need to pull traffic together onto one network. (I just can't say the word "convergence" without cringing; I've seen far too many attempts to "converge networks" fail.)
Xsigo is using Mellanox silicon in its Directors, which represent an important attempt to move 40 Gigabit InfiniBand beyond the supercomputing cluster. While there are other efforts, such as RoCE, to advance InfiniBand in the enterprise data center, most involve either competing with Ethernet on price/performance or placing InfiniBand transmissions into Ethernet frames. This, however, is pure InfiniBand feeding a multi-protocol network for a specific application, I/O Virtualization, that cannot be done today with 40 Gigabit Ethernet.