Tuesday, October 19, 2010

Mellanox 40 Gigabit InfiniBand Switch Gets a Boost from IBM

By David Gross

There's been lots of news this past week on 40 Gigabit Ethernet Top-of-Rack switches. Not to be left out, 40 Gigabit InfiniBand is also making news, with IBM (IBM) choosing Mellanox's (MLNX) IS5000 InfiniBand switch silicon for its iDataPlex and Intelligent Cluster platforms.

iDataPlex is often used in HPC environments, and this announcement is a strong endorsement for advancing 40 Gigabit InfiniBand in supercomputing clusters. The platform incorporates both GPUs and CPUs, and IBM previously added QLogic's (QLGC) 12000-series of 40 Gigabit QDR InfiniBand switches to its Intelligent Cluster package.

List pricing for the 40 Gigabit Ethernet ToR switches runs between $1,000 and $1,500 per port, while street pricing for QLogic's 40 Gigabit InfiniBand switch is less than $300 per port. Both the Ethernet and InfiniBand switches use QSFP transceivers, so the price difference is likely the result of density: the 12000-series supports 2.88 Terabits per second send and receive across 36 40G ports, compared to 1.2 Tbps and a maximum of four 40G ports on BLADE's recently announced RackSwitch G8264. Additionally, the 12000-series comes as is, with very few configuration options.

However, with a little more production volume, the Ethernet ports should begin to creep down into three figures as well. I would also expect the applications for each to remain different, with the ToR switches serving the enterprise data center market, and the InfiniBand switches going primarily into supercomputing clusters. In either case, we're still talking about very short-reach links; 40 Gigabit links in telco networks still cost over 1,000x as much.
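For anyone who wants to check the arithmetic behind those figures, here's a quick back-of-the-envelope sketch. The port counts and prices are simply the numbers quoted above, and the helper function is purely illustrative:

```python
# Back-of-the-envelope check of the capacity and per-port price figures cited
# in this post. These are the numbers quoted above, not vendor quotes.

def aggregate_capacity_gbps(ports: int, line_rate_gbps: int) -> int:
    """Full-duplex switching capacity: ports x line rate, counted in both directions."""
    return ports * line_rate_gbps * 2

# QLogic 12000-series QDR InfiniBand: 36 ports at 40G
print(aggregate_capacity_gbps(36, 40))  # 2880 Gbps, i.e. the 2.88 Tbps cited above

# Per-port pricing: $1,000-$1,500 list for 40GbE ToR vs. roughly $300 street for IB
ib_price_per_port = 300
for eth_price_per_port in (1000, 1500):
    premium = eth_price_per_port / ib_price_per_port
    print(f"40GbE premium over QDR InfiniBand: {premium:.1f}x per port")
```

At the quoted prices, that works out to a 40GbE per-port premium of roughly 3x to 5x over the QDR InfiniBand switch.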
