
Thursday, September 2, 2010

New TACC Supercomputer Running 40G InfiniBand

by David Gross

The Texas Advanced Computing Center is expanding its Lonestar Supercomputing Cluster with a new system that will feature 22,656 compute node cores, up from 5,840 in the existing system, connected with a 40G Mellanox (MLNX) InfiniBand network that uses a fat-tree topology.
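A fat tree spreads traffic across multiple switch tiers so that bisection bandwidth can keep pace with node count. As a rough sketch of how such a fabric scales (my assumption, not from the announcement: 36-port QDR switch silicon as the building block):

```python
# Rough scaling of a folded-Clos (fat-tree) fabric: each non-top-tier switch
# splits its ports evenly between downlinks and uplinks, so a fabric with
# `tiers` levels of radix-k switches reaches 2 * (k/2)**tiers end nodes
# at full bisection bandwidth.
def fat_tree_max_nodes(radix: int, tiers: int) -> int:
    return 2 * (radix // 2) ** tiers

# Assumed building block: 36-port QDR switch silicon (common in 2010).
for tiers in (2, 3):
    print(f"{tiers} tiers of 36-port switches: {fat_tree_max_nodes(36, tiers):,} nodes")
# 2 tiers of 36-port switches: 648 nodes
# 3 tiers of 36-port switches: 11,664 nodes
#   -- comfortably above the ~1,888 nodes implied by 22,656 cores,
#      if each node carries 12 cores (an assumption).
```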

The new system will feature:

* 302 teraflops peak performance vs. 63 teraflops on the existing Lonestar
* 44.3 terabytes total memory vs. 11.6 TB on the existing system
* 1.2 petabytes raw disk vs. 106 TB on the existing system
* 90 million CPU hours/year to TeraGrid

What's interesting here is that peak processing capacity and raw disk space are both rising faster than the maximum interconnect I/O speed, which has increased a fairly sluggish 4-fold over the last eight years. This suggests the I/O bottleneck will only become a larger problem in supercomputing as time goes on.
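For reference, a quick back-of-the-envelope comparison of those growth ratios against the roughly 4x interconnect speedup (my reading of the 4-fold figure is the jump from 10G 4x SDR InfiniBand to today's 40G QDR):

```python
# Growth ratios for the new Lonestar system vs. the existing one,
# using the figures quoted above.
upgrades = {
    "peak performance (TFLOPS)": (63, 302),
    "total memory (TB)":         (11.6, 44.3),
    "raw disk (TB)":             (106, 1200),   # 1.2 PB = 1,200 TB
}

io_speedup = 4  # link rate over ~8 years (assumed: 10G 4x SDR -> 40G 4x QDR)

for name, (old, new) in upgrades.items():
    print(f"{name:28s} {new / old:4.1f}x  (vs. {io_speedup}x I/O)")
# peak performance (TFLOPS)     4.8x  (vs. 4x I/O)
# total memory (TB)             3.8x  (vs. 4x I/O)
# raw disk (TB)                11.3x  (vs. 4x I/O)
```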

Tuesday, July 20, 2010

40/100G – A Major Shift to Parallel Optics?

by Lisa Huff

Parallel optics has been around for more than a decade – remember SNAP12 and POP4? These were small 12- and four-fiber parallel-optic modules developed for telecom VSR (very-short-reach) applications. They never really caught on for Ethernet networks, though. Other than a few CWDM solutions, volume applications for datacom transceivers have been serial short-wavelength ones. At 40G, this is changing.

High-performance computing (HPC) centers have already adopted parallel optics at 40 and 120G using InfiniBand (IB) 4x and 12x QDR, and they are continuing this trend through their next data-rate upgrades – 80 and 240G. While in the past I thought of HPC as a small, somewhat niche market, I now think this is shifting due to two major trends:

  • IB technology has crossed over into 40 and 100-Gigabit Ethernet in the form of active optical cable assemblies as well as CFP and CXP modules.
  • More and more medium-to-large enterprise data centers are starting to look like HPC clusters, with masses of parallel processing.

Many of the top transceiver manufacturers, including Avago Technologies and Finisar, as well as some startups, have released products in the last year to support these variants with short-reach solutions. The initial offerings are AOC products using the QSFP+ and CXP form factors, both of which use VCSEL and PIN arrays. At least one vendor, Reflex Photonics, has released a CFP module that also uses these devices. The only other transceiver product that appears to be available to date is the QSFP+ 40G module from MergeOptics, a natural extension of its QSFP AOCs. These products are already being deployed in IB systems and are planned for the initial 40G Ethernet networks as well.
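The 40/120G and 80/240G figures above are simply lane-count multiples of the per-lane signaling rate, which is why the parallel-optic form factors (4-lane QSFP+, 12-lane CXP) line up so neatly with the IB roadmap. A minimal sketch of that arithmetic, assuming 10 Gb/s lanes today and a hypothetical 20 Gb/s lane rate for the next step:

```python
# Aggregate bandwidth of parallel-optic links: lanes x per-lane signaling rate.
# Lane rates are illustrative assumptions (10 Gb/s is roughly an IB QDR or
# 40GBASE-SR4 lane; 20 Gb/s is the hypothetical next step referenced above).
LANE_COUNTS = {"4x (QSFP+)": 4, "12x (CXP)": 12}
LANE_RATES_GBPS = [10, 20]

for label, lanes in LANE_COUNTS.items():
    for rate in LANE_RATES_GBPS:
        print(f"{label}: {lanes} lanes x {rate} Gb/s = {lanes * rate} Gb/s")
# 4x (QSFP+): 4 lanes x 10 Gb/s = 40 Gb/s
# 4x (QSFP+): 4 lanes x 20 Gb/s = 80 Gb/s
# 12x (CXP): 12 lanes x 10 Gb/s = 120 Gb/s
# 12x (CXP): 12 lanes x 20 Gb/s = 240 Gb/s
```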

Once parallel-optics based transceivers are deployed for 40/100G networks, will we ever return to serial transmission?

Tuesday, June 29, 2010

Mellanox Announces Record Message Rate for Supercomputing Applications

Through a test network built with its InfiniBand switches and adapters, Mellanox (MLNX) announced that it has transferred a record 90 million MPI messages per second. MPI (the Message Passing Interface) is the standard programming interface used in supercomputing to pass data between compute nodes. While I generally don't like to promote vendor marketing claims coming out of test labs, this is the sort of technical capability, along with low-cost hardware, that has helped InfiniBand hold its niche in HPC and high-end data center networks against Ethernet.
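Message rate is usually measured with a micro-benchmark that fires windows of small non-blocking sends and counts how many complete per second. A minimal sketch of the idea using mpi4py (this is not the benchmark Mellanox ran, which was presumably a tuned C test; the message size, window, and iteration count here are arbitrary illustration values):

```python
# Sketch of a message-rate micro-benchmark. Run with: mpiexec -n 2 python msgrate.py
# Rank 0 streams windows of small non-blocking sends to rank 1 and reports
# messages per second.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

MSG_BYTES, WINDOW, ITERATIONS = 8, 64, 10_000
bufs = np.zeros((WINDOW, MSG_BYTES), dtype=np.uint8)  # one buffer per in-flight message
ack = np.zeros(1, dtype=np.uint8)

comm.Barrier()
start = MPI.Wtime()

if rank == 0:
    for _ in range(ITERATIONS):
        reqs = [comm.Isend(bufs[i], dest=1, tag=0) for i in range(WINDOW)]
        MPI.Request.Waitall(reqs)
        comm.Recv(ack, source=1, tag=1)   # ack from the receiver closes the window
elif rank == 1:
    for _ in range(ITERATIONS):
        reqs = [comm.Irecv(bufs[i], source=0, tag=0) for i in range(WINDOW)]
        MPI.Request.Waitall(reqs)
        comm.Send(ack, dest=0, tag=1)

if rank == 0:
    elapsed = MPI.Wtime() - start
    print(f"{ITERATIONS * WINDOW / elapsed / 1e6:.2f} million messages/second")
```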

Monday, June 28, 2010

Tokyo Institute of Technology Deploying Voltaire InfiniBand Switches

InfiniBand continues to hold firm in its supercomputing and financial trading niches, with Tokyo Institute of Technology rolling out Voltaire's (VOLT) 40 Gigabit switches in its 1,400 node TSUBAME 2.0 supercomputer. The deployment includes over 4,000 edge switches and 18 director switches.

With prices as low as $400 per 40G port, these short-reach InfiniBand platforms continue to serve a niche in low-latency applications where many had predicted Ethernet would take over. While the 40/100 Gigabit Ethernet standard was ratified last week, the traditional data center market is still a few years away from mass 40 or 100 Gigabit deployments with either protocol.

Disclosure: no positions