Monday, November 29, 2010
By David Gross
InfiniBand IC supplier Mellanox announced today that it is acquiring long-time customer and fellow Israeli InfiniBand technology developer Voltaire. The acquisition price of $8.75 a share represents a more than 35% premium over Friday's close of $6.43, and is net of $42 million of cash held by Voltaire. Mellanox is financing the deal entirely out of its cash balance of $240 million.
Mellanox is down 4% on the news to $24 a share, while Voltaire is up 34% to $8.65, leaving very limited room for risk arbitrage on the deal.
While both companies have gotten into the Ethernet market over the last two years, the deal only makes sense in the context of InfiniBand, which, as a niche technology, does not offer a chip supplier billions of ports over which to amortize development costs. Mellanox already offers both ICs and adapter cards. Moreover, and very importantly, InfiniBand switches are low-cost, low-memory, high-performance boxes with stripped-down operating systems and forwarding tables. As a result, the intellectual property that router and Ethernet switch makers put into system design and network operating systems is less valuable here. The message acceleration software and management tools associated with InfiniBand devices require far less R&D than new ASICs or network operating systems for high-end, modular Ethernet switches.
What's likely to happen here is that Wall Street will do its usual fretting over whether the proposed operating cost reductions, which in this case are $10 million, will be achieved, whether the price is reasonable, and what customers will think. Additionally, at least 30 hedge fund managers are likely to ask the same questions on the strategic impact of owning switches and InfiniBand vs. Ethernet, and will seek "more color" on how the integration is going. But none of this will really matter. The key to success here will be the extent to which the new company focuses on InfiniBand. Outside of bridging products and maybe 40G NICs, the new company needs to stay out of the Ethernet market, which already has enough suppliers, and treat Fibre Channel-over-Ethernet as the toxic technology it has already proven to be for Brocade.
Wednesday, November 17, 2010
InfiniBand's Growth Slows in Supercomputing
By David Gross
The latest semi-annual Top500 survey is out this week, in conjunction with the SC10 show, and InfiniBand has posted fairly modest gains over the last six months, with implementations growing from 207 to 214 of the world's largest supercomputers.
In the June survey, InfiniBand showed major gains over the November 2009 tally, growing from 181 to 207 system interconnects, up from 151 in the June 2009 count. Five years ago, InfiniBand was used in just 27 systems, and trailed not just Ethernet but the proprietary interconnect Myrinet. Back then, over 40% of the world's top 500 supercomputers used proprietary or custom interconnects, while today just 11% do. Ethernet has held fairly steady over this period, dropping slightly from 250 to 228 of the top 500, and most of InfiniBand's gains have come at the expense of Myrinet, Quadrics, and other proprietary interconnects.
While Ethernet still has a slight lead in number of systems, the average InfiniBand-connected supercomputer has approximately 70% more processors than the average Ethernet connected supercomputer. With proprietary interconnects essentially wiped out, any future share gains for InfiniBand will now have to come at Ethernet's expense.
Labels:
InfiniBand
Friday, November 5, 2010
InfiniBand, not 100 Gigabit Ethernet, to Dominate Market for CXP AOCs
By Lisa Huff
For those of you who believe that 100 Gigabit Ethernet is just around the corner, I have a bridge I want to sell you. We haven’t even seen the height of 10 Gigabit Ethernet adoption yet, and there are some equipment companies saying they will sell hundreds of thousands of CXP 100GBASE-SR10 ports in 2011. Are you kidding? What is the application and where is the need?
First, 10GigE has taken more than eight years to get to a million ports – we believe it could take 40G and 100G even longer. Second, even for clustering applications, which could potentially drive demand faster, 100GigE port adoption won’t be that quick. Ethernet architecture is different from InfiniBand's – the density of an InfiniBand director-type switch provides over two terabits per second, whereas the newly released 40GigE ones are mostly around 250 Gbps (due to both slower data rates and fewer ports). InfiniBand is also typically implemented with a CLOS architecture where you have equal bandwidth everywhere, while Ethernet is more often used in an aggregated network, so it ends up having far fewer higher-speed ports than lower-speed ones. This is further supported by clustering applications that use ToR switches, which currently provide Gigabit connections to the servers with 10G uplinks to the network core. These will be upgraded to 10G downlinks and 40G uplinks first, but this won’t happen quickly.
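To make that density gap concrete, here is a quick back-of-the-envelope sketch in Python. The port counts are illustrative assumptions, not vendor specifications; the point is simply that a director-class InfiniBand switch aggregates far more 40G ports than an early 40GigE box.

```python
# Rough aggregate-capacity comparison; port counts are assumed for illustration.
def aggregate_gbps(port_count, gbps_per_port):
    """One-way aggregate bandwidth of a fully populated switch."""
    return port_count * gbps_per_port

ib_director = aggregate_gbps(port_count=108, gbps_per_port=40)  # hypothetical QDR director
eth_40g_tor = aggregate_gbps(port_count=6, gbps_per_port=40)    # hypothetical early 40GigE switch

print(f"InfiniBand director: {ib_director / 1000:.2f} Tbps")    # ~4.3 Tbps, well over 2 Tbps
print(f"Early 40GigE switch: {eth_40g_tor} Gbps")               # ~240 Gbps
```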
While several router manufacturers claim to have the need for hundreds of thousands of 100GBASE-SR10 CXP ports in 2011, I have found no evidence of this. Who are their customers? In fact, even those companies that could use 100G ports today, e.g. Google, Facebook, IXCs, would need six months to a year to evaluate router products before they would deploy them. Since these devices do not yet exist, the reality is that the market will not really begin to materialize until at least 2012. Right now, the majority of router connections are Gigabit Ethernet, OC-48 (2.5G), or below, with OC-192 (10G) or 10GigE being implemented on an as-needed basis. Until routers transition through 10G, and then probably 40G, 100G installations will be few and far between.
But there is a market for CXP AOCs (Active Optical Cables) today – InfiniBand. This is becoming a volume market now and will continue to be the best opportunity for CXP AOCs for at least the next few years, and probably over the lifetime of the CXP products. In fact, we expect the volume of InfiniBand CXP AOCs to reach about six million by 2015. By comparison, the total volume of Ethernet CXP AOCs is expected to be less than 100,000. While 100GigE clustering applications will initially use CXP AOCs, customers in these markets prefer pluggable modules, mainly because they are used to structured cabling solutions and like their flexibility and ease of use, so AOCs will quickly give way to pluggable modules as they are developed. 100GigE CXP ports may eventually eclipse InfiniBand ports once 100GigE permeates most data center distribution and core networks, but this will take longer than many equipment vendors expect.
Labels:
100 Gigabit,
InfiniBand
Tuesday, November 2, 2010
Voltaire Revenue Up 25% to $18.1 Million, But Says "Ethernet" More than "InfiniBand"
By David Gross
Voltaire reported quarterly revenue yesterday of $18.1 million, up from $14.5 million in the year-ago quarter. The company also said annual revenue would come in near the top end of its previously issued guidance of $67-70 million.
The company is a leader in InfiniBand switch systems, but like its supplier Mellanox, is very eager to show that it's hedging its bets with Ethernet. Reading through the call transcript on Seeking Alpha, I counted seven mentions of "Ethernet" excluding the Q&A, and just three mentions of "InfiniBand", the technology Voltaire's long been associated with.
InfiniBand continues to gain share in supercomputing, and is used as the interconnect in 207 of the top 500 supercomputers, up from 121 two years ago, according to Top500.org. Yet Voltaire, Mellanox, and the InfiniBand Trade Association they're both affiliated with are terrified of being considered "niche" vendors, so they've released Ethernet products, in addition to developing InfiniBand-over-Ethernet, a.k.a. RDMA over Converged Ethernet, in spite of InfiniBand's strength.
Nonetheless, Voltaire's Ethernet strategy is fairly focused, and like the Juniper EX2500, its top-of-rack 6024 switch appears to be an OEM version of the BLADE RackSwitch G8124. But while the talk-to-actual-revenue ratio is way out of line for cloud services, Ethernet's strength with PR people is even more impressive, especially considering that the technology is older than many of them are.
Labels:
InfiniBand,
VOLT
Monday, October 25, 2010
Xsigo Releasing 40 Gigabit Directors in December
By David Gross
I/O Virtualization vendor Xsigo recently announced it is releasing 40 Gigabit QDR InfiniBand Directors later this year. This is not only a big upgrade from the company's existing 10 Gigabit products, but also creates a significant bandwidth gap with the Fibre Channel-over-Ethernet products it is competing against.
The business justification for these devices is linked to two developments: server virtualization and multi-protocol networking. With server virtualization pushing up bits transmitted and received per server, there is a greater need to bring more capacity directly to the server, and with some of those bits going to storage and others to the LAN, there is also a need to pull traffic together onto one network. (I just can't say the word "convergence" without cringing, having seen far too many attempts to "converge networks" fail.)
Xsigo is using Mellanox silicon in its Directors, and the product represents an important attempt to move 40 Gigabit InfiniBand beyond the supercomputing cluster. While there are other efforts, such as RoCE, to advance InfiniBand in the enterprise data center, most involve either trying to compete with Ethernet on price/performance or placing InfiniBand transmissions into Ethernet frames. This, however, is pure InfiniBand feeding a multi-protocol network for a specific application, I/O Virtualization, that cannot be done today with 40 Gigabit Ethernet.
Labels:
InfiniBand,
MLNX,
Virtual I/O
Friday, October 22, 2010
Mellanox Revenue Up 16%, Continues to Advance 40 Gigabit in Data Centers
By David Gross
Mellanox (MLNX) reported quarterly revenue of $37.8 million Wednesday night, up 16% year-over-year, but down 5% sequentially, and slightly ahead of guidance of $37-$37.5 million.
The stock lost about a third of its value the day after its second quarter call in July, when it announced revenue would drop sequentially. Since then, the stock has rebounded 30%, making up most of the July losses.
The company has had a very strong balance sheet since going public in 2007, and it finished the quarter with $248 million of cash, up $38 million since the end of 2009, and with no debt.
Mellanox has been experiencing a bit of an awkward shift to mixing InfiniBand and Ethernet products, after having long been the leader in InfiniBand silicon. Having gotten a significant share of its revenue from 40 Gigabit QDR InfiniBand for the last two years, it has now crossed over to 40 Gigabit Ethernet, most notably with its ConnectX-2 EN 40G, which was the first 40 Gigabit Ethernet Server NIC. The company has also worked with the InfiniBand Trade Association to develop RoCE, or RDMA over Converged Ethernet, which is basically InfiniBand over Ethernet.
Traditionally a silicon vendor, Mellanox has also been transitioning to NICs and silicon products for LAN-on-Motherboard designs. But these shifts have gone back and forth over the last couple of quarters, and silicon shipments replacing higher-revenue adapters is one reason why revenue was down sequentially. While the company has not always been clear on the extent to which it plans to focus on silicon vs. cards, I increasingly think its strong position in both supercomputing clusters and the rapidly developing market for 40 Gigabit networks should help restart its growth in 2011.
Labels:
40 Gigabit,
InfiniBand,
MLNX
Tuesday, October 19, 2010
Mellanox 40 Gigabit InfiniBand Switch Gets a Boost from IBM
By David Gross
There's been lots of news this past week on 40 Gigabit Ethernet Top-of-Rack switches. Not to be left out, 40 Gigabit InfiniBand is also making news, with IBM (IBM) choosing Mellanox's (MLNX) IS5000 InfiniBand switch silicon for its iDataPlex and Intelligent Cluster platforms.
iDataPlex is often used in HPC environments, and this announcement is a strong endorsement for advancing 40 Gigabit InfiniBand in supercomputing clusters. The platform incorporates both GPUs and CPUs, and IBM previously added QLogic's (QLGC) 12000-series of 40 Gigabit QDR InfiniBand switches to its Intelligent Cluster package.
While list pricing for the 40 Gigabit Ethernet ToR switches is between $1,000 and $1,500 per port, street pricing for QLogic's 40 Gigabit InfiniBand switch is less than $300 per port. While both the Ethernet and InfiniBand switches use QSFP transceivers, the price difference is likely the result of density - with the 12000-series supporting 2.88 Terabits per second send and receive, and 36 40G ports, compared to 1.2 Tbps and a maximum of four 40G ports on BLADE's recently announced RackSwitch G8264. Additionally, the 12000 series comes as is, with very few configuration options. However, with a little more production volume, the Ethernet ports should begin to creep down into the three figures as well. I would also expect the applications for each to remain different, with the ToR switches serving the enterprise data center market, and the InfiniBand switches primarily going into supercomputing clusters. In either case, we're still talking about very short-reach links; 40 Gigabit links in telco networks still cost over 1,000 times as much.
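As a quick sanity check of those density and pricing figures, here is a minimal Python sketch using only the numbers cited above:

```python
# Density and per-port price comparison using the figures quoted in the text.
qlogic_ports, rate_gbps = 36, 40
qlogic_total_gbps = qlogic_ports * rate_gbps * 2      # send + receive
print(qlogic_total_gbps)                              # 2880 Gbps, i.e. 2.88 Tbps

ib_port_street = 300            # QLogic 12000-series, street price per 40G port
eth_port_list = (1000, 1500)    # 40GigE ToR switches, list price range per port
low, high = (p / ib_port_street for p in eth_port_list)
print(f"40GigE list price is roughly {low:.1f}x to {high:.1f}x the InfiniBand street price per port")
```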
Labels:
40 Gigabit,
InfiniBand,
MLNX
Tuesday, October 5, 2010
InfiniBand vs. Ethernet in the Data Center
By David Gross
One of the most interesting topics in data networking over the last few years has been the persistence of Fibre Channel and InfiniBand. Despite cries that both will disappear, neither has faded like ATM, FDDI, and Token Ring did in the 90s.
The case for stand-alone Fibre Channel (not Fibre Channel over Ethernet) remains very strong, especially with progress continuing on the 16G standard. With the ability to handle large block transfers of over 130 Megabytes, not to mention a loyal base of SAN managers, it remains better suited than Ethernet for many non-attached storage applications. However, the case for InfiniBand still seems unclear to many, including some in the InfiniBand Trade Association, who developed RoCE, or RDMA over Converged Ethernet, which is essentially InfiniBand over Ethernet, out of fear that InfiniBand could struggle to survive on its own.
The fact that so many people compare InfiniBand to Ethernet ignores an important point - InfiniBand's growth has had little to do with taking out Ethernet, and much more to do with taking share from proprietary and specialized supercomputing interconnects like Myrinet and Quadrics. Over the last five years, Gigabit Ethernet's share of interconnects on the Top500 supercomputers has actually increased slightly, from 214 to 227, according to Top500.org. While InfiniBand has soared from just 16 to 210 over the same time, its growth has mostly come at the expense of Myrinet, which has dropped from 139 to 4. Quadrics, a former supercomputing favorite, went out of business last year. And even with the decline of proprietary interconnects, just one of the top 100 supercomputers is now using Ethernet.
Additionally, while there is a lot of talk about latency and bandwidth, there is another key metric supercomputing networks are built around - MPI message passing rate. Mellanox (MLNX) recently announced it had the capability to transmit more than 90 million MPI messages per second. In addition to low-cost 40 Gigabit ports, one reason why InfiniBand has such low latency is that the protocol's own message size can be cut down to 256 bytes, and most supercomputers achieve high performance by breaking up requests into tiny fragments across multiple nodes. This InfiniBand benefit is directly opposite to the large block transfers Fibre Channel provides SANs, which have preserved that protocol's strength in storage.
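For readers curious what a small-message workload looks like at the application layer, below is a minimal mpi4py sketch of a two-rank ping test using 256-byte messages. The script name, message count, and measurement approach are illustrative assumptions; real message-rate benchmarks (such as the OSU micro-benchmarks) are far more rigorous.

```python
# Minimal small-message rate sketch with mpi4py.
# Run with something like: mpirun -n 2 python msgrate.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N_MSGS = 100_000
buf = np.zeros(256, dtype=np.uint8)   # 256-byte messages, the small size discussed above

comm.Barrier()
start = time.perf_counter()
for _ in range(N_MSGS):
    if rank == 0:
        comm.Send([buf, MPI.BYTE], dest=1, tag=0)   # buffer-based Send avoids pickling overhead
    elif rank == 1:
        comm.Recv([buf, MPI.BYTE], source=0, tag=0)
comm.Barrier()
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"{N_MSGS / elapsed:,.0f} one-way 256-byte messages per second")
```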
Another reason why InfiniBand has offered such great price/performance in supercomputing, but is almost nonexistent in enterprise data centers, is its use of local addressing. InfiniBand switches frames based on a 16-bit local ID, with the expectation that they are not leaving the cluster. It is ultimately an I/O technology, not a networking technology. Ethernet, on the other hand, uses global 48-bit MAC addressing, and many of the frames coming in and out of data center servers are heading to or coming from the public Internet. While InfiniBand has a layer 3 global ID as well, it is built right into the InfiniBand stack. Ethernet's layer 3 forwarding has to be handled by IP, which means buying an expensive router - not justified if you've got a high-performance network but are not sending much traffic out to the public network. And like supercomputing clusters, financial traders are not using InfiniBand to connect their web servers to public networks, but rather for private networks. Where those same financial firms need to connect public-facing web servers within data centers, they use Ethernet.
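A toy sketch of why the addressing difference matters for switch design: a 16-bit LID space fits in a flat forwarding table, while 48-bit MAC addresses have to be learned into an associative table. This is a conceptual illustration, not a model of any real switch ASIC.

```python
# 16-bit local IDs vs. 48-bit global MACs - a conceptual comparison.
LID_BITS, MAC_BITS = 16, 48
print(f"Possible LIDs: {2**LID_BITS:,}")    # 65,536 - small enough for a flat table
print(f"Possible MACs: {2**MAC_BITS:,}")    # ~281 trillion - must be learned and hashed

# An InfiniBand-style linear forwarding table: output port indexed directly by LID.
lid_table = [0] * (2 ** LID_BITS)
lid_table[0x0012] = 7                       # LID 0x12 exits on port 7
print(lid_table[0x0012])

# An Ethernet switch learns MAC-to-port mappings as frames arrive.
mac_table = {"00:1b:21:3c:4d:5e": 3}
print(mac_table.get("00:1b:21:3c:4d:5e", "flood"))
```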
Comparing InfiniBand to Ethernet can create interesting debates, but it's mostly a theoretical argument, because it's not a decision many data center managers face now or will face in the future. InfiniBand is an I/O technology mostly serving high-end supercomputing clusters and trading networks, both of which are expanding and promise further growth for the technology. Ethernet is the dominant LAN switching technology in the data center, and no one is seriously talking about replacing it with InfiniBand. Therefore, when looking at the data center networking and supercomputing interconnect market simultaneously, it makes more sense to think about Ethernet and InfiniBand rather than Ethernet vs. InfiniBand.
Labels:
InfiniBand,
MLNX
Tuesday, September 28, 2010
Mellanox Up 35% Since Guiding Down on its Last Call
By David Gross
Buoyed by Oracle takeover rumors, Mellanox (MLNX) is coming back from the big hit it took earlier this quarter when it announced earnings and issued weak guidance. It closed yesterday at $19.86, up 35% from its July 22 close of $15.49, a day in which it fell more than seven dollars from $22.94.
While I still think the company needs a clearer growth strategy, it remains reasonably valued compared to many data center high fliers, with an enterprise value/annualized revenue ratio of 2.75, and a pristine balance sheet with $230 million of cash and no debt.
When Sun strongly endorsed InfiniBand, no one seemed to care, and the conventional wisdom about Ethernet eventually taking over InfiniBand apps was still going strong. But now that the same architecture and same product lines are under Oracle's ownership, the endorsement is being taken much more seriously, even though the substance is no different - InfiniBand's share of top500 supercomputers has continued to grow at the same pace, and now tops 40%.
I've long argued that Mellanox needs to be a little bolder about associating itself with InfiniBand, because it dominates that niche's market for ICs, while it gets lost among a large crowd in the market for Ethernet cards, in spite of the company trying to get ahead of everyone else with 40 Gigabit Ethernet server NICs. Maybe now that there is growing appreciation for InfiniBand's role in high-speed networking, Mellanox won't be so shy about declaring its leadership in this technology.
Labels:
InfiniBand,
MLNX
Thursday, September 2, 2010
New TACC Supercomputer Running 40G InfiniBand
by David Gross
The Texas Advanced Computing Center is expanding its Lonestar Supercomputing Cluster with a new system that will feature 22,656 compute node cores, up from 5,840 in the existing system, connected with a 40G Mellanox (MLNX) InfiniBand network that uses a fat-tree topology.
The new system will feature:
* 302 teraflops peak performance vs. 63 teraflops on the existing Lonestar
* 44.3 terabytes total memory vs. 11.6 TB on the existing system
* 1.2 petabytes raw disk vs. 106 TB on the existing system
* 90 million CPU hours/year to TeraGrid
What's interesting here is that processing capacity and disk space are rising at a faster rate than maximum I/O speed, which has increased a fairly sluggish 4-fold in the last eight years. This suggests that the I/O bottleneck is likely to become an even larger problem in supercomputing as time goes on.
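The growth ratios behind that observation, computed from the figures listed above:

```python
# Upgrade ratios for the new Lonestar system vs. the existing one.
ratios = {
    "Peak performance (teraflops)": 302 / 63,
    "Total memory (TB)":            44.3 / 11.6,
    "Raw disk (TB)":                1200 / 106,   # 1.2 PB vs. 106 TB
}
for name, r in ratios.items():
    print(f"{name}: {r:.1f}x")
# Roughly 4.8x, 3.8x and 11.3x - against a ~4x gain in maximum I/O speed over eight years.
```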
Labels:
InfiniBand,
MLNX,
Supercomputing
Wednesday, August 4, 2010
Fibre Channel and InfiniBand vs. Ethernet
by David Gross
As successful as InfiniBand has been with supercomputing centers and financial traders, it has struggled to break into the mass market dominated by Ethernet. Fibre Channel has had great success in SANs, but isn't even trying to go after the mass market anymore. Yet when first standardized over ten years ago, both Fibre Channel and InfiniBand had far greater ambitions.
Through their respective industry associations, InfiniBand and Fibre Channel have responded by developing "over Ethernet" versions of their protocol in recognition of Ethernet's dominance. The InfiniBand Trade Association's RoCE, or RDMA over Converged Ethernet, could have just as easily been called InfiniBand over Ethernet.
The niches that Fibre Channel and InfiniBand currently fill, though, aren't going away, but neither protocol is about to challenge Ethernet's supremacy. Interestingly, the niche roles each protocol occupies have little to do with bandwidth, cost, or price per bit. Fibre Channel has never been cost competitive with Ethernet, even when both topped out at a gigabit, yet it has held on strong in storage. InfiniBand is cheaper per bit than Ethernet, particularly at 40G, but is struggling to break out of its supercomputing and financial trading niches.
More than cost per bit or port price, an interesting factor behind the development of the InfiniBand and Fibre Channel niches comes down to message size. InfiniBand is very closely tied to parallel computing, and the shorter messages that result from breaking up a transmission across multiple CPUs and GPUs. Fibre Channel is closely tied to serial storage networks, particularly the large block transfers that cross SANs, which rely on the protocol's hardware-based error detection and correction, and generally require the link length that comes with a serial protocol. To use a somewhat cheesy analogy, you could say InfiniBand is a little sports car, Fibre Channel a long freight train, and Ethernet a Camry.
Fibre Channel grew on the back of serial storage networks, InfiniBand on parallel supercomputing networks. The biggest threat to either then is not Ethernet, but a revival of parallel SANs and serial supercomputing, neither of which will happen anytime soon.
Labels:
10 Gigabit,
Fibre Channel over Ethernet,
InfiniBand
Friday, July 30, 2010
Top-of-Rack Switching – Better or Worse for Your Data Center? (Part 2)
by Lisa Huff
Last week I gave you some reasons why a Top-of-Rack (ToR) switching architecture could be higher cost for your data center network. Today, I’d like to discuss the advantages of using a ToR switching topology.
1) CLOS network architecture – in order to support this, you most likely need to use a ToR switch. For those not familiar with the CLOS network, it is a multi-stage network whose main advantage is that it requires fewer cross-points to produce a non-blocking structure (see the quick calculation after this list). It is difficult, and can be more costly, to implement a non-blocking network without CLOS.
2) Latency and throughput – while it may be counter-intuitive, there may be less latency in your network if you use a ToR switch architecture. The main reason really has little to do with the switch itself and more to do with the data rate of your network. 10 Gigabit Ethernet typically has about 1/5th the latency of Gigabit Ethernet. It also obviously has the capability of 10 times the throughput – provided that your switches can support line rate (their backplanes can handle 10 Gbps per port). So implementing 10GigE in the equipment access layer of your data center can seriously reduce the amount of time data takes to get from initiator to destination. This, of course, is more critical in some vertical markets than in others – like the financial sector, where microseconds can make a difference of millions of dollars made or lost.
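Here is the crosspoint arithmetic behind point 1, as a minimal Python sketch. It compares a single N x N crossbar with a strictly non-blocking three-stage CLOS fabric (m = 2n - 1 middle switches); the example sizes are arbitrary choices for illustration.

```python
# Crosspoints: single crossbar vs. three-stage non-blocking CLOS fabric.
def crossbar_crosspoints(N):
    return N * N

def clos_crosspoints(N, n):
    """N total ports, n ports per edge switch, m = 2n - 1 middle switches (strictly non-blocking)."""
    r = N // n                         # number of ingress (and egress) switches
    m = 2 * n - 1
    return 2 * r * n * m + m * r * r   # ingress + egress stages, plus middle stage

N, n = 1024, 32                        # example sizes chosen purely for illustration
print(crossbar_crosspoints(N))         # 1,048,576
print(clos_crosspoints(N, n))          # 193,536 - roughly a 5x saving in cross-points
```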
A comment on latency and throughput however – if this is so critical, why not use InfiniBand instead of Ethernet ToR switches? It seems to me that InfiniBand already has these issues solved without adding another switch layer to the network.
Labels:
10 Gigabit,
InfiniBand
Thursday, July 29, 2010
I/O Virtualization vs. Fibre Channel over Ethernet
by David Gross
Somehow vendors always want to converge networks just when IT managers are finding new applications for existing technologies. This results in new products banging against unintended uses for established technologies. In turn, vendors create new “convergence” tools to pull everything back together, which then bump up against the IT managers' new applications, and the cycle repeats. This has been a pattern in data networking for at least the last 15 years, where vendor visions of one big happy network have kept colliding with the operating reality of a network that keeps diverging and splitting into new forms and functions. If reality had played out like vendor PowerPoints of years past, we'd all have iSCSI SANs incorporated into IPv6-based LANs, and Fibre Channel and InfiniBand would be heading to the history books with FDDI and ATM.
Like previous attempts to force networks together, current attempts to do so require expensive hardware. As I pointed out a couple of weeks ago, Fibre Channel over Ethernet looks good when modeled in a PowerPoint network diagram, but not so great when modeled in an Excel cost analysis, with its CNAs still topping $1,500, or about 3x the cost of 10 Gigabit Ethernet server NICs. But this is not the only way to glue disparate networks together; I/O Virtualization can achieve the same thing by using an InfiniBand director to capture all the Ethernet and Fibre Channel traffic.
I/O Virtualization can offer a lower cost/bit at the network level than FCoE, because it can run at up to 20 Gbps. However, I/O Virtualization directors are more expensive than standard InfiniBand directors. Xsigo's VP780, sold through Dell, sells for around $1,000 per DDR port, while a Voltaire (VOLT) 4036E Grid Director costs about $350 per faster QDR port. But this could change quickly once the technology matures.
One major difference in the configuration between standard InfiniBand and I/O Virtualization is that a typical InfiniBand director, such as a Voltaire 4036E, bridges to Ethernet in the switch, while Xsigo consolidates the I/O back at the server NIC. The cost of the additional HCA in the standard InfiniBand configuration is about $300 per port at DDR rates and $600 per port at QDR rates. A 10 Gigabit Ethernet server NIC costs around $500 today, and this price is dropping, although there is some variability based on which 10 Gigabit port type is chosen – CX4, 10GBASE-T, 10GBASE-SR, etc. Either way, while it saves space over multiple adapters, the I/O Virtualization card still needs to cost under $800 at 10 Gigabit to match the costs of buying separate InfiniBand and Ethernet cards. Moreover, the business case depends heavily on which variant of 10 Gigabit Ethernet is in place. A server with multiple 10GBASE-SR ports is going to offer a lot more opportunity for a lower cost alternative than one with multiple 10GBASE-CX4 ports.
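The break-even arithmetic described above, using only the prices cited in the text:

```python
# Cost of separate InfiniBand + 10GigE cards vs. a single I/O virtualization adapter.
ib_hca_per_port = {"DDR": 300, "QDR": 600}   # standard InfiniBand HCA, per the figures above
eth_10g_nic = 500                            # typical 10 Gigabit Ethernet server NIC today

for rate, hca_price in ib_hca_per_port.items():
    separate = hca_price + eth_10g_nic
    print(f"{rate}: separate cards ~= ${separate}; an I/O virtualization card must undercut this")
# DDR: ~$800, QDR: ~$1,100 - matching the ~$800 threshold cited above for 10 Gigabit.
```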
I/O Virtualization can eliminate the practice of one NIC or HBA per virtual machine in a virtualized environment. However, the two major buyers of InfiniBand products, supercomputing centers and financial traders, have done little with server virtualization, and therefore don't stand to benefit greatly from I/O Virtualization.
While there is growing interest in I/O Virtualization, it runs the risk of bumping into some of the cost challenges that have slowed Fibre Channel over Ethernet. Moreover, industries like supercomputing and financial trading are sticking mostly to diverged hardware to obtain the best price/performance. Nonetheless, I/O Virtualization could offer an opportunity to bridge networks at the NIC level instead of at the switch, while still getting some of the price/performance benefits of InfiniBand.
Labels:
Fibre Channel over Ethernet,
InfiniBand,
VOLT
Wednesday, July 28, 2010
Voltaire Revenue Up 7% Sequentially, 54% Year-over-Year
InfiniBand switch maker Voltaire (VOLT) reported quarterly revenues this morning of $16.6 million, a 54% increase from its 2nd quarter of 2009, and it reaffirmed guidance of $67-70 million for 2010 revenue, reassuring investors who were spooked by Mellanox's (MLNX) downward guidance revision last week.
While it is making every effort to play nice with Ethernet, the company was one of the first to release QDR InfiniBand director and switch ports, which has helped its revenue reverse course off a downward trend it hit during 2008 and 2009.
Labels:
InfiniBand,
VOLT
Friday, July 23, 2010
Does Mellanox Know How it Will Grow?
by David Gross
This week, Mellanox (MLNX) joined the list of data center hardware suppliers reporting double digit year-over-year revenue growth. Its top line grew 58% annually to $40 million this past quarter. However, unlike F5 Networks (FFIV) and EMC (EMC), which both guided up for their next reporting periods, Mellanox announced that it expected revenue to decline about 7% sequentially in the third quarter. While it claimed things should turn around in the 4th quarter, the stock was down 25% soon after the announcement.
The company claims that the reason for the temporary decline is a product shift to silicon and away from boards and host channel adapters. This represents a sharp reversal of the trend it saw for much of 2009, when adapter revenue grew while silicon revenue dropped. Given the timing of its 40/100 Gigabit and LAN-on-Motherboard product cycles, this trend could keep going back and forth in the future, which has a big impact on revenue because individual adapters sell for roughly ten times the price of individual semiconductors.
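To see why a mix shift alone can drag revenue down, consider a hypothetical example. The unit prices below are invented for illustration; only the roughly 10x adapter-to-chip price ratio comes from the paragraph above.

```python
# Hypothetical mix-shift illustration: same unit volume, different adapter/chip mix.
chip_asp, adapter_asp = 30, 300        # assumed prices, keeping the ~10x ratio cited above
units = 1000

adapter_heavy = 0.7 * units * adapter_asp + 0.3 * units * chip_asp
silicon_heavy = 0.3 * units * adapter_asp + 0.7 * units * chip_asp
print(adapter_heavy, silicon_heavy)    # 219000.0 vs 111000.0 - same unit volume, roughly half the revenue
```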
While investors reacted swiftly to the revenue announcement, a bigger concern is not the shifting revenue among product components, but the company's apparent lack of faith in the InfiniBand market it dominates. InfiniBand is a growing, high-end, niche technology. Among the world's 500 largest supercomputers, 42% use InfiniBand as their interconnect between server nodes. Two years ago, only 24% did, according to Top500.org. But InfiniBand's success in supercomputing has yet to translate into major wins in traditional data centers, where it runs into the mass of existing Ethernet switches.
The company's response, developed in conjunction with the InfiniBand Trade Association it is heavily involved with, has been to develop RDMA over Converged Ethernet, or RoCE, pronounced like the character Sylvester Stallone played in the 70s and 80s. In many respects, RoCE is InfiniBand-over-Ethernet: it uses InfiniBand networking technologies but slides them into Ethernet frames. Traditionally, pricing for one link technology over another has not been competitive, because low-volume multi-protocol boards require more silicon and design work than single-protocol equivalents. This has been seen in the more widely promoted Fibre Channel-over-Ethernet, where Converged Network Adapters based on that technology are still selling for about three times the price of standard 10 Gigabit Ethernet server NICs.
By investing in RoCE, Mellanox is basically saying InfiniBand will not be able to stimulate demand on its own in the data center, even though it offers remarkable price/performance in supercomputing clusters. Moreover, there is still plenty of opportunity for InfiniBand to have an impact as IT managers begin to look at 40 Gigabit alternatives. But just as the company cannot seem to figure out whether growth will come from adapter cards or silicon, it's now going against its push for RoCE by touting a Google engineering presentation that highlighted the benefits of running pure InfiniBand in a data center network.
Mellanox has long been a high-margin company with a dominant position in a niche technology, and its strong balance sheet reflects this heritage. The stock's recent pummeling has sent the company's valuation down to just 2.5 times cash. But with so many conflicting messages about chips vs. cards and InfiniBand vs. Ethernet, the question is not whether investors have confidence in the company's growth plans, but whether management does.
Labels:
EMC,
FFIV,
InfiniBand,
MLNX
Thursday, July 22, 2010
Top-of-Rack (ToR) Switching – Better or Worse for Your Data Center?
by Lisa Huff
Earlier this month I talked about how ToR switching has become popular in the data center and how large switch manufacturers are telling you that it’s really the only way to implement 10G Ethernet. I left that post with this question – “But is it the best possible network architecture for the data center?” And, I promised to get back to you with some assessments.
The answer, of course, is that it depends. Mainly, it depends on what you’re trying to achieve. There are many considerations when moving to this topology. Here are just a few:
2) Cost 2: While installing structured cabling is expensive, if you choose the latest and greatest, like CAT6A or CAT7 for copper and OM3 or OM4 for fiber, it typically lasts at least 10 years and could last longer. It can stay in place for at least two, and possibly three, network equipment upgrades. How often do you think you'll need to replace your ToR switches? Probably every three to five years.
3) Cost 3: Something most data center managers haven't considered: heat. I've visited a couple of data centers that implemented ToR switching only to see that, after about a month, some ports were showing very high BERs, to the point where they would fail. What was happening was that the switch was deeper than the servers stacked below it and was trapping exhaust heat at the top of the rack, where some "questionable" patch cords were in use. That heat pushed the insertion loss of those copper patch cords out of spec, causing the bit errors.
4) Cost 4: More switch ports than you can actually use within a rack. Some people call this oversubscription, but my definition (and the industry's) of oversubscription is just the opposite, so I will not use that term. The complaint is this: cabinets are typically 42U or 48U. Each server, if you're using pizza-box servers, is 1U or 2U. You need a UPS, which is typically 2U or 4U, and your ToR switch takes up 1U or 2U. So the maximum number of servers you can realistically have in a rack is about 40, and most data centers have far fewer than this, around 30. To connect all of these servers to a ToR switch, you would need a 48-port switch, which means you're paying for eighteen ports you will most likely never use (a quick back-of-the-envelope sketch follows below). Or, as I've sometimes seen happen, data center managers want to use these extra ports, so they connect them to servers in the next cabinet, and now you have a cabling nightmare.
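Here is that port math as a quick back-of-the-envelope sketch. The rack-unit counts are illustrative assumptions for a typical cabinet, not measurements from any specific data center.

/* Illustrative rack math behind the "unused ports" complaint above. */
#include <stdio.h>

int main(void)
{
    int rack_units       = 42;  /* 42U or 48U cabinets are typical    */
    int ups_units        = 2;   /* UPS takes 2U-4U                    */
    int tor_switch_units = 1;   /* ToR switch takes 1U-2U             */
    int units_per_server = 1;   /* pizza-box servers                  */

    int max_servers     = (rack_units - ups_units - tor_switch_units) / units_per_server;
    int typical_servers = 30;   /* what most racks actually hold      */
    int switch_ports    = 48;   /* smallest ToR switch that covers it */

    printf("Theoretical max servers per rack: %d\n", max_servers);        /* 39 */
    printf("Unused switch ports in a typical rack: %d\n",
           switch_ports - typical_servers);                               /* 18 */
    return 0;
}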
So I've listed some of the downsides. In a future post, I'll give you some advantages of ToR switching.
Labels:
Ethernet,
InfiniBand
Tuesday, July 20, 2010
40/100G – A Major Shift to Parallel Optics?
by Lisa Huff
Parallel optics have been around for more than a decade – remember SNAP12 and POP4? These were small 12-fiber and four-fiber parallel-optic modules developed for telecom VSR applications. They never really caught on for Ethernet networks, though. Other than a few CWDM solutions, volume applications for datacom transceivers have been serial short-wavelength ones. At 40G, this is changing. High performance computing (HPC) centers have already adopted parallel optics at 40 and 120G using InfiniBand (IB) 4x and 12x DDR. And they are continuing this trend through their next data rate upgrades – 80 and 240G. While in the past I thought of HPC as a small, somewhat niche market, I now think this is shifting due to two major trends:
- IB technology has crossed over into 40 and 100 Gigabit Ethernet in the form of active optical cable assemblies as well as CFP and CXP modules.
- More and more medium-to-large enterprise data centers are starting to look like HPC clusters, with masses of parallel processing.
Once parallel-optics based transceivers are deployed for 40/100G networks, will we ever return to serial transmission?
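Whatever the answer, the arithmetic driving the shift is straightforward: a parallel interface's aggregate rate is just its lane count times the per-lane rate. The sketch below uses illustrative 10G-class lane rates and common lane counts; it is a rough illustration, not a reading of any specific module specification.

/* Aggregate rate = lanes x per-lane rate for a parallel optical interface. */
#include <stdio.h>

static void show(const char *form_factor, int lanes, double gbps_per_lane)
{
    printf("%-22s %2d lanes x %2.0fG = %3.0fG aggregate\n",
           form_factor, lanes, gbps_per_lane, lanes * gbps_per_lane);
}

int main(void)
{
    /* Illustrative lane counts for first-generation 40/100/120G parallel optics. */
    show("4-lane (QSFP-class)", 4, 10.0);
    show("10-lane (CFP-class)", 10, 10.0);
    show("12-lane (CXP-class)", 12, 10.0);
    return 0;
}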
Labels:
100 Gigabit,
InfiniBand,
Optical Components,
Supercomputing
Friday, July 9, 2010
Fibre Channel over Ethernet - Reducing Complexity or Adding Cost?
by David Gross
Data center servers typically have two or three network cards. Each of these adapters attaches to a different network element—one which supports storage over Fibre Channel, a second for Ethernet networking, and a third card for clustering, which typically runs over InfiniBand. At first glance, this mix of networks looks messy and duplicative, and has led to calls for a single platform that can address all of these applications on one adapter.
Five years ago, when the trend was "IP on everything", iSCSI was seen as the one protocol that could pull everything together. Today, with the trend being "Everything over Ethernet", Fibre Channel over Ethernet, or FCoE, is now hailed as the way to put multiple applications on one network. However, there is still growing momentum behind stand-alone InfiniBand and Ethernet, with little indication that the market is about to turn to a single grand network that does everything.
Stand-Alone Networks Still Growing
In spite of all the theoretical benefits of a single, "converged" network, InfiniBand, which is still thought by many to be an odd outlier of a protocol, continues to grow within its niche. According to Top500.org, the number of InfiniBand-connected CPU cores in large supercomputers grew 70% from June 2009 to June 2010, from 1.08 million to 1.8 million. QLogic (QLGC) and Voltaire (VOLT) recently announced major InfiniBand switch deployments at the University of Edinburgh and the Tokyo Institute of Technology, respectively, while Mellanox (MLNX) recently publicized a Google (GOOG) initiative that's looking at InfiniBand as a low-power way of expanding data center networks.
InfiniBand remains financially competitive because of its switch port costs. With 40 gigabit InfiniBand ports available for $400, there is a growing, not declining, incentive to deploy them.
In addition to low prices for single-protocol switch ports, another challenge facing the "converged" network is low prices for single-protocol server cards. While having multiple adapters on each server might seem wasteful, 10 Gigabit Ethernet server NICs have come down in price dramatically over the last few years, with street pricing on short-reach fiber cards dropping under $500, and prices on copper CX4 adapters falling under $400. Fibre Channel over Ethernet Converged Network Adapters, meanwhile, still cost over $1,500. The diverged network architecture, while looking terrible in vendor PowerPoints, can actually look very good in capital budgets.
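The per-adapter arithmetic is simple. The sketch below just restates the street prices cited above as ratios; the figures are rounded 2010 estimates, not vendor list prices, and actual quotes vary with volume.

/* Rough adapter price comparison using the rounded street prices cited above. */
#include <stdio.h>

int main(void)
{
    double enet_10g_sr_nic  = 500.0;   /* 10GbE short-reach fiber NIC     */
    double enet_10g_cx4_nic = 400.0;   /* 10GbE copper CX4 NIC            */
    double fcoe_cna         = 1500.0;  /* FCoE Converged Network Adapter  */

    printf("FCoE CNA vs. SR fiber NIC:   %.1fx the price\n",
           fcoe_cna / enet_10g_sr_nic);                     /* ~3.0x */
    printf("FCoE CNA vs. CX4 copper NIC: %.1fx the price\n",
           fcoe_cna / enet_10g_cx4_nic);                    /* ~3.8x */
    return 0;
}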
Data Centers are Not Labor-Intensive
In addition to capital cost considerations, many of the operational savings from combining Local Area Networks and Storage Area Networks can be difficult to achieve, because most data centers are already running at exceptionally high productivity levels. Stand-alone data centers, like those Yahoo (YHOO) and Microsoft (MSFT) are currently building in upstate New York and Iowa, cost nearly $1,000 per square foot to construct - more than a Manhattan office tower. Additionally, they employ about one operations worker for every 2,000 square feet of space, one-tenth the staffing density of a traditional office, where each worker has about 200 square feet. This also means the data center owners are spending about $2 million in capital for every person employed at the facility.
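The $2 million figure falls straight out of the two ratios above, as this quick check shows; both inputs are the round-number estimates from this post, not facility-specific data.

/* Capital per data center employee from cost/sq ft and sq ft per worker. */
#include <stdio.h>

int main(void)
{
    double build_cost_per_sqft = 1000.0;  /* ~$1,000 per square foot        */
    double sqft_per_ops_worker = 2000.0;  /* ~1 ops worker per 2,000 sq ft  */

    printf("Capital per data center employee: $%.1f million\n",
           build_cost_per_sqft * sqft_per_ops_worker / 1e6);  /* ~$2.0M */
    return 0;
}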
Among publicly traded hosting providers, many are reporting significant revenue growth without having to staff up significantly. Rackspace (RAX), for example, saw revenue increase by 18% in 2009, when it reported $629 million in sales. But it only increased its work force of “Rackers”, as the company calls its employees, by 6%. At the same time, capex remained very high at $185 million, or a lofty 29% of revenue. In the data center, labor costs are being dramatically overshadowed by capital outlays, making the potential operational savings of LAN/SAN integration a nice benefit, but not as pressing a financial requirement as improving IRRs on capital investments.
FCoE does not look like a complete bust; there are likely to be areas where it makes sense because of its flexibility, such as servers at the edge of the SAN. A lot of work has gone into making sure FCoE fits in well with existing networks, but much more effort is needed to make sure it fits in well with capital budgets.
Chief Technology Analyst Lisa Huff contributed to this article
Labels:
Fibre Channel over Ethernet,
InfiniBand,
Microsoft,
RAX,
Yahoo
Monday, July 5, 2010
ICO, not TCO
One of the most overused sales metrics in technology is "TCO", or total cost of ownership. For decades now, sales teams have been abusing financial principles in order to get their business customers to buy hardware or software products using this made-up metric. There are countless TCO models flying around Silicon Valley, and their ability to move products has more to do with sales skill than financial reality.
But in investigating products that have lasted far longer than the hype accompanying their release, one thing I've found consistently is that they don't have a low TCO, but a low incremental cost of ownership, or what I'll just call ICO, because like Washington, Silicon Valley likes three letter acronyms.
So what is ICO exactly?
Now calculating TCO often involves beating up Excel, plugging data points into macros, and ensuring that in the process, spreadsheet inputs lose all connection to operating reality. Incremental cost of ownership is a highly technical calculation that looks something like this:
Total amount of money needed to purchase product + 0 = Incremental Cost of Ownership
In looking at products and protocols that have been successful over the years, one thing stands out: not their TCO, but their ICO. From Cisco's 6500 series switches to Gigabit Ethernet, it is hard to find a widely deployed technology that does not have a reasonable incremental cost of ownership. It's important to note that this does not mean the products sell for low margins: Cisco's gross margins have been in the 60s and its net margins in the teens for years, so it's hardly scraping by. The same is true for many of its competitors and suppliers. But in spite of all its industry strength, Cisco could do little to save technologies like ATM and RPR, which promised lower TCO through collapsing network layers, but required buying switches with very high incremental costs.
If it weren't for low incremental costs, there would not be much to say about data center networks. The reason linking up inside the data center is so much more attractive than linking across traditional enterprise or carrier networks is that short-reach hardware components are dramatically cheaper. 40 Gig on a sub-20 meter InfiniBand port will only set you back $400, while a quarter of that bandwidth on a 10 kilometer metro link costs about 30 times as much per port. No clever model or PowerPoint will change this.
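Normalizing those two port prices by bandwidth makes the gap concrete. Both inputs below are the estimates quoted in this post, not a pricing survey.

/* Cost per Gbps: short-reach data center port vs. metro link port. */
#include <stdio.h>

int main(void)
{
    double ib_40g_port_cost    = 400.0;                    /* sub-20m, 40 Gbps */
    double metro_10g_port_cost = 30.0 * ib_40g_port_cost;  /* ~10 km, 10 Gbps  */

    double dc_cost_per_gbps    = ib_40g_port_cost / 40.0;
    double metro_cost_per_gbps = metro_10g_port_cost / 10.0;

    printf("Data center short-reach: $%.0f per Gbps\n", dc_cost_per_gbps);     /* $10    */
    printf("Metro link:              $%.0f per Gbps\n", metro_cost_per_gbps);  /* $1,200 */
    printf("Ratio:                   %.0fx\n",
           metro_cost_per_gbps / dc_cost_per_gbps);                            /* 120x   */
    return 0;
}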
Just as IRR accompanies ROI in most well-thought-out business cases, ICO should accompany any TCO presentation. Regardless of overall economic conditions, products that have promised future opex savings have typically lost out to those bringing high returns on invested capital today. By presenting both ICO and TCO, vendors would show customers that they are addressing the financial risks customers face in deploying networking hardware, rather than just relying on promises of future opex savings.
Labels:
40 Gigabit,
InfiniBand
Wednesday, June 30, 2010
Is the InfiniBand Bandwagon Actually Growing?
by David Gross
Looks like UK IT magazine The Register is now drinking the InfiniBand Kool-Aid. They're excited about the InfiniBand Trade Association (IBTA) roadmap to 312 Gbps, and how much faster this will be than the recently ratified 100 Gigabit Ethernet.
One of the points I've been making to people looking at the costs of these technologies is that the defining economic trait
Labels:
InfiniBand,
MLNX