Monday, November 29, 2010
Mellanox to Acquire Voltaire
By David Gross
InfiniBand IC supplier Mellanox announced today that it is acquiring Voltaire, a long-time customer and fellow Israeli InfiniBand technology developer. The acquisition price of $8.75 a share represents a more than 35% premium over Friday's close of $6.43, and is net of $42 million of cash held by Voltaire. Mellanox is financing the deal entirely out of its cash balance of $240 million.
Mellanox is down 4% on the news to $24 a share, while Voltaire is up 34% to $8.65, leaving very limited room for risk arbitrage on the deal.
While both companies have gotten into the Ethernet market over the last two years, the deal only makes sense in the context of InfiniBand, which, as a niche technology, does not offer a chip supplier billions of ports over which to amortize development costs. Mellanox already offers both ICs and adapter cards. Moreover, and very importantly, InfiniBand switches are low-cost, low-memory, high-performance boxes with stripped-down operating systems and forwarding tables. As a result, the intellectual property that router and Ethernet switch makers put into system design and network operating systems is less valuable here. The message acceleration software and management tools associated with InfiniBand devices require far less R&D than new ASICs or network operating systems for high-end, modular Ethernet switches.
What's likely to happen here is that Wall Street will do its usual fretting over whether the proposed operating cost reductions, which in this case are $10 million, will be achieved, whether the price is reasonable, and what customers will think. Additionally, at least 30 hedge fund managers are likely to ask the same questions about the strategic impact of owning switches and about InfiniBand vs. Ethernet, and will seek "more color" on how the integration is going. But none of this will really matter. The key to success here will be the extent to which the new company focuses on InfiniBand. Outside of bridging products and maybe 40G NICs, the new company needs to stay out of the Ethernet market, which already has enough suppliers, and treat Fibre Channel-over-Ethernet as the toxic technology it has already proven to be for Brocade.
Monday, October 25, 2010
Xsigo Releasing 40 Gigabit Directors in December
By David Gross
I/O virtualization vendor Xsigo recently announced that it is releasing 40 Gigabit QDR InfiniBand Directors later this year. This is not only a big upgrade from the company's existing 10 Gigabit products, but it also creates a significant bandwidth gap with the Fibre Channel-over-Ethernet products it is competing against.
The business justification for these devices is linked to two developments: server virtualization and multi-protocol networking. With server virtualization pushing up bits transmitted and received per server, there is a greater need to bring more capacity directly to the server, and with some of those bits going to storage, and others to the LAN, there is also a need to pull traffic together onto one network. (I just can't say the word "convergence" without cringing, having seen far too many attempts to "converge networks" fail.)
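To make that capacity argument concrete, here is a rough back-of-the-envelope sketch in Python; the consolidation ratio and per-VM traffic figures are hypothetical assumptions for illustration, not Xsigo or Mellanox data.

```python
# Illustrative only: hypothetical traffic from a virtualized server whose
# LAN and storage flows are consolidated onto a single high-speed link.
VMS_PER_SERVER = 20        # assumed consolidation ratio
LAN_GBPS_PER_VM = 0.5      # assumed LAN traffic per VM
STORAGE_GBPS_PER_VM = 0.4  # assumed storage traffic per VM

total_gbps = VMS_PER_SERVER * (LAN_GBPS_PER_VM + STORAGE_GBPS_PER_VM)
print(f"Aggregate server I/O: {total_gbps:.1f} Gbps")  # 18.0 Gbps
print("Fits a single 10G link:", total_gbps <= 10)     # False
print("Fits a single 40G link:", total_gbps <= 40)     # True
```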
Xsigo is using Mellanox silicon in its Directors, and the product represents an important attempt to move 40 Gigabit InfiniBand beyond the supercomputing cluster. While there are other efforts, such as RoCE, to advance InfiniBand in the enterprise data center, most involve either competing with Ethernet on price/performance or placing InfiniBand transmissions into Ethernet frames. This, however, is pure InfiniBand feeding a multi-protocol network for a specific application, I/O virtualization, that cannot be done today with 40 Gigabit Ethernet.
Labels:
InfiniBand,
MLNX,
Virtual I/O
Friday, October 22, 2010
Mellanox Revenue Up 16%, Continues to Advance 40 Gigabit in Data Centers
By David Gross
Mellanox (MLNX) reported quarterly revenue of $37.8 million Wednesday night, up 16% year-over-year, but down 5% sequentially, and slightly ahead of guidance of $37-$37.5 million.
The stock lost about a third of its value the day after its second quarter call in July, when it announced revenue would drop sequentially. Since then, the stock has rebounded 30%, making up most of the July losses.
The company has had a very strong balance sheet since going public in 2007, and it finished the quarter with $248 million of cash, up $38 million since the end of 2009, and with no debt.
Mellanox has been experiencing a bit of an awkward shift to mixing InfiniBand and Ethernet products, after having long been the leader in InfiniBand silicon. Having gotten a significant share of its revenue from 40 Gigabit QDR InfiniBand for the last two years, it has now crossed over to 40 Gigabit Ethernet, most notably with its ConnectX-2 EN 40G, which was the first 40 Gigabit Ethernet Server NIC. The company has also worked with the InfiniBand Trade Association to develop RoCE, or RDMA over Converged Ethernet, which is basically InfiniBand over Ethernet.
Traditionally a silicon vendor, Mellanox has also been transitioning to NICs and to silicon for LAN-on-Motherboard (LOM) designs. But these shifts have gone back and forth over the last couple of quarters, and silicon shipments replacing higher-revenue adapters is one reason why revenue was down sequentially. While the company has not always been clear on the extent to which it plans to focus on silicon vs. cards, I increasingly think its strong position in both supercomputing clusters and the rapidly developing market for 40 Gigabit networks should help restart its growth in 2011.
Labels:
40 Gigabit,
InfiniBand,
MLNX
Tuesday, October 19, 2010
Mellanox 40 Gigabit InfiniBand Switch Gets a Boost from IBM
By David Gross
There's been lots of news this past week on 40 Gigabit Ethernet Top-of-Rack switches. Not to be left out, 40 Gigabit InfiniBand is also making news, with IBM (IBM) choosing Mellanox's (MLNX) IS5000 InfiniBand switch silicon for its iDataPlex and Intelligent Cluster platforms.
iDataPlex is often used in HPC environments, and this announcement is a strong endorsement for advancing 40 Gigabit InfiniBand in supercomputing clusters. The platform incorporates both GPUs and CPUs, and IBM previously added QLogic's (QLGC) 12000-series of 40 Gigabit QDR InfiniBand switches to its Intelligent Cluster package.
While list pricing for the 40 Gigabit Ethernet ToR switches is between $1,000 and $1,500 per port, street pricing for QLogic's 40 Gigabit InfiniBand switch is less than $300 per port. While both the Ethernet and InfiniBand switches use QSFP transceivers, the price difference is likely the result of density - with the 12000-series supporting 2.88 Terabits per second send and receive across 36 40G ports, compared to 1.2 Tbps and a maximum of four 40G ports on BLADE's recently announced RackSwitch G8264. Additionally, the 12000 series comes as-is, with very few configuration options. However, with a little more production volume, the Ethernet ports should begin to creep down into the three figures as well. I would also expect the applications for each to remain different, with the ToR switches serving the enterprise data center market and the InfiniBand switches primarily going into supercomputing clusters. In either case, we're still talking about very short-reach links; 40 Gigabit links in telco networks still cost over 1,000x as much.
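Working through the figures quoted above (the arithmetic is purely illustrative and uses the list and street prices cited in the post):

```python
# Per-port density and rough price-per-Gbps, using the figures cited above.
PORT_GBPS = 40

IB_PORTS, IB_PRICE_PER_PORT = 36, 300  # QLogic 12000-series: 36 x 40G, <$300/port street
ETH_PRICE_PER_PORT = 1250              # midpoint of the $1,000-$1,500 ToR list range

ib_aggregate_tbps = IB_PORTS * PORT_GBPS * 2 / 1000  # send + receive
print(f"12000-series aggregate: {ib_aggregate_tbps:.2f} Tbps")          # 2.88 Tbps
print(f"InfiniBand $/Gbps:      {IB_PRICE_PER_PORT / PORT_GBPS:.2f}")   # ~7.5
print(f"40G Ethernet $/Gbps:    {ETH_PRICE_PER_PORT / PORT_GBPS:.2f}")  # ~31
```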
Labels:
40 Gigabit,
InfiniBand,
MLNX
Tuesday, October 5, 2010
InfiniBand vs. Ethernet in the Data Center
By David Gross
One of the most interesting topics in data networking over the last few years has been the persistence of Fibre Channel and InfiniBand. Despite cries that both will disappear, neither has faded like ATM, FDDI, and Token Ring did in the 90s.
The case for stand-alone Fibre Channel (not Fibre Channel-over-Ethernet) remains very strong, especially with progress continuing on the 16G standard. With the ability to handle block transfers of over 130 megabytes, not to mention a loyal base of SAN managers, it remains better suited than Ethernet for many non-attached storage applications. However, the case for InfiniBand still seems unclear to many, including some in the InfiniBand Trade Association, which developed RoCE, or RDMA over Converged Ethernet (essentially InfiniBand over Ethernet), out of fear that InfiniBand could struggle to survive on its own.
The fact that so many people compare InfiniBand to Ethernet ignores an important point - InfiniBand's growth has had little to do with taking out Ethernet, but rather with taking share from proprietary and specialized supercomputing interconnects like Myrinet and Quadrics. Over the last five years, Gigabit Ethernet's share of interconnects on the Top500 supercomputers has actually increased slightly, from 214 to 227 systems, according to Top500.org. While InfiniBand has soared from just 16 to 210 systems over the same period, it has mostly come at the expense of Myrinet, which has dropped from 139 to 4. Quadrics, a former supercomputing favorite, went out of business last year. And even with the decline of proprietary interconnects, just one of the top 100 supercomputers is now using Ethernet.
Additionally, while there is a lot of talk about latency and bandwidth, there is another key metric supercomputing networks are built around - the MPI message passing rate. Mellanox (MLNX) recently announced it had the capability to transmit more than 90 million MPI messages per second. In addition to low-cost 40 Gigabit ports, one reason why InfiniBand has such low latency is that the protocol's own message size can be cut down to 256 bytes, and most supercomputers achieve high performance by breaking up requests into tiny fragments across multiple nodes. This InfiniBand benefit stands in direct contrast to the large block transfers Fibre Channel provides SANs, which have preserved that protocol's strength in storage.
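As a quick sketch of what a message rate like that implies, the arithmetic below uses the 90 million messages/second figure from the announcement and the 256-byte message size discussed above; the 1 MB request is a hypothetical example.

```python
import math

MSG_RATE = 90e6          # MPI messages per second, as cited above
MSG_SIZE = 256           # bytes, the small InfiniBand message size discussed above
REQUEST_BYTES = 1 << 20  # hypothetical 1 MB request fragmented across nodes

time_per_msg_ns = 1e9 / MSG_RATE
fragments = math.ceil(REQUEST_BYTES / MSG_SIZE)
print(f"Amortized time budget per message: {time_per_msg_ns:.1f} ns")  # ~11.1 ns
print(f"256-byte fragments in a 1 MB request: {fragments}")            # 4096
```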
Another reason why InfiniBand has offered such great price/performance in supercomputing, but is almost nonexistent in enterprise data centers, is its use of local addressing. InfiniBand switches frames based on a 16-bit local ID, with the expectation that they are not leaving the cluster. It is ultimately an I/O technology, not a networking technology. Ethernet, on the other hand, uses global 48-bit MAC addressing, and many of the frames coming in and out of data center servers are heading to or coming from the public Internet. While InfiniBand has a layer 3 global ID as well, it is built right into the InfiniBand stack; Ethernet's layer 3 forwarding has to be handled by IP, which means buying an expensive router, something that is hard to justify if you've got a high-performance network but are not sending much traffic out to the public network. And like supercomputing clusters, financial traders are not using InfiniBand to connect their web servers to public networks, but rather for private networks. Where those same financial firms need to connect public-facing web servers within data centers, they use Ethernet.
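A minimal sketch of the forwarding-state difference, assuming a flat table indexed by the 16-bit local ID on the InfiniBand side and a learned, associative MAC table on the Ethernet side; this is conceptual, not any vendor's implementation.

```python
# Conceptual sketch: a 16-bit local ID allows a flat, pre-computed forwarding
# table, while 48-bit global MACs are learned into an associative table.

# InfiniBand-style: table indexed directly by destination LID (0..65535),
# programmed centrally for traffic that stays inside the cluster.
lid_table = [0] * 65536
lid_table[0x0012] = 7           # LID 0x0012 exits port 7

def ib_forward(dest_lid: int) -> int:
    return lid_table[dest_lid]  # O(1) index, no learning, no flooding

# Ethernet-style: MAC table learned from observed traffic, keyed on 48 bits.
mac_table = {0x001B21A0B0C0: 3}

def eth_forward(dest_mac: int):
    return mac_table.get(dest_mac)  # a miss would trigger flooding (not shown)

print(ib_forward(0x0012), eth_forward(0x001B21A0B0C0))  # 7 3
```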
Comparing InfiniBand to Ethernet can create interesting debates, but it's mostly a theoretical argument, because it's not a decision many data center managers face now or will face in the future. InfiniBand is an I/O technology mostly serving high-end supercomputing clusters and trading networks, both of which are expanding and promise further growth for the technology. Ethernet is the dominant LAN switching technology in the data center, and no one is seriously talking about replacing it with InfiniBand. Therefore, when looking at the data center networking and supercomputing interconnect market simultaneously, it makes more sense to think about Ethernet and InfiniBand rather than Ethernet vs. InfiniBand.
Labels:
InfiniBand,
MLNX
Tuesday, September 28, 2010
Mellanox Up 35% Since Guiding Down on its Last Call
By David Gross
Buoyed by Oracle takeover rumors, Mellanox (MLNX) is coming back from the big hit it took earlier this quarter when it announced earnings and issued weak guidance. It closed yesterday at $19.86, up 35% from its July 22 close of $15.49, a day on which the stock fell more than seven dollars from $22.94.
While I still think the company needs a clearer growth strategy, it remains reasonably valued compared to many data center high fliers, with an enterprise value/annualized revenue ratio of 2.75, and a pristine balance sheet with $230 million of cash and no debt.
When Sun strongly endorsed InfiniBand, no one seemed to care, and the conventional wisdom about Ethernet eventually taking over InfiniBand applications was still going strong. But now that the same architecture and same product lines are under Oracle's ownership, the endorsement is being taken much more seriously, even though the substance is no different - InfiniBand's share of Top500 supercomputers has continued to grow at the same pace, and now tops 40%.
I've long argued that Mellanox needs to be a little bolder about associating itself with InfiniBand, because it dominates the IC market in that niche, while it gets lost in a large crowd in the market for Ethernet cards, in spite of trying to get ahead of everyone else with 40 Gigabit Ethernet server NICs. Maybe now that there is growing appreciation for InfiniBand's role in high-speed networking, Mellanox won't be so shy about declaring its leadership in this technology.
Labels:
InfiniBand,
MLNX
Monday, September 27, 2010
PLX Technology Acquires Teranetics for $36 Million
PLX Technology (PLXT), a leading supplier of PCI Express silicon, is acquiring privately held 10GBASE-T chip designer Teranetics for $36 million. While this move solidifies PLXT's presence as a data center networking chip supplier, it is not a great exit for Teranetics' investors. The company has raised $61 million since 2003, and, along with Solarflare and Aquantia, makes up the leading triumvirate of 10GBASE-T chip suppliers. While 10GBASE-T power consumption per port has dropped as low as 3 watts as production has shifted to smaller process geometries, the technology is still advancing at a slow pace in the data center.
Teranetics' key customers include Mellanox (MLNX), to which it supplies the PHY for that vendor's 10GBASE-T LOM product. PLXT reported revenues of $30 million last quarter and has a market cap of $139 million.
Thursday, September 2, 2010
New TACC Supercomputer Running 40G InfiniBand
by David Gross
The Texas Advanced Computing Center is expanding its Lonestar Supercomputing Cluster with a new system that will feature 22,656 compute node cores, up from 5,840 in the existing system, connected with a 40G Mellanox (MLNX) InfiniBand network that uses a fat-tree topology.
The new system will feature:
* 302 teraflops peak performance vs. 63 teraflops on the existing Lonestar
* 44.3 terabytes total memory vs. 11.6 TB on the existing system
* 1.2 petabytes raw disk vs. 106 TB on the existing system
* 90 million CPU hours/year to TeraGrid
What's interesting here is that processing capacity, memory, and disk space are all rising faster than maximum I/O speed, which has risen a fairly sluggish 4-fold in the last eight years. This suggests that the I/O bottleneck is likely to become an even larger problem in supercomputing as time goes on.
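Putting the upgrade figures above into growth multiples (numbers taken straight from the bullet list; the I/O comparison uses the post's eight-year, roughly 4x figure):

```python
# Growth multiples for the new Lonestar system vs. the existing one.
ratios = {
    "compute node cores":           22656 / 5840,
    "peak performance (teraflops)": 302 / 63,
    "total memory (TB)":            44.3 / 11.6,
    "raw disk (TB)":                1200 / 106,
}
for metric, multiple in ratios.items():
    print(f"{metric:30s} {multiple:5.1f}x")
print("vs. roughly 4x growth in maximum I/O speed over eight years")
```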
Labels:
InfiniBand,
MLNX,
Supercomputing
Monday, August 2, 2010
Revenue Growth Slowing for Data Center Technologists
While data centers continue to defy the economy, revenue growth is clearly slowing down for the technology suppliers. Sequential rates, annualized, have been much lower than year-over-year growth rates, even though the second quarter typically gets a rebound off the first-quarter lull. Mellanox (MLNX) reported strong sequential growth but guided down for the next quarter, and VMware (VMW), which is majority-owned by EMC (EMC), stated that license revenue would be flat.
I'll have another post with the service providers after Rackspace (RAX) reports, but the trend looks similar there - with growth continuing, but at a slowing pace.
        Y/Y Rev   Sequential   Sequential
        Growth    Rev Growth   Inventory Growth
MLNX      58%        10%           -3%
VOLT      54%         7%          -14%
QLGC      16%        -2%           26%
FFIV      46%        12%            8%
EMC       24%         3%           -4%
Friday, July 23, 2010
Does Mellanox Know How it Will Grow?
by David Gross
This week, Mellanox (MLNX) joined the list of data center hardware suppliers reporting double digit year-over-year revenue growth. Its top line grew 58% annually to $40 million this past quarter. However, unlike F5 Networks (FFIV) and EMC (EMC), which both guided up for their next reporting periods, Mellanox announced that it expected revenue to decline about 7% sequentially in the third quarter. While it claimed things should turn around in the 4th quarter, the stock was down 25% soon after the announcement.
The company claims that the reason for the temporary decline is a product shift to silicon and away from boards and host channel adapters. This represents a sharp reversal of the trend it saw for much of 2009, when adapter revenue grew while silicon revenue dropped. Given the timing of its 40/100 Gigabit and LAN-on-Motherboard product cycles, this trend could keep going back and forth in the future, which has a big impact on revenue because individual adapters sell for roughly ten times the price of individual semiconductors.
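A sketch of why that mix shift matters so much for the top line, assuming the roughly 10:1 adapter-to-chip price ratio mentioned above; the specific prices, volumes, and mix percentages are hypothetical.

```python
# Hypothetical illustration of the adapter-vs-silicon mix effect on revenue.
CHIP_ASP = 50         # assumed average selling price per chip
ADAPTER_ASP = 500     # roughly 10x the chip price, as noted above
UNITS = 100_000       # same total unit volume in both scenarios

def revenue(adapter_share: float) -> float:
    adapters = UNITS * adapter_share
    chips = UNITS - adapters
    return adapters * ADAPTER_ASP + chips * CHIP_ASP

adapter_heavy = revenue(0.60)   # adapter-heavy mix
silicon_heavy = revenue(0.45)   # mix shifts toward silicon
print(f"Adapter-heavy mix: ${adapter_heavy:,.0f}")
print(f"Silicon-heavy mix: ${silicon_heavy:,.0f}")
print(f"Revenue change:    {silicon_heavy / adapter_heavy - 1:+.1%}")  # about -21%
```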
While investors reacted swiftly to the revenue announcement, a bigger concern is not the shifting revenue among product components, but the company's apparent lack of faith in the InfiniBand market it dominates. InfiniBand is a growing, high-end niche technology. Among the world's 500 largest supercomputers, 42% use InfiniBand as their interconnect between server nodes. Two years ago, only 24% did, according to Top500.org. But InfiniBand's success in supercomputing has yet to translate into major wins in traditional data centers, where it runs into the mass of existing Ethernet switches.
The company's response, developed in conjunction with the InfiniBand Trade Association it is heavily involved with, has been to develop RDMA over Converged Ethernet, or RoCE, pronounced like the character Sylvester Stallone played in the 70s and 80s. In many respects, RoCE is InfiniBand-over-Ethernet: it uses InfiniBand networking technology but slides it into Ethernet frames. Traditionally, pricing for one link technology carried over another has not been competitive, because low-volume multi-protocol boards require more silicon and design work than single-protocol equivalents. This has been seen in the more widely promoted Fibre Channel-over-Ethernet, where Converged Network Adapters based on that technology are still selling for about three times the price of standard 10 Gigabit Ethernet server NICs.
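A conceptual sketch of that encapsulation; the header sizes follow the standard InfiniBand transport definitions, but this is only an illustration of the layering, not a working RoCE implementation (the Ethernet FCS is not counted).

```python
# Conceptual RoCE-style frame: InfiniBand transport headers carried inside an
# Ethernet frame, in place of the InfiniBand link-layer header.
ETH_HEADER = 14  # destination MAC (6) + source MAC (6) + EtherType (2)
GRH = 40         # InfiniBand Global Route Header
BTH = 12         # InfiniBand Base Transport Header (queue pair, opcode, PSN)
ICRC = 4         # invariant CRC carried over from InfiniBand

def roce_frame_bytes(payload: int) -> int:
    return ETH_HEADER + GRH + BTH + payload + ICRC

print(roce_frame_bytes(256))  # 326 bytes on the wire for a 256-byte payload
```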
By investing in RoCE, Mellanox is basically saying InfiniBand will not be able to stimulate demand on its own in the data center, even though it offers remarkable price/performance in supercomputing clusters. Yet there is still plenty of opportunity for InfiniBand to have an impact as IT managers begin to look at 40 Gigabit alternatives. And just as the company cannot seem to figure out whether growth will come from adapter cards or silicon, it's now going against its push for RoCE by touting a Google engineering presentation that highlighted the benefits of running pure InfiniBand in a data center network.
Mellanox has long been a high margin company with a dominant position in a niche technology, and its strong balance sheet reflects this heritage. And the stock's recent pummeling has sent the company's valuation down to just 2.5 times cash. But by sending out so many conflicting messages about chips vs. cards and InfiniBand vs. Ethernet, the question is not whether investors have confidence in the company's growth plans, but whether management does.
Labels:
EMC,
FFIV,
InfiniBand,
MLNX
Thursday, July 22, 2010
QLogic Revenue Up 16% Year-over-Year, But Reduces Next Qtr Guidance
by David Gross
QLogic (QLGC) had a strong quarter, growing its top line 16% from a year ago to $143 million. However, it's getting hit in after-hours trading, as it took revenue guidance for next quarter down from $150 million to a range of $143 million to $147 million. It's not getting beaten as badly as rival Mellanox (MLNX), which fell 32% today after guiding for a 7% sequential revenue decline, but it's down about 11% on the news from the earnings call.
Wednesday, July 21, 2010
Mellanox Revenue Up 58% Year-over-Year, F5 Up 46% (Updated)
Mellanox (MLNX) and F5 (FFIV), both dependent on data centers and HPC, reported strong revenue growth today, with Mellanox's top line up 10% sequentially and 58% year-over-year, and F5 up 12% sequentially and 46% y/y. But the similarities end there. Mellanox guided for a 7% revenue decline next quarter due to product mix, while F5 guided up, 5% above current estimates. After-hours trading reflected these two very different outlooks for the upcoming quarter.
Mellanox blamed a shift in product mix from boards to chips, as it ramps up volumes of its silicon for LAN-on-Motherboard cards, which sell for far less than its InfiniBand HCAs. Still, InfiniBand's share of Top500 supercomputing interconnects is rising, from 30% in June 2009 to 36% in November 2009 to 42% in June 2010. Whether this business is all going to QLogic (QLGC) is something we'll find out tomorrow, when that supplier reports.
Wednesday, June 30, 2010
Is the InfiniBand Bandwagon Actually Growing?
by David Gross
Looks like UK IT magazine The Register is now drinking the InfiniBand Kool-Aid. They're excited about the InfiniBand Trade Association (IBTA) roadmap to 312 Gbps, and how much faster this will be than the recently ratified 100 Gigabit Ethernet.
One of the points I've been making to people looking at the costs of these technologies is that the defining economic trait...
Labels:
InfiniBand,
MLNX
Tuesday, June 29, 2010
Google Network Architects Endorse InfiniBand
I've made the point in a few recent posts that InfiniBand is sticking around, and hopes that it will fade away like Token Ring, FDDI, or ATM are very much misplaced. Now in addition to the financial traders and supercomputing centers who've shown strong support for the technology, Google has released a white paper on how to...
Labels:
Google,
InfiniBand,
MLNX
Monday, June 28, 2010
Will 100 Gigabit Create New Data Center Networking Startups? (Part 2)
In the earlier post on 100 Gigabit networks, I discussed how the shift from designing in ASICs to FPGAs, while seemingly technical, is having major business implications for data center networkers. In this post, I look at the impact on FPGA developers.
With FPGAs, Designers to Replace an Increasing Fixed Cost with an Ongoing Variable Cost
As ASICs become more challenging economically, designers are increasingly turning to FPGAs to handle high-speed packet processing, which is an important development for Xilinx (XLNX) as the leading developer of FPGAs. By trading out the ASIC design cost, and replacing it with a royalty fee for an FPGA, circuit board and system designers are taking out a large capital expenditure and replacing it with an ongoing operating expense, which is the typical outsourcing economic equation. The increase in outsourcing is helping not just networkers, but their chip suppliers, maintain some staying power, an important consideration for data center managers concerned about suppliers' financial viability. Xilinx, for example, has been reporting strong gross margins, close to 65%, and the company has bounced around the 60s for most of the last decade. While this is lower than some other chip makers, including Mellanox (MLNX), Xilinx has made up for it by keeping R&D expenses to just 20% of revenue, compared to 30% of revenue at Mellanox. As a result, it has been able to post consistent net margins of around 20%. Rival Altera (ALTR) has posted similar margins, although its revenue remains about 20% lower than Xilinx's. One factor behind Xilinx's lower gross margins is that inventory turns for the FPGA vendors are typically 5, which is not much higher than optical components suppliers. While other chip makers simply order wafers that ship to the fab suppliers with whom they have contracted, Xilinx and Altera must stock additional inventory for their distributors because their products require additional programmability before final use.
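The fixed-versus-variable trade can be framed as a simple break-even calculation; the cost figures below are hypothetical, chosen only to show the shape of the math rather than actual ASIC NRE or Xilinx/Altera pricing.

```python
# Hypothetical break-even between an ASIC (large NRE, cheap units) and an
# FPGA (no NRE, pricier units).
ASIC_NRE = 20_000_000   # assumed non-recurring engineering cost
ASIC_UNIT_COST = 50     # assumed per-chip cost
FPGA_UNIT_COST = 400    # assumed per-device cost

break_even = ASIC_NRE / (FPGA_UNIT_COST - ASIC_UNIT_COST)
print(f"ASIC pays off only above ~{break_even:,.0f} units")  # ~57,143

for volume in (10_000, 50_000, 200_000):
    asic_total = ASIC_NRE + volume * ASIC_UNIT_COST
    fpga_total = volume * FPGA_UNIT_COST
    print(f"{volume:>7,} units: cheaper option is "
          f"{'ASIC' if asic_total < fpga_total else 'FPGA'}")
```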
Labels:
100 Gigabit,
ALTR,
MLNX,
XLNX