Friday, July 9, 2010

Fibre Channel over Ethernet - Reducing Complexity or Adding Cost?

by David Gross

Data center servers typically have two or three network cards, each attached to a different network: one for storage over Fibre Channel, a second for Ethernet data networking, and a third for clustering, which typically runs over InfiniBand. At first glance, this mix of networks looks messy and duplicative, and it has led to calls for a single platform that can address all of these applications on one adapter.

Five years ago, when the trend was "IP on everything", iSCSI was seen as the one protocol that could pull everything together. Today, with the trend being "Everything over Ethernet", Fibre Channel over Ethernet, or FCoE, is hailed as the way to put multiple applications on one network. However, there is still growing momentum behind stand-alone InfiniBand and Ethernet, with little indication that the market is about to turn to a single grand network that does everything.

Stand-Alone Networks Still Growing
In spite of all the theoretical benefits of a single, "converged" network, InfiniBand, which many still regard as an odd outlier of a protocol, continues to grow within its niche. According to Top500.org, the number of InfiniBand-connected CPU cores in large supercomputers grew nearly 70% from June 2009 to June 2010, from 1.08 million to 1.8 million. QLogic (QLGC) and Voltaire (VOLT) recently announced major InfiniBand switch deployments at the University of Edinburgh and the Tokyo Institute of Technology, respectively, while Mellanox (MLNX) recently publicized a Google (GOOG) initiative that is looking at InfiniBand as a low-power way of expanding data center networks.

InfiniBand remains financially competitive because of its low switch port costs. With 40 gigabit InfiniBand ports available for $400, there is a growing, not declining, incentive to deploy them.

In addition to low prices for single-protocol switch ports, another challenge facing the "converged" network is low prices for single-protocol server cards. While having multiple adapters on each server might seem wasteful, 10 Gigabit Ethernet server NICs have come down in price dramatically over the last few years, with street pricing on short-reach fiber cards dropping under $500, and prices on copper CX4 adapters falling under $400. Fibre Channel over Ethernet Converged Network Adapters, meanwhile, still cost over $1,500. The diverged network architecture, while looking terrible in vendor PowerPoints, can actually look very good in capital budgets.
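
To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python comparing per-server adapter capital under the two designs. The 10GbE NIC and CNA figures are the street prices quoted above; the stand-alone Fibre Channel HBA price is an illustrative assumption, not a figure from this article.

    # Adapter capital per server: diverged (separate NIC + HBA) vs.
    # converged (one FCoE CNA). The HBA price is an assumed placeholder.
    NIC_10GBE_CX4 = 400   # copper CX4 10GbE NIC, street price
    FC_HBA = 800          # assumed Fibre Channel HBA price (illustrative)
    FCOE_CNA = 1500       # FCoE Converged Network Adapter

    diverged = NIC_10GBE_CX4 + FC_HBA   # separate LAN and SAN adapters
    converged = FCOE_CNA                # one CNA carrying both
    print("Diverged adapters per server: $%d" % diverged)
    print("Converged adapter per server: $%d" % converged)
    print("Premium for converging:       $%d" % (converged - diverged))

Under these assumptions, convergence carries a premium of several hundred dollars per server, which is exactly the capital-budget gap described above; the conclusion flips only if CNA prices fall faster than NIC prices.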

Data Centers Are Not Labor-Intensive
In addition to capital cost considerations, many of the operational savings from combining Local Area Networks and Storage Area Networks can be difficult to achieve, because most data centers are already running at exceptionally high productivity levels. Stand-alone data centers, like those Yahoo (YHOO) and Microsoft (MSFT) are currently building in upstate New York and Iowa, cost nearly $1,000 per square foot to construct, more than a Manhattan office tower. Additionally, they employ about one operations worker for every 2,000 square feet of space, one-tenth the staffing density of a traditional office, where each worker has about 200 square feet. This also means the data center owners are spending about $2 million in capital for every person employed at the facility.
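
The $2 million figure falls straight out of the two stats above, as this quick sketch shows:

    # Capital per employee = construction cost per sq ft x sq ft per worker
    cost_per_sqft = 1000      # construction cost, dollars per square foot
    sqft_per_worker = 2000    # one operations worker per 2,000 sq ft

    capital_per_worker = cost_per_sqft * sqft_per_worker
    print("Capital per employee: $%s" % format(capital_per_worker, ","))
    # -> Capital per employee: $2,000,000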

Among publicly traded hosting providers, many are reporting significant revenue growth without having to staff up significantly. Rackspace (RAX), for example, saw revenue increase by 18% in 2009, when it reported $629 million in sales, but it increased its work force of “Rackers”, as the company calls its employees, by only 6%. At the same time, capex remained very high at $185 million, a lofty 29% of revenue. In the data center, labor costs are being dramatically overshadowed by capital outlays, making the potential operational savings of LAN/SAN integration a nice benefit, but not as pressing a financial requirement as improving IRRs on capital investments.
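
The same point in arithmetic form, using only the Rackspace figures reported above:

    # Rackspace, 2009: capital intensity dwarfs the pace of hiring.
    revenue = 629.0   # 2009 revenue, $ millions
    capex = 185.0     # 2009 capital expenditures, $ millions

    print("Capex as a share of revenue: %.0f%%" % (100 * capex / revenue))
    # -> Capex as a share of revenue: 29%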

FCoE does not look like a complete bust; there are likely to be areas where it makes sense because of its flexibility, such as servers on the edge of the SAN. A lot of work has gone into making sure FCoE fits in well with existing networks, but much more effort is needed to make sure it fits in well with capital budgets.

Chief Technology Analyst Lisa Huff contributed to this article.
