Monday, August 30, 2010

How the 40/100G Ethernet Shift to Parallel Optics Affects Data Center Cabling

by Lisa Huff

Most data centers are cabled with at least Category 5e and some MMF. To upgrade to 10G, data center managers need to either test their entire installed base of Category 5e to make sure it is 10G-worthy or replace it with Category 6A or 7. And their MMF should be of at least the OM3 (2000 MHz.km) variety, or the 300m optical reach is in question – unless they want to use 10GBASE-LX4 or LRM modules, which cost about 10x the price of 10GBASE-SR devices. But what happens when you want to look beyond 10G to the next upgrade?

Last month I talked about how 40/100G brings a shift to parallel optics. Unlike today’s two-fiber configurations, with one send and one receive, the standards for 40G and 100G Ethernet specify multiple parallel 10G connections that are aggregated. 40GBASE-SR4 will use four 10G fibers to send and four 10G fibers to receive, while 100GBASE-SR10 will use ten 10G fibers in each direction.
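To put rough numbers on the fiber count, here is a quick back-of-the-envelope sketch (Python, purely illustrative). The lane counts come from the interfaces named above; the assumption that trunks are built from 12-fiber MPO connectors matches common pre-terminated assemblies.

```python
# Fiber-count arithmetic for the parallel-optics interfaces above.
# Illustrative only; each lane runs at 10G, one fiber per direction.

def fibers_required(lanes_per_direction):
    return 2 * lanes_per_direction  # send fibers + receive fibers

for name, lanes in [("40GBASE-SR4", 4), ("100GBASE-SR10", 10)]:
    total = fibers_required(lanes)
    trunks = -(-total // 12)  # ceiling division: 12-fiber MPO trunks needed
    print(f"{name}: {total} fibers -> at least {trunks} x 12-fiber MPO")

# 40GBASE-SR4: 8 fibers (one 12-fiber MPO, 4 positions unused)
# 100GBASE-SR10: 20 fibers (two 12-fiber MPOs, or a single 24-fiber MPO)
```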

What this means to the data center operator is that they may need to install new cable – unless they’ve started to install pre-terminated fiber assemblies using 12-position MPO connectors, which can be re-used if polarity is chosen carefully. Polarity is the term the TIA-568 standard uses to describe how a multi-fiber cable is wired so that each transmitter is connected to a receiver on the other end.
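As a toy illustration of what polarity has to guarantee (not any of the standardized TIA-568 methods, just the basic requirement), consider a sketch like the following, where the position assignments and the "flip" are hypothetical:

```python
# Toy polarity check: across a multi-fiber trunk, every transmit position on
# one end must land on a receive position on the other end. The position
# assignments and the flipped mapping below are hypothetical, for illustration.

TX_POSITIONS = set(range(1, 7))    # say positions 1-6 carry transmitters
RX_POSITIONS = set(range(7, 13))   # and positions 7-12 carry receivers

def far_end_position(p, flipped=True):
    # One possible trunk wiring: position p arrives at position 13 - p.
    return 13 - p if flipped else p

# With the flip, every transmitter lands on a receiver; without it, they collide.
assert all(far_end_position(p) in RX_POSITIONS for p in TX_POSITIONS)
assert not all(far_end_position(p, flipped=False) in RX_POSITIONS for p in TX_POSITIONS)
```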

There are three polarity methods defined in the TIA standard and each has its advantages and disadvantages, but only two of the three will allow you to easily reuse your installed pre-term assemblies for 40/100G – methods A or B. I’ll explain why in my subsequent posts.

Friday, August 27, 2010

Fiber vs. Copper in the Data Center

by Lisa Huff

Fiber proponents have been saying that the end of copper is near for decades now, only to be proven wrong time and time again. And, at 10G, some were so bold as to say that copper will die because it burns too much power. Well, I listened to their argument that at 8 to 14W per port, copper just won’t cut it. But, now that the 10GBASE-T chip manufacturers have reduced their power consumption to less than 3W per port, 10GBASE-T is a viable data center networking solution. Actually, even at 14W per port, it was viable, just not practical for switch manufacturers to incorporate in their designs because they couldn’t get the port density they needed due to cooling issues. Now that doesn’t seem to be an issue, as evidenced by the 24-port configurations that have been released by all the major players.
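To see why the per-port number mattered so much for density, a little arithmetic helps (the per-port watts are the figures quoted above; the 24-port count matches the configurations now shipping):

```python
# PHY power per 24-port blade at the old and new 10GBASE-T per-port figures.

ports_per_blade = 24
for label, watts_per_port in [("early 10GBASE-T PHYs", 14.0), ("current sub-3W PHYs", 3.0)]:
    print(f"{label}: {ports_per_blade * watts_per_port:.0f} W of PHY power per blade")

# Roughly 336 W vs 72 W before counting the switching silicon itself -
# which is why high-density 10GBASE-T line cards were a cooling problem until now.
```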


What this means is that twisted-pair copper will have its place in data center networks for many years to come – especially based on our recent cost analysis detailed in our new report Data Center Cabling Cost Analysis - Copper Still Has Its Place. It compares 10GBASE-SR to 10GBASE-T and includes analysis on the higher power consumption. This report is the first in a series for our Data Center Network Operator service.

Thursday, August 26, 2010

10GBASE-T to Obsolete SFP+ Direct Attach?

by Lisa Huff

If history repeats itself, like it often does in Ethernet networking, the maturing of 10GBASE-T technology will allow equipment manufacturers to get to a more cost-effective solution than the current optical one. This will accelerate the adoption rate of 10GigE in the data center for both Top-of-Rack (ToR) and End-of-Row (EoR) configurations. But will 10GBASE-T supplant SFP+ Direct Attach in the data center? Good question and one that I believe we’re on the brink of getting an answer to.


I’ve noted in previous posts that the main reason 10GBASE-T hasn’t been adopted in switches sooner was the chips’ high power consumption (more than 5W). But with the advent of 40-nm processing and innovative quad-port circuit design, chip makers like Aquantia, Broadcom (BRCM), Chelsio, SolarFlare and Tehuti are all touting devices that consume less than 3W. This enables the medium port density that switch manufacturers need for their first blades with 10GBASE-T. And indeed, these implementations are starting to appear.


Blade Networks, Brocade Communications (BRCD), Cisco (CSCO), and Extreme Networks (EXTR) all have various products with 10GBASE-T ports. But all of these OEMs say that while 10GBASE-T may be more cost-effective on the surface, end users may still choose SFP+ direct attach for the flexibility it allows them in their networks. That’s the whole idea of pluggable modules – you can use copper, short-reach optics or long-reach optics. So if you need to make a lot of moves, adds and changes, you don’t have to buy new equipment, just switch modules or cable assemblies. This is important in the data center environment. I believe 10GBASE-T will make a dent in SFP+ Direct Attach shipments, but they will coexist, just like their predecessors for Gigabit Ethernet (SFP and 1000BASE-T) are doing right now.


More of a question for me is, how will finally having 10GBASE-T influence the 10GBASE-SR market?

Wednesday, August 25, 2010

Digital Realty Acquires Two Data Centers for $400 Per Square Foot

DLR announced yesterday that it had acquired two facilities fully leased to telecom carriers for $50.3 million. One is a 69,700 square foot building in San Jose; the other is a 56,000 square foot facility the company refers to as a "primary switch facility" without providing any greater detail.

The $400 per square foot purchase price is just over half of the $789 per square foot the company paid earlier this year for the 919,000 square foot Rockwood Capital portfolio of buildings. The difference likely reflects the lower lease rates paid by telecom providers to house switches and routers compared to what corporations pay for servers, storage, and enterprise networking equipment. This also represents an economic opportunity for carriers, who can conserve the capital that would have otherwise gone into building central offices.

Tuesday, August 24, 2010

When a Standard Isn’t Exactly a Standard

by Lisa Huff

I’ve noted in a couple of posts now that equipment manufacturers charge a lot more for optical modules they sell to end users than what they actually pay for them from transceiver suppliers. Considering the pains OEMs go through to “qualify” their vendors, a healthy markup in the early stages of a new product adoption can be warranted. But, I’m not so sure keeping it at more than 5x the price five years down the road can be justified. And is it sustainable? Some transceiver manufacturers sell products at gross margins in the 20-percent range, while their biggest customers (OEMs) enjoy more like 40 percent.

And guess what, there’s not much the suppliers can do. It is well known that Cisco (CSCO), Brocade (BRCD) and others purchase modules, and now SFP+ direct-attach copper cables, from well-known suppliers and resell them at much higher prices. And if I’m an end user, I MUST buy these from the OEM or its designate or my equipment won’t work. These devices have EEPROMs that can be programmed with what some call a “magic key” that only allows them to work with specific equipment. So the OEM now has a captive market for modules and copper cables going into its equipment, and it can pretty much charge what it wants. If I try to use a “standard” module or cable assembly – one that is compliant to the specification – it will not work unless it has this “magic key.”
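Here is a toy sketch of the lock-in mechanism being described, just to make the mechanics concrete. The field names and the check are hypothetical, not any particular vendor’s firmware:

```python
# Hypothetical illustration of the EEPROM "magic key" check described above:
# the switch reads identification data from the module's EEPROM and refuses
# to enable the port unless it recognizes the programmed key.

APPROVED_KEYS = {"OEM-MAGIC-1234"}   # hypothetical key values burned in by the OEM

def port_enabled(module_eeprom: dict) -> bool:
    return module_eeprom.get("vendor_key") in APPROVED_KEYS

generic_sfp = {"vendor": "Standards-Compliant Optics", "vendor_key": None}
oem_branded_sfp = {"vendor": "OEM-Branded Optics", "vendor_key": "OEM-MAGIC-1234"}

print(port_enabled(generic_sfp))      # False - "unsupported transceiver"
print(port_enabled(oem_branded_sfp))  # True
```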

I’ve experienced this first hand. I had a brand new HP (HPQ) ProCurve Gigabit Ethernet switch that I wanted to use for some cable testing I was doing. I had dozens of SFP modules from all of the top transceiver manufacturers, but none of them would work in the switch. I called HP and they said, “You have to buy the HP mini-GBIC.” Well, I knew that wasn’t exactly true. I didn’t really want to pay the $400+ each for four more SFPs that I didn’t need so I tried to work through my contacts at HP to get a firmware patch so I could use my existing devices. Long story short, I never did get that patch and ended up doing my testing with SMC switches instead.

This is a prime example of an open standard that is not so open. Will data center managers be able to sustain this when they have to move equipment around and need different modules or cable assemblies? Are the OEMs thinking about the aftermarket and the fact that data center managers are used to going to distributors to get these items? And are OEMs going to continue to gouge end users and potentially cripple their suppliers?

One added note - there are at least two equipment manufacturers that I know of that support an open standard: Blade Networks and Extreme Networks (EXTR). While they will both supply the modules and cable assemblies, they don't lock out other standards-compliant parts that customers may want to use.

Monday, August 23, 2010

Active Optical Cables, Part 3

by Lisa Huff

This is the last installment of my summary of AOC implementations; you can read Part 1 here and Part 2 here.

Reflex Photonics has gained an early customer base in InfiniBand and PCI Express extender applications with its SNAP 12 products, and is using that customer base to increase awareness of its InterBoard products for data center customers. In developing InterBoard, Reflex Photonics moved into coarser channel implementations to meet industry AOC standards. The four-channel cables use arrays of 850nm VCSELs and terminate in QSFP connectors suitable for both InfiniBand DDR and 40G Ethernet. What is also interesting about Reflex’s InterBoard is that it contains the company’s optical engine technology, LightAble.

Zarlink (now part of Tyco) began its ZLynx product line with a CX4 interconnect, but quickly added QSFP as the module was standardized. Zarlink is unique in anticipating possible customer interest in dissimilar terminations by offering CX4-to-QSFP cables. Zarlink product developers say they will take the same attitude as CXP applications emerge. While most AOCs will use identical termination on both ends of the cable, the company will explore customer demand for hybrid connectors. Before it was acquired by Tyco, Zarlink was working on 40G implementations that were expected to be released this year. No announcements have been made as of yet, though. Tyco had its own QSFP AOC, namely the Paralight. It remains to be seen how Tyco will merge these product lines.

The first implementations of 40G Ethernet have indeed materialized as AOCs, but are expected to transition into actual optical modules as soon as transceiver manufacturers are ready with their products. What is nice for the end user is that if they want to implement 40G today, they can with AOCs and the same ports will then accept optical modules later if needed. InfiniBand AOC products are expected to stay as AOCs and not transition into optical modules, mainly because most of these connections are less than 30m so are easier to pull through pathways and spaces.

According to CIR, the market for AOCs is expected to be about $180 million (a rather small market for so many entrants) this year, most of which will be for data centers. However, by 2013, it is expected to grow to more than $1-billion – a steep climb and one that will need a lot of suppliers if it is actually going to happen.

Thursday, August 19, 2010

CME Moves Into New Data Center

Just a couple weeks after NYSE Euronext moved into its new data center in Mahwah, NJ, CME Group (which includes the Chicago Merc, NY Merc, and Chicago Board of Trade) has moved into its new data center in Aurora, Illinois. The new facility will house the exchange's Globex platform, and is intended to reduce execution times as well as provide much needed floor space.

The recession has done little to stop the growth in activity on the group's exchanges, with Clearing and Transaction Revenue growing 28% this past quarter from the same period a year ago.

Wednesday, August 18, 2010

Data Center Markets Can't Be Measured Like Office Markets

As real estate firms develop data center practices, one of the trends I'm noticing is that they like to break down vacancies and space by metro markets, as if data centers were office buildings. Most of the people developing these studies have a background in commercial real estate, not data centers or technology, so this perspective is not surprising. But it's still wrong.

The reason you can't look at data centers like office buildings is because the key considerations for data center locations are access to power and bandwidth, while the key considerations for office buildings are access to highways or transit. And there is transportation infrastructure in every city, but there are huge variances in power and bandwidth among cities, especially those places with major peering exchanges.

Power and bandwidth access create their own demand to a much greater extent than roads and rails do. The Google, Microsoft, and Yahoo data centers in the Pacific Northwest are there for the access to hydro and wind power. No one said The Dalles, Oregon was an "underserved" market before Google moved in, but this is the approach commercial real estate firms take – treating everything like a simple supply/demand equation, and failing to adjust for the fact that the primary tenants of data centers are computers, not people.

Bandwidth access is another huge factor that gets overlooked with the underserved/at capacity mentality these old school commercial real estate people bring with them. Northern Virginia is a data center hub because of MAE East, and that exchange's peering legacy. As a result, you have to be here if you're a co-lo provider and want to provide access to multiple IP transit, peering, and fiber providers. But you don't need to be here if you're offering managed services. These important distinctions get lost when everything is treated like some sort of real estate blur.

Additionally, bandwidth creates another issue – a market is rarely "underserved", because many data centers are distant from the corporate owners paying to use them. To suggest, as Grubb and Ellis does, that Minneapolis is "underserved" makes as much sense as saying agribusiness companies are underserved in the office market. They're both nonsensical statements. Moreover, many large companies want geographic redundancy with their data centers, even if most of their employees are in one location. Data gets backed up and replicated to multiple places, but companies don't usually keep an extra accounting group in Miami just in case there's a snowstorm in Minnesota and their workers can't get into the office.

The absurd approaches commercial real estate firms use to assess data centers are just outdated methods they've carried over from the office market. Few people in industry take these numbers seriously, and investors shouldn't either.

Tuesday, August 17, 2010

Active Optical Cables, Part 2

by Lisa Huff

(If you haven't seen Part 1, you can read it here.)

Avago Technologies (AVGO) had a late entry into the AOC market with its 10GBASE-CX4 replacement and QSFP+ products. But the company has a rich history in parallel optics, so it has quickly brought its products up to speed. While it may have been somewhat late to market, Avago has an existing customer base to peddle its wares to.

Finisar’s (FNSR) products include Quadwire and Cwire AOCs to address early adoption of 40G and 100G. Quadwire is Finisar’s mainstream product, both in terms of its use of the VCSEL arrays the company produces in volume at its Texas fab, and in terms of its use of the popular QSFP form factor.

The high end of the Finisar product line is designed to exploit anticipated interest in 100G Ethernet and 12-channel QDR InfiniBand. Cwire offers an aggregate data rate of 150 Gbps and a CXP interface. Not only does this represent the direction of high-end enterprise cluster design, but it allows Finisar to utilize the most integrated VCSEL arrays it manufactures. The 12-channel array also represents the most cost-effective per-laser manufacturing option, allowing Finisar to take advantage of its expertise in designing large VCSEL-arrays. The benefit in high channel count can also be seen in power dissipation. While the single serial channel of Laserwire dissipates 500mW per end, the 12-channel Cwire dissipates less than 3W per end – half the power dissipation per channel.
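The per-channel power point is easy to verify with the figures above:

```python
# Per-channel power dissipation implied by the numbers quoted above.

laserwire_mw, laserwire_channels = 500, 1     # single serial channel, per end
cwire_mw, cwire_channels = 3000, 12           # "less than 3W per end", 12 channels

print(laserwire_mw / laserwire_channels)  # 500 mW per channel
print(cwire_mw / cwire_channels)          # 250 mW per channel - half, as stated
```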

MergeOptics (now part of FCI) was born of the old Infineon, which was once a powerhouse in the optical transceiver markets—both telecom and datacom. It emerged in 2006 with its SFP and then SFP+ products and is now one of the first entrants for 40G and 100G AOCs. Unlike most of its competitors, it is focused on 10G and above products, so it can bring them to market rather quickly. Its technology is being leveraged for InfiniBand and Ethernet applications.

Monday, August 16, 2010

Akamai Insiders Have Bought Nearly 88,000 Shares This Month

I've made the point in a few recent articles that Wall Street and many industry observers are overestimating the threat that Level 3 and Limelight pose to Akamai (AKAM), just as they overestimated Cisco, AOL, and Inktomi's threats to the company ten years ago.

CDNs require large support organizations dedicated to the service, which makes it challenging to simply bundle them with bandwidth. This is why after all these years, Verizon is still reselling Akamai, not competing against it, and why the top two providers of this now decade+ old service are not telcos. Moreover, in Akamai's case, its bandwidth costs are just 16% of revenue, compared to 33% for Limelight (LLNW), a figure that is not declining significantly.

Insiders at Akamai have endorsed this view, and bought over 87,950 shares since the post-earnings call sell-off. The biggest purchase came last Wednesday from Director Peter Kight, who bought 47,950 shares at $41.70, CEO Paul Sagan bought 15,000 shares a week ago Wednesday, and Director David Kenny bought 25,000 shares at $38.78 on August 4th.

While AKAM is not cheap, the market continues to overestimate its competitors' strength, and the insiders are buying on the dips.

Friday, August 13, 2010

Terremark Revenue Up 20% Y/Y

Terremark (TMRK) reported revenue of $79 million yesterday, with its cloud service up to $6.5 million, which represented 19% sequential growth. Cloud is still under 10% of revenue, though, and it does a lot more to create press intrigue than signing up a government agency to a co-lo contract does – it carries an excessive press release-to-revenue ratio.

As a resident of the DC area, I thought it was kind of funny how they described their Culpeper NAP as being "outside" downtown Washington in the 10-Q. It's more than just outside, it's almost 70 miles away! Still, this distance is one reason why the facility is popular with the Federal government. Federal customers accounted for exactly 20% of revenue for the quarter, down from 22% a year earlier.

Colocation is growing slightly faster than managed, with colo revenue now topping 40% of total. While I'm not a big fan of the all-things-to-all people managed + colo + cloud strategy, Terremark pulls it off better than anyone else due to their focus on the Miami-Latin America and Federal markets.

Thursday, August 12, 2010

Active Optical Cables, Part 1

by Lisa Huff

Active Optical Cables (AOC) are typically defined as a fiber subsystem intended for reaches of 3 to 300 meters, but Luxtera and Finisar (FNSR) both promise products stretching a kilometer or more for campus-network solutions. However, I don’t believe AOCs beyond 300 meters will get much traction in the market due to issues with trying to pull these delicate transceiver ends through kilometers of pathways and spaces (conduit or tray), around all types of obstacles. AOCs’ main applications in high-speed networks are in the data center, including (and probably most relevant) high-performance computing (HPC) clusters.

Intel (its AOC business is now part of Emcore (EMKR)) and Luxtera were among the first to promote AOCs for consumer and data-center markets. Zarlink (its optical products group is now part of Tyco Electronics) launched its AOC effort in 2007, Finisar introduced three varieties of vertical-market AOCs in 2009, and Avago (AVGO) announced its QSFP+ AOC in late 2009. Other participants include Lightwire, MergeOptics/FCI and Reflex Photonics. And, of late, we’ve even seen structured cabling companies like Siemon introduce some of these products, albeit, by the looks of it, by partnering with Luxtera to do so.

The QSFP+ form factor continues to be an enabler for 40G AOCs and, in fact, was the first form factor released for this data rate. Since QSFP+ supports Ethernet, Fibre Channel, InfiniBand and SAS, it will be an economical solution for all of these protocols. This AOC combines the QSFP physical module with management interfaces extendable to 40G, supports multiple physical layers and common protocols in a single module, and operates at 10G per lane, producing a cost-effective solution. A significant ramp in quad-data-rate InfiniBand and 40G Ethernet will start to accelerate volume applications for these products. QSFP+ AOCs also give transceiver vendors an easier path to market, since they control both ends of the optical link, which is much easier to design – there are two fewer compliance points.

Here is a summary of some of the product implementations of AOCs for high-data-rate networks:

Emcore has incorporated its existing technology into a pre-terminated active assembly using the current InfiniBand and 10GBASE-CX4 copper connector. So, what is presently being connected by copper can be replaced immediately by an active optical cable assembly. For 40G InfiniBand, this will turn into the CXP connection. The QDR 40 cable from Emcore was announced in mid-June 2008 and according to the company, has been shipping to select customers since early 2009. Yet, it does not seem to be a released product since the only reference to it on the Emcore Web site is its initial press release - no specifications are available there.

Luxtera is addressing the data center market with both InfiniBand- and Ethernet-compliant AOCs. It uses its CMOS photonics at 1490nm wavelength and a high-density QSFP+ directly attached to multi-fiber (ribbon or loose-tube) SMF. This is suitable for 40G applications and has proven a cost-effective solution for data centers that have discovered the difficulty with copper cables. Although the specifications for copper interconnects support 10m of cable, in reality there are both performance issues and mechanical problems with them.

To be continued in my next post...

Wednesday, August 11, 2010

Gigabit Transceivers

by Lisa Huff

In our rush to discuss all the new technologies, it seems to me that analysts have forgotten that part of our job is also to point out ongoing trends in existing products. So while talking about Gigabit transceivers might not be as appealing as talking about Terabit Ethernet, it’s also a necessity – especially since, without these devices and the continuing revenue they produce, we wouldn’t have 40/100G or even 10G Ethernet. So what are the important points to make about Gigabit transceivers?

* The market for Gigabit Ethernet transceivers (copper and optical) is expected to be about $2.5-billion in 2010 according to CIR, but it is also supposed to start declining in 2011 when more 10GigE will take its place.

* Pricing for a 1000BASE-SX SFP module is now at about $20 for OEMs. End users still pay Cisco (CSCO) or Brocade (BRCD) or their agents about 8x that much (more about this later).

* Low pricing makes it difficult on profit margins so transceiver vendors hope to make it up in volume.

* While SFP is certainly the preferred form factor, there is still a decent amount of GBIC modules being sold.

* SFP direct-attach copper cable assemblies have become an option for connecting top-of-rack switches to servers instead of using UTP Category patch or fiber cabling, although the majority of implementations today are still UTP patch cords, mainly because the connections within the rack are still 100M with the uplink being Gigabit Ethernet of the 1000BASE-SX variety.

* While 10/100/1000 ports are the norm for desktop and laptop computers, most of these devices are still connected back through standard Category 5e or 6 cabling to 100M switch ports in the telecom room.

* Gigabit Fibre Channel business is pretty much non-existent now. It was quickly replaced by 2G, has progressed through 4G, and 8G is expected to become the volume application this year. Look for more on Fibre Channel in future posts.

* Avago Technologies (AVGO) and Finisar (FNSR) top the list of vendors for 1000BASE-SX devices. JDSU (JDSU) has all but disappeared from the scene, mainly because it has de-emphasized this business in favor of its telecom products. In fact, rumor has it that JDSU is shopping its datacom transceiver business and has been for some time.

A note on JDSU: It appears that the optical components giant has taken the technology that was developed at IBM, E2O and Picolight and thrown it away. Picolight was once a leader in parallel optics and, along with E2O, long-wavelength VCSELs. IBM pioneered v-groove technology and the oxide layer that enabled the next leap in speed and improved reliability for 850nm VCSELs. All of these technologies look like they are destined to die a slow, painful death after being acquired by JDSU. The company’s attention is clearly focused on its tunable technology and telecom applications, which is where, of course, it started. JDSU has never had a good reputation for assimilating acquisitions, so none of this should be a surprise. I was optimistic when JDSU bought these companies thinking that now these emerging technologies would be supported by a larger pocketbook. What is the reasoning for JDSU deemphasizing the technologies it acquired? Is it trying to get rid of short-reach competition in hopes that all optical networking would move towards long-wavelength devices? This would have been naïve; the likes of Finisar, Avago, MergeOptics and others would still be supporting 850nm optics and there remains a healthy market for them in enterprise networks and data centers—albeit a very competitive one as stated above.

Tuesday, August 10, 2010

Cisco Earnings Preview

by David Gross

I normally don't like to write these "previews". Neither Lisa nor I think it matters whether a company's EPS comes in a penny short or higher, or if revenue comes in 1.2% higher than expected. Part of the objective of this site is to raise the debate away from the MBA conventional wisdom that leads investors to buy and sell with the herd, and does little to advance understanding of the industry in general. But with Cisco (CSCO) reporting Wednesday after the close, there is one very important metric everyone should be paying attention to - the percent of product revenue coming from switches and routers.

One of the things that's always amazed me about Cisco is how few hedge fund portfolio managers, Wall Street analysts, and other financial types can tell me what the company's proprietary technologies are. Cisco's monopoly is built on developing proprietary routing and switching protocols that force the customer to buy more Cisco routers and switches to connect them to. The key technologies that accomplish this are EIGRP in routing, and ISL and VTP in Ethernet switching. These protocols, along with a few others, are to Cisco what Windows is to Microsoft. In fact, the lock-in is even stronger: just because I buy a Windows PC doesn't mean my neighbor has to, but if I buy a Cisco Ethernet switch and I run ISL and VTP on my VLAN, the neighboring switches can't have a Brocade logo and still work. But instead of touting these technologies, Cisco IR presentations are filled with all kinds of gibberish about e-learning, e-health, and borderless networks. These issues just divert investors' attention away from the key technologies that make Cisco so dominant in routing and switching.

Cisco has dominated switching since it bought the company that pioneered the concept of switched Ethernet, Kalpana, in 1994, in addition to buying Crescendo Communications in 1993. It has dominated routing since it surpassed Bay Networks around the same time. For all the other acquisitions the company has made, and all the new products it's launched, it still gets 64% of its product revenue from switches and routers. Moreover, those have been its fastest growing products recently. Last quarter, switch revenue was up 40% year-over-year and routing revenue 31%, while advanced technologies was up just 18%. Switches and routers were down to 62% of total after the quarter ended April 25, 2009, but have been gaining again with their faster growth.

Cisco now sells over $20 billion of switches and routers a year - and growing. There is no one it can acquire to water down this massive market where it has massive market share. It could very well end up dominating the fast growing telepresence industry, but even if its telepresence revenue reached $1 billion in 2012, routers and switches could surpass $25 billion by then.

In the data center, Cisco is about to repeat history with its proprietary FabricPath, which is an "enhanced" version of the IETF's TRILL routing protocol, just like ISL was an enhanced version of the IEEE's 802.1q, and IGRP was an enhanced version of the IETF's RIP. John Chambers and Cisco executives will not be talking about EIGRP, ISL, or VTP, on the call, but proprietary routing and switching technologies are far more important to Cisco's future than any of the futuristic applications they will be discussing. For investors and observers, understanding the significance of these technologies is as important as understanding the significance of Windows if you're investing in Microsoft.

Monday, August 9, 2010

Cisco and AOL vs. Akamai

by David Gross

Exactly ten years ago in August 2000, industry leaders were concerned about Akamai's (AKAM) domination of the CDN business, and formed two separate coalitions to do something about it. Cisco (CSCO) created the "Content Alliance", which included most of the major business ISPs of the time, such as Cable & Wireless, Genuity and PSINet. AOL and Inktomi created the "Content Bridge". Akamai's chief competitor at the time, Digital Island, joined both groups.

The conventional wisdom among analysts and Wall Streeters was that Akamai wouldn't be able to stand the competitive threats, and with the world turning against the company, it would struggle to hold its market share, let alone survive. Moreover, Cisco wanted to take matters to the IETF, to neutralize the market value of Akamai's patents.

Akamai's biggest problem back then wasn't these content groups trying to destroy its business, but its own over-expansion. It didn't need any help from AOL or Cisco when it came to wrecking its balance sheet and income statement. And successive generations of competitors haven't stopped it from improving its financials. In 2000, the company spent 47% of its revenue on bandwidth and colo fees; in 2010, it spends 16%. In 2000, it produced 62 cents of revenue for every dollar of property, plant, and equipment on its books. In 2010, it produces five dollars of revenue for every dollar of PP&E.

The conventional wisdom chorus that fretted about Cisco and AOL ten years ago, is now worrying about Limelight (LLNW) and Level 3 (LVLT). Level 3's CDN business is the old Digital Island service, three owners later. While Akamai was in the process of growing fourfold between 2003 and 2009, the Digital Island CDN was being passed through the hands of Cable & Wireless, Savvis, and Level 3, which cut the growth of what would otherwise have been a much stronger competitor. Limelight did grow faster than Akamai last quarter, and is now 1/6th the size of its larger competitor. However, Limelight's network is far more centralized with 76 POPs compared to 1,200 for Akamai. While there are operational benefits to both approaches, Akamai's is far more cost effective, with its bandwidth and colo fees amounting to just 16% of revenue, compared to 33% for Limelight.

After ten years of worrying about Akamai's competition, investors would be better off finding the next company that will grow on the back of a major cost advantage, because no one who's competed directly against Akamai the last decade has developed one.

Friday, August 6, 2010

When Does Passive Optical LAN Make Sense?

by Lisa Huff

Are you purchasing a Transparent LAN service? Then you probably want to consider POL. Inter-building Transparent LANs often have distance limitations, which are currently overcome by installing significantly more expensive 1310 and 1550nm transceivers. Because it is an active network, these higher-cost modules are needed on both ends of every connection, and where seven or eight buildings are involved, the dollars spent can add up quickly.

With one long reach transceiver needed at the central office (or fed out of an enterprise data center), POLs can offer significant savings in multi-building campus environments. It is important to note how much more expensive 1550nm modules are as compared to their 850nm counterparts. At 10-Gigabit, a 10GBASE-SR (850nm) optical module costs approximately $1300/port (switch + transceiver cost). A comparable 10GBASE-ER (1550nm) longer reach device that is needed for an inter-building connection costs around $11,000/port (switch + transceiver) or nearly ten times as much. When connecting multiple buildings in a campus setting, these costs add up quickly, and a POL network can be a much more economical solution. The POL system uses 1310/1550nm diplexer optics and while more expensive than 850nm can still cover entire campuses at a fraction of the cost of the 1550nm Ethernet-based transceivers. And, since the signal from these devices can be split to as many as 64 users instead of 1, the cost-per-end-user is drastically reduced.
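The split is what does the work in that last sentence. A quick sketch of the arithmetic (the $11,000 ER port price is from the comparison above; the point is only how a 1:64 split dilutes whatever a long-reach port actually costs, not what a POL port sells for):

```python
# Cost-per-endpoint arithmetic for the splitter argument above.

def cost_per_endpoint(port_cost_usd, endpoints_served):
    return port_cost_usd / endpoints_served

# A point-to-point 10GBASE-ER link serves exactly one far-end connection:
print(cost_per_endpoint(11_000, 1))    # $11,000 per endpoint
# Even a port priced at that same level, split 1:64 as a PON allows, dilutes to:
print(cost_per_endpoint(11_000, 64))   # ~$172 per endpoint
```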

Passive optical LANs are being touted by their equipment suppliers as THE most cost-effective solution for medium-to-large enterprises. According to Motorola, you can save more than 30 percent on your network infrastructure, and as your number of users increases, so do your savings.

In our recent research for our POL report, we found that there is a subset of vertical markets – specifically, not-for-profits – that may be ripe to implement this technology. But how does this affect the data center network?

We’ve done our own cost analysis, and the reason why POL is so cost effective compared to a traditional switched-Ethernet network is that you can eliminate lots of copper and MMF cabling as well as workgroup switches. But, in the data center, you still need to connect to your WAN routers. With a POL, you could cover as many as 96 end users with one 4-port blade in an enterprise aggregation switch and ONE 10G uplink port to the WAN router. The equivalent switched-Ethernet network would need four workgroup switches connected to a core switch through 12 uplink Gigabit Ethernet ports and TWO 10G uplink ports from the core switch to the WAN router. So by installing POL, you may be able to cut your router uplink ports in half. I wouldn’t mind saving tens of thousands of dollars on router ports – would you?
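For the 96-user example, the port counts alone tell the story (the counts are the ones given above; the dollar figures are in the report):

```python
# Equipment counts for the 96-user comparison described above.

pol = {"workgroup_switches": 0, "gige_uplinks_to_core": 0, "ten_gig_router_uplinks": 1}
switched_ethernet = {"workgroup_switches": 4, "gige_uplinks_to_core": 12, "ten_gig_router_uplinks": 2}

for name, counts in [("POL", pol), ("Switched Ethernet", switched_ethernet)]:
    print(name, counts)

# The 10G WAN-router uplinks drop from 2 to 1 - the "cut your router uplink
# ports in half" point - on top of eliminating the four workgroup switches
# and the copper/MMF cabling behind them.
```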

Of course, this is all assuming a totally non-blocking architecture, which, in reality, isn’t necessarily needed. A switched-Ethernet oversubscribed network covering a 132-user, 4-floor building is still less expensive than a POL. For the details, see our POL report.

Thursday, August 5, 2010

Internap Needs to Figure Out What Business It's In

Internap (INAP) reported yesterday that quarterly revenues had declined 6% year-over-year to $61 million. The drop was expected, as the company is trying to transform itself from a reseller of other providers' assets to a facilities-based data center provider. As part of that initiative, it has committed to a $50 million capital program.

Financially, Internap looks like the anti-Level 3 (LVLT). It shares Level 3's broad product line, but historically it has taken a hit on its income statement for reselling services at low margins, while Level 3 has taken a hit on its balance sheet by building large networks. Internap has much lower debt/revenue and much higher revenue/PP&E than Level 3 as a result. But now Level 3 is cutting capex to shore up its balance sheet, while Internap is increasing capex to improve its income statement.

Internap has never been a typical reseller. It developed a proprietary algorithm for routing traffic across multiple networks, but then extended the marketing concept behind this strategy to become a reseller of a broad range of IP and hosting services. Like most resellers, it has a fairly broad product catalog. But transforming itself into more of a facilities-based provider means more than building up the capital budget, because it will not get a reasonable return on assets without growing market share substantially.

Offering IP services, colocation, managed hosting, and CDNs, Internap is like Level 3, Equinix (EQIX), Rackspace (RAX), and Akamai (AKAM) rolled into one company, but without the market leadership in any of these services. To date, this has made it a balance-sheet-strong but income-statement-weak distributor of other companies' assets. But as it now builds on its own, it has to do more than commit capital to data centers; it has to look at where it can develop some kind of cost advantage over its competitors, and that won't happen in all four services.

Unlike its larger competitors, Internap owns proprietary routing software – MIRO (Managed Internet Route Optimizer) – that tackles many of the latency problems associated with BGP. But there is no reason to limit this technology to just its own service. It makes more sense economically to spread MIRO's development costs across other companies by selling it to them directly, not as part of a monthly IP service where the market has demonstrated it will not pay much of a premium for a proprietary technology.

In the colocation market, Internap has long relied on locating at existing facilities built by companies like Equinix and Switch and Data (which is now part of Equinix). But as long as it's selling IP services, it will never truly be carrier neutral, which has been a key selling point for Equinix's service. Moreover, a $50 million boost in capital investment is not going to be enough to match the billions Equinix and Telx have already invested. It will likely have to compete on price, which is unpleasant if you are reselling, but deadly if you're selling access to your own assets.

Instead of competing against Akamai, Equinix, and Rackspace, Internap really should be competing against someone like privately-held Packet Design, which is selling its proprietary routing software to large carriers and enterprises alike. Routing is still an expensive, high-latency, but necessary long-distance network function, and Packet Design has had success solving corporate customers' pain points with Cisco's proprietary EIGRP, and carriers' challenges with BGP. I know Internap is not about to shut down its network and just start selling its software, but there is a unique technology sitting within the company that is being stifled by the requirement that no one else can have access to it.

Wednesday, August 4, 2010

End-to-End Fibre Channel over Ethernet?

by David Gross

Chris Mellor at The Register has a good analysis of some of the challenges of developing End-to-End FCoE networks. The few FCoE implementations that have shipped to date bring Fibre Channel and Ethernet together at the CNA, and then split the Ethernet and Fibre Channel traffic at an Ethernet switch, allowing the LAN bytes to go one way, and the Fibre Channel bytes to go their own way back to the SAN. End-to-End wouldn't just mean one NIC/CNA as current FCoE does, but one switch, and one network. Reminds me a lot of the God Box concept we saw 10 years ago in telecom networks, and ATM's promise of LAN/WAN integration in the mid-90s.

There are a number of operational challenges relating to frame prioritization. The IEEE is addressing this in part through 802.1Qbb. However, Fibre Channel transmissions are not like the 150-byte trade orders that often fill InfiniBand networks. As I mentioned in this morning's article, FC is the long freight train of data networking, allowing up to 65,536 frames per sequence. At 2,112 bytes per frame, this means everyone could get stuck behind a sequence as long as 138 Megabytes while it crosses the wire. Kind of like sitting at the railroad junction in your Toyota while you wait anxiously for the caboose to go by. Now it's one thing to do this on specific server-to-switch links, but across the entire network?
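The freight-train math is worth spelling out (frame and sequence limits as cited above):

```python
# Maximum Fibre Channel sequence size, per the figures above.

max_frames_per_sequence = 65_536   # SEQ_CNT is a 16-bit field
bytes_per_frame = 2_112            # maximum frame payload cited above

sequence_bytes = max_frames_per_sequence * bytes_per_frame
print(sequence_bytes / 1e6)   # ~138 MB that other traffic may queue behind
```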

Additionally, in order to get around the additional congestion created by Spanning Tree, a routable protocol will be needed to open up ports that would otherwise be disabled to prevent looping, a point Mellor mentions in the Register article. But what he doesn't mention, and what I don't get, is how the switch manufacturers will deal with the cost of the added memory needed to handle this. It doesn't matter if you decide to go with TRILL or Fabric Path, if you're storing routes in a table, you'll need more memory in the switch, which can add significant hardware costs. While Clos architectures are mostly used in supercomputing, not enterprise data centers, they are designed to limit memory requirements, because they don't force each switch to build a table of all known routes across a network. This makes the switches more cost effective (it's one reason why InfiniBand and ToR switches go for less than $500 per 10G port). Switch memory is a precious resource, and no one wants to see data center switches heading for the price levels of layer three boxes that can hold 300,000 BGP4 routes.

The device to handle these TRILL requests is called a Router Bridge, or RBridge. It reminds me of the "switch routers" of the early 2000s that targeted telco networks and offered switching capabilities at high router prices. The RBridge is going to need some expensive ASICs to handle all the added features the IEEE is developing, in addition to more memory for the routing tables.

With the economics already looking very challenged, Mellor's piece in The Register ends by pointing out that End-to-End FCoE will also require merging the SAN managers with the LAN managers. I've lived through attempts to merge IP specialists with optical specialists, and they never made it anywhere. While Fibre Channel and Ethernet experts have more in common with each other than packet and optical transport ops do, most FC people are exceptionally knowledgeable about tape drives, RAID arrays, and other storage technologies in addition to FC itself. I wouldn't want to be at the meeting where they're asked to also be general networking experts, while giving up control of the SANs they know so well.

All in all, I'm extremely skeptical of end-to-end FCoE. It sounds good in theory, it looks good in PowerPoint, but with higher ASIC costs, added memory costs, not to mention attempts to tie together different operating groups, it will likely make diverged networks look even better in cost comparisons.

Fibre Channel and InfiniBand vs. Ethernet

by David Gross

As successful as InfiniBand has been with supercomputing centers and financial traders, it has struggled to break into the mass market dominated by Ethernet. Fibre Channel has had great success in SANs, but isn't even trying to go after the mass market anymore. Yet when first standardized over ten years ago, both Fibre Channel and InfiniBand had far greater ambitions.

Through their respective industry associations, InfiniBand and Fibre Channel have responded by developing "over Ethernet" versions of their protocol in recognition of Ethernet's dominance. The InfiniBand Trade Association's RoCE, or RDMA over Converged Ethernet, could have just as easily been called InfiniBand over Ethernet.

The niches that Fibre Channel and InfiniBand currently fill, though, aren't going away, but neither protocol is about to challenge Ethernet's supremacy. Interestingly, the niche roles each protocol occupies have little to do with bandwidth, cost, or price per bit. Fibre Channel has never been cost competitive with Ethernet, even when both topped out at a gigabit, yet has held on strong in storage. InfiniBand is cheaper per bit than Ethernet, particularly at 40G, but is struggling to break out of its supercomputing and financial trading niches.

More than cost per bit or port price, an interesting factor behind the development of the InfiniBand and Fibre Channel niches comes down to message size. InfiniBand is very closely tied to parallel computing, and the shorter messages that result from breaking up a transmission across multiple CPUs and GPUs. Fibre Channel is closely tied to serial storage networks, particularly the large block transfers that cross SANs, which rely on the protocol's hardware-based error correction and detection, and generally require the link length that comes with a serial protocol. To use a somewhat cheesy analogy, you could say InfiniBand is a little sports car, Fibre Channel a long freight train, and Ethernet a Camry.

Fibre Channel grew on the back of serial storage networks, InfiniBand on parallel supercomputing networks. The biggest threat to either then is not Ethernet, but a revival of parallel SANs and serial supercomputing, neither of which will happen anytime soon.

Tuesday, August 3, 2010

DuPont Fabros Revenue Up 21% Y/Y to $59 Million

DuPont Fabros (DFT) is approaching a quarter-billion-dollar annual run rate. In its earnings release, the company narrowed annual guidance slightly to a range of $1.30 to $1.40 from a previous range of $1.25 to $1.45.

The stock has traded flat after hours. The call will be held tomorrow morning at 10am.

Laser-optimized Multimode Fiber (LOMF)

by Lisa Huff

Right now, there are three standardized types of LOMF in addition to FDDI-grade fiber, which is not laser optimized. So first, what does laser-optimized actually mean? In basic terms, it just means that the fiber was designed to be used with lasers – in the case of MMF, typically VCSELs. FDDI-grade fiber pre-dated the use of VCSELs, so it is not laser-optimized; it was intended for use with LEDs. Lasers were adopted as the light source of choice when scientists and engineers realized that LEDs became very unstable when modulated at data rates beyond 100 Mbps. They originally tried to use the same lasers that were being used in CD players, but these turned out to be unstable at Gigabit data rates as well. In the early 1990s, the development of the VCSEL enabled these higher data rates.

As the light sources evolved, the fiber progressed with them. So, for 850nm operation today we have four choices:

1. OM1 (FDDI-grade): Minimum OFL Bandwidth of 200 MHz•km; 10G Minimum Optical Reach of 33m
2. OM2: Minimum OFL Bandwidth of 500 MHz•km; 10G Minimum Optical Reach of 82m
3. OM3: Minimum OFL Bandwidth of 1500 MHz•km; 10G Minimum Optical Reach of 300m
4. OM4: Minimum OFL Bandwidth of 3500 MHz•km; 10G Minimum Optical Reach of 550m

As you can see, the bandwidth of the fiber is intimately tied to what type of light source is used and the optical reach is dependent on both bandwidth and data rate. And, while OM1 fiber wasn’t necessarily designed to be used with lasers, it works fine with them, albeit at a shorter distance than with LOMF. Of note as well is the fact that there are a few cable manufacturers that also provide what I would call OM1+ cable that is 62.5-micron, but is laser-optimized, so may have some improved bandwidth and reach.

All this leads to a very important point – when specifying a cabling system for your networks and data centers, it is important to understand not only the fiber you’re going to install, but also the equipment you’re trying to connect. Just because you're "only" installing Gigabit systems and you've used OM1 fiber for years doesn't mean it's the best solution (or even the most economical) for today and tomorrow.
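As a starting point for that kind of specification exercise, here is a minimal helper built from the reach figures listed above; treat it as a sketch, and always check the actual cable datasheet and application standard:

```python
# Which 850nm multimode grades have enough specified 10G reach for a given link?
# Reach values are the minimums listed above.

REACH_10G_M = {"OM1": 33, "OM2": 82, "OM3": 300, "OM4": 550}

def grades_for(link_length_m):
    return [grade for grade, reach in REACH_10G_M.items() if reach >= link_length_m]

print(grades_for(90))    # ['OM3', 'OM4']
print(grades_for(300))   # ['OM3', 'OM4']
print(grades_for(400))   # ['OM4']
```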

Monday, August 2, 2010

Revenue Growth Slowing for Data Center Technologists

While data centers continue to defy the economy, revenue growth is clearly slowing down for the technology suppliers. Sequential growth rates have been much lower on an annualized basis than year-over-year growth rates, and the 2nd quarter typically gets a rebound off the 1st quarter lull. Mellanox (MLNX) reported strong sequential growth, but guided down for the next quarter, and VMWare (VMW), which is majority owned by EMC (EMC), stated that license revenue would be flat.

I'll have another post with the service providers after Rackspace (RAX) reports, but the trend looks similar there - with growth continuing, but at a slowing pace.


        Y/Y Rev Growth    Sequential Rev Growth    Sequential Inventory Growth
MLNX    58%               10%                      -3%
VOLT    54%               7%                       -14%
QLGC    16%               -2%                      26%
FFIV    46%               12%                      8%
EMC     24%               3%                       -4%

Sunday, August 1, 2010

Data Center TCO - why no IRR?

by David Gross

In our first post on TCO Models a few weeks ago, I mentioned how these things rarely account for the time value of money, and how this can create dramatic distortions in the actual costs of the products the models are supposed to support.

I've looked through a few more models lately, and have yet to see an IRR, or Internal Rate of Return. In one otherwise informative model on data center TCO, the author decided to sum capex and opex together into one big TCO number. When I worked in capital budgeting, we would have tossed any project authorization request out the window if it did this, and most marketing people knew this.

The most important financial metric to anyone planning capital for equipment is not TCO or ROI, but IRR. Yet it's always missing! It's no wonder so many TCO models do little to improve market share for products that supposedly have the competition beat on cost. The first step to changing this is for vendors to stop beating their chests about incredible savings, and to incorporate IRR and the time value of money into their analysis.
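To make the point concrete, here is a minimal sketch of the kind of calculation that belongs in these models. The dollar figures are hypothetical, and the IRR routine is a simple bisection, not any vendor's actual model:

```python
# Minimal IRR sketch: summed "TCO" ignores timing; IRR does not.

def npv(rate, cashflows):
    """Net present value, with cashflows[t] occurring at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return (lo + hi) / 2

# Hypothetical: spend an extra $500k on equipment today to save $150k/year
# in opex for five years. Summed "TCO" calls that a flat $250k improvement;
# IRR tells you what return the capital actually earns.
incremental_cashflows = [-500_000] + [150_000] * 5
print(f"IRR of the incremental investment: {irr(incremental_cashflows):.1%}")   # ~15%
```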