Thursday, December 30, 2010

Zacks Upgrades Equinix to "Outperform"

By David Gross

I'd rather read the Internal Revenue Code than a typical sell-side research note.  At least the I.R.S. is specific, and doesn't create buzzwords and catchphrases like "first mover", "secular growth", and "low hanging fruit".  And even though we've had a series of tax reforms through the years, no one's ever referred to a "next generation" tax code.  Also, when it tells you to do something, the tax collection agency is far better than an investment research firm at telling you what to do next.  "You owe us" leaves far less doubt about where to send your money than the analyst note informing you that a stock is a "near-term accumulate".

In the tradition of vague analyst-speak, Zacks recently announced it had upgraded Equinix from neutral to outperform.   Its reasoning - the company beat consensus revenue by 0.4%,  issued encouraging guidance, and is still expanding.   No wait, that's not quite right.  It was "continuous efforts to expand the current facilities".   Not sure how to interpret this.    I mean, I've been to the Ashburn campus in the last few days and any effort - continuous or not - to expand DC2 will push them through the pine trees and into DFT's ACC4 building. 

The note then goes through Equinix's financial ratios, remarks how it's well-positioned or something, but I can't tell you what it said after that because I received an e-mail about a Nigerian prince leaving me the sum of exactly $1,307,465.27, which was a lot more interesting.  And specific!

Equinix is up, or in Wall Street language "moving in positive territory" by 57 cents this afternoon, on very light volume of 400,000 shares.  

Wednesday, December 29, 2010

Microsoft Gets Final Approval for West Des Moines Data Center

By David Gross

This summer, Microsoft announced that it was resuming construction on the Iowa data center it had postponed completing during the recession.   Now The Des Moines Register is reporting that Microsoft has received final approval for the West Des Moines data center, and that the company is obligated to complete construction on the facility by December 2012.

The City of West Des Moines is kicking in $8 million worth of roads and water main extensions to serve the facility, which will be financed through bonds secured by the site's property taxes.   The project will cost $200 million to complete, which suggests this will only be phase 1, or a scaled down version of the initially proposed 500,000 square foot site.

The data center has been a high profile economic development project for the State of Iowa, which also hosts a Google facility two hours west in Council Bluffs.

Monday, December 27, 2010

PAETEC Opens 5th Data Center in Milwaukee

By David Gross

With the Cincinnati Bell-CyrusOne and Windstream-Hosted Solutions deals of the past year, we've seen growing interest in the data center market among independent telcos.  But many CLECs aren't just providing connectivity into facilities, they're expanding their own regional data center offerings.  PAETEC, which at $1.5 billion in annual revenue is one of the larger remaining CLECs, recently announced it had opened its 5th data center, a 92,000 square foot project in Milwaukee.

The new facility is PAETEC's first in the Midwest, and is targeted at businesses throughout the region, from Chicago to St. Louis to Minneapolis.  The company's existing buildings are in Pennsylvania, Massachusetts, and Texas, and it plans to expand to Arizona next year.

XO Connects with Baltimore Technology Park

By David Gross

XO announced last week that it is providing connectivity to Baltimore Technology Park, a carrier-neutral co-lo facility located in that city's downtown.   BTP, and its sister site, the Philadelphia Technology Park, offer regional versions of what providers like Equinix and CoreSite offer in the big data center markets, allowing local businesses to cross-connect and colocate without having to reach Northern New Jersey or Northern Virginia.  By bringing in XO and the same selection of carriers available at an Equinix site, albeit without the massive peering exchange, these data centers offer regional Fortune 500 businesses, hospitals, universities, and other local businesses a comparable service to what financial traders and major websites get at Equinix sites in the larger markets.

In September, Lisa and I visited the Philadelphia Technology Park, which is located within the Philadelphia Navy Yard.  More information on that site, as well as the one in Baltimore, is available on each facility's website.

Wednesday, December 22, 2010

Everything Saves Energy Costs

By David Gross

It's always amusing to see what techniques are used to sell products in this industry.  For many years, the made-up "TCO" metric, developed in sales and marketing, not finance, has made its way into all kinds of places.  The term "ROI" has also been used and abused, with very little mention of the metric that really matters to any capital expenditure, which is IRR.  When technology, sales, and financial measurement have met, the results haven't been pretty.

The latest financial tag being attached to products is saving energy costs.  And I'm not just talking about new CRACs (Computer Room A/C units), UPS systems, or PDUs.  Cabling, networks, really anything that is physically near a data center can be sold as a device to cut your energy bill.  As I pointed out a few weeks ago, there are some tremendously energy efficient network products which make little sense to deploy unless you want to re-design your network.  A Voltaire 4036 InfiniBand switch, for example, has a nameplate capacity of 0.18 Watts per Gbps, less than a tenth of a typical Ethernet switch.  The only problem is that deploying an InfiniBand cluster doesn't make financial or operational sense for many data centers.
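The nameplate math is easy to check. Here's a rough sketch, where only the 0.18 W/Gbps figure comes from the Voltaire spec; the 2.0 W/Gbps Ethernet comparison, the terabit of traffic, and the $0.10/kWh power price are my own assumptions:

```python
# Only the 0.18 W/Gbps nameplate figure comes from the Voltaire spec;
# the 2.0 W/Gbps Ethernet figure, the terabit of capacity, and the
# $0.10/kWh power price are assumptions for illustration.

def annual_power_cost(watts_per_gbps, gbps, price_per_kwh=0.10):
    """Electricity cost of running the port capacity for a full year."""
    watts = watts_per_gbps * gbps
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

print(annual_power_cost(0.18, 1000))   # InfiniBand-class: ~$158/yr per Tbps
print(annual_power_cost(2.0, 1000))    # typical Ethernet: ~$1,752/yr per Tbps
```

Even at ten times the efficiency, the gap at these assumptions is roughly $1,600 a year per terabit of capacity, which is real money but rarely enough to justify re-designing a network around a different fabric.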

My favorite recent example comes from a Processor article, where a Cisco exec claims you should deploy Fibre Channel-over-Ethernet because it reduces energy costs 30%.  Yes, deploy a network that's likely to degrade performance of your SAN and LAN, and increase the capital costs of both by forcing you to buy expensive switches and CNAs (Converged Network Adapters).  This made me laugh, because it was EXACTLY the argument used for the failed "God Boxes" of the early 2000s.  Buy one big monster instead of multiple smaller devices, and save power because it's one piece of hardware, not seven or eight.  The capital returns on doing this were atrocious, and the market performance of those products reflected it.  Moreover, the power savings are theoretical, not based on operating networks.

It's no secret that energy efficiency is important to any data center.   But like anything else, it's a trade-off.   You can have high response times, a 100 Meg network, and lightly loaded racks and use very little energy.  

The Processor article goes on to say that locating your data center at a renewable power source is a great way to reduce your carbon footprint.   This comes from a RackForce Networks exec.   Economically, this also cuts your variable power cost down to almost zero, especially with wind power, which has remarkably low O&M costs.   However, this does not mean everyone will follow Google, Microsoft, Yahoo, and Verizon to Lake Erie or the Columbia River Valley.  The trade-off is that you also have to put more capital into fiber and network than you do in Santa Clara or Ashburn.   Not to mention the building itself.   For this reason, it makes little sense to talk about energy savings generically, but rather to determine how the trade-offs change when you go from Equinix or DLR to your own building, and vice versa.

The 10X10 MSA: Niche, Distraction or the Right Answer? (Continued)

By Lisa Huff

While Vipul has a point that this new MSA is probably a distraction, it is difficult to deny that there is a market for cost-effective devices with optical reaches between 100m and 10km. In fact, 100m to 300m is the market that multi-mode fiber has served so well for the last 20 years. And 300m to 2km has been a niche for lower-cost 1310nm single mode products like 1000BASE-LX. So I have a slightly different opinion about this 10x10 MSA and whether it's a niche, distraction or the right answer.

In a recent article written on Optical Reflection, Pauline Rigby quotes Google’s senior network architect, Bikash Koley. About 100GBASE-SR10, he says 100m isn’t long enough for Google – that it won’t even cover room-to-room connections and that “ribbon fibres are hard to deploy, hard to manage, hard to terminate and hard to connect. We don’t like them.” There is an answer for this ribbon-fiber problem – don’t use it. There are many optical fiber manufacturers that now provide round multi-fiber cables that are only “ribbonized” at the ends for use with the 12-position MPO connector and are much easier to install – Berk-Tek, A Nexans Company, AFL and even Corning have released products that address this concern. But, the 100m optical reach is another matter.

I have to agree with Google about one other thing – 4x25G QSFP+ solutions are at least four years away from reality (and I would say probably even longer). This solution will eventually have the low cost, low power and high density Google requires, but not quickly enough. I think something needs to be done to address Google's and others' requirements between 300m and 2km in the short term, but I also believe that it needs to be standardized. There is no IEEE variant that would currently cover a 10x10G single mode device. However, there is an effort currently going on in the IEEE for 40G over SMF up to 2km. Perhaps the members of the MSA should look to work with this group to expand its scope, or start a new related project to cover 100G for 2km as well. I know this was thrown out of the IEEE before, but so were 1000BASE-T and 10GBASE-T initially.

So what I'm saying is that the market is more than a niche - hundreds of millions of dollars of LOMF sales at 1G and 10G would attest to that. And it's more than a distraction because there is a need. But I don't think it's entirely the right answer without an IEEE variant to back it up.

Let us know what you think.

Tuesday, December 21, 2010

CoreSite Declares Dividend, Yield Just Under 4%

by David Gross

CoreSite declared a dividend of 13 cents this week, giving the company an annualized yield just under 4% based on today's closing price of $13.60.  The stock has risen nearly ten percent since yesterday morning when the dividend was announced.   Nonetheless, it remains below the $16 level at which it IPO'd a couple months ago. 
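For anyone checking the math, the annualization behind the "just under 4%" is the standard one, assuming the 13-cent declaration is a quarterly rate held for four quarters:

```python
# Assumes the 13-cent declaration is a quarterly rate held for four quarters.
quarterly_dividend = 0.13
price = 13.60

annual_yield = quarterly_dividend * 4 / price
print(f"{annual_yield:.2%}")   # 3.82% -- "just under 4%"
```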

Overall, the last few weeks have been rough for data center REITs.  In addition to CoreSite struggling to get back to its IPO price, Digital Realty is down over 4% over the last month, and DuPont Fabros is down over 7%.   DLR's yield is up to 4.33% as a result of its weak performance during autumn.    It began the season over $61, and is down to $48.94 as we head into winter.

The 10X10 MSA: Niche, Distraction or the Right Answer?

By Vipul Bhatt, Guest Blogger

{For today’s blog, our guest author is Vipul Bhatt. Lisa has known Vipul for several years, since when he was the Director of High Speed Optical Subsystems at Finisar. He has served as the Chair of Optical PMD Subgroup of IEEE 802.3ah Ethernet in the First Mile (EFM), and the Chair of Equalization Ad Hoc of IEEE 802.3ae 10G Ethernet. He can be reached at}  

If you are interested in guest blogging here, please contact us at mail at

Last week, Google, JDSU, Brocade and Santur Corp announced the 10X10 Multi-Source Agreement (MSA) to establish sources of 100G transceivers. It will have 10 optical lanes of 10G each. Their focus is on using single mode fiber to achieve a link length of up to 2 km. The key idea is that a transceiver based on 10 lanes of 10G will have lower power consumption and cost because it doesn’t need the 10:4 gearbox and 25G components. But is this a good idea? What is the tradeoff? Based on my conversations with colleagues in the industry, it seems there are three different opinions emerging about how this will play out. I will label them as niche, distraction, or the right answer. Here is a paraphrasing of those three opinions.

It’s a niche: It’s a solution optimized for giant data centers – we’re talking about a minority of data centers (a) that are [already] rich in single mode fiber, (b) where the 100-meter reach of multi-mode 100GBASE-SR10 is inadequate, and (c) where the need for enormous bandwidth is so urgent that the density of 10G ports is not enough, and 100G ports can be consumed in respectable quantities in 2011.

It’s a distraction: Why create another MSA that is less comprehensive in scope than CFP, when the CFP has sufficient support and momentum already? Ethernet addresses various needs – large campuses, metro links, etc. – with specifications like the LR4 that need to support link lengths of well beyond 2 km over one pair of fiber. We [do] need an MSA that implements LR4, and the SR10 meets the needs of a vast majority of data centers, so why not go with CFP that can implement both LR4 and SR10? As for reducing power consumption and cost, the CFP folks are already working on it. And it’s not like we don’t have time – the 10G volume curve hasn’t peaked yet, and may not even peak in 2011. Question: What is the surest way to slow down the decisions of Ethernet switch vendors? Answer: Have one MSA too many.

It’s the right answer: What is the point of having a standard if we can’t implement it for two years? The CFP just isn’t at the right price-performance point today. The 10X10 MSA can be the “here and now” solution because it will be built with 10G components that have already traversed the experience curve. It can be built with power, density and cost figures that will excite the switch vendors, which may accelerate the adoption of 100G Ethernet, not distract it. As for 1-pair vs. 10-pairs of fiber, the first swelling of 100G demand will be in data centers where it’s easier to lay more fiber, if there isn’t plenty installed already. The 2-km length is sufficient to serve small campuses and large urban buildings as well.

Okay, so what do I think? I think the distraction argument is the most persuasive. An implementation that is neither SR10-compliant nor LR4-compliant is going to have a tough time winning the commitment of Ethernet switch vendors, even if it’s cheaper and cooler than the CFP in the short term.

Friday, December 17, 2010

Google Moving Out, Small Businesses Moving In

By David Gross

David Chernicoff over at ZDNet has a good article out on data center planning, where he notes that many of the small to mid-size businesses he's spoken to are planning to outsource some of their operations.   This is similar to the experience Lisa and I have had talking to data center managers who run internal centers, and are hitting capacity limits.   It also is an important point for investors to consider, many of whom are still fretting about the data center services industry with Google, Facebook, and other brand name tenants investing so heavily in their own buildings.

One of the factors to consider with this developing market segment is that these small businesses are not going to be buying a powered base building sort of service, nor are they likely to hit up Equinix for a few cabinets.  More realistically, they'll go to IBM, Horizon Data Centers, a hosting provider, or even someone like Rackspace, and start handing over applications slowly.   Additionally, connectivity is a major concern once these small businesses move beyond simple e-mail outsourcing, and a data center that has dedicated links to other facilities closer to the customer will allow that customer to cross-connect closer to the office, and avoid high dedicated circuit costs from a telco.

Economically, an internal data center for Google, Apple, or Facebook produces a financial return by turning an operating cost for a building lease into a capital cost, while an outsourcing arrangement for a small business turns a capital cost for servers into an operating cost.   As a result, the heaviest users are hitting a point where outsourcing makes less sense, while the lightest users are hitting a point where outsourcing makes more sense.   The result is that the public data center of the future will have a tenant roster that looks less like what you might find in an office building in Santa Clara, and more like what you'd see in a typical suburban office park.

Thursday, December 16, 2010

AboveNet Expanding Services at Data Centers

By David Gross

For years, facility-based CLECs have struggled to fill many of the optical links they've run to corporate office buildings.   With 5-10 tenants in some locations, it can be a struggle for a provider to generate enough revenue to get a good return on the capital invested in the fiber lateral that hits the building.    The data center has provided a great opportunity to overcome this challenge by offering so many corporate customers in one physical location.  And few providers have seized this opportunity as well as AboveNet has.   This is one factor behind the company's 16% net margins - actual profit, not EBITDA.    This is the highest I've ever seen for a bandwidth provider.

Earlier this week, AboveNet announced a new sales initiative to provide optical connectivity services at over 400 data centers across the country.   Its footprint follows many of the major public data center markets, including DC, New York, and Silicon Valley.   A more detailed map of the company's data center POPs is available here.

Wednesday, December 15, 2010

DAC Report

By David Gross

We're happy to announce that our latest report, Direct Attach Copper Cable Assemblies for 10, 40, and 100 Gigabit Networks, is now available.   We've posted a table of contents on the "DAC Report" page if you are interested in learning more.

Tuesday, December 14, 2010

Should You Increase CRAC Set Points to Save Energy Costs?

By David Gross

Energy management for data centers has been lighting up the press wire lately.  The fundamental economic premise behind most of the stories is that by monitoring temperature, air flow, and humidity more closely and in more places, a data center will get a great financial return by reducing energy costs.  But I'm finding that some vendor presentations pitch the savings at a very generic level, and while they might have a good story to tell, the suppliers need more detailed financial analysis, and more sensitivity analysis in their financial estimates, especially to highlight how the paybacks vary at different power densities.

Recently, consulting firm Data Center Resources LLC put out a press release claiming that by increasing CRAC (Computer Room Air Conditioner) set points, a data center could get a "six month" ROI on its investment in sensors, aisle containment systems, and airstrips that augment existing blanking panels.  Of course, there is no such thing as a six month ROI, but I'll grant them the point that they really mean a six month payback period.  However, as I've said many times, ROI is a meaningless metric; instead, data center managers should be using IRR, and incorporating the time value of money into all such calculations.
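To make the distinction concrete, here's a sketch with entirely made-up numbers (the $50,000 project cost and $100,000 of annual savings are mine, not the press release's): a project with a six month payback can have wildly different IRRs depending on how long the savings persist, which is exactly what the payback figure hides. The `irr` helper is a simple bisection, not any vendor's model:

```python
# Illustrative numbers only -- the $50,000 project cost and $100,000/yr
# savings are made up, not from the press release.

def payback_months(cost, annual_savings):
    """The 'six month ROI' figure is really this."""
    return 12 * cost / annual_savings

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection; cash_flows[0] is year zero."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(payback_months(50_000, 100_000))   # 6.0 -- the payback, in months
# Same six-month payback, very different IRRs:
print(irr([-50_000, 100_000]))           # savings stop after one year
print(irr([-50_000] + [100_000] * 3))    # savings persist for three years
```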

Once these new systems are installed, Data Center Resources argues, you can start increasing the temperature set point on the CRACs (the firm did not mention anything about humidity) and reduce energy costs.  It claims each degree increase in the CRAC set point cuts 4-5% from annual energy expenses.  But given the wide discrepancies in data center power densities, the actual savings are going to vary dramatically, and before estimating an IRR, a data center manager would need to perform a sensitivity analysis based on growing server, power, and cooling capacities at different rates; otherwise this is all just a generic argument for hot aisle/cold aisle containment.
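A minimal sensitivity sketch shows why the per-degree claim can't be priced generically. Every figure below is my own assumption rather than anything from the press release: the rack counts, densities, $0.10/kWh rate, and the PUE-style cooling ratio.

```python
# Every figure here is a hypothetical assumption, not from the press release:
# rack counts, densities, the $0.10/kWh rate, and the cooling ratio.

def annual_savings(baseline_cooling_kwh, degrees_raised,
                   pct_per_degree=0.04, price_per_kwh=0.10):
    """Dollar savings from raising the set point, per the 4%/degree claim."""
    saved_kwh = baseline_cooling_kwh * pct_per_degree * degrees_raised
    return saved_kwh * price_per_kwh

racks = 200
for kw_per_rack in (2, 6, 12):   # low, mid, and high density rooms
    # assume cooling draws half a watt for every IT watt (PUE around 1.5)
    cooling_kwh = kw_per_rack * racks * 0.5 * 8760
    print(f"{kw_per_rack} kW/rack: ${annual_savings(cooling_kwh, 3):,.0f}/yr")
```

At these assumptions, the same three-degree change is worth about $21,000 a year in a 2 kW/rack room and about $126,000 in a 12 kW/rack room. Without a density assumption, "4-5% per degree" isn't a financial number yet.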

Monday, December 13, 2010

When to Upgrade to 40 Gigabit?

By David Gross

ZDNet has a good article out on the decision to upgrade to 40G.   But for all the talk and excitement over the new standard, the reality is that line rate is not as important a measure of network capacity as it used to be, especially with multi-gigabit port prices having more to do with transceiver reach, cabling options, and port densities than framing protocol.    10GBASE-SR, the 850nm 10 Gigabit Ethernet multimode standard, for example, has a lot more in common with 40GBASE-SR4 than it does with 10GBASE-EW, the 1550nm singlemode WAN PHY standard.   Moreover, a 40 Gigabit port can cost as little as $350, if it's configured on a high-density Voltaire InfiniBand switch, but over $600,000 if it's configured to run Packet-over-SONET on a Cisco CRS.

Within the data center, which line rate you choose is increasingly falling behind which transceiver you choose, what type of cabling, whether to aggregate at end-of-row or top-of-rack, and so forth.   Moreover, as I wrote a few weeks ago, the most power efficient data center network is often not the most capital efficient.    So rather than considering when 40 Gigabit Ethernet upgrades will occur, I think it's more important to monitor what's happening with average link lengths, the ratio of installed singlemode/multimode/copper ports, cross-connect densities in public data centers, the rate of transition to IPv6 peering, which can require power-hungry TCAMs within core routers, and especially whether price ratios among 850, 1310, and 1550 nanometer transceivers are growing or shrinking.   And rather than wondering when 40G will achieve a 3x price ratio to 10G, it's equally important to consider whether 10G 1550nm transceivers will ever fall below 10x the price of 10G 850nm transceivers.

Line rate used to be far more important within this discussion.   When 802.3z (the optical GigE standard) came out in 1998, what mattered to data centers wasn't that it was 3x the price of Fast Ethernet, but that it was cheaper than 100 Meg FDDI, which was the leading networking standard for the first public web hosting centers.   The wholesale and rapid replacement of FDDI with GigE was largely a result of line rate - more bits for less money - and an economic result of the low price of high volume Ethernet framers.    But over a gigabit, production volume of framers gives way to the technical challenge of clocking with sub-nanosecond bit intervals, and silicon vendors have had the same challenges with signaling and jitter with Ethernet that they've had with InfiniBand and Fibre Channel.   This is a major issue economically, because widely-used LVDS on-chip signaling is not Ethernet-specific, and therefore does not allow Ethernet to create the same price/performance gains over other framing protocols that it did at a gigabit and below.

Another factor to look at is that all the 40 Gigabit protocols showing any signs of hope within the data center  run at serial rates of 10 Gigabit, whether InfiniBand or Ethernet-framed, because no one has come up with a way to run more than 10 Gigabit on a serial lane economically, even though it's been technically possible and commercially available for years on OC-768 Packet-over-SONET line cards.   In addition to the high costs of dispersion compensation on longer reach optical links, transmitting a bit less than 1/10th of a nanosecond after the last bit was sent has proven to be a major economic challenge for silicon developers, and likely will be for years.
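The clocking numbers behind that claim are simple to derive: a serial lane's bit interval is just the inverse of its rate.

```python
# Bit interval (the time between successive bits) at common serial lane rates.
for gbps in (1, 10, 40):
    interval_ps = 1e12 / (gbps * 1e9)   # picoseconds per bit
    print(f"{gbps:>2}G serial: {interval_ps:.0f} ps between bits")
```

A 10G lane clocks a bit every 100 ps, the "1/10th of a nanosecond" above, while a 40G serial lane would have to do it every 25 ps.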

So as we look beyond 10 Gigabit in the data center, we also need to advance the public discussion beyond the very late-90s/early 2000s emphasis on line rates, and look further into how other factors are now playing a large role in achieving the best cost per bit in data center networks.

Friday, December 10, 2010

Is SFP+ the Optical RJ-45?

By Lisa Huff

For those of you who have been in the networking industry for what seems to be 100 years, but is really about 25, you know that the one "connector" that hasn't changed much is the RJ-45. While there have been improvements by adding compensation for the error that was made way back when AT&T developed the wiring pattern (splitting the pair, causing major crosstalk issues), the connector itself has remained intact. By contrast, optical connectors for datacom applications have changed several times – ST to SC to MT-RJ to LC. The industry finally seems to have settled on the LC, and perhaps on a transceiver form factor – the SFP+. The SFP was originally introduced at 1G, was used for 2G and 4G, and with slight improvements has become the SFP+, the dominant form factor now used for 10G. Well, it is in the process of getting some slight improvements again and promises to make it all the way to 32G. That's six generations of data rates – pretty impressive. But how?

The INCITS T11.2 Committee's Fibre Channel Physical Layer – 5 (FC-PI-5) standard was ratified in September. It specifies 16G Fibre Channel. Meanwhile, the top transceiver manufacturers have been demonstrating pre-standard 16G SFP+ SW devices. But wait a minute – short-wavelength VCSELs were supposed to be very unstable when modulated at data rates above 10G, right? Well, it seems that at least Avago and Finisar have figured this out. New microcontrollers, and adding at least one clock and data recovery (CDR) device in the module to help clean up the signals, have proven to be key. Both vendors believe it is possible to do this without adding too much cost to the modules. In fact, both also think that by adding electronic dispersion compensation (EDC) they can push the SFP+ to 32G as well - the next step for Fibre Channel - with possible stops at 20G and 25G along the way to cover developments in Ethernet and InfiniBand.

And what about long wavelength devices? It has always been a challenge fitting the components needed to drive long distances into such a small package mainly because the lasers need to be cooled. But not anymore – Opnext has figured it out. In fact, it was showing its 10km 16G FC SFP+ devices long before any of the SW ones were out (March 2010). Of course, this isn't surprising considering Opnext has already figured out 100G long haul as well.

These developments are important to datacom optical networking for a few reasons:  

  1. They show that Fibre Channel is not dead.
  2. The optical connector and form factor "wars" seem to have subsided, so transceiver manufacturers and optical components vendors can focus on cooperation instead of positioning.
  3. They will impact the path other networking technologies are taking – Ethernet and InfiniBand are using parallel optics for speeds above 10G – will they switch back to serial?
Stay tuned for more on these points later.

F5 Added to the S&P 500

By David Gross

F5, which has nearly tripled over the last 12 months, will be added to the S&P 500 after the market close December 17th.   The stock was up over 5% after hours yesterday on this news, topping $145 a share.

Netflix, Cablevision, and Newfield Exploration will be joining F5 as new members of the index.   Office Depot, The New York Times, Eastman Kodak, and King Pharmaceuticals will be moving out.

Thursday, December 9, 2010

Savvis Reaffirms Guidance

By David Gross

At its investor day yesterday, Savvis reaffirmed its annual guidance of $1.03 billion to $1.06 billion of revenue, and Adjusted EBITDA of $265 million to $290 million.   Wall Street was expecting $1.05 billion and $270 million.  

The stock was one of the best performers among data center and hosting providers between July and the end of October, and has nearly doubled over the last five months.   But it has fallen $1.54 over the last two days to $26.26 on heavy volume, after it was announced that one of its largest shareholders, Welsh, Carson, Anderson & Stowe, had cut its stake in the company by a third, to 10.3 million shares.

Savvis is one of those companies where I don't think EBITDA tells you a good story about its prospects.   It is still net income and free cash flow negative due to high capex requirements.   And its capex produces less revenue per dollar invested than rival Rackspace's - Rackspace's Revenue/PP&E is approximately 50% higher, because it does not spread itself over such a wide product line.   Savvis did the right thing selling its CDN to Level 3.   At some point, it will need to re-examine why it's still in the bandwidth business.

Monday, December 6, 2010

Capital Usage Effectiveness vs. Power Usage Effectiveness

By David Gross

The Green Grid recently proposed that data center managers add Carbon Usage Effectiveness and Water Usage Effectiveness to the already widely used Power Usage Effectiveness metric.    While I've never seen a data center that tracks too many operating metrics, I've seen plenty that lack appropriate financial measurements.

While I think some good could come out of additional energy and environmental metrics, including possible innovations in cooling architectures, they cannot overwhelm the metrics that matter to shareholders, many of which are not tracked.   It's rare to find a data center operator who can't tell you the PUE of the building, by season, but it's also rare to find one who can tell you the IRR on the capital invested in the place.  Haphazard upgrades are sometimes required operationally, but as an investor in a building, whether a self-administered or leased facility, I'd want to know what the financial returns are on that capital investment, what the alternatives are to these investments, how much of the capital could be substituted by operating expenses, and what the return on doing such a thing would be.    But rarely can data center managers discuss these numbers the way they can their PUEs.

The problem with not tracking IRR is that the number of options to build or buy continues to expand in this industry, from the facility itself, to power, to cooling, to telecom capacity.   How do you know you're making the best decision if you don't know the returns - and I don't mean the costs, but the financial returns - of shifting an operating cost to a capital outlay, and vice versa?   And what about the timing of expansions?  A time-sensitive financial measure like IRR, not ROI or TCO, is needed to handle this.
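Here's a sketch of what that build-or-buy comparison might look like, with entirely invented cash flows: discount a build (capex up front, lower opex) and a lease (no capex, higher opex) at your cost of capital and see where the answer flips.

```python
# All cash flows are invented for illustration; the shape is what matters:
# building is capex-heavy with lower opex, leasing is the reverse.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is year zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

years = 10
build = [-30_000_000] + [-2_000_000] * years   # capex up front, low opex
lease = [0] + [-6_000_000] * years             # no capex, higher opex

for rate in (0.05, 0.08, 0.12):
    better = "build" if npv(rate, build) > npv(rate, lease) else "lease"
    print(f"cost of capital {rate:.0%}: {better} wins")
```

With these particular numbers, building wins below roughly a 6% cost of capital and leasing wins above it. The point is that the answer moves with the discount rate, which ROI and TCO ignore entirely.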

It's time for the industry to start tracking a new CUE, not Carbon Usage Effectiveness, but Capital Usage Effectiveness.   Vendor TCO models, which generally originate in their marketing departments, not internal capital planning, are a poor substitute for doing this, in fact they're negative because they're arbitrary and typically exclude the opportunity costs of alternative uses of capital, as well as the time value of money.   Moreover, good capital planning can assist with good environmental planning, by eliminating unnecessary costs and capital outlays.  But it won't start happening until data center managers start tracking their capital output as closely as their environmental output.

Telx Adds 12,500 Square Feet Outside of Digital Realty Buildings

By David Gross

Wall Street has been paying close attention to the relationship between GI Partners, Digital Realty (DLR), and Telx.  An investor in both companies, GI Partners has enabled a close relationship between DLR and Telx, including a deal where DLR has granted Telx the exclusive right to operate the Meet Me Rooms in ten of its facilities.  Telx operates in five additional buildings, and today announced it has expanded in two of them - 8435 Stemmons Freeway in Dallas, and 100 Delawanna Avenue in Northern New Jersey.

8435 North Stemmons Freeway is an office building in which Telx already had leased a floor.   It sits just to the west of Love Field, and is four miles north of the massive Infomart building at 1950 North Stemmons Freeway, a major carrier hotel which serves as the Dallas equivalent to 111 8th Avenue or 60 Hudson.

The 100 Delawanna Avenue facility is located in Clifton, NJ, about three miles west of the Meadowlands Sports Complex, and a ten-mile straight shot down Route 3 to the Lincoln Tunnel.   Adjacent to the New Jersey entrance to the tunnel is the 310,000 square foot 300 Boulevard East facility, owned by DLR and leased by Telx as well as many financial traders.  (300 Boulevard East sits right next to the loop by the NJ entrance to the tunnel, featured in the intro to The Sopranos.)  100 Delawanna provides connectivity into that building as well as the popular Manhattan carrier hotels, and in many respects is a backup site and additional POP for customers in Weehawken.    Equinix has a competing site, NY4, in Secaucus, which sits just across the New Jersey Turnpike from 300 Boulevard East.

Telx has been in registration since March, but unfounded concerns about Equinix, as well as the mediocre aftermarket performance of CoreSite, have kept it from coming out.   The company reported $95 million in revenue for the first nine months of 2010, up over 30% from the prior year, with operating margins rising from -5% to 14%, and EBITDA margins increasing to 33%.

Friday, December 3, 2010

F5 at a 52-Week High

By David Gross

In spite of a flat-to-down market today, F5 hit a 52-week high of $141.58 this morning, and has nearly tripled over the last 12 months.   It's also up over 50% from Goldman's peculiar downgrade of the stock in October.  While F5 is a great company that has done an excellent job staying focused on the L4-7 market, the stock is getting ahead of itself.

Since 2003, the company's top line has grown 26% annually, but its current y/y growth rate of 45% is near its post-dot-com-crash peak of 47%.   It's no secret on Wall Street or in the data center industry that there's a lot of room for both load balancers and WAN optimization devices to keep growing, but they're not going to keep growing at close to 50% per year, which is what the current enterprise value/earnings ratio of 55 suggests.    That said, this is purely a short-term risk, not unlike the spring and summer of 2006, when the stock lost nearly half of its value as its revenue growth rate decelerated from the high 40s into the low 30s.  Investors who held on through that volatility are being rewarded now, but anyone planning to benefit from future growth needs to be ready for a 2006-like drop, with current revenue growth running so far above historical levels.
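As a rough sanity check on that multiple, the old PEG rule of thumb (a heuristic, not a formal valuation model, and not something the note above relies on) says a fair earnings multiple roughly equals the growth rate in percent. It also shows what pure multiple compression would do to the stock if growth decelerated the way it did in 2006, even with earnings flat:

```python
def implied_growth_pct(multiple, peg=1.0):
    """Growth rate (in percent) implied by an earnings multiple at a given PEG."""
    return multiple / peg

def multiple_compression(old_multiple, new_multiple):
    """Price change if the multiple re-rates while earnings stay unchanged."""
    return new_multiple / old_multiple - 1

# An EV/earnings of 55 at a PEG of 1 "prices in" roughly 55% growth...
print(implied_growth_pct(55))                      # 55.0
# ...and a re-rating to 30x, matching growth in the low 30s, implies
# a drawdown of about 45%, echoing the 2006 decline.
print(f"{multiple_compression(55, 30):.0%}")       # -45%
```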

Thursday, December 2, 2010

Windstream Completes Acquisition of Hosted Solutions

By David Gross

Windstream announced yesterday that it had closed its $310 million acquisition of Hosted Solutions.  Windstream paid cash for the company, and financed the purchase through its own cash and a revolving line of credit.   Like Cincinnati Bell's purchase of CyrusOne, this deal marks a departure from the standard video and bandwidth offerings typical of an independent local phone company.

While CyrusOne is focused on big Texas markets, Hosted Solutions is focused on three markets that rank as tier 2 or 3 within the data center industry: Charlotte, Raleigh/Durham, and Boston.   The deal brings an additional 68,000 square feet of data center space and 600 new customers to Windstream.   Not unlike Savvis, Hosted Solutions offers a mix of colocation and managed services.

Wednesday, December 1, 2010

Data Center Stocks Fall 0.6 Percent in November

By David Gross

After bouncing around in October following the Equinix warning, data center stocks stabilized in November, with our Services Index falling a modest 0.61% to end the month at 97.88.    In October, the index was down 1.52%, though this included recovering most of an 11% drop incurred October 6th following the infamous warning.  The index was launched October 1 with a value of 100.

The big losers for the month were the REITs, with Digital Realty, DuPont Fabros, and CoreSite all down over 10%.   The big winners were Rackspace and Terremark, which both reported y/y top line growth over 20%.   In the case of the REITs, one factor holding them back is their low yields.   DuPont Fabros only recently started paying a dividend, CoreSite has not begun paying one, and Digital Realty's current yield of 4.04 percent is lower than the current yield on 30-year Treasuries.   Additionally, the "bond bull" is losing steam, with TrimTabs Research reporting that bond funds and bond ETFs recently ended a streak of 99 consecutive weeks of cash inflows.   If bond yields do continue to rise as a result, that would put more pressure on the REITs to increase their dividend yields, and could further pressure their stock prices.   In the last month, 10-year Treasury yields rose 19 basis points, from 2.60 to 2.79, while 30-year yields rose 13 basis points to 4.11.
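For readers curious how an index like this moves, here's a minimal sketch of a cap-weighted monthly change computed from a few constituents in the table below. The cap-weighted scheme is my assumption for illustration; the article doesn't state how the Services Index is actually constructed.

```python
constituents = [
    # (ticker, market cap in $, monthly % change) -- from the table below
    ("EQIX", 3_537_784_000, -7.88),
    ("DLR",  4_584_996_000, -12.07),
    ("RAX",  3_645_374_900, 16.87),
    ("AKAM", 9_478_225_900, 1.01),
]

# Weight each constituent's monthly change by its share of total market cap.
total_cap = sum(cap for _, cap, _ in constituents)
index_change = sum(cap / total_cap * chg for _, cap, chg in constituents)
print(f"Cap-weighted change of this subset: {index_change:+.2f}%")
```

For this four-stock subset the cap-weighted change works out to roughly -0.57%, in the same neighborhood as the full index's -0.61% for the month.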

Outside of the REITs, Savvis started to slow down, rising just over 4% for the month after rising 13% in October and 43% in the third quarter.    The company, which is not profitable, is trading at about 1.47x annualized revenue on 13% y/y top line growth, hardly enough to sustain such a big run-up.  Meanwhile, Wall Street has completely fallen in love with Rackspace, which is now trading at 78x annualized earnings (74x net of its cash) on 23% y/y top line growth.   It's a remarkably well-run hosting provider, but the stock is clearly ahead of itself.  While the company has grown its bottom line 55% in the last year, this is primarily due to reductions in its SG&A/revenue ratio, not its heavily hyped cloud services or the other trendy topics that get Wall Street excited.

Services Index

Company Ticker Mkt Cap Nov 30 Close Nov 1 Open Monthly Chg
Equinix EQIX $3,537,784,000 77.60 84.24 -7.88%
Digital Realty DLR $4,584,996,000 52.52 59.73 -12.07%
DuPont Fabros DFT $1,338,005,700 22.59 25.10 -10.00%
Rackspace RAX $3,645,374,900 29.17 24.96 16.87%
Savvis SVVS $1,388,181,200 25.13 24.01 4.66%
Level 3 LVLT $1,660,000,000 1.00 0.97 3.09%
Akamai AKAM $9,478,225,900 52.19 51.67 1.01%
Navisite NAVI $133,387,200 3.54 3.83 -7.57%
Terremark TMRK $786,548,700 11.97 9.99 19.82%
Limelight LLNW $698,356,000 7.10 6.79 4.57%
AboveNet ABVT $1,477,479,000 58.70 56.89 3.18%
CoreSite COR $220,376,800 12.88 15.06 -14.48%
Internap INAP $271,489,300 5.23 5.00 4.60%

Index Value October 1: 100.00
Index Value November 1: 98.48
Index Value December 1 Open: 97.88

Jefferies Initiates Coverage of Digital Realty with a $67 Price Target

By David Gross

Wall Street loves vague MBA-speak, and Jefferies continued that tradition yesterday, initiating coverage on DLR with a price target of $67, about 30% higher than the stock's current level in the low 50s.

In its research note, Jefferies stated that  "DLR's dominant market position in the wholesale data center space gives it a major competitive advantage vs peers in regards to taking full advantage of strong fundamentals in the datacenter real estate market, which is poised to continue to experience positive demand/supply fundamentals for the next five years."

So here's the edge they have: they've figured out that DLR has high market share.  Amazing insight.  Moreover, they've determined that demand/supply conditions will be strong for the next five years.    No wait, they didn't say that, they said conditions would be "positive".  Either way, I doubt they have driven out this way in Northern Virginia, where there are cranes all over the place preparing for a new wave of data center clients.