By David Gross
An odd statement that I hear a lot is that "power is the biggest cost in a data center". This idea has been repeated so frequently that it's become an assumption for some industry followers, especially in the press. While power is a major cost, it's often not the largest, and data center tenants need to distinguish accounting costs from cash outlays to get the best returns on their investments in power and space.
Part of the problem with some approaches to evaluating data center costs is the tendency to force a number next to each category of expense: one number for servers, another for storage, one for power, and so forth. This distorts the true economic picture of a data center operation. Because recurring personnel costs are extremely low, every data center owner or lessee has the choice of buying or renting just about every other aspect of the operation. But if you try to boil everything down to an amortized cost, the true economics of the operation get lost in that forced calculation.
Actual Operating Costs vs. Accounting Operating Costs
Power can be self-generated with wind, solar, or backup generators, but for the most part it's paid for monthly, either by the amp or by the kWh. In a typical colo contract, it comes to about 25-30% of total charges. Among public companies, CoreSite reports that about 25% of its revenue comes from power.
In addition to power, people, and rent (in the case of a colo/REIT customer), the other major operating cost is telecom circuits, which can become significant for a heavily cross-connected customer, a customer buying extended cross-connects, or one needing 100 Megabits per second or more out of the facility. Equinix gets 20% of its North American revenue from telecom services, and that doesn't include what its customers pay third-party transit providers. While telecom costs can be much lower than power expenses, they are also far more variable, depending on customer requirements.
But once you get past power, rent, and telecom services, there are all the servers, storage arrays, and network equipment boxes to buy. In one scenario, I estimated the loaded capital cost per server for all of this equipment, plus software licenses, at about $8,000, or around $25,000 per square foot. The temptation here is to amortize this over 36 or 60 months and call it something-hundred per foot per month. The problem with doing so is that you can lease the equipment, finance it through low-cost debt or high-cost equity, and cluster purchases in a handful of months - making a straight-line depreciation figure a financial accounting abstraction that has little to do with your economic reality. The point is that these are fixed costs with many financing and purchase options, and throwing one number out there to cover them buries the economics of owning (or leasing) these assets.
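To make that distortion concrete, here's a minimal sketch contrasting a straight-line monthly figure with the actual cash timing of a clustered purchase. The $8,000 loaded cost per server comes from the scenario above; the server count and purchase schedule are hypothetical.

```python
# Straight-line amortization vs. actual cash timing (illustrative figures).
capital_per_server = 8_000   # loaded cost per server, from the scenario above
servers = 100                # hypothetical deployment size
months = 36                  # hypothetical amortization window

total = capital_per_server * servers
straight_line = total / months  # the "forced" per-month accounting figure

# Actual cash can cluster: e.g., all purchases land in the first 3 months.
actual = [total / 3] * 3 + [0] * (months - 3)

print(f"Straight-line figure: ${straight_line:,.0f}/month for {months} months")
print(f"Actual outlays: ${actual[0]:,.0f}/month for 3 months, then $0")
```

Both schedules sum to the same total, but their economic value differs once you discount the cash flows, which is the point of the NPV approach below.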
What I recommend is not forcing a per-month number, but aggregating the cash outlays and then NPVing them at the corporate cost of capital, as well as at other interest rates, to determine the sensitivity to your discount rate. This is the only way to get a true picture of the economic costs. Leasehold improvements can be incorporated into this analysis as well. The objective should then be to minimize the NPV, not the amortized monthly cost, because no matter what you're paying for power, cross-connects, or bandwidth, your data center is an asset-heavy, not a people-heavy, operation.
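The calculation above can be sketched in a few lines. This is an illustration, not a model: the outlay schedule and the candidate discount rates are hypothetical, chosen only to show the sensitivity analysis.

```python
# Sketch: NPV of monthly cash outlays at several candidate discount rates.
# All dollar figures and rates below are hypothetical.

def npv(rate_annual, monthly_outlays):
    """Discount a list of monthly cash outlays back to present value."""
    monthly_rate = (1 + rate_annual) ** (1 / 12) - 1
    return sum(cash / (1 + monthly_rate) ** t
               for t, cash in enumerate(monthly_outlays))

# Hypothetical 36-month horizon: a clustered equipment purchase up front,
# then recurring power, rent, and telecom payments.
outlays = [500_000] + [20_000] * 35

# Run the NPV at the corporate cost of capital and nearby rates
# to see how sensitive the result is to the discount rate.
for rate in (0.06, 0.08, 0.10, 0.12):
    print(f"{rate:.0%}: ${npv(rate, outlays):,.0f}")
```

Comparing scenarios (buy vs. lease, clustered vs. spread-out purchases) then becomes a matter of building each one's outlay schedule and picking the lowest NPV.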