Wednesday, December 22, 2010

Everything Saves Energy Costs

By David Gross

It's always amusing to see what techniques are used to sell products in this industry. For many years, the made-up "TCO" metric, developed in sales and marketing, not finance, has made its way into all kinds of places. The term "ROI" has also been used and abused, with very little mention of the metric that really matters to any capital expenditure, which is IRR (internal rate of return). When technology, sales, and financial measurement have met, the results haven't been pretty.
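
To make the difference concrete, here's a quick sketch with made-up numbers: a hypothetical $100,000 switch upgrade that trims $30,000 a year off energy and maintenance for five years. The simple ROI figure of 50% sounds great in a sales deck; the IRR of roughly 15% is the number you can actually hold up against a hurdle rate.

```python
# Hypothetical, purely illustrative numbers: a $100,000 switch upgrade
# that saves $30,000/year over a 5-year life. Simple ROI ignores the
# time value of money; IRR does not.

def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection: the rate where NPV hits zero."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

capex = -100_000
annual_savings = 30_000
years = 5
cash_flows = [capex] + [annual_savings] * years

simple_roi = (annual_savings * years + capex) / -capex
print(f"Simple ROI over {years} years: {simple_roi:.0%}")        # 50% -- looks great
print(f"IRR: {irr(cash_flows):.1%}")                             # ~15.2%
print(f"NPV at a 12% hurdle rate: ${npv(0.12, cash_flows):,.0f}")
```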

The latest financial tag being attached to products is saving on energy costs. And I'm not just talking about new CRACs (Computer Room A/C), UPS systems, or PDUs: cabling, networks, really anything that sits physically near a data center can be sold as a device to cut your energy bill. As I pointed out a few weeks ago, there are some tremendously energy-efficient network products that make little sense to deploy unless you want to re-design your network. A Voltaire 4036 InfiniBand switch, for example, has a nameplate rating of 0.18 Watts per Gbps, less than a tenth of a typical Ethernet switch. The only problem is that deploying an InfiniBand cluster doesn't make financial or operational sense for many data centers.
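
The watts-per-Gbps math itself is just nameplate power divided by aggregate throughput. The figures in the sketch below are illustrative placeholders, not vendor specs, but they show how a 40 Gbps-per-port fabric ends up an order of magnitude ahead of a 10 Gbps-per-port Ethernet box on this metric:

```python
# Back-of-the-envelope watts-per-gigabit comparison. Port counts, line
# rates, and power draws are illustrative assumptions, not vendor data.

def watts_per_gbps(nameplate_watts, ports, gbps_per_port):
    return nameplate_watts / (ports * gbps_per_port)

# Hypothetical 36-port 40G InfiniBand switch drawing ~260 W
print(f"InfiniBand: {watts_per_gbps(260, 36, 40):.2f} W/Gbps")   # ~0.18

# Hypothetical 48-port 10G Ethernet switch drawing ~1,000 W
print(f"Ethernet:   {watts_per_gbps(1000, 48, 10):.2f} W/Gbps")  # ~2.08
```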

My favorite recent example comes from a Processor article in which a Cisco exec claims you should deploy Fibre Channel over Ethernet because it reduces energy costs by 30%. Yes, deploy a network that's likely to degrade the performance of your SAN and your LAN, and increase the capital costs of both by forcing you to buy expensive switches and CNAs (Converged Network Adapters). This made me laugh, because it was EXACTLY the argument used for the failed "God Boxes" of the early 2000s: buy one big monster instead of multiple smaller devices, and save power because it's one piece of hardware, not seven or eight. The capital returns on doing this were atrocious, and the market performance of those products reflected it. Moreover, the power savings are theoretical, not measured on operating networks.
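
Even if you take the 30% figure at face value, a back-of-the-envelope check with made-up per-server numbers shows why it doesn't move the needle: network gear is a small slice of the power bill, and the capex premium for CNAs and converged switch ports swamps the savings.

```python
# Rough sanity check on the "FCoE saves 30% on energy" pitch, using
# assumed per-server figures, not measurements from a real network.

WATTS_SEPARATE = 40      # assumed draw per server: NIC + HBA + switch share
WATTS_CONVERGED = 28     # assumed 30% lower with a CNA and converged switch
PRICE_PER_KWH = 0.10     # $/kWh, illustrative
PUE = 1.8                # facility overhead multiplier
HOURS_PER_YEAR = 8760

def annual_energy_cost(watts):
    return watts / 1000 * HOURS_PER_YEAR * PUE * PRICE_PER_KWH

savings = annual_energy_cost(WATTS_SEPARATE) - annual_energy_cost(WATTS_CONVERGED)
print(f"Energy saved per server per year: ${savings:,.2f}")   # ~$19

# Compare to an assumed capex premium for a CNA plus converged switch port
CAPEX_PREMIUM = 800      # $/server, illustrative
print(f"Simple payback: {CAPEX_PREMIUM / savings:.0f} years")  # ~42 years
```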

It's no secret that energy efficiency is important to any data center. But like anything else, it's a trade-off. You can have high response times, a 100 Meg network, and lightly loaded racks, and use very little energy, but that's not a data center anyone wants to run.

The Processor article goes on to say, citing a RackForce Networks exec, that locating your data center at a renewable power source is a great way to reduce your carbon footprint. Economically, it also cuts your variable power cost down to almost zero, especially with wind power, which has remarkably low O&M costs. However, this does not mean everyone will follow Google, Microsoft, Yahoo, and Verizon to Lake Erie or the Columbia River Valley. The trade-off is that you also have to put more capital into fiber and network than you do in Santa Clara or Ashburn, not to mention the building itself. For this reason, it makes little sense to talk about energy savings generically; the real question is how the trade-offs change when you go from Equinix or DLR to your own building, and vice versa.
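
Here's a stylized version of that trade-off. Every number below is an assumption for illustration, not market data: a remote site with two-cent power but extra capital for long-haul fiber and a purpose-built shell, against a major-market colo with nine-cent power and the infrastructure already in place.

```python
# Stylized comparison of a remote, cheap-power site vs. a major-market colo.
# All figures are assumptions, not quotes from any real facility.

IT_LOAD_KW = 1000
HOURS = 8760
PUE = 1.5

def annual_power_cost(price_per_kwh):
    return IT_LOAD_KW * HOURS * PUE * price_per_kwh

# Remote site: near-zero variable power cost, but extra capital for
# long-haul fiber and a purpose-built shell, amortized over 10 years.
remote_power = annual_power_cost(0.02)
remote_extra_capex_per_year = (5_000_000 + 3_000_000) / 10   # fiber + building

# Established market (e.g. Santa Clara or Ashburn): pricier power,
# but the fiber and the facility are already there.
colo_power = annual_power_cost(0.09)

print(f"Remote: ${remote_power + remote_extra_capex_per_year:,.0f}/yr")  # ~$1.06M
print(f"Colo:   ${colo_power:,.0f}/yr")                                  # ~$1.18M
```

Shift the amortization period, the fiber bill, or the load factor and the answer flips, which is the point: the savings depend on the trade-offs, not on the power price alone.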
