
Wednesday, December 22, 2010

Everything Saves Energy Costs

By David Gross

It's always amusing to see what techniques are used to sell products in this industry. For many years, the made-up "TCO" metric, developed in sales and marketing, not finance, has made its way into all kinds of places. The term "ROI" has also been used and abused, with very little mention of the metric that really matters for any capital expenditure, which is IRR. When technology, sales, and financial measurement have met, the results haven't been pretty.
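
To make the distinction concrete, here is a minimal sketch with made-up numbers: two projects that spend and return exactly the same dollars, and therefore show the same simple "ROI," but have very different IRRs because of when the cash actually arrives.

```python
# Minimal sketch: two hypothetical projects with the same simple "ROI"
# but very different IRRs, because IRR accounts for when the cash arrives.
# All figures are illustrative assumptions, not vendor data.

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Find the discount rate where NPV crosses zero, using bisection."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Both spend $100k up front and return $150k in total: a "50% ROI" either way.
early_saver = [-100_000, 75_000, 75_000, 0, 0]   # savings arrive in years 1-2
late_saver  = [-100_000, 0, 0, 75_000, 75_000]   # savings arrive in years 3-4

print(f"Early saver IRR: {irr(early_saver):.1%}")  # ~31.9%
print(f"Late saver IRR:  {irr(late_saver):.1%}")   # ~12.3%
```

Any TCO or ROI figure that ignores that timing difference tells you nothing about which project deserves the capital.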

The latest financial tag being attached to products is saving energy costs. And I'm not just talking about new CRACs (Computer Room Air Conditioners), UPS systems, or PDUs; cabling, networks, really anything that is physically near a data center can be sold as a device to cut your energy bill. As I pointed out a few weeks ago, there are some tremendously energy-efficient network products which make little sense to deploy unless you want to re-design your network. A Voltaire 4036 InfiniBand switch, for example, has a nameplate capacity of 0.18 Watts per Gbps, less than a tenth of a typical Ethernet switch. The only problem is that deploying an InfiniBand cluster doesn't make financial or operational sense for many data centers.
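
For perspective, here is a back-of-the-envelope look at what that efficiency gap is worth in dollars. The 0.18 Watts per Gbps is the nameplate figure above; the 2 Watts per Gbps for Ethernet, the $0.10/kWh rate, and the PUE of 2.0 are illustrative assumptions of mine, not measured data.

```python
# Back-of-the-envelope: what 0.18 W/Gbps actually saves per year.
# The 0.18 W/Gbps figure is the nameplate number quoted above;
# the ~2 W/Gbps Ethernet figure, $0.10/kWh rate, and PUE are assumptions.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10          # assumed blended electricity rate, $/kWh
PUE = 2.0                     # assumed facility overhead multiplier

def annual_cost_per_gbps(watts_per_gbps):
    """Annual electricity cost to power (and cool) one Gbps of switching."""
    kwh = watts_per_gbps * HOURS_PER_YEAR / 1000
    return kwh * PUE * PRICE_PER_KWH

infiniband = annual_cost_per_gbps(0.18)   # ~$0.32 per Gbps per year
ethernet   = annual_cost_per_gbps(2.0)    # ~$3.50 per Gbps per year

print(f"InfiniBand: ${infiniband:.2f} per Gbps per year")
print(f"Ethernet:   ${ethernet:.2f} per Gbps per year")
print(f"Savings:    ${ethernet - infiniband:.2f} per Gbps per year")
```

Even at ten times the efficiency, the gap works out to roughly three dollars per Gbps per year, which is not the kind of money that pays for re-architecting a network.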

My favorite recent example comes from a Processor article, where a Cisco exec claims you should deploy Fibre Channel-over-Ethernet because it reduces energy costs 30%. Yes, deploy a network that's likely to degrade the performance of your SAN and LAN, and increase the capital costs of both by forcing you to buy expensive switches and CNAs (Converged Network Adapters). This made me laugh, because it was EXACTLY the argument used for the failed "God Boxes" of the early 2000s: buy one big monster instead of multiple smaller devices, and save power because it's one piece of hardware, not seven or eight. The capital returns on doing this were atrocious, and the market performance of those products reflected it. Moreover, the power savings are theoretical, not based on operating networks.
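
Run the 30% claim through even a crude payback calculation and the problem is obvious. The per-server network power, the electricity rate, and the capital premium for CNAs and converged switch ports below are all assumptions of mine, but the orders of magnitude are the point.

```python
# Rough payback check on the "30% energy savings" pitch for FCoE.
# Every number here is an assumption for illustration: per-server network
# power, electricity price, and the capital premium for CNAs and FCoE switches.

PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 8760

network_watts_per_server = 40      # assumed NIC + HBA + switch-port share, watts
claimed_savings_pct = 0.30         # the figure quoted in the Processor article
capex_premium_per_server = 600     # assumed extra cost of a CNA + converged ports, $

annual_savings = (network_watts_per_server * claimed_savings_pct
                  * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH)

print(f"Annual energy savings per server: ${annual_savings:.2f}")   # ~$10.51
print(f"Simple payback: {capex_premium_per_server / annual_savings:.0f} years")  # ~57
```

Quibble with any input you like; the energy line stays tiny next to the capital line.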

It's no secret that energy efficiency is important to any data center.   But like anything else, it's a trade-off.   You can have high response times, a 100 Meg network, and lightly loaded racks and use very little energy.  

The Processor article goes on to say that locating your data center at a renewable power source is a great way to reduce your carbon footprint. This comes from a RackForce Networks exec. Economically, it also cuts your variable power cost down to almost zero, especially with wind power, which has remarkably low O&M costs. However, this does not mean everyone will follow Google, Microsoft, Yahoo, and Verizon to Lake Erie or the Columbia River Valley. The trade-off is that you also have to put more capital into fiber and network than you do in Santa Clara or Ashburn, not to mention the building itself. For this reason, it makes little sense to talk about energy savings generically; the real question is how the trade-offs change when you go from Equinix or DLR to your own building, and vice versa.

Tuesday, December 14, 2010

Should You Increase CRAC Set Points to Save Energy Costs?

By David Gross

Energy management for data centers has been lighting up the press wire lately. The fundamental economic premise behind most of the stories is that by monitoring temperature, air flow, and humidity in more places and more closely, a data center will get a great financial return by reducing energy costs. But some of the vendor presentations pitch the savings at a very generic level, and while they might have a good story to tell, the suppliers need more detailed financial analysis, and more sensitivity analysis in their estimates, especially to highlight how the paybacks vary at different power densities.

Recently, consulting firm Data Center Resources LLC put out a press release claiming that by increasing CRAC (Computer Room Air Conditioner) set points, a data center could get a "six month" ROI on its investment in sensors, aisle containment systems, and air strips that augment existing blanking panels. Of course, there is no such thing as a six-month ROI, but I'll grant them the point that they really mean a six-month payback period. However, as I've said many times, ROI is a meaningless metric; data center managers should instead be using IRR and incorporating the time value of money into all such calculations.
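
To show why the distinction matters, here is a quick sketch with made-up numbers: a project sized for exactly a six-month simple payback, discounted at an assumed 12% annual cost of capital, under different assumptions about how long the savings persist.

```python
# What a "six-month payback" means once the time value of money enters.
# Illustrative assumptions throughout: a $50k project, savings sized to an
# exact six-month simple payback, and a 12% annual cost of capital.

MONTHLY_RATE = 0.12 / 12

def npv(cash_flows, rate=MONTHLY_RATE):
    """Net present value of monthly cash flows, month 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -50_000
monthly_saving = 50_000 / 6     # exactly a six-month simple payback

for months in (6, 12, 36):
    flows = [investment] + [monthly_saving] * months
    print(f"Savings persist {months:>2} months: NPV = ${npv(flows):,.0f}")
    # 6 months: ~-$1,704   12 months: ~$43,792   36 months: ~$200,896
```

The same six-month payback ranges from value-destroying to very attractive depending entirely on the life of the savings, which is exactly the detail a payback figure hides and an IRR or NPV calculation forces you to state.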

Once these new systems are installed, Data Center Resources argues that you can start increasing the temperature set point on the CRACs (they did not mention anything about humidity) and reduce energy costs. The firm claims each degree increase in the CRAC set point cuts 4-5% from annual energy expenses. But given the wide discrepancies in data center power densities, the actual savings are going to vary dramatically. Before estimating an IRR, a data center manager would need to perform a sensitivity analysis based on growing server, power, and cooling capacities at different rates; otherwise this is all just a generic argument for hot aisle/cold aisle containment.
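
Here is the kind of sensitivity check I would want to see, sketched with illustrative assumptions: 100 racks, cooling at 0.45 kW per kW of IT load, $0.10/kWh, a $50,000 investment in sensors and containment, and the firm's claimed 4-5% per degree taken at face value.

```python
# Sketch of the sensitivity analysis the press release skips: the same
# "4-5% per degree" claim produces very different dollar savings, and very
# different paybacks on a fixed investment, depending on power density.
# Densities, cooling overhead, rates, and the investment are assumptions.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10              # assumed blended electricity rate, $/kWh
COOLING_FRACTION = 0.45           # assumed cooling kW per kW of IT load
SAVINGS_PER_DEGREE = 0.045        # midpoint of the claimed 4-5% per degree
INVESTMENT = 50_000               # assumed cost of sensors, containment, air strips

def annual_savings(rack_kw, racks, degrees_raised):
    """Dollars saved per year from raising the CRAC set point."""
    total_kw = rack_kw * racks * (1 + COOLING_FRACTION)     # IT plus cooling load
    annual_energy_cost = total_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    return annual_energy_cost * SAVINGS_PER_DEGREE * degrees_raised

for rack_kw in (4, 8, 16):
    saved = annual_savings(rack_kw, racks=100, degrees_raised=3)
    months = INVESTMENT / saved * 12
    print(f"{rack_kw:>2} kW/rack x 100 racks, +3 degrees: "
          f"~${saved:,.0f}/year saved, payback ~{months:.0f} months")
```

Even in this crude model, the payback swings from a couple of months to the better part of a year as rack density changes, which is why a single "six month" figure isn't worth much without the assumptions behind it.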