Last week Google disclosed the details of its energy consumption, and its data center engineers argued that the leading figure cited to assess how energy-efficient a data center is, power usage effectiveness (PUE), must be continuously measured and averaged over a twelve-month period. This was a veiled shot at some companies that measure their data centers on a cold day in January when their cooling costs are zero and then publish a great PUE number. Google is right. We need more transparency surrounding PUE.
But it’s time to go beyond PUE and examine how we view the whole project of what efficient data center computing means. Leading companies like Facebook, Amazon and Google are all approaching a PUE of 1.1, so it’s a metric with diminishing returns. With that in mind, here are three shifts in focus with regard to data center efficiency that will matter in the future.
1. Admit the limits of the Power Usage Effectiveness metric. While PUE has been helpful in making it clear that a data center will be judged on its energy efficiency, it tells us nothing about the efficiency of the hardware and software. Here’s a hypothetical that Power Assure’s CTO, Clemens Pfeiffer, and I recently discussed:
You’ve got a hundred old servers in your data center that you decide you can do without, so you turn them off. The problem is, your PUE just went up. The facility is still using roughly the same amount of power to cool and light the building, even though the IT equipment is drawing less. This illustrates a fundamental point: it’s time to address how efficiently the hardware and software themselves perform actual compute tasks. If a server’s on but it’s not doing anything, that’s wasteful.
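The arithmetic behind that hypothetical is worth spelling out. PUE is total facility power divided by IT equipment power, so shrinking the denominator while overhead stays flat makes the ratio worse. A minimal sketch, with made-up numbers for the loads:

```python
# Hypothetical illustration: PUE = total facility power / IT equipment power.
# The kilowatt figures below are assumptions for the example, not measurements.

def pue(overhead_kw, it_kw):
    """Power usage effectiveness: total facility power over IT power."""
    return (overhead_kw + it_kw) / it_kw

# Before: 500 kW of IT load, 150 kW of cooling/lighting overhead.
before = pue(150, 500)   # 650 / 500 = 1.30

# Turn off 100 idle servers (say 50 kW of IT load); overhead barely moves.
after = pue(145, 450)    # 595 / 450 ≈ 1.32

print(f"PUE before: {before:.2f}, after: {after:.2f}")
```

Less power consumed overall, yet a worse PUE — which is exactly why the metric says nothing about whether the remaining servers are doing useful work.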
2. Think about software. The entire conversation about data center efficiency over the past few years has revolved around facilities management and hardware. But for the first time, we’re seeing the beginnings of a basic question: What software platform is optimal for reducing energy use?
Stanford professor and current Google fellow Christos Kozyrakis has looked at how energy-efficient the widely used software platform Hadoop is. But one of the problems with Hadoop is that it requires nodes to remain powered on even if they’re not being used. “Hadoop is doing a lot of things that are wasteful, and those things have to be optimized,” says Kozyrakis.
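To put a rough number on the idle-node problem Kozyrakis describes, here’s a back-of-the-envelope sketch. Every figure in it (idle wattage, node count, idle hours) is an assumption for illustration, not a measurement of any real cluster:

```python
# Back-of-envelope estimate of energy spent keeping idle Hadoop nodes powered on.
# All figures below are illustrative assumptions, not measured values.

IDLE_WATTS = 150        # an idle commodity server still draws substantial power
NODES = 100             # nodes kept on only so their local data stays reachable
IDLE_HOURS_PER_DAY = 8  # assumed hours per day the cluster sits below full load

wasted_kwh_per_year = IDLE_WATTS * NODES * IDLE_HOURS_PER_DAY * 365 / 1000
print(f"~{wasted_kwh_per_year:,.0f} kWh/year spent keeping idle nodes on")
```

Even with modest assumptions, the idle draw adds up to tens of thousands of kilowatt-hours a year — waste that PUE, by design, never sees.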
When a semiconductor, like an Intel Atom or Xeon chip, is designed, engineers are constantly considering the energy characteristics of the final product. The same thinking now needs to be applied to software platforms.
3. Integrate hardware and software efficiency metrics. The buzzword in data centers is “heterogeneous computing environment.” Engineers are no longer just dealing with uniform servers built around Intel Xeon chips. They work with all sorts of configurations, ranging from high-performance setups to low-power servers, from Intel Atom–based SeaMicro to Linux-based Tilera and maybe even one day ARM-based Calxeda chips.
Here is an opportunity to figure out which programs are appropriate for which server configurations and to optimize efficiency. Kozyrakis cited an example where an MIT professor asked students to write an application first in a low-level language like C and then in a higher-level language, Java. The execution times of the two versions differed by a factor of thousands. That translates into very different energy characteristics for the same program.
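Why a runtime gap becomes an energy gap is simple: energy is power integrated over time, and a server’s draw doesn’t fall just because the code is slow. A sketch under a constant-power simplification — the wattage and the runtimes are assumed figures echoing the anecdote above, not measurements:

```python
# Energy ≈ average power × execution time (a constant-power simplification).
# Wattage and runtimes are illustrative; the 1000x gap echoes the anecdote.

SERVER_WATTS = 200  # assumed average draw while the job runs

def energy_wh(runtime_s, watts=SERVER_WATTS):
    """Energy in watt-hours for a job running at a constant power draw."""
    return watts * runtime_s / 3600

fast = energy_wh(runtime_s=2)      # the fast version: 2 seconds
slow = energy_wh(runtime_s=2000)   # the slow version: 1000x the runtime

print(f"fast: {fast:.3f} Wh, slow: {slow:.1f} Wh, ratio: {slow/fast:.0f}x")
```

At constant power, a 1,000× runtime gap is a 1,000× energy gap — which is why matching programs to the right hardware, and optimizing the software itself, matters as much as the facility numbers.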
In the end, PUE is a metric that’s about reducing waste and making sure the energy going into a data center is being used by the servers. But the next frontier of data center efficiency is optimizing software for the multitude of emerging hardware platforms. This is more difficult, because it requires a shift in focus among major cloud players, like Google and Rackspace, as well as a new period of cooperation between programmers and hardware designers. It will take time, but there are clear benefits in terms of power consumption and total cost of ownership for the companies operating the data centers driving cloud computing.