Summary:

Building sustainable data centers is hard — especially if you’re trying to do it in office space in Houston. This and a few less-obvious lessons were the takeaways from a panel on sustainable data centers at the Open Compute Summit on Wednesday.

Facebook’s Prineville data center.

Building sustainable data centers is hard — especially if you’re trying to do it in office space in Houston. Plus, the idea of operating a power-generation plant to supply renewable energy such as solar or biogas is a scary prospect for data center operators. These were among the key takeaways (along with a few less-obvious lessons) from a panel on sustainable data centers at the Open Compute Summit held today in San Antonio, Texas.

Bill Weihl, manager of energy efficiency and sustainability at Facebook and the former energy czar at Google, moderated the panel, which also featured Melissa Gray, the head of sustainability for Rackspace; Stefan Garrard, who is building an HPC cluster for oil company BP; Winston Saunders from Intel; and Jonathan Koomey, a consultant and energy-efficiency expert. While we are entering the age of 100-megawatt data centers the size of football fields, we’re also dealing with higher energy costs and concerns about how to keep our webscale infrastructure running. As part of its focus on lowering costs, the Open Compute Project spends a lot of time on sustainability.

But lowering the energy consumed inside a data center can only go so far. Saunders explained that chips, for example, had achieved their lowest possible power utilization without new breakthroughs. Even when idle, the chips still consume 20 percent of their maximum energy draw because they can’t fully turn themselves off. The inability to power all the way down stems from latency (once something is turned off, it takes time to turn it back on) and from the fact that powering down a chip requires the data center to stop sending it information. Data centers rarely hit that point, which means chips are always “awake” and consuming energy.
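Saunders’ point about idle draw can be sketched with the commonly used linear server-power model: a fixed idle floor plus a component proportional to utilization. The 300 W peak figure below is an assumption chosen purely for illustration, not a number from the panel:

```python
def platform_power(utilization: float, p_max_w: float = 300.0,
                   idle_fraction: float = 0.20) -> float:
    """Approximate platform power as an idle floor plus a
    utilization-proportional term. idle_fraction reflects the
    roughly 20 percent idle draw discussed on the panel."""
    p_idle = idle_fraction * p_max_w
    return p_idle + (p_max_w - p_idle) * utilization

# A server idling all day still burns a fifth of its peak power:
print(platform_power(0.0))  # 60.0 W
print(platform_power(1.0))  # 300.0 W
```

Under this model, a lightly loaded server is the worst case per unit of useful work, which is why the panel’s interest in powering hardware further down matters.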

But it’s not just the hardware. Garrard said his current high-performance computing cluster is running in office space that holds both humans and servers. He’s done a little to make things more efficient, but because of the office location and Houston’s hot and humid climate, his servers run at a power usage effectiveness (PUE) of more than 2 (Facebook, which has heavily optimized its facilities, runs at a PUE of about 1.07; 1.0 is ideal). So he is building out a new facility and hopes to get closer to a PUE of 1.5.
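PUE itself is just the ratio of total facility power to the power delivered to the IT equipment. A minimal sketch, where the kilowatt figures are invented for illustration rather than taken from the panel:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by
    IT equipment power. 1.0 means zero overhead (the ideal)."""
    return total_facility_kw / it_equipment_kw

# An office-hosted cluster drawing 500 kW for IT but 1,100 kW
# overall (cooling, power conversion, lighting) lands above 2:
print(pue(1100, 500))  # 2.2
# A heavily optimized facility wastes almost nothing:
print(pue(535, 500))   # 1.07
```

The gap between those two numbers is the overhead Garrard hopes to cut by moving out of office space into a purpose-built facility.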

But where will the power for his and other new data centers come from? Renewables aren’t really on the list yet. When asked about using biogas systems such as those from Bloom Energy or solar, Gray said the idea of running a generation plant along with a data center was so far outside her core competency that it wasn’t really something she thought about.

Koomey, however, called the idea that a data center operator has to follow in Apple’s footsteps to operate their own generation (Apple is using Bloom’s boxes to power part of its new data center) a “canard” and said data center operators should get renewable power from their utilities. Weihl, who helped Google buy wind power from providers for its data centers, agreed.

The panel essentially outlined several areas where data center infrastructure consumes energy. In the ideal world, operators could site their data centers in places that are cool and dry, and build out the ideal facility and hardware to reduce the power draw. As Koomey said, they could think “holistically.”

Unfortunately, most data centers are built in the real world, where and when they are needed, with the equipment available at the time. The standards and designs offered by the Open Compute Project will help, but the real world will take its toll.

  1. Paul Calento Thursday, May 3, 2012

    To many organizations moving to off-premises IT (i.e. the cloud), green is a secondary benefit (“let someone else deal with it”). But a third party cloud provider focused on the most efficient physical plant is (likely) “greener” than under-utilized on-premises servers & storage. Yes, there’s a ways to go, but we’re getting there. Plus, pragmatically, green makes sense as it reduces overall risk to the provider.

  2. Jonathan Koomey Thursday, May 3, 2012

    This particular paragraph implies something I didn’t say: “Koomey, however, called the idea that a data center operator has to follow in Apple’s footsteps to operate their own generation (Apple is using Bloom’s boxes to power part of its new data center) a “canard” and said data center operators should get renewable power from their utilities. Weihl, who helped Google buy wind power from providers for its data centers, agreed.” My point (and Bill Weihl’s) was that buying power from utilities is another option, not that there is anything wrong with generating it onsite. The “canard” is the notion that wind can’t power data centers just because its capacity factor is only 35 to 40% at good sites. You just need to buy enough capacity to deliver the total energy used by the facility over the year.

  3. Winston A Saunders Friday, May 4, 2012

    I’d like to clarify this statement:
    “Saunders explained that chips, for example, had achieved their lowest possible power utilization without new breakthroughs. Even when idle, the chips still consume 20 percent of their maximum energy draw because they can’t fully turn themselves off.”

    What I actually stated was that the PLATFORM consumes about 20% of its maximum power at idle, according to SPECpower. Silicon power is a minority of that consumption – most of it comes from the power supply and voltage regulators in the system. That’s where we need a breakthrough, and why I was encouraged by the Open Rack concept.
