The world’s biggest Internet companies have dreamed up some innovative, high-profile ways to make their data centers greener, from Google’s piping in seawater to cool a data center in Finland to Yahoo’s designing data centers based on chicken coops that utilize the flow of outside air. But as it turns out, it’s not the experimental, novel tech that’s going to make a big difference in terms of the overall energy-consumption reduction for the majority of the world’s data centers. It’s the low-cost, easy and just plain boring stuff that will be the most important.
Data center energy consumption is emerging as an increasingly important — and costly — issue for data center operators and Internet companies. According to a recent survey from the Uptime Institute, 58 percent of data center operators said reducing energy consumption was very important to the company overall, and 87 percent of those who said reducing energy is important attributed that desire to “economics.” The survey also found that more than a third (36 percent) of data center operators said their facility would run out of power, cooling capacity or space in 2011 or 2012. So these centers must find a way to delay the expensive step of building new facilities and adding power capacity.
But at Google’s second annual data center efficiency summit, held last month in Zurich, the general opinion was that many data center operators simply can’t employ the cutting-edge, creative methods that have been applied by Google, Yahoo and other Internet giants. In talk after talk throughout the daylong event, Google’s most-senior data center engineers detailed how the majority — 72 percent — of data centers are actually small and medium in size (and are owned by small- and medium-sized companies). These firms often lack the resources, and the desire, to try out the more innovative, and higher-priced, technologies that Google has attempted.
Low-tech and low-hanging fruit
But that doesn’t mean these smaller data center operators can’t save on energy. On the contrary, the owners of small- and medium-sized data centers can cut significant energy use by following a few very basic steps and implementing cheap, mundane technology. Picture “low tech” as plastic curtains from Home Depot and off-the-shelf wireless networks.
Google and others suggest several other best practices and tools to reduce data center energy consumption, including tapping into outside air for cooling, using modular containerized data centers, having liquid-cooled servers and employing software to dynamically power down servers when they’re not in use.
But the three basic, ultracheap baby steps below are the lowest-hanging fruit, and every data center operator should take them now:
1. Measure and manage. Data centers can only be made more efficient if you know how inefficient they are to begin with. According to the Uptime Institute, 27 percent of data center operators say they do not measure power usage effectiveness (PUE), the standard metric for gauging whether a data center is using energy efficiently (though PUE can be misleading at times). Some 67 percent of operators do use “power monitoring, benchmarking, or other metric[s]” to track energy use, but they don’t necessarily extract all the data needed (or go the extra step) to crunch the PUE figure.
PUE is calculated by dividing the total energy consumed by the facility by the energy consumed by just the IT equipment (the servers, not the cooling or power distribution). The most efficient possible data center would have a PUE of 1, which means that all of the energy is going toward running the servers and operating the IT and isn’t being sucked up by cooling. PUE is the simplest and most widely used measure of data center energy efficiency.
Kevin Dolder, a senior data center engineer for Google, says the first step in managing the efficiency of data centers is to make sure that the instrumentation is in place to measure the energy of a data center. Then, ideally, companies should try to measure this metric constantly — every second or so, if possible. Dolder says that Google incorporates PUE into its building management system to have easy access to it.
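The arithmetic behind the metric is trivial once the instrumentation Dolder describes is in place. A minimal sketch in Python (the meter readings here are hypothetical, purely for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal (every watt goes to the servers);
    anything above 1.0 is overhead such as cooling and power distribution.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,800 kW at the utility feed,
# 1,000 kW measured at the IT load.
print(round(pue(1800, 1000), 2))  # 1.8
```

In practice the value comes from feeding the same two meter readings into a building management system on a tight interval, as Google does, rather than from a one-off spot check.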
2. Separate the hot from the cold. Keeping the hot air apart from the cold air (which keeps the servers cool) in data centers is important for ensuring that the facility runs efficiently. It’s also a simple task. Because it’s so easy and the idea has been in use for so long, it’s also the area where companies can use Home Depot–style cheap parts, like vinyl curtains and plastic enclosures. Google uses low-cost meat locker curtains and sheet metal doors to separate cold and hot aisles; it says an investment of $25,000 in parts for a data center can yield $65,000 per year in energy savings.
To get a little more high tech, Google uses algorithms to predict how hot and cold air will flow in its data centers, and to help discover ways to keep these two separate. Polargy is a vendor that Verizon has turned to for more-sophisticated containment systems that separate hot and cold aisles. With Polargy, Verizon says that it has improved energy efficiency by 7.7 percent and has saved 18.8 million kilowatt hours per year.
Many data center operators are already deploying this technology at some level, and 77 percent of respondents to the Uptime Institute’s survey said that they either have already implemented hot-and-cold aisle containment or plan to implement it in 2011.
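Google’s quoted containment figures make the economics easy to check. A back-of-the-envelope payback calculation (the dollar amounts are the ones Google cited; the rest is arithmetic):

```python
# Google's quoted figures for hot/cold aisle containment:
parts_cost = 25_000       # dollars, curtains and sheet metal
annual_savings = 65_000   # dollars per year in energy

# Months until the parts pay for themselves.
payback_months = parts_cost / annual_savings * 12
print(round(payback_months, 1))  # 4.6
```

At under five months to break even, the “boring” fix clears almost any capital-spending hurdle, which is why it tops the low-hanging-fruit list.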
3. Act like it’s Los Angeles in there. Google’s data center engineers wear shorts to work. That’s because Google and industry groups like the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) suggest that data center operators run their facilities at 80 degrees Fahrenheit. Running a data center that warm has long been considered bad for servers (the fear was that it would cook them), but Google says that’s just not true. Data center operators have tended to be cautious about temperature because overheating servers is a costly mistake that can lead to offline websites (and likely someone getting fired).
Given that cooling — which often requires large, inefficient chillers — can account for 50 percent of a data center’s energy use, reducing the amount of cooling needed just makes sense. Operators can cut cooling power through the two steps mentioned above as well as through costlier methods, like turning to outside air (when available) for cooling.
If operators of small- and medium-sized data centers implement just the basic baby steps outlined above, they can cut energy consumption significantly. The Environmental Protection Agency (EPA) says that data centers following energy-efficient best practices can reach a PUE of 1.5, a significant improvement over the current industry average of 1.8.
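Assuming the IT load stays fixed, those EPA figures translate directly into savings. A quick check of the arithmetic:

```python
avg_pue = 1.8            # current industry average, per the EPA
best_practice_pue = 1.5  # achievable with best practices, per the EPA

# At a fixed IT load, total facility energy scales with PUE.
total_energy_cut = (avg_pue - best_practice_pue) / avg_pue

# The overhead (everything above the IT load itself) shrinks even more:
# from 0.8 watts per IT watt down to 0.5.
overhead_cut = ((avg_pue - 1) - (best_practice_pue - 1)) / (avg_pue - 1)

print(f"{total_energy_cut:.1%} less total energy")   # 16.7% less total energy
print(f"{overhead_cut:.1%} less overhead energy")    # 37.5% less overhead energy
```

In other words, moving from 1.8 to 1.5 trims roughly a sixth of a facility’s total energy bill, and more than a third of the energy that isn’t doing useful computing.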
Yahoo’s VP of Data Center Engineering and Operations, Scott Noteboom, told me last year that he thinks the ability to reduce data center energy use is becoming a competitive advantage for Internet companies. Those that don’t take these steps, and don’t treat power as an increasingly important factor in computing, will waste money on energy and lose market share as a result.
The other aspect of the equation is a planetary one. Our always-on devices and computers consume a large amount of energy, which means that the energy and carbon footprint of the Internet will continue to grow and will start to present a real problem if steps aren’t taken now to reduce it. The IT industry has tended to be a leader in the global business community. Collectively reducing the amount of energy used (and carbon emitted) by the Internet sends a powerful message to the rest of the business world.