Last week, Facebook’s Open Compute Project opened the details of the energy-efficient server design, power systems and free-air cooling system for its new data center in Prineville, Ore. That’s a lot of valuable intellectual property, and opening it could push other data center giants in the field — notably Google, which has kept much of its efficient data center IP under tight wraps — to open theirs up as well. But how much is all this information worth to the green data center field as a whole?
The answer depends on how cleverly other data centers can mix and match Facebook’s new energy efficient parts. The social network’s massive, yet relatively homogeneous, web-scale needs aren’t really the norm. Most data centers have a mix of servers doing different tasks for different customers, all requiring different types of energy management. And most green data center spending currently goes towards retrofitting existing facilities to do more with limited power — which means most are getting more efficient piece by piece, rather than all at once like Facebook.
At first glance, it appears Facebook’s new servers from Quanta, which use about 13 percent less power than today’s models, may adapt well to the broader market. Dell, for example, has said its Data Center Solutions business will make servers based on Facebook’s design; others may follow.
Facebook’s power conversion and backup schemes, which cut power conversions within the data center and use batteries to back up multiple racks, could also find traction in the broader market. Rackspace is among the companies that are looking at taking up the Open Compute model to cut power costs.
But some of the key power-saving features of the Open Compute servers are tied to the layout of the data center itself. For example, fans account for only 2 percent to 4 percent of each server’s energy consumption, compared to an industry average of 10 percent and up. But Facebook achieves this by coupling the data center’s fans to individual servers (watch the Open Compute Project’s video to see how it works), which isn’t a typical data center setup.
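To put those fan figures in perspective, here is a back-of-envelope sketch using only the percentages cited above (the function and variable names are illustrative, not from the article):

```python
def fan_savings_fraction(baseline_share, open_compute_share):
    """Fraction of total server power freed up by a lower fan share.

    baseline_share: industry-average fan share of server power (~10%+)
    open_compute_share: Open Compute fan share (~2-4% per the article)
    """
    return baseline_share - open_compute_share

# Conservative and best-case estimates from the article's numbers
low = fan_savings_fraction(0.10, 0.04)   # 6 percentage points
high = fan_savings_fraction(0.10, 0.02)  # 8 percentage points
print(f"Fan-related savings: {low:.0%} to {high:.0%} of server power")
```

In other words, the fan design alone plausibly accounts for roughly 6 to 8 percentage points of per-server savings, a sizable chunk of the overall 13 percent figure, which is why it matters that this particular trick depends on the building’s airflow design.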
Moreover, all of these power-saving features rely on hundreds of servers built to use higher voltages and both AC and DC power. Data centers with lots of different IT equipment to manage won’t have that luxury; they’ll have to back up power via whatever means their existing servers can accept, at the rack or server level. That could put Facebook’s innovations in this realm off-limits to most, unless they’re commercialized as a paired server and power system. Perhaps Dell and Rackspace will be talking soon.
Facebook’s data center cooling design itself does away with chillers, using outside air and evaporative cooling to drive down energy use. That means running hotter than traditionally allowed. Facebook’s new servers aren’t the only ones being built for hot environments, however. Lots of server makers are raising their temperature maximums, and data center operators are pushing that expanded headroom as well.
All in all, the combination of efficiency measures behind the 38 percent energy savings Facebook’s Prineville facility is targeting won’t be easy to emulate. Most of today’s data center efficiency spending comes from existing data centers that can’t get any more power delivered to their site; utilities can only build so many substations to serve the small city’s worth of power modern data centers use. While the Facebooks, Googles and Yahoos of the world can build brand-new, hyper-efficient data centers, those constrained by capacity must fit more computing into an existing footprint. That’s why efforts like virtualization have led overall data center efficiency gains: while energy is more important than ever, performance and reliability remain job number one.