Amazon Web Services infrastructure guru James Hamilton and Sun Microsystems co-founder Andy Bechtolsheim had a lot to say on Thursday about how data centers can be improved. Speaking at the Open Compute Foundation (OCF) debut, they offered suggestions that could ease the construction of more efficient data centers able to scale out for immense cloud computing workloads. Perhaps surprisingly, some of their advice has more to do with common sense than high-tech wizardry.
Here are the top five tips (plus a bonus) from the event.
1: Don’t sweat the cold
Common problem: Most data centers run way too cold. It’s a good problem to have, because it’s easy to solve: just raise the temperature, said Hamilton, VP and distinguished engineer at Amazon. Part of the reason facilities remain over-chilled is that ASHRAE recommends they run between 61 and 81 degrees F. (Whether ASHRAE, a trade group focused on heating and cooling issues, might have a vested interest in selling heating and cooling gear is an open question.) But even ASHRAE acknowledges that it is “acceptable” to run warmer data centers in the 85 to 95 degree range. Somehow no one got that memo. “Everyone runs way down in the mid-70s,” Hamilton said. “You can raise the temp and it’s free savings!”
It’s understandable why people worry about hot data centers: they hear tales of server “mortality.” But there’s not a server commercially available today that isn’t approved to run at temperatures up to 95 degrees, he said.
2: Wall off your servers!
In most data centers, a ton of air leaks around the server racks. For those data center operators, Hamilton had a suggestion: “Don’t do that!”
“It’s free to put a wall around the hot aisles. That is far and away the biggest change you can make to take your PUE 3.0 facility and take it down to a 2 PUE facility,” Hamilton said. PUE, or power usage effectiveness, is a measure of how energy efficient a given facility is: total facility power divided by the power that actually reaches the computing equipment.
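The PUE arithmetic behind Hamilton's claim can be sketched in a few lines. This is an illustrative example, not anything presented at the event; the function name and figures are hypothetical.

```python
# PUE = total facility power / IT equipment power, so 1.0 would mean
# zero overhead and lower is better. Numbers below are illustrative.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness of a facility."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 3 MW in total to power 1 MW of servers has a PUE
# of 3.0; eliminating 1 MW of cooling overhead brings it down to 2.0.
print(pue(3000, 1000))  # 3.0
print(pue(2000, 1000))  # 2.0
```

Going from 3.0 to 2.0 means a third of the facility's total power draw was overhead that containment walls help eliminate, which is why Hamilton calls it "free savings."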
3: Substitute standardization for gratuitous innovation.
Forget about vendor-driven “gratuitous differentiation,” warned Bechtolsheim, who is now chief development officer at Arista Networks, a newly minted OCF member. “For the last ten years, the focus has been on blade servers and chassis with mobile servers plugged in. Previously companies, my own company included, said ‘my blades are better than your blades, my fans are better than your fans.’” This is not productive anymore, he said.
The problem with that scenario is that it benefits the vendor rather than the customer, he said, in what could be the mantra of the new OCF. “Open system-level standards take away that gratuitous differentiation so you no longer need to invest to have a better RAID controller or BIOS or other products that are not fully interoperable with each other,” he said. The goal of OCF is to spec out these components and make them standard, so third parties can build upon them while retaining base-level interoperability.
4: Build big, and sell off what you don’t need.
If you’re a big business, think bigger when you build data center capacity. If you have a big compute load, build for a bigger one, Hamilton said. “Why? It’s the same principle that lets airlines oversell their seats. It’s just like [Amazon] stole the idea for the spot market: when you have a valuable asset that’s difficult to over-utilize, oversell it,” he said.
There are ways to mitigate risks if there is huge demand. “You can shed any load that is not vital. Take your administrative tasks, the periodic scrubbing of your storage; you can do that an hour later and it’s not a problem.” Or you can stop selling on the spot market until things settle down.
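The load-shedding idea Hamilton describes can be sketched as a simple priority cutoff: when demand exceeds capacity, defer the least vital work rather than turning away paying load. Everything here (job names, priorities, power figures) is a hypothetical illustration, not Amazon's actual scheduler.

```python
# Sketch of priority-based load shedding: keep the most vital jobs that
# fit under the capacity limit and defer the rest to run later.
def shed_load(jobs, capacity_kw):
    """Return (kept, deferred) job names; higher priority = more vital."""
    kept, deferred, used = [], [], 0.0
    for job in sorted(jobs, key=lambda j: -j["priority"]):
        if used + job["kw"] <= capacity_kw:
            kept.append(job["name"])
            used += job["kw"]
        else:
            deferred.append(job["name"])  # "do that an hour later"
    return kept, deferred

jobs = [
    {"name": "customer-traffic", "priority": 3, "kw": 60},
    {"name": "storage-scrub",    "priority": 1, "kw": 30},
    {"name": "admin-reports",    "priority": 2, "kw": 20},
]
# With only 80 kW available, the storage scrub gets deferred.
print(shed_load(jobs, capacity_kw=80))
```

The same cutoff logic is what makes overselling safe: the deferrable work acts as a buffer that absorbs demand spikes.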
Amazon, of course, is the poster child for selling off capacity, and its AWS has turned into what looks to be a $1 billion business. According to Hamilton, AWS adds enough server capacity every day to support all of Amazon’s global infrastructure as it existed in the company’s fifth full year of operation, when it was a $2.76 billion company. (Amazon’s annual revenue is now just under $40.3 billion.)
5: Look into evaporative cooling.
Innovative data center designs will tap into — or at least evaluate — evaporative cooling, Hamilton said. “Down south they call them swamp coolers, big fans with mist rolling off them. It’s a lot of water going through a state change. You can use porous media with water dribbling and evaporating or a water mist.”
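The "state change" Hamilton mentions is where the cooling comes from: evaporating water absorbs a large amount of heat. A back-of-the-envelope sketch, using the approximate latent heat of vaporization of water (a standard physical constant, not a figure from the talk):

```python
# Evaporating water absorbs roughly 2,260 kJ per kg (latent heat of
# vaporization), which is the "free" cooling a swamp cooler exploits.
LATENT_HEAT_KJ_PER_KG = 2260  # approximate, near typical conditions

def cooling_kw(water_kg_per_hour: float) -> float:
    """Heat removed, in kW, by fully evaporating this much water per hour."""
    return water_kg_per_hour * LATENT_HEAT_KJ_PER_KG / 3600  # kJ/h -> kW

# Evaporating about 100 liters of water per hour absorbs ~63 kW of heat,
# roughly several racks' worth of server load.
print(round(cooling_kw(100), 1))  # 62.8
```

The trade-off, which the article's water-usage framing hints at, is that the energy savings come at the cost of substantial water consumption.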
6: Bonus item: Lose the ducts.
Data center buildings themselves have to be redesigned to save energy. “Every data center used to have ducts. There are two things wrong with that…first, you have to pay for them; second, they’re not big enough [to do the job right]. So why not use the entire building as one big duct, as Facebook did in its Prineville [Ore.] facility?”
In that building, the whole second floor is a duct, he said.