Electronics and liquids don’t mix, unless you’re Iceotope. At this week’s Supercomputing 2009 conference in Portland, Ore., the 3-year-old startup from Sheffield, UK, is demonstrating a liquid-cooled server setup that has the potential to cut data center cooling costs by up to 93 percent. The firm just came out of stealth mode, 18 months after a round of financing from EV Group in early 2008. Plans call for Iceotope to begin manufacturing this year, with an eye toward getting the system to early-access participants by Q1 2010 and reaching general availability in the second half of 2010.
Considering that cooling IT systems accounts for 40-60 percent of a typical data center’s yearly electricity spending, the company is clearly betting that the energy savings alone will be enough to drum up business. Instead of cooling servers with chilled-water rack doors, as IBM does, or affixing “water blocks” to processors and other heat-generating components to siphon off heat, Iceotope dunks entire server motherboards into modules filled with an “inert liquid” that doesn’t short out the delicate electronics.
The concept isn’t exactly new. For years, hardcore computer enthusiasts known as overclockers have been submerging their hardware in substances like mineral oil or 3M’s Fluorinert to counteract the immense heat generated by their performance-enhancing tweaks. It works because liquids are much better at transferring heat than air.
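Just how much better liquids are is easy to quantify. The figures below are representative textbook thermal conductivities for common coolants, not Iceotope’s (the company hasn’t disclosed its coolant’s properties), sketched in Python for comparison:

```python
# Rough comparison of thermal conductivity (W/m·K) for common coolants
# versus air. Values are representative textbook figures, not Iceotope
# specifications -- its coolant's properties are undisclosed.
conductivity = {
    "air": 0.026,
    "mineral oil": 0.13,
    "water": 0.6,
}

for fluid, k in sorted(conductivity.items(), key=lambda kv: kv[1]):
    print(f"{fluid:12s} {k:.3f} W/m·K  ({k / conductivity['air']:.0f}x air)")
```

Even mineral oil, a mediocre coolant as liquids go, conducts heat roughly five times better than air; water manages more than twenty times better.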
Iceotope CEO Dan Chester told me today that despite debuting the technology at a supercomputing event, his company’s modular system is primarily aimed at improving energy efficiency. Another of the company’s goals, and key to the product’s acceptance, “is to make it look and feel like an air-cooled rack.” So Iceotope is putting data center operators at ease with familiar form factors. Taking a cue from blade servers, the liquid-filled modules slot into a chassis, which in turn fits into a standard 19-inch server rack.
As you can see in the cross section (pictured above), each module has a metal heat transfer plate in its interior that makes contact with a motherboard’s heat-producing chips via heatsinks. Heat never builds up because non-chilled, low-pressure water is pumped along channels on the outer surface of the heat transfer plate and is eventually cooled by a heat exchanger somewhere on the premises. Non-chilled water and slow, high-efficiency pumps are key to achieving the company’s goal of “free cooling at much higher ambient temperatures,” says Chester. He adds that the modules are so “thermally neutral” that servers can be packed tighter and fill in the spaces that would otherwise be occupied by air.
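The scheme described above boils down to carrying heat away with a slow water loop, governed by the basic relation Q = ṁ · c_p · ΔT. A back-of-the-envelope sketch with illustrative flow and temperature numbers (my assumptions, not Iceotope’s specifications) shows why slow, high-efficiency pumps are enough:

```python
# Back-of-the-envelope check of how much heat a slow water loop can carry,
# using Q = m_dot * c_p * delta_T. The flow rate and temperature rise
# below are illustrative assumptions, not Iceotope specifications.
C_P_WATER = 4186.0  # specific heat of water, J/(kg·K)

def heat_carried_watts(flow_l_per_min: float, delta_t_kelvin: float) -> float:
    """Heat (in watts) removed by a water flow given its temperature rise."""
    mass_flow_kg_s = flow_l_per_min / 60.0  # ~1 kg per litre of water
    return mass_flow_kg_s * C_P_WATER * delta_t_kelvin

# A modest 1 L/min trickle warming by just 5 K carries roughly 350 W --
# on the order of what a loaded server motherboard dissipates.
print(f"{heat_carried_watts(1.0, 5.0):.0f} W")
```

Because water’s heat capacity is so large, even unchilled water at a low flow rate can absorb a server’s full heat output with only a small temperature rise, which is what makes “free cooling” at high ambient temperatures plausible.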
There are trade-offs, but according to Chester they shouldn’t be deal-breakers. Extra weight from the liquid is a concern, so the company is offsetting the added heft with light but sturdy plastics and other materials to keep standard racks from buckling. And depending on a data center’s construction, some additional plumbing might be required. However, reduced spending on energy, chillers and computer room air conditioners (CRACs) makes it worthwhile, says Chester. According to the company’s figures, the system can cut the cost of cooling 1,000 servers for three years from $788,400 to just $52,560.
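Those two figures, taken straight from the company’s claims, line up with the headline number. A quick check:

```python
# Sanity-check of the quoted cooling-cost figures for 1,000 servers
# over three years (both numbers are the company's own claims).
conventional = 788_400  # USD, conventional cooling
iceotope = 52_560       # USD, Iceotope's claimed cost

savings = 1 - iceotope / conventional
print(f"{savings:.1%}")  # -> 93.3%, matching the "up to 93 percent" claim
```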
As for that “inert liquid,” Chester isn’t spilling the beans. But in keeping with the company’s green IT philosophy, he claims that the synthetic coolant is better for the environment than Fluorinert and that his company has taken toxicity and recyclability into account.