
Data centers already consumed more electricity than the country of Iran in 2005, and are set to increase electricity use by another 76 percent by 2010. The biggest culprit, cooling, sucks up between 40 and 60 percent of that total energy consumption. While these figures are worrisome for the planet, they're an opportunity for a startup that is officially launching on Monday: Core4, a year-old company that makes cooling infrastructure for data centers, which it says can cut cooling-related energy consumption by 72 percent.

Core4's strategy is to take a holistic approach to reducing the energy used for data center cooling: compile the most efficient cooling gear (from the compressor to the fans to the refrigerant pump), tweak some of the hardware to make it even more efficient, and then offer the package deal to data center owners. The company is largely aiming at retrofitting existing data centers rather than new construction, which it says it can do for between $250 and $350 per square foot. Core4 says it's a good deal for customers because it frees up 40 to 50 percent more power and cooling capacity at an existing data center, meaning the customer can put off building a new facility, which can cost closer to $2,000 to $3,000 per square foot. Core4 also says the system pays for itself in 1.5 to 2 years.
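To make those numbers concrete, here's a minimal back-of-the-envelope sketch in Python. The per-square-foot costs and payback range are Core4's claims above; the floor area is purely an illustrative assumption.

```python
# Back-of-the-envelope retrofit economics from the figures above.
# FLOOR_AREA_SQFT is an assumption for illustration only.

RETROFIT_COST_PER_SQFT = (250.0, 350.0)     # Core4's claimed retrofit range
NEW_BUILD_COST_PER_SQFT = (2000.0, 3000.0)  # claimed new-construction range
PAYBACK_YEARS = (1.5, 2.0)                  # claimed payback range
FLOOR_AREA_SQFT = 2_000                     # assumed facility size

retrofit_low, retrofit_high = (c * FLOOR_AREA_SQFT for c in RETROFIT_COST_PER_SQFT)
build_low, build_high = (c * FLOOR_AREA_SQFT for c in NEW_BUILD_COST_PER_SQFT)

# If the claimed payback holds, implied annual savings = cost / payback.
savings_low = retrofit_low / PAYBACK_YEARS[1]
savings_high = retrofit_high / PAYBACK_YEARS[0]

print(f"Retrofit:  ${retrofit_low:,.0f} - ${retrofit_high:,.0f}")
print(f"New build: ${build_low:,.0f} - ${build_high:,.0f}")
print(f"Implied annual savings: ${savings_low:,.0f} - ${savings_high:,.0f}")
```

On those assumptions, a 2,000-square-foot retrofit runs $500,000 to $700,000, roughly a tenth the cost of a new build, and the claimed payback implies annual energy savings in the low-to-mid six figures.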

While a lot of startups are beginning to focus on building tools to reduce data center costs, not many are focused on the cooling gear itself. That's partly because Liebert, which was founded over 40 years ago, controls the bulk of the market for cooling infrastructure. But Jamien McCullum, Core4's vice president of business development, tells us that Liebert's “stranglehold” on the market has meant it has had little reason to innovate, and that Core4 can undercut it with savings on energy bills.

Core4's offering can become even more attractive with the subsidies and rebates that have emerged from state and federal governments. Northern California utility PG&E gave Internet service provider Sonic.net, based in Santa Rosa, Calif., a $159,000 rebate for ripping out an older Liebert system and installing Core4's system. Sonic had reached maximum capacity at its data center and was looking to add more; after the retrofit, it began saving $129,000 per year.

One question I had: since Core4 uses a lot of best-of-breed off-the-shelf cooling components, how would the company defend itself if, say, Liebert decided to start copying it and buying the same gear? Beyond the fact that its market has been so slow-moving, Core4 execs said the company has some patents pending for the engineering tweaks it makes to cooling gear like fans. But if I were running Core4, I wouldn't be too eager to share the list of components, as its approach could be quickly copied by competitors, particularly in such a hard economic climate.

The business model also might not be that well-matched to the venture capital world: VCs like investing in technology that is defensible with a lot of intellectual property. Core4 tells us it has raised an angel round but hasn't yet decided whether it wants to raise a Series A from VCs, as it already has revenue coming in from customers like Sonic. Tip: If you don't have to give up part of the company to VCs, don't.

3 Comments

  1. Spot on about the energy cost of cooling. I refer readers to the work of The Green Grid on PUE (Power Usage Effectiveness), the initiative to standardize the calculation of the ratio between the total energy a facility draws and the energy used by the compute devices themselves; cooling is the largest of the non-compute loads.

    A computing device is, from a heat perspective, a variable heater. As a general guide, a server draws 50% of its energy doing no compute work (fans, memory, etc.) and the other 50% in direct proportion to CPU utilization.

    You do not need to measure the temperature in a data center to know how much additional cooling is required. Simply take the starting temperature, then measure the watts being delivered from the power supply system: the power delivered equals the heat being created, which equals the heat to be removed. (A quick sketch of this arithmetic appears after the comments.)

    I write this to encourage data center managers looking for ways to save energy to look both at their cooling systems and at some of the open source software initiatives for measuring and graphing energy use.

    open4energy is an open source energy monitoring project built on Cacti and RRDtool, both proven platforms for graphing time series data.

  2. Thanks Alex, I'll be sure to check open4energy out

  3. [...] energy used to cool data centers sucks up 40-60 percent of the total energy consumption of data centers, so we’re glad to see Chu [...]

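The heater model and the watts-equals-heat rule from the first comment translate into a few lines of Python. This is a minimal sketch of that rule of thumb, not anything from Core4 or open4energy; the peak wattage, the 50/50 split, the fleet size, and the utilization are all illustrative assumptions, while the unit conversions are standard (1 W = 3.412 BTU/hr; 1 ton of refrigeration = 12,000 BTU/hr).

```python
# A sketch of the commenter's rule of thumb: half of a server's peak
# power is fixed overhead, the other half scales with CPU utilization,
# and every watt delivered becomes heat that cooling must remove.

IDLE_FRACTION = 0.5   # share of peak power drawn at 0% CPU (per the comment)
PEAK_WATTS = 400.0    # assumed peak draw of one server

def server_watts(cpu_utilization: float) -> float:
    """Estimated draw in watts for a CPU utilization in [0.0, 1.0]."""
    fixed = IDLE_FRACTION * PEAK_WATTS                # fans, memory, etc.
    variable = (1.0 - IDLE_FRACTION) * PEAK_WATTS * cpu_utilization
    return fixed + variable

def cooling_load(total_watts: float) -> tuple[float, float]:
    """Watts delivered = heat created = heat to remove.
    Returns (BTU/hr, tons of refrigeration)."""
    btu_per_hr = total_watts * 3.412    # 1 W = 3.412 BTU/hr
    tons = btu_per_hr / 12_000.0        # 1 ton = 12,000 BTU/hr
    return btu_per_hr, tons

# Example: 100 servers averaging 60% CPU utilization.
it_load = 100 * server_watts(0.60)
btu, tons = cooling_load(it_load)
print(f"IT load: {it_load:,.0f} W -> {btu:,.0f} BTU/hr ({tons:.1f} tons)")
```

A time series of those wattage readings is exactly the kind of data the open4energy project mentioned above graphs with Cacti and RRDtool.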
