
Google Gets Shifty With Its Data Center Operations

Google (s GOOG) is operating a data center in Belgium without chillers (the refrigeration units that cool the air in a data center and consume a lot of electricity), according to Rich Miller over at Data Center Knowledge. What’s most noteworthy, however, is that Google appears to have the means to automatically shift operations away from the chiller-less data center if temperatures climb too high for the gear. The ability to automatically and seamlessly shift data center operations and tasks is a key element in building out data centers that can run on renewable energy, or simply run more efficiently. Miller calls it a “follow-the-moon” strategy: a company with a large number of data centers could shift computing around the globe so that processing happens at night, when temperatures are lower and cooling is cheaper.

The ability to seamlessly shift workloads between data centers also creates intriguing long-term energy management possibilities, including a “follow the moon” strategy which takes advantage of lower costs for power and cooling during overnight hours. In this scenario, virtualized workloads are shifted across data centers in different time zones to capture savings from off-peak utility rates.

I wrote about a similar scenario last July, only I said data center operators would follow the sun with their workloads so they could use a renewable energy source like solar power to provide electricity for their operations. When the sun sets, or on cloudy days, the workload moves to wherever another power source is available. Regardless, moving data center operations isn’t an easy process, and it requires a lot of bandwidth between the data centers. As Google masters this, expect other companies to follow suit, not merely because they can save on power, but because it makes cloud computing much more reliable: when one data center experiences a failure, a cloud provider can redistribute operations around the globe.
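The scheduling idea behind both the follow-the-moon and follow-the-sun strategies can be sketched in a few lines: each site's effective power cost depends on its local time of day, and work is routed to whichever site is cheapest right now. The sites, time zone offsets, and rates below are invented for illustration, and a real scheduler would also weigh bandwidth and migration costs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataCenter:
    name: str
    utc_offset_hours: int  # hypothetical fixed offset from UTC
    day_rate: float        # $/kWh during local daytime (invented figure)
    night_rate: float      # $/kWh at night, when cooling is cheaper

def current_rate(dc: DataCenter, now_utc: datetime) -> float:
    """Return the site's rate based on its local hour (night = 22:00-06:00)."""
    local_hour = (now_utc + timedelta(hours=dc.utc_offset_hours)).hour
    return dc.night_rate if (local_hour >= 22 or local_hour < 6) else dc.day_rate

def pick_site(sites: list[DataCenter], now_utc: datetime) -> DataCenter:
    """Follow the moon: route work to the site that is cheapest right now."""
    return min(sites, key=lambda dc: current_rate(dc, now_utc))

sites = [
    DataCenter("belgium", 1, 0.14, 0.08),
    DataCenter("oregon", -8, 0.10, 0.07),
    DataCenter("singapore", 8, 0.16, 0.11),
]
now = datetime(2009, 7, 16, 2, 0, tzinfo=timezone.utc)  # 02:00 UTC
print(pick_site(sites, now).name)  # belgium: local 03:00, night rate applies
```

At 02:00 UTC it is nighttime in Belgium, so its night rate of $0.08 undercuts Oregon's daytime rate, and the work lands there; eight hours later the answer would flip.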

10 Responses to “Google Gets Shifty With Its Data Center Operations”

  1. Apologies in advance for the shameless self-promotion. We have recently looked at this idea in a research project at MIT, done with some collaborators from Akamai. Our recent paper analyzes such strategies from the standpoint of electricity cost savings. The paper, titled “Cutting the Electric Bill for Internet-Scale Systems”, will appear at the SIGCOMM networking conference, for those of you who want to see some analysis (there are definitely some idealizations and assumptions, but we believe that the analysis is generally sound). The paper also contains a number of useful references.

    The abstract of the paper is as follows: Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US, and network traffic data collected on Akamai’s CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs by being cognizant of locational computation cost differences.
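The kind of comparison the abstract describes can be illustrated with a toy version: take hourly prices at several locations, compare the cost of serving a load from one fixed site against serving it each hour from whichever site is cheapest. All prices below are invented; the paper's actual analysis uses historical market data and a realistic CDN workload.

```python
# Hypothetical hourly electricity prices ($/MWh) for three locations
# over an eight-hour window (all figures invented for illustration).
prices = {
    "virginia":   [42, 40, 39, 45, 55, 61, 58, 50],
    "illinois":   [38, 36, 35, 41, 52, 66, 60, 47],
    "california": [50, 48, 47, 52, 60, 72, 70, 62],
}
load_mwh = 10  # constant hourly load, assumed fully movable between sites

# Baseline: serve everything from one fixed location.
static_cost = sum(p * load_mwh for p in prices["virginia"])

# Price-aware: each hour, serve the load wherever power is cheapest.
hours = len(prices["virginia"])
dynamic_cost = sum(min(series[h] for series in prices.values()) * load_mwh
                   for h in range(hours))

print(static_cost, dynamic_cost)  # 3900 3680
```

Even this crude example shows a few percent of savings from geographic price variation alone; the paper layers in temporal variation, bandwidth constraints, and real traffic to estimate what that is worth at scale.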

  2. Sujit Mohanty

    Cassatt did this as their sole focus, and still they never got it right. In internal data centers and hybrid private/public cloud models, the biggest headache is automation that can handle the underlying infrastructure. It’s a much more difficult task than it seems to the casual observer. We’re about two years out before major companies have fully deployed solutions attacking this problem, and at that point we’ll probably have barely scratched the surface.

  3. vijaygill

    I believe that the cost structure of this is worth analyzing. Microsoft has a good paper on this; basically the summary is that unless you need to touch the data with more than a few hundred thousand cycles’ worth of CPU, it is not worth it to move the data. You can trade off data transport cost, CPU cycles, and environmentals.
    So it is not quite as simple as the follow-the-moon idea above.
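The break-even tradeoff described in the comment above can be sketched as a simple inequality: move a job only when the compute savings at the remote site outweigh the cost of shipping its data there. The function and all the dollar figures below are hypothetical, not from the Microsoft paper.

```python
def should_move(bytes_to_move: float, compute_cycles: float,
                transfer_cost_per_gb: float,
                local_cost_per_gcycle: float,
                remote_cost_per_gcycle: float) -> bool:
    """Move the job only if compute savings at the remote site
    exceed the cost of transferring the data there."""
    transfer_cost = (bytes_to_move / 1e9) * transfer_cost_per_gb
    compute_savings = (compute_cycles / 1e9) * (
        local_cost_per_gcycle - remote_cost_per_gcycle)
    return compute_savings > transfer_cost

# A light job on 1 GB of data: a few hundred thousand cycles is
# nowhere near enough compute to pay for the transfer.
print(should_move(1e9, 3e5, 0.05, 0.02, 0.01))   # False

# A compute-heavy job on the same 1 GB easily justifies the move.
print(should_move(1e9, 1e13, 0.05, 0.02, 0.01))  # True
```

This is the intuition behind the "few hundred thousand cycles" rule of thumb: transfer cost scales with data size while savings scale with compute, so data-heavy, compute-light workloads stay put.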