10 Comments

Google is operating a data center in Belgium without chillers (the power-hungry units that cool the air to help keep a data center at the right temperature), according to Rich Miller over at Data Center Knowledge. What's most noteworthy, however, is that Google appears to have the means to automatically shift operations away from the chiller-less data center if temperatures get too high for the gear. The ability to automatically and seamlessly shift data center operations and tasks is a key element in building out data centers that can run on renewable energy, or merely run more efficiently. Miller calls it a "follow-the-moon" strategy: a company with a large number of data centers could shift computing around the globe so that processes are completed at night, when temperatures are lower and cooling is cheaper.

The ability to seamlessly shift workloads between data centers also creates intriguing long-term energy management possibilities, including a “follow the moon” strategy which takes advantage of lower costs for power and cooling during overnight hours. In this scenario, virtualized workloads are shifted across data centers in different time zones to capture savings from off-peak utility rates.

I wrote about a similar scenario last July, only I said data center operators would follow the sun with their workloads so they could use a renewable energy source like solar power to provide electricity for their operations. When the sun sets, or on cloudy days, the workload moves to wherever another power source is available. Either way, moving data center operations isn't an easy process, and it requires a lot of bandwidth between the data centers. As Google masters this, expect other companies to follow suit, not merely because they can save on power, but because it makes cloud computing much more reliable: when one data center experiences a failure, a cloud provider can redistribute operations around the globe.
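The scheduling logic behind a follow-the-moon (or follow-the-sun) strategy is simple to sketch. Here's a toy version in Python; the site names, UTC offsets, and power prices below are made up for illustration, and a real scheduler would weigh far more than local time:

```python
from datetime import datetime, timezone

# Hypothetical sites with UTC offsets and off-peak power prices.
# All names and numbers are illustrative, not real Google or utility data.
DATA_CENTERS = {
    "belgium":   {"utc_offset": 1,  "price_kwh": 0.11},
    "oregon":    {"utc_offset": -8, "price_kwh": 0.08},
    "singapore": {"utc_offset": 8,  "price_kwh": 0.14},
}

def is_night(utc_now, utc_offset, night_start=22, night_end=6):
    """True if the local hour at the site falls in the overnight window."""
    local_hour = (utc_now.hour + utc_offset) % 24
    return local_hour >= night_start or local_hour < night_end

def follow_the_moon(utc_now):
    """Pick the cheapest site where it is currently night.

    Falls back to the cheapest site overall if nowhere is dark.
    """
    night_sites = {name: dc for name, dc in DATA_CENTERS.items()
                   if is_night(utc_now, dc["utc_offset"])}
    candidates = night_sites or DATA_CENTERS
    return min(candidates, key=lambda name: candidates[name]["price_kwh"])

# At noon UTC it is 4 a.m. in Oregon, so the workload lands there.
print(follow_the_moon(datetime(2009, 7, 16, 12, 0, tzinfo=timezone.utc)))  # oregon
```

A production version would also have to account for data locality, migration bandwidth, and real-time electricity prices rather than a fixed night window.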

  1. Yeah, it’s easier to move solar power around than it would be to move cold air around.

  2. Regarding the “lots of bandwidth needed” … remember Google buying up all that dark fiber years ago? :-)

  3. I believe the cost structure of this is worth analyzing. Microsoft has a good paper on this; basically, the summary is that unless you need to touch the data with more than a few hundred thousand cycles’ worth of CPU, it is not worth it to move the data. You can trade off data transport cost, CPU cycles, and environmentals.
    So it is not quite as simple as the follow-the-moon idea above.
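    A back-of-the-envelope version of that trade-off can be sketched like this (every constant below is an illustrative guess, not a figure from the Microsoft paper):

```python
# Rough break-even check: is it cheaper to ship the data to a cheap-power
# site, or to compute where the data already lives? All constants assumed.
TRANSFER_COST_PER_GB = 0.05   # $ per GB of inter-DC bandwidth (assumed)
LOCAL_PRICE_KWH = 0.14        # $ per kWh at the data's home site (assumed)
REMOTE_PRICE_KWH = 0.08       # $ per kWh at the cheap site (assumed)
KWH_PER_CPU_HOUR = 0.2        # energy per CPU-hour of work (assumed)

def worth_moving(data_gb, cpu_hours):
    """True if the power savings at the cheap site outweigh transfer cost."""
    savings = cpu_hours * KWH_PER_CPU_HOUR * (LOCAL_PRICE_KWH - REMOTE_PRICE_KWH)
    transfer = data_gb * TRANSFER_COST_PER_GB
    return savings > transfer

print(worth_moving(data_gb=100, cpu_hours=10))    # light compute: False
print(worth_moving(data_gb=100, cpu_hours=1000))  # heavy compute: True
```

    The point stands either way: jobs that are compute-heavy relative to their data size are the ones worth migrating.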

    /vijay

    1. The core problem with that analysis was that it was based on how Microsoft would go about accomplishing that. Then.

  4. @vijay you should post a link to that paper

  5. Sujit Mohanty Thursday, July 16, 2009

    Cassatt did this as their sole focus, and still they never got it right. In internal data centers and hybrid private/public cloud models, the biggest headache is automation that can handle the underlying infrastructure. It’s a much more difficult task than it seems to the casual observer. We’re about two years out from major companies having fully deployed solutions attacking this problem, and even then we’ll probably have barely scratched the surface.

  6. Dan Creswell Friday, July 17, 2009

    IMHO Google are already well advanced in handling these issues, as they’re closely related to general failure handling and loss of a data centre. Many useful details can be found in the papers they already publish, but the recent problem around AppEngine and the associated post-mortem reveal quite a lot more:

    http://groups.google.com/group/google-appengine/msg/ba95ded980c8c179?pli=1

  7. Hari Balakrishnan Friday, July 17, 2009

    Apologies in advance for the shameless self-promotion. We have recently looked at this idea in a research project at MIT, done with some collaborators from Akamai. Our recent paper analyzes such strategies from the standpoint of electricity cost savings. The paper, titled “Cutting the Electric Bill for Internet-Scale Systems”, will appear at the SIGCOMM networking conference. The paper is at http://nms.lcs.mit.edu/papers/index.php?detail=190 for those of you who want to see some analysis (there are definitely some idealizations and assumptions, but we believe that the analysis is generally sound). The paper also contains a number of useful references.

    The abstract of the paper is as follows: Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US, and network traffic data collected on Akamai’s CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs by being cognizant of locational computation cost differences.

  8. A Few “Techie” Links of Interest, 21 July 2009 Tuesday, July 21, 2009

    [...] talks about how Google can shift operations from one data center to another half-way around the globe, during [...]

  9. AT&T Dials Up a Computing Cloud – GigaOM Thursday, November 19, 2009

    [...] needs. AT&T’s eventual goal (a common one in the industry) is to enable customers to move their computing around the world either following demand, lower power prices or whatever makes sense for the customer. AT&T may [...]

Comments have been disabled for this post