
Summary:

Google opened up about its data center operations today at an industry event in Phoenix, sharing how its thinking and practices have changed as it seeks to lower the cost and environmental impact of its servers and IT infrastructure.

Photo: Google

Google’s head of data center operations gave a seven-year look at how the search giant’s data center strategy has evolved during the 7×24 Exchange conference on Tuesday in Phoenix, Ariz., offering a rare view into the secretive company’s operations. Coming from the company that pioneered the idea that the data center is no longer a place to keep servers but rather a computer in and of itself, the evolution is eye-opening.

Joe Kava, Google’s VP of data centers, kicked off his presentation with seven years of data center history. So first, the timeline:

  • 2005: Google deployed containers in its data centers, integrating IT hardware directly into the data center space.
  • 2006: Google moved from containers to its own purpose-built data center design.
  • 2007: Google addressed supply chain issues with a “modular at scale” approach, supporting a faster, more cost-effective way to build out data center capacity.
  • 2008/2009: Metrics like PUE and efficiency best practices were adopted and shared.
  • 2010: Google launched its renewable energy efforts, spending over a billion dollars on the initiative; by 2012 it had acquired 260 MW of renewable energy.
  • 2011: A year for standards, with the data center group earning ISO and OHSAS certifications.

This year’s theme is transparency.

Google sharing seven years’ worth of data center initiatives is unheard of in an industry where secrecy is standard practice. As an example of this transparency, Google shared data showing that it has improved its data center mechanical systems over the past four years, cutting the energy used by its electrical and cooling systems by 42 percent.

During the third quarter of 2008, Google’s electrical and cooling systems across all of its data centers used 21 watts for every 100 watts of IT load. In the second quarter of this year, it used only 12 watts for every 100 watts of IT load. In the data center industry, we would call this a power usage effectiveness of 1.12.
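For readers who want the arithmetic spelled out, here is a minimal sketch in Python. The function name is just for illustration, and the input values are the overhead figures quoted above; it shows how overhead watts translate into a PUE number and into the roughly 42 percent reduction cited earlier.

```python
def pue(it_watts: float, overhead_watts: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return (it_watts + overhead_watts) / it_watts

# Q3 2008: 21 W of electrical/cooling overhead per 100 W of IT load
pue_2008 = pue(100, 21)        # 1.21

# Q2 2012: 12 W of overhead per 100 W of IT load
pue_2012 = pue(100, 12)        # 1.12

# Drop in overhead energy per unit of IT load -- roughly the 42 percent cited above
overhead_cut = (21 - 12) / 21  # ~0.43

print(f"PUE 2008: {pue_2008:.2f}, PUE 2012: {pue_2012:.2f}, overhead cut: {overhead_cut:.0%}")
```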

After seven years of data center development, Google shared its vision of modular, cost-effective designs that reduce the lead time to deliver data center capacity. Six years ago, it took 16 months for Google to build its data center in Atlanta. Now Google can react much more quickly, building out capacity at rates measured in months, not years.

The inside of the data center looks like no other, as Google has prioritized flexibility and efficiency while basing its total cost of ownership for IT on a life cycle as short as seven years. That is the accounting depreciation schedule for the capital spent on data center buildings and IT hardware, and it also takes operational expenditures into account. It is not uncommon for legacy data centers to have a life cycle tied instead to the 27-year depreciation of the data center building. Google updates its data center mechanical infrastructure the same way a manufacturer wants the latest tooling to support production innovation.
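To make the contrast concrete, here is a rough, purely hypothetical illustration of straight-line depreciation over a seven-year life cycle versus the 27-year building schedule mentioned above; the capex figure is invented and is not from the article.

```python
# Hypothetical straight-line depreciation comparison; the capex figure is
# invented for illustration only.
capex = 100_000_000  # example build cost in dollars

annual_7yr = capex / 7    # ~$14.3M per year: higher annual expense, frequent refresh
annual_27yr = capex / 27  # ~$3.7M per year: lower annual expense, infrastructure ages in place

print(f"7-year schedule:  ${annual_7yr:,.0f} per year")
print(f"27-year schedule: ${annual_27yr:,.0f} per year")
```

The point is less the dollar figures than the refresh cadence: writing hardware and mechanical plant off over seven years gives Google a built-in reason to keep upgrading it.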

One of the most surprising statements at the conference came from another presentation, by a data center provider: the premium for renewable energy is low, yet no customers have chosen the option when presented with it. Google, by contrast, is thinking long term and has chosen to be carbon-neutral, to share its efficiency best practices and to make a sustainability strategy a priority.

Kava’s presentation had more than 45 slides and went into a deep dive about cooling systems and water use. If you are interested in these topics, here is a post on how Google determined pumps are more efficient than fans. Here is one on the range of water sources Google uses to cool its data centers.

Google has made one of the most significant efforts to change the data center industry by embracing transparency as part of its data center strategy.

Dave Ohara is a GigaOm Pro Analyst, covering data centers, cloud and Big Data. You can read his latest research on GigaOM Pro, and also follow him on his blog and on Twitter at @greenm3.

  1. Reblogged this on My Blog and commented:
    Google-chug-chug

  2. Google has long said and shown how important speed and efficiency are to providing the best user experience, which links directly to how much money they make through ads, so it’s interesting to see how this plays out in their data centres. This is perhaps because the data centre is not just a place to put servers – it’s the direct supply line to their users, so they’re more interested in having the latest tools, which ultimately make a real difference to the end user.

    This has previously been considered a competitive advantage but I wonder what the goal is now of opening things up. Do they want to help with the energy/environmental issues that are best solved at their scale or is there a further business advantage?

    1. Gianfranco Palumbo Wednesday, November 14, 2012

      Pretty sure they want to hire the best talent. And what better way than to tell the world you’re the best company to work for, because you know all things data center.

  3. 12 watts for cooling for every 100 watts of IT load?! – maybe they should move the servers under water

