
Summary:

Facebook has detailed how it migrated its 30-petabyte Hadoop cluster, very likely the largest in the world, from one data center to another, choosing replication over physically moving the machines in order to minimize downtime.

[Image: wildebeest migration]

For anyone who didn’t know, Facebook is a huge Hadoop user, and it does some very cool things to stretch the open source big-data platform to meet Facebook’s unique needs. Today, it shared the latest of those innovations — moving its whopping 30-petabyte cluster from one data center to another.

Facebook’s Paul Yang detailed the process on the Facebook Engineering page. The move was necessary because Facebook had run out of both power and space to expand the cluster — very likely the largest in the world — and had to find it a new home. Yang writes that there were two options, physical migration of the machines or replication, and Facebook chose replication to minimize downtime.

Once it made that decision, Facebook’s data team undertook a multi-step process to copy over data, trying to ensure that any file changes made during the copying process were accounted for before the new system went live. Perhaps not surprisingly, the sheer size of Facebook’s cluster created problems:

There were many challenges in both the replication and switchover steps. For replication, the challenges were in developing a system that could handle the size of the warehouse. The warehouse had grown to millions of files, directories, and Hive objects. Although we’d previously used a similar replication system for smaller clusters, the rate of object creation meant that the previous system couldn’t keep up.
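
Facebook hasn’t published the replication system itself, but the pattern described above (a bulk copy of the live warehouse, repeated catch-up passes for files that changed while the copy ran, and a short final pass during the switchover window) is a familiar one. Here is a minimal sketch, assuming Hadoop’s stock DistCp tool and made-up cluster paths rather than Facebook’s custom tooling:

    # Minimal sketch of the bulk-copy-plus-catch-up pattern described above.
    # Cluster URIs are made up, and Facebook used a purpose-built system rather
    # than a loop around DistCp; this only illustrates the general approach.
    import subprocess

    SRC = "hdfs://old-cluster:8020/warehouse"   # hypothetical source warehouse
    DST = "hdfs://new-cluster:8020/warehouse"   # hypothetical destination

    def sync(extra_args=()):
        # DistCp's -update flag recopies only files that differ, so repeated
        # runs behave like incremental catch-up passes.
        subprocess.run(["hadoop", "distcp", "-update", *extra_args, SRC, DST],
                       check=True)

    sync()                # phase 1: bulk copy while the source stays live
    for _ in range(3):    # phase 2: catch-up rounds (the count here is arbitrary)
        sync()
    # phase 3: quiesce writes on the source, run one last pass that also mirrors
    # deletions, then point jobs and the Hive metastore at the new cluster.
    sync(["-delete"])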

Ultimately, Yang writes, the migration proved that disaster recovery is possible with Hadoop clusters. This could be an important capability for organizations considering relying on Hadoop (by running Hive atop the Hadoop Distributed File System) as a data warehouse, like Facebook does. As Yang notes, “Unlike a traditional warehouse using SAN/NAS storage, HDFS-based warehouses lack built-in data-recovery functionality. We showed that it was possible to efficiently keep an active multi-petabyte cluster properly replicated, with only a small amount of lag.”
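
Yang’s point about lag suggests the same machinery can keep running after a migration, turning the second cluster into a warm standby. A rough sketch of that loop, with the change feed and the copy step left as placeholder stubs (HDFS has no single built-in API for this, which is exactly the gap Yang describes):

    # Rough sketch of keeping a replica warm with bounded lag, per the quote above.
    # list_changed_files() and copy_to_replica() are hypothetical placeholders; a
    # real system might tail the HDFS edit log or a Hive metastore audit log.
    import time

    def list_changed_files(since_ts):
        """Placeholder: return paths modified on the primary after since_ts."""
        return []

    def copy_to_replica(paths):
        """Placeholder: ship the given paths to the standby cluster."""
        pass

    watermark = time.time()  # the replica is consistent up to this timestamp

    while True:
        round_start = time.time()
        copy_to_replica(list_changed_files(watermark))
        watermark = round_start            # everything older is now replicated
        lag = time.time() - watermark      # how far the standby trails the primary
        print(f"replica lag: {lag:.0f} seconds")
        time.sleep(60)                     # poll interval is an arbitrary choice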

For Facebook, though, it looks like its fast-growing Hadoop data warehouse is just part of a larger trend toward needing more space. Last night, Facebook confirmed it’s building a second data center in Prineville, Ore., next to its existing one. That will make three for the company, which also is building a data center in Forest City, N.C.

Image courtesy of Flickr user daretothink.

  1. Good stuff. Yahoo should talk more about its activity in this area. They operate roughly 20 large clusters in 4+ data centers for a total of >42,000 nodes. Many production apps run hot/hot so that there is no interruption of service in the face of datacenter failure. Many others run hot/warm with all the input data imported to 2+ datacenters simultaneously.

    This scale of move is large even by their standards, but could be handled in stride by their excellent ops team. They’ve done plenty of full cluster moves as they have scaled up Hadoop.

  2. Well, considering that it’s 30 PB, I think Facebook did a good job with the data. But what happens if that second data center gets full?

  3. Ah, why would you build your disaster recovery data center next door to your primary data center? What am I missing here?

    1. Ralph H. Stoos Jr. Thursday, July 28, 2011

      You are not missing anything. They missed at least part of the CISSP course. I think they should move everything to the coast of Florida or Louisiana and wait for a hurricane. Doh!

  4. Redundant data centers next to each other? Shame on you.

    1. Derrick Harris Thursday, July 28, 2011

      Just to clarify, FB didn’t mention where it moved from and to. I suspect it was from an existing colo space to Prineville, but I could be wrong. And the move wasn’t about DR, it just proved that relatively fast DR is possible with HDFS.

      The new data center it’s building is next door to the current one in Prineville, but FB is also building one in N.C., which is a long way from Oregon.

  5. CloudCover℗ is the latest way to store your computer content–no more cranky hard drives, no more frustrating “sorry full” popups on your screen. We offer only the finest vapor storage units. All clouds are pretested using a rigorous 10-point quality control system. No fog, no mist, only pure fluffy floaters are selected by our experienced Cloudies.

    The whimsy continues at Thinking Out Loud, http://marperl.blogspot.com/2011/03/cloudcover-computing.html

  6. Riaan Pietersen Friday, July 29, 2011

    This is fascinating! Huge amounts of data replicated. Job well done.
    I think Facebook should look at engineering a system whereby small chunks of data are stored on users’ computers throughout the world and then collated in a torrent-like fashion, with caching happening in their data centres. Now that would be something! :)


Comments have been disabled for this post