
Making good on its pledge last summer, Twitter has moved into its own custom-built data center, a facility designed to handle its unique needs that the company claims will be the service’s “final nesting ground.” Twitter engineer Michael Abbott detailed the move in a blog post this morning. With the process complete, Twitter has few excuses left for the extended or frequent downtime the popular service has become known for over the years.

The migration, as Abbott describes it, began in September and required a fairly detailed process of replication and careful staging:

First, our engineers extended many of Twitter’s core systems to replicate Tweets to multiple data centers. Simultaneously, our operations engineers divided into new teams and built new processes and software to allow us to qualify, burn-in, deploy, tear-down and monitor the thousands of servers, routers, and switches that are required to build out and operate Twitter. With hardware at a second data center in place, we moved some of our non-runtime systems there – giving us headroom to stay ahead of tweet growth. This second data center also served as a staging laboratory for our replication and migration strategies.
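
To make the replication step concrete, here is a minimal Python sketch of the general pattern Abbott describes: the user-facing write goes to the primary data center synchronously, while a background worker ships a copy to the second site. The names here (TweetStore, ReplicatingWriter) are hypothetical illustrations, not Twitter’s actual internals:

    import queue
    import threading

    # Hypothetical sketch: synchronous primary write plus asynchronous
    # replication to a second data center. Not Twitter's actual code.

    class TweetStore:
        """In-memory stand-in for one data center's tweet storage."""
        def __init__(self, name):
            self.name = name
            self.tweets = {}

        def write(self, tweet_id, body):
            self.tweets[tweet_id] = body

    class ReplicatingWriter:
        """Writes to the primary synchronously, replicates in the background."""
        def __init__(self, primary, secondary):
            self.primary = primary
            self.secondary = secondary
            self.backlog = queue.Queue()
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, tweet_id, body):
            self.primary.write(tweet_id, body)   # user-facing write
            self.backlog.put((tweet_id, body))   # queued for the second site

        def _drain(self):
            while True:
                tweet_id, body = self.backlog.get()
                self.secondary.write(tweet_id, body)
                self.backlog.task_done()

    primary = TweetStore("primary-dc")
    secondary = TweetStore("secondary-dc")
    writer = ReplicatingWriter(primary, secondary)
    writer.write(1, "just setting up my twttr")
    writer.backlog.join()  # block until replication has caught up
    assert secondary.tweets == primary.tweets

A system at Twitter’s scale would of course use durable queues and retries rather than an in-process thread, but the shape of the write path is the same.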

At the same time, Abbott explained, Twitter’s engineers were busy prepping the final data center to take over all of the work:

Next, we set out rewiring the rocket mid-flight by writing Tweets to both our primary data center and the second data center. Once we proved our replication strategy worked, we built out the full Twitter stack, and copied all 20TB of Tweets, from @jack’s first to @honeybadger’s latest Tweet to the second data center. Once all the data was in place we began serving live traffic from the second data center for end-to-end testing and to continue to shed load from our primary data center. Confident that our strategy for replicating Twitter was solid, we moved on to the final leg of the migration, building out and moving all of Twitter from the first and second data centers to the final nesting grounds.
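
The “shed load” portion of that passage describes a staged cutover: once both sites held the full corpus, read traffic could be shifted to the new data center a slice at a time, so that problems surfaced under partial load rather than all at once. A rough, hypothetical sketch of that ramp-up (the routing knob and site names are invented for illustration):

    import random

    # Hypothetical sketch of a staged cutover: shift an increasing
    # fraction of read traffic to the new data center. The routing
    # knob and site names are invented for illustration.

    def route_read(fraction_to_new_dc):
        """Send roughly this fraction of reads to the new data center."""
        return "new-dc" if random.random() < fraction_to_new_dc else "old-dc"

    # Ramp from a trickle of test traffic up to full production load.
    for fraction in (0.01, 0.10, 0.50, 1.00):
        reads = [route_read(fraction) for _ in range(10000)]
        observed = reads.count("new-dc") / len(reads)
        print(f"target {fraction:.0%} -> observed {observed:.1%} on the new DC")

In practice the traffic shift happens at the load-balancer or DNS layer rather than in application code, but the gradual ramp schedule is the essence of the technique.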

As Abbott explained, the new Twitter infrastructure, reportedly housed within C7's new Salt Lake City data center, is designed to optimize performance and give Twitter extra runway for improvements. For Twitter’s sake, the strategy had better pay off: all the new features in the world won’t make up for regular bouts of downtime, especially as the service grows even more popular. If the migration process is any indication, though, Twitter may finally have found a cure for the all-too-common fail whale, as Abbott notes that performance and uptime actually improved during the migration, even as the service added new features and more users.

Twitter’s new infrastructure also highlights a big difference in sheer operational scale between itself and Facebook. When Facebook started to outgrow its hosted infrastructure, it didn’t lease space within a colocation facility; it built its own cutting-edge facility, then doubled the planned size and sited a second facility during the construction process. Facebook has more users, houses far more data and is rolling in cash, so its decision to build two large data centers isn’t too surprising. But I wonder whether Facebook’s path foretells what might happen with Twitter a few years down the line. Will continued growth in users, data and performance needs take Twitter to a place where it’s in the company’s best interests, technologically and financially, to break ground on a new “final nesting ground” of its own?

We have approached Twitter for a comment on this story and to verify the location of its new data center.

Image courtesy of Flickr user TheMuuj.

  1. Derrick, I think your article highlights some important trends in large-scale service deployments. As a service grows, high availability and excellent performance become business critical. Both Twitter and Facebook built their own data centers, incorporating flash memory and new custom data-access technologies to ensure that quality of service rather than trying to run it in the cloud. Smaller growing companies also need high availability and excellent performance, along with the power and cost savings, and advances in flash memory-based standard databases should enable them to realize these benefits without having to fund large-scale development of custom infrastructure.
