Earlier this year, rumors swirled about whether Twitter had actually moved into a new Utah data center or had instead been forced to shift its operations to a different facility in Sacramento. That confusion still hasn’t been resolved, but now Rich Miller at Data Center Knowledge reports that Twitter is leasing more data center space, this time in Atlanta.
The move to Atlanta appears to be an effort to replace Twitter’s previous East Coast operation in Ashburn, Va., where Twitter leased space from NTT America. As Miller points out, though, Twitter might still be looking for more space in the Washington, D.C., area too.
Wherever they’re actually located, the tale of Twitter’s data centers underscores the importance of the infrastructure that powers our favorite web applications. A company’s infrastructure has to be big enough to store loads of data, fast enough to serve transactions at the rate users expect, and distributed enough to help guarantee uptime should another site fail. The East-West approach, as Miller calls it, helps with all three goals and, as he notes, Facebook and Apple (s aapl) have undertaken similar strategies.
However, that East-West plan is just a first step toward globally distributed infrastructures like those built by web veterans such as Google (s goog), Yahoo (s yhoo) and Microsoft (s msft) over the years. After that global build-out occurs, companies can start promising no unplanned downtime for applications, as Google did earlier this year for Google Apps. Fielding multiple global locations doesn’t guarantee continuous uptime — companies still have to work some magic around failover and workload migration — but it’s a great start.
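As a loose illustration of the failover half of that magic, here is a minimal sketch of health-check-based routing across sites. The site names and the health-check function are hypothetical examples, not a description of how Twitter, Google, or anyone else actually implements this.

```python
def pick_active(centers, is_healthy):
    """Return the first data center that passes its health check,
    or None when every site is down (a total outage).

    `centers` is an ordered list of site names or URLs;
    `is_healthy` is any callable that probes a single site."""
    for site in centers:
        if is_healthy(site):
            return site
    return None

# Usage: prefer the East Coast site, fall back to the West Coast one.
sites = ["east.example.com", "west.example.com"]
# Pretend only the West Coast site currently reports healthy,
# so traffic shifts there automatically.
active = pick_active(sites, lambda site: site.startswith("west"))
```

The point of the sketch is that continuous uptime comes from the routing logic on top of the locations, not from the locations themselves; in practice the health probe, the re-routing, and the migration of in-flight workloads are all far harder than this toy suggests.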
It’s noteworthy, though, that Twitter is still very much in the leasing phase while Facebook and Apple are busy building their own facilities from the ground up. That decision probably has a lot to do with money, something Facebook and Apple have in spades compared with Twitter.
But building your own data centers isn’t critical, and Twitter likely will be just fine using leased space. Building offers customization advantages that can help companies drive down their energy bills, for example, but leasing lets them take advantage of the data center operator’s hard work in building state-of-the-art facilities.
There’s also the software aspect, which goes a long way toward improving an application’s performance and usefulness. Twitter has been very active in creating, and open sourcing, all sorts of tools for analyzing user data and handling the firehose of data streaming into the service at all times. Recently, Twitter bought social analytics startup BackType, and it plans to open source the company’s real-time Hadoop-like processing engine, called Storm.
Hopefully for its sake, Twitter will be able to keep up the pace of software innovation as it grows well past the 100-million-user mark. Last week, Abdur Chowdhury, the company’s chief scientist, reportedly responsible for Twitter’s search and recommendation features, announced his resignation.