Summary:

Facebook is playing host to half a billion people. And one of the main (and unsaid) reasons it has been able to get there is its technical underpinnings. From thousands of servers to its own data center, Facebook knows the social web needs big, beefy and superfast web infrastructure.

Facebook is now playing host to half a billion people. The site is the central point of many users' daily lives. It is their newspaper. It is their photo site. It is their online social reality. And one of the main (and underappreciated) reasons it has been able to get there is its infrastructure.

The success of Facebook and its ability to handle 500 million users shows that even on the people-centric social web, what ultimately matters is the ability to scale and the infrastructure to support that scale. Just as Google has used its infrastructure to its advantage, returning search results faster than its rivals, Facebook has outlasted competitors that stumbled over the challenges of scaling their own infrastructure.

It is no small feat, by any yardstick. In a blog post outlining its growth, Robert Johnson, a Facebook director of engineering, notes that the service has:

  • 500 million active users
  • 100 billion hits per day
  • 50 billion photos
  • 2 trillion objects cached, with hundreds of millions of requests per second
  • 130 terabytes of logs every day

If you look at the graph of our growth you’ll notice that there’s no point where it’s flat. We never get to sit back and take a deep breath, pat ourselves on the back, and think about what we might do next time. Every week we have our biggest day ever. We of course have a pretty good idea of where the graph is headed, but at every level of scale there are surprises. The best way we have to deal with these surprises is to have engineering and operations teams that are flexible, and can deal with problems quickly.
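The "2 trillion objects cached" figure points at the workhorse pattern behind numbers like these: a read-through cache, where hot objects are served from memory and only misses fall through to the backing store. A minimal LRU sketch in Python (the `fetch_from_store` callable is a hypothetical stand-in for a database lookup, not anything Facebook has described):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Tiny LRU read-through cache: serve hot objects from memory,
    fall back to the backing store only on a miss."""

    def __init__(self, fetch_from_store, capacity=1000):
        self.fetch = fetch_from_store   # callable: key -> value (hypothetical store)
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)          # mark as most recently used
        else:
            self.misses += 1
            self.data[key] = self.fetch(key)    # read through to the store
            if len(self.data) > self.capacity:  # evict least recently used
                self.data.popitem(last=False)
        return self.data[key]

# Toy usage with a capacity of 2 objects.
cache = ReadThroughCache(fetch_from_store=lambda k: f"object:{k}", capacity=2)
for key in ["a", "a", "b", "c", "a"]:
    cache.get(key)
```

The point of the design is the miss path: at hundreds of millions of requests per second, databases survive only because the cache absorbs the overwhelming majority of reads.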

A flexible architecture has allowed Facebook to scale with its audience. According to web analytics and performance measurement service AlertSite, between April 1 and June 30, 2010, the average response time for Facebook was about 1.02 seconds, nearly a fourth of Twitter's. Twitter was the worst among all social networks in terms of availability. From the AlertSite blog:

During Q2 we witnessed a worldwide Internet event — the World Cup — which began on June 11 and carried through into the current month. However, the site was ill-equipped for the volume of traffic it would receive. Twitter’s experience demonstrates the effect worldwide events such as the World Cup can have on a website, particularly when it has not prepared in advance. As demand for real-time information increases, consumer expectations for the time it should take a website to load follow suit. The performance of social sites must scale to meet these demands.

These performance issues have caused a lot of heartburn amongst Twitter’s developer and partner community. In an IDG News report published earlier today, Seesmic CEO Loic Le Meur put it bluntly:

We are generally used to the service going down without any warning and never surprised. We’re more surprised when it’s up for weeks without problems.

That is not a good reputation for any service to have. In many ways, putting an end to unscalable infrastructure and unreliable service is what will prevent Twitter from becoming Friendster, an early social network that lost all its momentum because of its pokey infrastructure. (Twitter is addressing the problems and is launching its own data center, as reported yesterday.)

The social web is very complex. Data on social networks is dynamic, constantly growing and always changing. And the problem is only going to get more and more difficult as the amount of activity on social networks increases exponentially with every new user.

Facebook’s VP of Technical Operations, Jonathan Heiliger, speaking at our Structure 2010 conference, put it best: “You can never think about scale too early.” Especially when it comes to social web services.

Comments:

  1. What are social networks? A totally bogus marketing term.

    FB is a web hosting company that allows consumers to publish photos online. This is super lame.

  2. This is very true. I’m quite late to the social network bandwagon myself. I only started to have a good presence online when Twitter came out; Twitter was my first active social profile. I’ve built up my following and conversations every now and then. Eventually, I signed up for Facebook too. Now I find myself updating my Facebook more than I do my Twitter account. And in my experience, Facebook has never failed me. Twitter, on the other hand, shows me whales every now and then…

  3. Great post. So who was the 500 millionth user? I hope he/she got a ridiculous prize.

  4. Somehow these numbers don’t seem right.

    The 500 million active user number is well documented/celebrated.

    But 100 billion hits per day?

    So the average active Facebook user produces 200 hits per day? That seems excessive.

    1. Hits are different from visits. Given that any single page visit will load, say, 20-30 individual items, chances are that’s correct. Especially when you know some people hit F5 like there’s no tomorrow.

  5. The infrastructure setup is truly impressive. It is no mean task to handle web requests on this scale. All this hardware infrastructure (CPU, memory, network and so on) and its management does not come free, although the Facebook service itself is free to its 500 million users. I wonder which Wall Street firm is paying for it right now. I sure hope my 401k managed by a creative accountant is not funding this.

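The arithmetic in comment 4 and its reply is easy to check: 100 billion hits spread over 500 million active users is 200 hits per user per day, and at roughly 20-30 resources per page view (the reply's estimate; 25 is an assumed midpoint) that works out to only about eight page views per user per day, which is entirely plausible:

```python
# Back-of-the-envelope check of the numbers in the post and comments.
hits_per_day = 100e9          # reported hits per day
active_users = 500e6          # reported active users
assets_per_page = 25          # assumed midpoint of the 20-30 items per page view

hits_per_user = hits_per_day / active_users
page_views_per_user = hits_per_user / assets_per_page

print(hits_per_user)        # 200.0 hits per user per day
print(page_views_per_user)  # 8.0 page views per user per day
```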
