Inside Facebook’s Photo Factory

Ever since I got a BlackBerry 8900 with a 3.2-megapixel camera, I’ve been busy taking photos, randomly at times, and uploading them to my Facebook account to share with the 2,000 or so of my closest friends. Apparently I’m just one of millions of people who together upload nearly 220 million images to Facebook every week.

In a blog post today, Facebook shares some secrets of its photo infrastructure, which is based on its core innovation, Haystack. First let me give you some fun facts about Facebook Photos, which will help you understand why what they’ve done is so impressive.

  • Facebook users have uploaded more than 15 billion photos to date, making it the biggest photo-sharing site on the web.
  • For each uploaded photo, Facebook generates and stores four images of different sizes, which translates into a total of 60 billion images and 1.5 petabytes of storage.
  • Facebook adds 220 million new photos per week, or roughly 25 terabytes of additional storage.
  • At peak, Facebook serves 550,000 images per second. (See more in this video.)
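The figures above hang together on a back-of-the-envelope check. The sketch below is my own arithmetic, not Facebook’s; all values are approximate:

```python
# Rough sanity check of the published numbers. All figures approximate.

photos = 15e9            # total photos uploaded to date
sizes_per_photo = 4      # Facebook stores four sizes of each photo
total_images = photos * sizes_per_photo          # 60 billion images

storage_bytes = 1.5e15   # 1.5 petabytes total
avg_image = storage_bytes / total_images         # avg size per stored image
print(f"~{avg_image / 1e3:.0f} KB per stored image")

weekly_photos = 220e6    # uploads per week
weekly_bytes = 25e12     # 25 terabytes of weekly growth
print(f"~{weekly_bytes / weekly_photos / 1e3:.0f} KB added per uploaded photo")
```

The two per-photo figures land in the same ballpark (tens to low hundreds of kilobytes once you count all four stored sizes), so the published totals are internally consistent.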

Growth at such speeds made it almost impossible for Facebook to solve the scaling problem by throwing more hardware at it; they needed a more creative solution. Enter Doug Beaver, Peter Vajgel and Jason Sobel, three Facebook engineers who came up with the idea of the Haystack photo infrastructure.

“What we needed was something that was fast and had the ability to back up data really fast,” said Beaver in an interview earlier today. The concept they came up with was pretty simple and yet very powerful. “Think of Haystack as a service that runs on another file system,” explained Beaver. It is a system that does only one thing, photos, and does it very well. From the Facebook blog post:

The new photo infrastructure merges the photo serving tier and storage tier into one physical tier. It implements a HTTP based photo server, which stores photos in a generic object store called Haystack. The main requirement for the new tier was to eliminate any unnecessary metadata overhead for photo read operations, so that each read I/O operation was only reading actual photo data (instead of filesystem metadata).

The Haystack infrastructure is composed of commodity servers. Again, from the post:

Haystack is deployed on top of commodity storage blades. The typical hardware configuration of a 2U storage blade is: 2 x quad-core CPUs, 16GB – 32GB memory, hardware raid controller with 256MB – 512MB of NVRAM cache and 12+ 1TB SATA drives

Typically, when you upload photos to a photo-sharing site, each image is stored as its own file, and each file carries its own filesystem metadata; with billions of files, that overhead is magnified many times over and imposes severe limitations on read performance. As a result, most sites end up using content delivery networks to serve photos, a very costly proposition. Haystack instead layers an object store on top of this commodity storage. Each photo is akin to a needle and has a small amount of information, its identity and location, associated with it. (Finding a photo is akin to finding a needle in a haystack, hence the name of the system.)

That information is in turn used to build an index file. A copy of the index is held in the system’s memory, making it possible to locate and serve files at lightning-fast speeds. This rather simple-sounding design means the system needs just one-third the number of I/O operations typically required, and therefore one-third of the hardware resources, which translates into tremendous cost savings for Facebook, especially considering how fast it’s growing.
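The needle-and-index idea can be sketched in a few lines. The toy class below is purely illustrative, not Facebook’s actual code: it appends every photo to one big file and keeps an in-memory index mapping photo ID to (offset, size), so a read costs exactly one seek-and-read of photo bytes, with no per-file metadata lookups along the way. (The real Haystack adds needle headers, checksums, deletion flags and an on-disk index for fast restarts.)

```python
import os

class ToyHaystack:
    """Toy append-only photo store: one big file plus an in-memory index.
    Illustrative sketch only, not Facebook's implementation."""

    def __init__(self, path):
        # "w+b" truncates for the demo; a real store would reopen and
        # rebuild its index from an on-disk index file instead.
        self.f = open(path, "w+b")
        self.index = {}                       # photo_id -> (offset, size)

    def put(self, photo_id, data):
        self.f.seek(0, os.SEEK_END)           # always append, never rewrite
        offset = self.f.tell()
        self.f.write(data)
        self.f.flush()
        self.index[photo_id] = (offset, len(data))

    def get(self, photo_id):
        offset, size = self.index[photo_id]   # pure in-memory lookup
        self.f.seek(offset)
        return self.f.read(size)              # one read, photo bytes only

store = ToyHaystack("photos.dat")
store.put("p1", b"\x89PNG...tiny")
store.put("p2", b"\xff\xd8JPEG...tiny")
assert store.get("p1") == b"\x89PNG...tiny"
```

Because the index lives in RAM, the filesystem-metadata reads that a one-file-per-photo layout would require simply disappear, which is where the factor-of-three I/O saving described above comes from.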

Next time I upload a photo, I will be sure to remember that.

Photo courtesy of Flickr.