
How Facebook Changed Technology in One Day


The biggest deal about Facebook’s Open Compute Project isn’t the project itself; it’s the wave of innovation it can set off at the systems level, which will affect everyone from the chipmakers to the giant systems vendors and data center operators. At its event Thursday, Facebook unveiled the Open Compute Project, which essentially open sources the systems layer that sits between the standard components inside a server and the hypervisor orchestration layer (which itself has been open sourced by the OpenStack project).

There are two things I took away from this: (1) by tying the servers and the data center together into a holistic unit, the data center has now officially become the computer, and (2) the big iron providers have just had the rug pulled out from under them. They will need to shift their business to this data-center-centric viewpoint or they will lose out in the very area where their business is growing fastest.

These server systems, which have historically been built by IBM, HP, Dell and now even Cisco, have already begun down a path of consolidation, and players such as HP have been preparing for this future of the data center as a computer by purchasing EYP, a data center design firm. In this vision of the data center as the computer, the server becomes a component, and now, after Facebook’s announcement, it becomes a commodity component of sorts.

Yes, people will still buy servers from Dell, HP and the like, but as more and more people move to on-demand computing, either at the infrastructure level as provided by Amazon’s EC2 or at the platform level as provided by an internal cloud or a public PaaS, the older hardware designed for legacy enterprise applications stops being a growth business. Those machines become mainframes: they’ll still be in the bowels of the building, but that’s not where the newest applications will be built. So the big iron vendors must learn to play a new game for these customers, and it’s a game that Dell is likely the best equipped to play.

HP's server built on Open Compute specs
Dell doesn’t fret over stripped-down commodity servers; it saw the demand and built an entire business, Data Center Solutions, to sell them. At the event, Forrest Norrod, VP and general manager of server platforms at Dell, wasn’t worried about losing business to Facebook’s Open Compute efforts, saying simply that Dell will now provide end-to-end solutions and innovation on top of those kinds of designs. Both HP and Dell were showing off variations of the stripped-down, vanity-free Facebook servers, and each had borrowed different elements from Facebook while emphasizing how customizable its options were.

In talking to the salespeople manning the server prototypes, it occurred to me that for webscale customers such as Facebook, it makes sense to put in the time and effort to build your own house, while at the other end there are those who buy EC2 instances or dedicated hosting, which is like renting an apartment. There’s no customization there, a point Jonathan Heiliger, the VP of technical operations at Facebook, made at the event. Gear on offer from Cisco, IBM, Dell or HP, however, is like getting a McMansion: there’s some customization, but there are only a few basic models to choose from.

I think that in the short term the market for McMansions is giant, as enterprises test out the cloud but want trusted performance and vendors; the aspiration for many, though, will be to build their own custom homes. What Facebook has done is make the custom architecture cheaper to build and run, which in turn makes it easier for other players to come along behind and build better apartments inspired more by the custom homes than by the McMansions.

At the systems level, the news is hugely disruptive, and for the chip companies the pressure is now on. Unlike code, which can be tweaked in a matter of hours or days, hardware has to be built. And if we’re talking about constructing a data center, we’re talking about a construction process that lasts months if not years. So how fast can an open source hardware design iterate? Pat Patla, GM and VP of the server division at AMD, explained that development cycles for hardware and systems have gradually compressed from 24 months seven or eight years ago to 18 months in the last few years. “Now, lots of folks are comfortable in 12-month development cycles and [Facebook’s news] should help more folks get there, but silicon doesn’t move that fast and now there is a very high pressure on the silicon.”

Intel's Jason Waxman (left) and Rackspace's Graham Weston
And for those who think this doesn’t change much, I’ll close with Graham Weston, the chairman of Rackspace, who said Rackspace was planning its next decade of data center projects and had spent the last few years working out how to build the right, efficient infrastructure. But after it talked to Facebook and saw the 38 percent savings Facebook achieved in running its servers, he said of the Rackspace plan, “We threw it out.” Now Rackspace plans to adapt Facebook’s ideas for its own use, and perhaps build on them.

And that’s the biggest news of all from today’s announcement. Once something is shared with the community, everyone can take part in adding innovation on top of innovation. Today, Facebook has achieved a power usage effectiveness rating of 1.07, down from an industry average of 1.5, but now that Facebook has shared its work, how long until someone can reach .75, or zero? Sure, this is disruptive for the big iron vendors, but it’s also disruptive to the old, slow way of improving our computing infrastructure. So let’s see what’s next.
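For readers who haven’t run the numbers, PUE is simply total facility power divided by the power delivered to the IT equipment itself. A quick sketch of the arithmetic (the kilowatt figures here are illustrative, not Facebook’s actual loads):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# At the industry-average PUE of 1.5, a 1,000 kW IT load draws 1,500 kW total.
print(pue(1500, 1000))   # 1.5

# At Facebook's reported 1.07, the same IT load needs only 70 kW of overhead
# (cooling, lighting, power-conversion losses) on top of the 1,000 kW.
print(pue(1070, 1000))   # 1.07

# Total facility power saved for the same IT load, moving from 1.5 to 1.07:
savings = (1500 - 1070) / 1500
print(f"{savings:.1%}")  # ~28.7% of the building's total draw
```

By this definition 1.0 is the floor, since the IT load itself is always counted, which is the point several commenters make below.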

8 Responses to “How Facebook Changed Technology in One Day”

    • Christoph Weber

      I think you’re a bit harsh here, and so is the author of the linked blog. For example, the power supply in FB’s servers is innovative, as is their battery backup solution, and the design docs allow others to come up with competing implementations if they are sufficiently knowledgeable. And because the designs are GPL’d there is no road block whatsoever to reimplement anything.

      Moreover, Facebook’s approach is a nice example of leaving nothing unquestioned when designing and implementing a new data center. Just sharing the thought process is already worth a lot.
      I wish Google and Amazon were similarly open.

    • Hey Mohan… thanks for taking an interest in the Open Compute Project. As we said last week, we’re starting at v1.0 and are looking forward to collaborating with like-minded people on how we can collectively improve application, server and datacenter efficiencies. The site has already been updated with additional material and we are working on posting more detailed specifications as the partners provide permission to share their work.

  1. J Flash Rodriguez

Powershifting since 1990 and always learning new things. With PUE being important to climate change, thank God for constant innovation. Having worked at many of the fine technology institutions mentioned, I can say PUE has come a long way in 10 years. Like Alvin Toffler said.

  2. What does it mean to “do work” in this context?
    Deliver an end result of some sort? What would that be?
    Aligning magnetic domains on a disk platter? Sending real info encoded as a burst of light onto a fiber that actually leaves the building?
    Beyond those, everything else winds up as heat (ok, perhaps a stray RF noise emission might escape the building and planet and propagate through outer space :-)

    • Christoph Weber

As a facility designer/maintainer, you have no idea what the servers do and simply assume they’ll do something useful. Delivering on that assumption is the job of the programmers and sysadmins. The facility guys make sure that, of the power entering the building, as many watts as possible go to powering servers, as opposed to being spent on overhead such as cooling, lighting, various losses, etc. In this context, work simply means keeping the servers humming.

      The servers convert those Watts into heat, of course. No useful mechanical work is done, and the information entropy is completely negligible.

  3. Duncan

    To achieve a PUE of 1.0 or lower would require the laws of physics, as we know them, to be bent and broken. A PUE of 1.0 means the machine, as a whole, is 100% efficient – every watt delivered to the datacenter for machine use is used to do work, not lost as heat or in voltage conversion inefficiencies. A PUE under 1.0 would mean you have invented perpetual motion.

  4. Christoph Weber

You can’t have a power usage effectiveness of 1.0, much less below 1.0. Your servers consume 1.0 units of energy by definition, and the difference between 1.0 and, say, 1.07 is how much you spend running the rest of the data center: cooling, lighting, losses, etc.
    FB’s 1.07 and Yahoo’s 1.08 are damn low and extremely hard to beat.

    Myself, I am looking forward to a new data center in May running at a PUE of 1.2, which is still considered cutting edge if you don’t have the luxury of building from scratch like the big boys.