Facebook’s Open Compute Project has been characterized as revolutionary, a giant push that will propel server design into the future now, but what if it isn’t actually all that meaningful? What if it’s just a cool but niche project that shows how smart Facebook is, that might inspire a few other huge companies with similar needs to follow suit, but that will have limited impact in the greater IT world? Such is the case put forward by Jeramiah Dooley this morning on his Virtualization for Service Providers blog (note: Dooley is an employee of VCE, the joint venture between VMware, Cisco and EMC to peddle Vblock converged infrastructure systems).

Essentially, argues Dooley, “Facebook isn’t doing anything that Google and others haven’t already done,” and the model, which works great if you have just one application type that can benefit from Facebook’s brand of application virtualization, won’t cut it in most IT departments. “[T]he users, and their [multiple types of] applications, always come first,” he writes. Elaborating on the latter point, he adds:

In the case of Facebook and Google, the data centers, every server it holds and even the geographic location of the facility have been focused to support the one application stack they provide to their end-users, which isn’t so different from how an enterprise uses specific types of servers and environmental design to cover the requirements from each of the application types they support. The difference is in the number of applications that are required, and the scale that they are used.

For the most part, I think Dooley is right; I had some of the same thoughts yesterday while digesting this news. Further, I’d add that most traditional enterprises — while certainly concerned about energy costs and standardization — are fairly risk-averse and aren’t likely to invest in caseless servers anytime soon, not after years of doing everything possible to protect their boxes and the valuable information and applications inside them. Unless they’re actually operating at Facebook scale, most IT shops don’t build their data centers to fail, so they tackle cost, consolidation and efficiency through denser servers, virtualization and private cloud software.

When it comes to the revolutionary aspect of the Open Compute Project, Dooley notes that not only has Google been doing the do-it-yourself server thing for a while, but also that:

Outside air? Higher ambient temps? Physically isolated cold/hot aisles? Forced plenum? Efficient server power supplies? Modular design? Come on, all of these have been in general use for years. I worked for a regional co-lo company for six years, and almost every one of these principals [sic] were in use there since 2004 or so.

I’m no data center design expert, so I’ll assume for the sake of argument that this is true, though I suspect what Facebook is doing might be a bit more advanced than what many other data center operators have been doing. I would add that Facebook’s bare-bones, caseless servers are more evolutionary than revolutionary: We’re already working toward such a world with micro servers, x86 alternatives and altogether denser, more energy-efficient server architectures, including those being sold by Facebook partner Dell. Unless other companies of Facebook’s ilk are willing to start building their own servers — something history suggests will be limited at best — it seems the natural result of the Open Compute Project will just be new lines of products for legacy server makers to sell alongside their existing ones.

What Dooley ignores, however, is that large web applications actually are proliferating and cloud computing is changing the way applications run. Webscale used to mean Google, Yahoo and eBay, but it now also means Facebook, Twitter, Netflix, Zynga, Myspace (arguably) and a number of other popular websites that have skyrocketed in popularity and need to scale as efficiently as possible to handle their ever-increasing traffic. With the advent of tablets, smartphones, streaming media and every other device whose applications rely heavily on web- or cloud-based infrastructure for the bulk of their computing, this level of scale around specific application types is only going to keep increasing. What Facebook is doing now will look a lot more normal a few years down the road, and, as my colleague Stacey wrote yesterday, Facebook’s Open Compute Project will push legacy server makers to adapt accordingly. Dell and HP are already demoing servers based on the Open Compute design.

Cloud computing will force a change in server design, too, as more businesses host their applications with cloud providers that buy their servers racks at a time. Dell is already selling stripped-down servers like hotcakes through its Data Center Solutions group, and Amazon has been known to buoy entire fiscal years for SGI’s (aka Rackable) webscale server business. Should Dell’s customers, Amazon and others decide they want to drive down both server and energy costs even further, they certainly could push for Facebook-style servers from their vendor partners. Especially in standardized multitenant clouds, the underlying hardware takes a back seat to the virtualization and orchestration software that sits above it. As we move into a PaaS world, where applications are written with little regard to the operating system, much less the hardware, the type of server underneath becomes even less important. If those servers are reliable enough, efficient enough and able to scale, there’s little reason cloud providers wouldn’t move to Facebook-style servers, at least for their standard service offerings.
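
To make that abstraction concrete, here’s a minimal sketch (my own illustration, not anything taken from the Open Compute spec or from any particular provider) of the kind of code a PaaS customer actually writes: a bare WSGI application in Python that says nothing about the operating system or the server underneath it, which is exactly why a provider is free to run it on stripped-down, Facebook-style hardware without the developer ever noticing.

```python
# Illustrative sketch only: a hardware-agnostic web app of the sort a PaaS runs.
# Nothing below names a CPU architecture, a chassis or even an operating system,
# so the platform underneath could be a brand-name box or a caseless,
# Facebook-style server, and the application neither knows nor cares.
from wsgiref.simple_server import make_server


def app(environ, start_response):
    # The WSGI contract is the only "infrastructure" the application sees.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from whatever server the provider happened to buy.\n"]


if __name__ == "__main__":
    # On a real PaaS the provider's orchestration layer decides where, and on
    # what hardware, this process runs; serving it locally is just for testing.
    make_server("", 8000, app).serve_forever()
```

The point isn’t the handful of lines of Python; it’s that when something this thin is the entire contract between application and platform, the operator can chase cost and efficiency at the hardware layer as aggressively as Facebook does without breaking anything above it.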

I think the most revolutionary part of Facebook’s Open Compute Project is the openness of it, but that’s the subject of another post. In terms of technology, it seems mostly to be pushing the envelope in areas where we’re already seeing waves of innovation (don’t forget about ARM-based servers from companies like Calxeda and Nvidia, which haven’t even hit the market yet, but will make a splash of their own when they do). But don’t think Facebook’s designs won’t find a home; the IT-delivery paths we’re already heading down all but ensure they will.

Balloon image courtesy of Flickr user snapxture.

  1. Hi Derrick,
    What do you think about the possibility that Facebook is making their designs open to increase the compatibility of their site with other larger sites? They essentially are giving everyone their specs.
    Thoughts?

    1. Derrick Harris Tuesday, April 12, 2011

      Brindey,

      I think Jonathan’s reply speaks to your question. I don’t know that there’s a lot of compatibility, so to speak, on the hardware side — that’s primarily a software issue — but like all open source projects, Open Compute should spark community involvement that will benefit everyone’s architectures, Facebook included.

  2. There is rich irony in the quote that sparked your article. While the next generation of Internet applications is being built to scale horizontally, there is an entire ecosystem of software companies (think hypervisor) selling the same capabilities to enterprises. As you astutely point out, the future will be enabled by a thick software layer capable of coordinating applications across a variety of hardware platforms and abstracting the underlying infrastructure (network proximity, reliability, etc.).

    The Open Compute Project is at v1.0, and our intent in sharing our innovations was to get feedback, demonstrate possible scenarios for other users and encourage industry-wide collaboration. To date, most meaningful discussion of infrastructure innovation has happened in the hallways of conferences, not on the agenda itself.
