
Summary:

Facebook took aim at the hardware business back in April 2011 with the launch of the Open Compute open hardware program, and on Wednesday it struck the killing blow at the $55 billion server business.

Frank Frankovsky of Facebook holding an AppliedMicro ARM board.
photo: Stacey Higginbotham

The addition of two new features to the Open Compute hardware specifications on Wednesday has managed to do what Facebook has been threatening to do since it began building its vanity-free hardware back in 2010. The company has blown up the server — reducing it to interchangeable components.

With this step it has disrupted the hardware business from the chips all the way up to the switches. It has also killed the server business, which IDC estimates will bring in $55 billion in revenue for 2012.

It’s something I said would happen the day Facebook launched the Open Compute project back in April 2011, and something we covered again at our Structure event last June when we talked to Frank Frankovsky, VP of hardware and design at Facebook. What Facebook and others have managed to do — in a relatively short amount of time by hardware standards — is create a platform for innovation in hardware and in the data center that lets companies scale to the needs of the internet, and do so in something closer to web time. It will do this in a way that creates less waste, more efficient computing and more opportunity for innovation.

New features for a next-gen server

New Open Compute technologies on a rack.

So what is Facebook doing? Facebook has contributed a consistent slot design for motherboards that will allow customers to use chips from any vendor. Until this point, if someone wanted to use AMD chips as opposed to Intel chips, they’d have to build a slightly different version of the server. With what Frankovsky called both “the equalizer” and the “group hug” slot, an IT user can pop in boards containing silicon from anyone. So far, companies including AMD, Intel, Calxeda and Applied Micro are committing to building products that will support that design.
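
To make the slot idea concrete, here is a minimal sketch in Python (purely illustrative, with made-up names rather than anything from the actual Open Compute spec) of what a vendor-neutral slot buys you: the chassis only cares that a card fits the shared interface, not whose silicon sits on it.

```python
# Purely illustrative sketch of a vendor-neutral "common slot" chassis.
# Class and field names are invented; this is not the Open Compute spec.
from dataclasses import dataclass

@dataclass
class ComputeCard:
    vendor: str   # e.g. "Intel", "AMD", "Calxeda", "Applied Micro"
    arch: str     # "x86" or "ARM"
    cores: int

class CommonSlotChassis:
    """A chassis that accepts any card built to the shared slot interface."""

    def __init__(self, slots: int):
        self.slots = [None] * slots

    def install(self, index: int, card: ComputeCard) -> None:
        # The chassis never checks the vendor -- only that the slot is free.
        if self.slots[index] is not None:
            raise ValueError(f"slot {index} is already populated")
        self.slots[index] = card

chassis = CommonSlotChassis(slots=3)
chassis.install(0, ComputeCard("Intel", "x86", cores=8))
chassis.install(1, ComputeCard("AMD", "x86", cores=16))
chassis.install(2, ComputeCard("Applied Micro", "ARM", cores=8))
print([f"{card.vendor} ({card.arch})" for card in chassis.slots])
```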

The other innovation worth noting in the Open Compute standard is a super-fast networking connection from Intel, based on fiber optics, that will allow data to travel between the chips in a rack. Intel plans to announce this 100 gigabit Ethernet photonic connector later this year, and Frankovsky can’t wait to get it into production in Facebook’s data centers.

I’ve written a lot about how we need to get faster, fiber-based interconnects inside the data center, and the efforts to do so. What’s significant here is not just that this design speeds up chip-to-chip communication. With the right hardware, it turns a rack into a server and makes the idea of a top-of-rack switch irrelevant — something that Cisco, Juniper and Arista might worry about (although Arista’s Andy Bechtolsheim is helping present this technology at Open Compute, so I imagine he has a plan). It’s also worth noting that Facebook only completed its transition to an all 10 gigabit Ethernet network in its data centers in 2012, yet it already wants to speed up the interconnects as soon as it can.
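
For a rough sense of what that jump would mean, here is a back-of-the-envelope calculation (assuming ideal line rates and ignoring protocol overhead, so purely illustrative) comparing the 10 gigabit Ethernet Facebook runs today with a 100 gigabit photonic link.

```python
# Back-of-the-envelope: time to move 1 TB across the rack fabric.
# Assumes the ideal line rate and no protocol overhead -- illustrative only.
data_bits = 1e12 * 8  # 1 TB expressed in bits

for name, gbps in [("10 GbE (today)", 10), ("100 GbE photonic", 100)]:
    seconds = data_bits / (gbps * 1e9)
    print(f"{name}: {seconds:.0f} seconds to move 1 TB")
# -> 10 GbE (today): 800 seconds to move 1 TB
# -> 100 GbE photonic: 80 seconds to move 1 TB
```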

Openness makes the new servers flexible

So what has happened here is significant. Open Compute has managed to give customers — from financial services firms to web properties — a platform on which to build custom and modular servers.

A good example of this might be building a house. There are plenty of ways of doing it, from hiring an architect to choosing a plan from a selection offered to you by a developer. That was the former server world — you either went all custom or chose from “plans” provided by Dell and others. But those chosen plans might come with four bathrooms when you only want three. If you buy 10,000 unneeded bathrooms, that’s a lot of added cost that brings you no value.

With Open Compute, Facebook showed people how to build houses out of shipping containers. As more and more elements like Group Hug or standard interconnects are added, it’s as if you can pick your favorite brand of shipping container and pop it on. Even better, if you want a new bathroom, you can swap it out without ever affecting your bedroom. And for many companies the costs of building and operating these new servers will be lower than those of the mass-produced boxes designed by others.

This is great for web-based businesses and anyone that relies on IT. It cuts out waste and it turns what was once a monolithic asset into something that can be upgraded with relative ease and at a lower cost. So when Frankovsky is talking about renting CPUs from Intel, for example, the Group Hug design makes that possible. He can rent CPUs for a workload, and then use those servers for something else without chucking them. He can swap out chip architectures, networking cards and drives at will to put the right mix of storage, network, and compute together for his jobs as they grow and shrink.
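
As a hedged illustration of that mix-and-match model (all names and quantities below are invented for the example, not Facebook’s actual parts list), you can think of it as composing server “recipes” from a shared pool of interchangeable parts instead of buying fixed-configuration boxes:

```python
# Hypothetical illustration: composing different server "recipes" from one
# shared pool of interchangeable parts, rather than buying fixed-config boxes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    kind: str    # "cpu", "nic" or "drive"
    model: str

# Invented inventory counts, for illustration only.
pool = {
    Part("cpu", "x86-8core"): 40,
    Part("cpu", "arm-8core"): 40,
    Part("nic", "10gbe"): 80,
    Part("drive", "2tb"): 200,
}

def compose(recipe: dict, count: int) -> None:
    """Reserve parts from the pool for `count` servers built to `recipe`."""
    for part, per_server in recipe.items():
        needed = per_server * count
        if pool.get(part, 0) < needed:
            raise RuntimeError(f"not enough {part.kind} ({part.model})")
        pool[part] -= needed

# A storage-heavy recipe and a compute-heavy recipe drawn from the same parts.
compose({Part("cpu", "x86-8core"): 1, Part("nic", "10gbe"): 1, Part("drive", "2tb"): 6}, count=20)
compose({Part("cpu", "arm-8core"): 2, Part("nic", "10gbe"): 2, Part("drive", "2tb"): 1}, count=10)
print(pool)  # whatever is left can be recomposed as workloads grow and shrink
```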

These servers can’t be provisioned as quickly as a cloud instance, but what Open Compute has done is create a server architecture that can scale and change as much as the cloud itself does. And it did so without the server guys.

  1. Blew up, blown up, fired the killing blow, killed… Strong language from someone who never does anything more violent than staple.

  2. “and Wednesday is fired the killing blow” Does anyone proof this stuff. Poor grammar erodes credibility.

    1. It’s interesting… I find that the proofreading is typically much BETTER in independent blogs than it is in many of today’s commercial publications.

      Many online magazines and multi-author commercial blogs appear to have little or no interest in presenting a professional countenance on their own web sites.

  3. I think the headline is a bit dramatic. I don’t see this as “blowing up the server” business, as much as I see cloud hosting affecting the server business.

  4. Stacey Higginbotham has written an interesting article with horrific English grammar. She either needs to do a better job or get an editor. There is no excuse for posing as a professional writer and then producing something that looks like a Facebook post.

    1. Yeah, and she has spinach between her teeth too. And a bad hair cut….

      How about just talking about the article. If her grammar stinks, email her.

  5. “Kill the server market”? Really? As in, no one will ever buy servers again? Wow– a bit hyperbolic, I think.

    Sheer inertia will have enterprises and medium-sized businesses buying servers for quite a few years, still. Small businesses are leaving it to someone else (their ISP or Amazon cloud) to do all this for them.

    Beyond that, openness doesn’t automatically mean market domination. Linux has been Free (as in beer as well as speech) for over 15 years now, and yet there are still plenty of other OS makers in town.

    Companies won’t care if an architecture is open, as much as they will care that an architecture is superior in performance, lower in cost, and has someone backing it up. Again, using Linux as an example, Red Hat makes a billion dollars a year from “free” software. How? Because they back it up and sign off on it. That’s what the suits want.

  6. TYPO: ” [...] and Wednesday is fired the killing blow [...] “

  7. This is definitely an exaggeration. OpenCompute has not “killed” the server market.

  8. There are two fundamental questions here.

    Servers/compute for most organizations comprise only a small, <10%, portion of their overall IT spend on open systems. The larger costs are operational, license, storage, etc. Does cutting 20% out of the capex costs really justify the transition costs and added complexity of a modular architecture?

    The second question is what is the appropriate level of granularity? Will companies really operationalize cost controls and hardware specific functions at the chip or NIC level? Sure, I can see it if a company is running tens of thousands of servers as their primary revenue generator. But for a company for whom IT is not their factory, this seems highly unlikely.

  9. Funny that in college English 102 twenty-five years ago, I got a C because of two errors in my paper which led to a mandatory deduction in letter grade. I wonder how low college standards are for English now.

  10. Let me seed some more misleading headlines for the author’s use. From 2007: “Apple destroys the phone market with the iPhone release.” From 2002: “Desktop Linux set to kill Windows.” For in a few weeks: “Blackberry 10 set to dominate the central Canadian smartphone market.”

    1. Great point. Additionally, it’s either hubris or inexperience to say that any reasonable server engineer in a fortune 5000 company is now going to begin again dirtying his hands swapping out parts in servers. That totally like went out of style like 10 years ago.

      Building your own server is *not* the new black.

  11. lawrencejonesukfast Thursday, January 17, 2013

    It describes this as a next-gen server…..

    It’s just hardware in a box, like all servers. What it does is irrelevant.

    It sounds like another form of cloud technology, and if I am not mistaken there are a number of vendors all claiming to have the lead in this race.

  12. OK now that we have seen all the comments about the writing… let’s consider the message. Open Compute was and still is a Facebook technology. But with the release of standards, and with companies willing to make components, it is becoming a disruptive technology in the server market. The question is how the server vendors are going to respond.

    I am currently working on building a Hadoop cluster. Servers represent a significant cost. Every component is much more expensive than the parts cost, sometimes by more than 100%. This is how the server guys make money. For my application, Open Compute would be ideal… and it is almost ready.

    I would have called the current state of the threat to server vendors a shot across the bow. But the disruption is coming. The server vendors can take the proprietary approach and try to differentiate their product… or join the fray and support Open Compute. In the end, the ability to tailor the hardware to the intended purpose is pretty compelling. So in essence, the point of the story is that the server market is changing. And that is probably an understatement. So “disrupted” is probably a correct observation.

  13. no redundancy plan on failure failure………

  14. You of all the people, Ms. Higginbotham, were not serious when you wrote that article, were you?

    I have high respect for your reporting, but this is just plain horse manure. This has been done for ages, even before I was born … and I was born a long time ago.

  15. Not one mention of Fusion IO, despite it being a key part of Facebook’s plans.

