6 Comments

Summary:

Facebook has built its own networking switch and developed a Linux-based operating system to run it. The goal is to create networking infrastructure that mimics a server in terms of how it's managed and configured.

Jay Parikh Facebook Structure 2014

Not content to remake the server, Facebook’s engineers have taken on the humble switch, building their own version of the networking box and the software to go with it. The resulting switch, dubbed Wedge, and its software, called FBOSS, will be submitted to the Open Compute Project as an open source design for others to emulate. Facebook is already testing the switch with production traffic in its data centers.

Jay Parikh, the VP of infrastructure engineering at Facebook, shared the news of the switch onstage at the Gigaom Structure event Wednesday, explaining that Facebook’s goal with this project was to eliminate the dedicated network engineer and run its networking operations in the same easily swapped-out and dynamic fashion as its servers. In many ways, Facebook’s efforts to design its own infrastructure have stemmed from the need to build hardware that is as flexible as the software running on top of it. It makes no sense to innovate constantly in your code if you can’t adjust the infrastructure to run that code efficiently.


And networking has long been a frustrating aspect of IT infrastructure because the switch has been a black box that both delivered packets and did the computing to figure out the paths those packets should take. As networks scaled out, that combination (and the domination of the market by the giants Cisco and Juniper) was becoming untenable. So the physical delivery of packets and the routing of those packets were split into two jobs, allowing networks to become software-defined and allowing other companies to start innovating.
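That split is easier to see in code than in prose. The sketch below is a hypothetical, minimal illustration (not Facebook's actual design): a centralized controller computes routes over the whole topology and installs simple destination-to-next-hop entries, while each "switch" does nothing but table lookups.

```python
# Hypothetical sketch of the control/data-plane split described above.
# The controller computes paths centrally; each switch only forwards.

from collections import deque

class DataPlane:
    """Dumb forwarding: a destination -> next-hop table, nothing more."""
    def __init__(self, name):
        self.name = name
        self.table = {}          # dest -> next hop, installed by the controller

    def forward(self, dest):
        return self.table.get(dest)  # no path computation on the switch itself

class Controller:
    """Centralized control plane: computes routes, pushes table entries."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, []).append(b)
            self.adj.setdefault(b, []).append(a)

    def install_routes(self, switches):
        # BFS from each switch; install the first hop toward every destination.
        for sw in switches.values():
            for dest in self.adj:
                if dest != sw.name:
                    hop = self._first_hop(sw.name, dest)
                    if hop:
                        sw.table[dest] = hop

    def _first_hop(self, src, dest):
        prev = {src: None}
        q = deque([src])
        while q:
            node = q.popleft()
            for nb in self.adj[node]:
                if nb not in prev:
                    prev[nb] = node
                    q.append(nb)
        if dest not in prev:
            return None
        node = dest
        while prev[node] != src:
            node = prev[node]
        return node

# Toy topology: s1 - s2 - s3
switches = {n: DataPlane(n) for n in ("s1", "s2", "s3")}
ctrl = Controller([("s1", "s2"), ("s2", "s3")])
ctrl.install_routes(switches)
print(switches["s1"].forward("s3"))  # s1 reaches s3 via s2
```

Because the decision-making lives in one place, the boxes that move packets can stay simple and interchangeable, which is the property that lets other companies innovate on either half independently.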

The creation of a custom-designed switch that lets Facebook control its networking the way it currently manages its servers has been a long time coming. Facebook began the Open Compute effort with a redesigned server in 2011 and focused on servers, and a bit of storage, for the next two years. In May 2013 it called for vendors to submit designs for an open source switch, and at our Structure event last year Parikh detailed Facebook’s new networking fabric, which allowed the social networking giant to move large amounts of traffic more efficiently.


But the combination of the Wedge hardware’s modular approach and the Linux-based FBOSS operating system blows the switch apart in the same way Facebook blew the server apart. The switch uses the Group Hug microprocessor boards, so any type of chip can slot into the box to control configuration and run the OS. The switch still relies on a networking processor for routing and delivering packets, and has a throughput of 640 Gbps, but eventually Facebook could separate the transport and the decision-making entirely.

The whole goal here is to turn the monolithic switch into something modular, controlled by FBOSS software that can be updated as needed, without operators having to learn the proprietary networking languages that other providers’ gear requires. The question with Facebook’s efforts is how they will affect the larger market for networking products.
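What "managed like a server" means in practice is declarative configuration: describe the desired state, diff it against the running state, and apply only the changes, the way server config-management tools do. The snippet below is a hypothetical sketch of that idea; the setting names are invented for illustration and are not FBOSS APIs.

```python
# Hypothetical sketch of server-style switch management: a declarative
# desired state is diffed against the running state and applied, instead
# of an operator typing vendor-specific CLI commands. Key names like
# "port1.speed" are invented for this example.

def plan_changes(running, desired):
    """Return settings that must change and settings to remove."""
    to_set = {k: v for k, v in desired.items() if running.get(k) != v}
    to_unset = [k for k in running if k not in desired]
    return to_set, to_unset

def apply(running, desired):
    """Produce the new running state; applying twice changes nothing."""
    to_set, to_unset = plan_changes(running, desired)
    new_state = dict(running)
    new_state.update(to_set)
    for k in to_unset:
        del new_state[k]
    return new_state

running = {"port1.speed": "10G", "port2.speed": "10G", "vlan100": "up"}
desired = {"port1.speed": "40G", "port2.speed": "10G"}
state = apply(running, desired)
print(state)  # {'port1.speed': '40G', 'port2.speed': '10G'}
```

The payoff of the idempotent diff-and-apply style is that the same tooling and habits a fleet of Linux servers already uses can drive the switch, which is exactly the operational symmetry Parikh described.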

Facebook’s infrastructure is unusual in that the company wholly controls it and has the engineering talent to build software and new hardware to meet its computing needs. Google is another company that has built its own networking switch, but it didn’t open source those designs and keeps them close. Many enterprise customers don’t have the technical expertise of a web giant, so the tweaks that others contribute through the Open Compute Project to the gear and the software will likely influence adoption.



Photo by Jakub Mosur


  1. Just curious, how is this switch any different from the new switches produced by networking companies? I understand that detaching the control plane will make these switches more suited for SDN-like environments, but Facebook is not the first one to claim that. It may look like a server, but it does not function as one. It will be a 640 Gbps switching ASIC.

  2. Can I just say how utterly hilarious and relevant this article title is with the recent Facebook outage?

    Just made my night.

    1. But there’s been no report of *what* caused the Facebook outage. Maybe a storage array caught fire. Maybe a construction worker put a back-hoe through a fibre bundle in the ground. Maybe there was a major power loss they had no control over.

      I don’t see how one outage that may have been completely out of their control has any bearing on them developing a product that suits their needs.

      But yeah – “utterly hilarious”. Sure. Whatever.

  3. Marius Telemacher Thursday, June 19, 2014

    They use AC/DC because it’s heavy metal.

  4. Facebook switches look like servers? I opened up my server and I can’t find the 40gb switching ASIC. I’ve been robbed! :)

  5. What moron wrote this?

    it looks like a switch inside.

    MY GOD THERE IS A PCB it must be a computer. I opened up my clock radio and there is a PCB it ALSO LOOKS LIKE A COMPUTER!
