
How Google is using OpenFlow to lower its network costs

Google is testing a new networking protocol known as OpenFlow on the communications networks that run between its data centers. The search giant is experimenting with software-defined networks built on OpenFlow in order to lower the cost of delivering information between its facilities, Urs Hölzle, SVP of technical infrastructure and Google Fellow, said in an interview with me.

Google’s infrastructure is the stuff of engineer dreams and nightmares. The company’s relentless focus on infrastructure has created a competitive advantage because it can deliver search results faster and for less money than the next guy. Much like Dell conducts studies showing that lowering a table where people assemble its computers saves seconds and costs, Google understands that investing in infrastructure can help it shave a few cents off delivering its product. And the next area that’s ripe for some infrastructure investment may be networking, especially using the OpenFlow protocol to create software-defined networks.

For Google, at a certain point, communication between servers at such a large scale becomes a problem, notes Hölzle, who is speaking at the Open Networking Summit in Santa Clara next week. He explains that the promise of OpenFlow is that it could make networking behave more like applications, thanks to its ability to move the intelligence associated with networking gear onto a centralized controller.

Previously, each switch or piece of networking gear had its own intelligence that someone had to program. There was no holistic view, nor a way to abstract out network activities from the networking hardware. But with OpenFlow the promise is you can program the network and run algorithms against it that can achieve certain business or technical goals.
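The split the article describes — simple switches holding match/action tables, with a single controller computing and installing the rules — can be sketched in a few lines. This is a toy illustration, not real OpenFlow code: the class names, ports, and addresses below are invented for the example, and a real deployment would speak the OpenFlow wire protocol between controller and switches.

```python
class FlowTableSwitch:
    """A 'dumb' switch: it forwards packets only by looking up rules
    that the controller has installed in its flow table."""

    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination address) -> action (output port)

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Traffic with no matching rule is punted to the controller,
        # which decides what to do and may install a new rule.
        return self.flow_table.get(dst, "send-to-controller")


class CentralController:
    """Holds the global network view and programs every switch from one
    place -- the 'holistic view' the article says per-box configuration lacked."""

    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def push_policy(self, dst, route):
        # route: list of (switch_name, out_port) pairs along the chosen path.
        # One centralized decision programs the entire path at once.
        for switch_name, port in route:
            self.switches[switch_name].install_rule(dst, port)


controller = CentralController()
s1, s2 = FlowTableSwitch("s1"), FlowTableSwitch("s2")
controller.register(s1)
controller.register(s2)

# Program the whole path for traffic to a (made-up) address in one call.
controller.push_policy("10.0.0.2", [("s1", 2), ("s2", 1)])
print(s1.forward("10.0.0.2"))  # prints 2: the port the controller chose
```

Because the path-selection logic lives in `push_policy` rather than in each box, swapping in a different algorithm (shortest path, cheapest link, traffic engineering) means changing one program, not reconfiguring every switch.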

“I think what we find exciting is OpenFlow is pragmatic enough to be implemented and that it actually looks like it has some legs and can realize that promise,” he said. However, he added, “It’s very early in the history and too early to declare success.”

Google is trying the protocol out between data centers, although Hölzle didn’t disclose how much Google is saving or how widespread the implementation is. Hölzle said the search giant was trying to see how it could make its wide-area and long-distance networks more flexible and speed up the delivery of services to users without adding costs. However, costs for Google aren’t measured just in bandwidth, but also in the people required to operate and configure the network.

“The cost that has been rising is the cost of complexity — so spending a lot of effort to make things not go wrong. There is an opportunity here for better network management and more control of the complexity, and that to me is worth experimenting with,” Hölzle said. “The real value is in the [software-defined network] and the centralized management of the network. And the brilliant part about the OpenFlow approach is that there is a practical way of separating the two things: where you can have a centralized controller and it’s implementable on a single box and in existing hardware that allows for a range of management and things that are broad and flexible.”

But OpenFlow still requires some work. Hölzle said that there are issues with how you communicate between OpenFlow networks and those that aren’t. So, today, Google takes its OpenFlow traffic and hands it over to a router running conventional network protocols such as MPLS or BGP, what Hölzle calls a “normal” router. He expects that to change over time as things standardize within OpenFlow and among vendors. As he said multiple times throughout the conversation, these are early days for OpenFlow and software-defined networks.

8 Responses to “How Google is using OpenFlow to lower its network costs”

  1. Stacey – Thanks for the excellent article. I think this clears up my understanding of the value proposition of OpenFlow significantly…
    If I understand correctly, we are separating the physical hardware and the intelligence layers. This means that the physical network devices can be fairly inexpensive due to lacking the built-in intelligence most high-end switches currently provide – this is replaced by high intelligence at the central controller…

  2. @Keith Townsend Monday, April 9 2012

Dear Keith, MPLS with MP-BGP successfully solves the “single system” issue for ALL ISPs today. The MPLS stack in network equipment is already a commodity. What MPLS/MP-BGP can’t fix (yet) is feeding L7 logic into the network forwarding decision process. BUT if OpenFlow can, it will create a new market, and in NO WAY will it commoditize network equipment, because new devices with new logic will have to be created. They can only work if deployed as an e2e solution, which will make the TCO enormous and bring the danger of vendor lock-in. So this whole story won’t fly for big operators; it’s mostly for Content Providers who need to manage their application traffic.

    thank you


• Good points. I know in the Enterprise, most network teams I’ve been involved with have had a difficult time understanding the entire network as a system. This is why I believe it’s difficult for Application and Server Operations personnel to understand problems unique to the network. MPLS solves a specific set of problems, but as you stated, once you move up the stack it doesn’t do as much for you. I can see why content creators would be specifically interested.
      Like any new technology I think you will have to worry on vendor lock in if you are an early adopter. Once standards are accepted I think this will put pressure on network vendors to innovate at the device level.
      This is my first introduction to OpenFlow technology. This post was a good introduction to the concept but I’ll make sure to follow the link and see what types of problems Google is trying to solve.

  3. Brent Salisbury

    So refreshing to see the consumers drive the networking space. Pushing networking vendors to take some risk is great for the industry. Decoupling the Network OS and abstracting up the stack may be revolutionary.

4. Thanks. I’m going to have to follow this more closely. Networks are extremely complex (and thus expensive) because they are made of autonomous routers and switches that attempt to create a single system via these long-in-the-tooth routing protocols. If OpenFlow is successful at creating a single intelligent system controlled by a set of controllers, Google is right that this will revolutionize networking.

In theory this will continue the push to commoditize network devices, making them basically dumb devices with ports connected to a virtualized controller.

    Almost makes me want to dust off my networking books :)