
Summary:

A radical change in networking will be unlocked by the combination of commodity networking hardware and software defined networking, but we are in the early days. The technologies will inevitably go through the tried and true hype cycle, and that cycle has already begun.

People at the Open Networking Summit

Earlier this month I spent a few days at the Open Networking Summit in Santa Clara, Calif., and walked away certain I had watched history being made in the networking industry. The emergence of the OpenFlow standard and software defined networking has been on my radar for a while, but at this event, the future coalesced.

The secret is out on SDN.

I’ve been following SDN and OpenFlow almost since their earliest days. I’ve been lucky enough to know Martin Casado since before Nicira knew what it was going to build, and Guido Appenzeller of Big Switch Networks since his days at Voltage Security. I attended the first Open Networking Summit back in October, but was floored by the scale of the April event. Attendance was up over 3x, and people from all corners of the ecosystem were there. Clearly the secret is out, and it’s evident that the networking industry has been starving for the next big thing.

Martin Casado, CTO and co-founder of Nicira

The talks by Google really stood out as the most important of the event. While the industry has intellectually understood the implications of adding a new layer of abstraction to the networking stack, Google’s presentations on what it had actually built were game changing. Theory is now practice. Google is ushering in the era of white box networking, much as SuperMicro has done in servers. It combined this with centralized, server-based route computation to turn the wide area network into just one of many services its applications can call on programmatically. Of course none of this will be available to the outside world, but Google has shown what is now possible.

Whitebox networking needs a champion

In order for these types of capabilities to become available to mere mortals, a number of developments need to occur in the industry. As evidenced by the work Google needed to do to pull together its solution, a true commodity ecosystem for networking doesn’t exist yet. This fact is often glossed over by SDN proponents.

While the industry and pundits seem focused on figuring out who is going to be the VMware of networking, the commodity networking ecosystem first needs a BIOS and a Linux of networking. Practically, this means a set of operating system software and routing protocols robust enough to run networks at scale. This is a tall order, but efforts from a small handful of start-ups and the Open Source Routing Project are driving things in this direction.

Taking the routing out of routers

Urs Hoelzle

These efforts are focused on the low-level software built to run on whitebox switches and are completely synergistic with the higher-layer efforts of those building OpenFlow controllers and the applications that run on top of them. Google’s work here was also path finding. Urs Hölzle, VP of infrastructure at Google, essentially gave a commercial for centralized route computation: an application written on top of controllers that use OpenFlow to program the forwarding tables of the switches Google has built.

This differs radically from the traditional networking model, in which routers each have their own “view” of the network that they communicate to their neighbors. The idea is not entirely new; the IETF has been working for a number of years on external Path Computation Elements, but these have seen limited deployment and were generally used for offline calculations, not for programming the network dynamically.
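To make this concrete, here is a rough sketch of what “programming the forwarding table” from a controller application looks like. This is not Google’s code; it is a minimal sketch assuming the open source Ryu controller framework and OpenFlow 1.3, and the prefix and port numbers are made-up examples.

```python
# Minimal sketch of a controller app that installs a centrally computed
# route into a switch's forwarding table. Assumes the open source Ryu
# framework and OpenFlow 1.3; the prefix and output port are invented.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class CentralPathInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_connected(self, ev):
        # When a switch connects, push one entry computed by the central
        # path computation logic: "10.0.2.0/24 exits via port 3".
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto

        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_dst=("10.0.2.0", "255.255.255.0"))
        actions = [parser.OFPActionOutput(3)]
        instructions = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        flow_mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                     match=match, instructions=instructions)
        datapath.send_msg(flow_mod)
```

The point is that the decision about where traffic goes lives in the controller application; the switch only receives the resulting table entries.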

As an ecosystem of whitebox and branded switches that support OpenFlow emerges, opportunities will open up for a whole new class of applications built on top of the OpenFlow controller layer. Centralized path computation is just the first to emerge. Further, with documented northbound APIs, network operators will finally have the freedom to build their own applications if commercial or open source offerings don’t meet their needs.
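What such an operator-written application could look like is sketched below. Northbound APIs are not yet standardized, so the controller URL, endpoint, and JSON schema here are entirely hypothetical.

```python
# Hypothetical northbound-API client: ask a controller to provision a
# path with a bandwidth guarantee. The URL, endpoint, and payload schema
# are invented for illustration; real controllers expose different APIs.
import requests

CONTROLLER = "http://sdn-controller.example.com:8080"


def request_path(src_host, dst_host, min_bandwidth_mbps):
    payload = {
        "src": src_host,
        "dst": dst_host,
        "min_bandwidth_mbps": min_bandwidth_mbps,
    }
    response = requests.post(CONTROLLER + "/paths", json=payload, timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"path_id": ..., "hops": [...]}


if __name__ == "__main__":
    print(request_path("10.0.1.5", "10.0.2.7", min_bandwidth_mbps=500))
```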

Wide Area vs Data Center Networks

Though much of the SDN focus has been on the data center, it’s not surprising that the WAN is the first use case Google went after, as Internet core and edge routers are significantly more expensive than their little brothers in the data center core. The verdict is still out on whether centralized traffic engineering will be appropriate in the data center for VM-to-VM tunnels built with the new wave of VXLAN/NVGRE/STT-based overlays.
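For readers who have not touched these overlays, the sketch below shows what creating one end of a VM-to-VM VXLAN tunnel looks like with Open vSwitch, which gained VXLAN support in later releases; the bridge name, remote address, and VNI are made-up examples.

```python
# Sketch: create one side of a VM-to-VM VXLAN tunnel with Open vSwitch.
# The bridge name, remote hypervisor IP, and VNI are made-up examples.
import subprocess


def add_vxlan_port(bridge="br-int", port="vxlan0",
                   remote_ip="192.0.2.2", vni=5001):
    # Equivalent to running:
    #   ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 \
    #     type=vxlan options:remote_ip=192.0.2.2 options:key=5001
    subprocess.check_call([
        "ovs-vsctl", "add-port", bridge, port, "--",
        "set", "interface", port,
        "type=vxlan",
        "options:remote_ip=" + remote_ip,
        "options:key=" + str(vni),
    ])


if __name__ == "__main__":
    add_vxlan_port()
```

Whether a centralized traffic engineering application should steer each of these tunnels, or whether the fabric should simply be built with enough capacity that it doesn’t matter, is exactly the open question.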

The web scale data center promises to be much more dynamic than the WAN, and bandwidth inside the data center is significantly cheaper. In this case it may be easier to throw bandwidth at the problem by building completely non-oversubscribed infrastructures. Unfortunately this is economically infeasible at web scale using the traditional vendors.
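A back-of-envelope calculation shows what “non-oversubscribed” means and why it is expensive. The port counts and speeds below are illustrative assumptions, not figures from any vendor or from Google.

```python
# Back-of-envelope oversubscription math for a top-of-rack switch.
# All numbers are illustrative assumptions.
servers_per_rack = 40
server_nic_gbps = 10        # capacity facing the servers
uplink_ports = 4
uplink_speed_gbps = 40      # capacity facing the spine/core

downlink_gbps = servers_per_rack * server_nic_gbps   # 400 Gbps
uplink_gbps = uplink_ports * uplink_speed_gbps        # 160 Gbps

ratio = downlink_gbps / uplink_gbps                   # 2.5
print("Oversubscription ratio: %.1f:1" % ratio)

# A non-oversubscribed (1:1) fabric needs uplink capacity equal to
# downlink capacity: here, 2.5x more uplinks, optics, and spine ports,
# which is what makes it so costly at web scale with traditional
# vendor pricing.
```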

While I believe the potential for radical change will be unlocked by the combination of commodity networking hardware and SDN, we are in the early days. The networking industry moves slowly, and customers are rightly risk averse given how much of their business rides on network stability. SDN will inevitably go through the tried and true hype cycle that plagues all new technologies in this day and age. Stay the course, and the next five years in the networking industry promise to be a lot more exciting than the previous five.

Alex Benik is a principal at Battery Ventures who invests in enterprise and web infrastructure start-ups. You can find him on Twitter at @abenik

  1. Everyone knows that centralized intelligence is good for designing end-to-end optimal, stable, robust flow and QoS control and management protocols. It provides a neat separation of the data plane from the control and management planes. But can you make it as robust to failures and attacks as distributed intelligence? Maybe we should design it as a hybrid architecture: if the central intelligence goes down or offline, the system defaults to a simpler distributed intelligence.

  2. Reblogged this on Virtualized Geek and commented:
    I just listened to a talk by Berkeley professor Scott Shenker yesterday on YouTube (http://www.youtube.com/watch?v=WVs7Pc99S7w) that gave an excellent breakdown of SDN; he spoke of the need for a network operating system before SDNs can become a reality.

    When I think about it, I’m rather amazed that we haven’t created an abstraction for the network. His talk speaks about how relatively easily we’ve done this at layer 2 but how difficult it is to do at higher layers due to the non-modular design of the network stack. Applications shouldn’t be making calls to the network address but rather to the network service.

    Interesting stuff. OpenFlow is a step in the right direction to creating the “BIOS” that we need. I’m especially happy that Google is at the bleeding edge of this in a production network.

  3. It is quite easy to forget the 25+ years of innovation in routing protocols that run the whole Internet today, without which the likes of Facebook and Google would not exist.
    Distributed routing and decision making have some very elegant optimization properties and allow for things like convergence and scale.
    Centralizing such computation has inherent risks and unknown pitfalls that have yet to be understood or even discovered. It would be delusional to implement protocols in centralized OpenFlow-style controllers and simply expect them to scale well and converge quickly, both essential for a constantly changing and evolving network. The ramifications of this have to be well understood.
    Every so often we go through these cycles of getting enamored with new concepts and ideas, but some fundamentals are generally invariant. Folks are clearly discovering that there is only so much one can do with these controllers, and as they unravel more of the operational needs, it is becoming clear that one ends up re-implementing the software that runs today’s Internet!

    1. I’m sure someone has written a book on this, but a question I ask myself is: why is this so? Why have operating systems and programming languages been able to create the abstractions that make these complex systems simpler to comprehend and to build even more complex applications on top of?

      Networking isn’t any more mature than these other areas. I believe we’ve had a “keep the lights on” mentality toward growing the network. This is why IPv4 is still around. It’s a difficult problem to solve reliably, and the impact of any change affects every system connected to the network.

      That’s why I’m glad to see a company like Google putting OpenFlow into its production network. I get the impression they know how to solve hard problems.

      Actually, I very much agree with you. Distributed routing protocols have allowed the Internet to scale incredibly well and are not going away. Even with centralized TE, traditional protocols do a great job filling those pipes and detecting failures, as you point out.

  4. Shouldn’t an article that mentions whitebox network vendors, authored by a principal at Battery Ventures, disclose the firm’s investment in Cumulus Networks?

    http://www.battery.com/portfolio/cumulus.html
    http://www.wired.com/wiredenterprise/2012/03/google-microsoft-network-gear/

  5. Great read, Alex. I agree that the networking industry has been starving for the next big thing for quite some time now. And, with all the focus on SDN and OpenFlow, we’re starting to have a much needed dialogue about the problem. However, I’m not convinced they are the solutions the industry needs, but more a means to an end (rather than the end in itself). I recently wrote a blog post on this very topic. If you’re interested, I encourage you to check it out: http://bit.ly/Jffjf3

    Best,
    Mat Mathews
    Plexxi
    VP, Product Management

  6. Great article! Reblogging–if you don’t mind.

  7. Reblogged this on applicationspeeds.

  8. Interesting point by anonymous “guest” on the inherent risks and unknown pitfalls of centralization. On the other hand, the risks of the current Internet economic model are well-known: increased demand for network hardware investment to keep pace with unquenchable public demand for broadband.

    One of the chief benefits of OpenFlow, in layman’s terms: It’s a lot cheaper. Hardware manufacturers who’ve grown fat & happy on the current Internet model may not like it, but their days — or at least the days of many of them — are numbered once SDN hits the streets.

    1. Their days are numbered? Not quite. Who do you think will be building the new hardware?

      And by the way, there is no longer such a thing as centralization which isn’t at the same time distributed. You need to think a little harder about the problem at hand.

