OpenFlow and software defined networks are here. Now what?


People at the Open Networking Summit

The secret is out on SDN.

I’ve been following SDN and OpenFlow almost since their earliest days. I’ve been lucky enough to know Martin Casado since before Nicira knew what it was going to build, and Guido Appenzeller of Big Switch since his days at Voltage Security. I attended the first Open Networking Summit back in October, but was floored by the scale of the April event. Attendance was up over 3x, and people from all corners of the ecosystem were there. Clearly the secret is out, and it’s evident that the networking industry has been starving for the next big thing.

Martin Casado, CTO and co-founder of Nicira

Whitebox networking needs a champion

In order for these types of capabilities to become available to mere mortals, a number of developments need to occur in the industry. As evidenced by the work Google needed to do to pull together its solution, a true commodity ecosystem for networking doesn’t exist yet. This fact is often glossed over by SDN proponents.

While the industry and pundits seem focused on figuring out who is going to be the VMware of networking, the commodity networking ecosystem first needs a BIOS and a Linux of networking. Practically, this means a set of operating system software and routing protocols robust enough to run networks at scale. This is a tall order, but efforts from a small handful of start-ups and the Open Source Routing Project are driving things in this direction.

Taking the routing out of routers

Urs Hoelzle

This differs radically from the traditional networking model, where routers each have their own “view” of the network which they communicate to their neighbors. The idea is not entirely new: the IETF has been working for a number of years on external Path Computation Elements, but these have seen limited deployment and were generally used for off-line calculations, not for programming the network dynamically.
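To make the contrast concrete, here is a minimal sketch of what a centralized path computation element does: it runs an ordinary shortest-path search over a global view of the topology, rather than each router converging on its own view. The four-node topology and link costs below are hypothetical, purely for illustration.

```python
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over a global topology view, as a centralized path
    computation element might run it. `topology` maps each node to a
    dict of {neighbor: link_cost}."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, w in topology.get(node, {}).items():
            new_cost = cost + w
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    if dst not in dist:
        return None
    # Walk the predecessor chain back from dst to src.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Hypothetical four-router WAN; the controller sees every link at once.
topo = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(shortest_path(topo, "A", "D"))  # ['A', 'B', 'C', 'D']
```

The point is not the algorithm, which is decades old, but where it runs: one process with the whole map, instead of every router flooding link state and computing independently.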

As an ecosystem of whitebox and branded switches that support OpenFlow emerges, opportunities will open up for a whole new class of applications built on top of the OpenFlow controller layer. Centralized path computation is just the first to emerge. Further, with documented northbound APIs, network operators will finally have the freedom to build their own applications if commercial or open source offerings don’t meet their needs.
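What such a controller-layer application boils down to is translating a computed path into per-switch match/action entries. The rule schema below is hypothetical (real controllers each expose their own northbound format), but it shows the shape of the work:

```python
def path_to_flow_rules(path, dst_ip):
    """Translate a computed path into one flow entry per switch along
    it. The dict schema here is made up for illustration; it is not
    any particular controller's northbound API."""
    rules = []
    for hop, next_hop in zip(path, path[1:]):
        rules.append({
            "switch": hop,
            "match": {"ipv4_dst": dst_ip},
            "action": {"forward_to": next_hop},
        })
    return rules

rules = path_to_flow_rules(["A", "B", "C", "D"], "10.0.0.7")
for r in rules:
    print(r["switch"], "->", r["action"]["forward_to"])
```

An operator-written application would then push each entry to the matching switch through the controller, instead of waiting for a distributed protocol to converge on the same forwarding state.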

Wide Area vs Data Center Networks

Though much of the SDN focus has been on the data center, it’s not surprising that the WAN is the first use case Google went after, as Internet core and edge routers are significantly more expensive than their little brothers in the data center core. The verdict is still out on whether centralized traffic engineering will be appropriate in the data center for VM-to-VM tunnels using the new wave of VXLAN/NVGRE/STT-based overlays.
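For a sense of what these overlays actually add on the wire, here is the VXLAN case: an 8-byte header carrying a 24-bit Virtual Network Identifier, which (together with the inner Ethernet frame) gets wrapped in an outer UDP/IP packet between tunnel endpoints. This follows the VXLAN framing as specified by the IETF; the VNI value is arbitrary.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags byte with the 'I' bit set
    (valid VNI), reserved bits, the 24-bit VXLAN Network Identifier,
    and a final reserved byte."""
    assert 0 <= vni < 2 ** 24
    return struct.pack("!II", 0x08000000, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900
```

The 24-bit VNI is the headline feature: roughly 16 million virtual segments versus the 4,096 available with VLAN tags, which is what makes per-tenant VM-to-VM tunnels plausible at web scale.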

The web-scale data center promises to be much more dynamic than the WAN, and bandwidth inside the data center is significantly cheaper. In this case it may be easier to throw bandwidth at the problem by building completely non-oversubscribed infrastructures. Unfortunately, this is economically infeasible at web scale using the traditional vendors.
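The back-of-envelope arithmetic behind “non-oversubscribed” is simple: a leaf switch is non-oversubscribed when its server-facing capacity equals its uplink capacity. The port counts and speeds below are illustrative, not drawn from any particular product:

```python
def oversubscription(server_ports, uplink_ports, server_gbps=10, uplink_gbps=40):
    """Ratio of downlink (server-facing) capacity to uplink capacity
    on a leaf switch; 1.0 means non-oversubscribed. All numbers here
    are hypothetical examples."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# A leaf with 48 x 10G server ports and 12 x 40G uplinks:
print(oversubscription(48, 12))  # 1.0
# The same leaf with only 4 uplinks is 3:1 oversubscribed:
print(oversubscription(48, 4))   # 3.0
```

Getting every leaf to 1:1 means tripling the uplinks and the spine ports they land on, which is exactly the cost that becomes prohibitive at traditional-vendor prices.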

While I believe the potential for radical change will be unlocked by the combination of commodity networking hardware and SDN, we are in the early days. The networking industry moves slowly, and customers are rightly risk-averse given the business impact of network instability. SDN will inevitably go through the tried-and-true hype cycle that plagues all new technologies these days. Stay the course, and the next five years in the networking industry promise to be a lot more exciting than the previous five.

Alex Benik is a principal at Battery Ventures who invests in enterprise and web infrastructure start-ups. You can find him on Twitter at @abenik.


Jim Crawford

Interesting point by anonymous “guest” on the inherent risks and unknown pitfalls of centralization. On the other hand, the risks of the current Internet economic model are well-known: increased demand for network hardware investment to keep pace with unquenchable public demand for broadband.

One of the chief benefits of OpenFlow, in layman’s terms: It’s a lot cheaper. Hardware manufacturers who’ve grown fat & happy on the current Internet model may not like it, but their days — or at least the days of many of them — are numbered once SDN hits the streets.


Their days are numbered? Not quite. Who do you think will be building the new hardware?

And by the way, there is no longer any such thing as centralization that isn’t at the same time distributed. You need to think a little harder about the problem at hand.

Plexxi Inc

Great read, Alex. I agree that the networking industry has been starving for the next big thing for quite some time now. And with all the focus on SDN and OpenFlow, we’re starting to have a much-needed dialogue about the problem. However, I’m not convinced they are the solutions the industry needs; they are more a means to an end (rather than the end in itself). I recently wrote a blog post on this very topic. If you’re interested, I encourage you to check it out:

Mat Mathews
VP, Product Management


It is quite easy to forget the 25+ years of innovation in routing protocols – that run the whole Internet today – without which the likes of Facebook & Google would be non-existent.
Distributed routing and decision making in optimization have some very elegant properties and allow for things like convergence and scale.
Centralizing such computation has inherent risks and unknown pitfalls that are yet to be understood or even discovered. It would be delusional to suggest, let alone implement, things like protocols in centralized OpenFlow-like controllers and expect them to scale well and converge quickly – all essential for a constantly changing and evolving network. The ramifications of this have to be well understood.
Every so often we go through these cycles of getting enamored with new concepts and ideas, but some fundamentals are generally invariant. Folks are clearly discovering that there is only so much one can do with these controllers – and as they unravel more of the operational needs, it is becoming clear that one ends up re-implementing the software that runs today’s Internet!

Keith Townsend

I’m sure someone has written a book on this, but a question I ask myself is: why is this so? Why have operating systems and programming languages been able to create the abstractions that make complex systems simpler to comprehend, and to develop even more complex applications around?

Networking isn’t any more mature than these other areas. I believe we’ve had a “keep the lights on” mentality toward growing the network; this is why IPv4 is still around. It’s a difficult problem to solve reliably, and failures affect every system connected to the network.

That’s why I’m glad to see a company like Google putting OpenFlow into its production network. I get the impression they know how to solve hard problems.

Alex Benik

Actually, I very much agree with you. Distributed routing protocols have allowed the Internet to scale incredibly well and are not going away. Even with centralized TE, traditional protocols do a great job of filling those pipes and detecting failures, as you point out.

Keith Townsend

Reblogged this on Virtualized Geek and commented:
I just listened to a talk on YouTube yesterday from Berkeley professor Scott Shenker that gave an excellent breakdown of SDN, and he spoke of the need for a network operating system before SDNs can become a reality.

When I think about it, I’m rather amazed that we haven’t created an abstraction for the network. His talk covers how relatively easily we’ve done this at layer 2, and how difficult it is to do at higher layers due to the non-modular design of the network stack. Applications shouldn’t be making calls to a network address but rather to a network service.

Interesting stuff. OpenFlow is a step in the right direction to creating the “BIOS” that we need. I’m especially happy that Google is at the bleeding edge of this in a production network.

Max Iyer

Everyone knows that centralized intelligence is good for designing end-to-end optimal, stable, robust flow and QoS control and management protocols. It provides a neat separation of the data plane from the control and management planes. But can you make it as robust to failures and attacks as distributed intelligence? Maybe we should design it as a hybrid architecture: if the central intelligence goes down or goes offline, the system defaults to a simpler distributed intelligence.
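The hybrid fallback described above can be sketched in a few lines: prefer the centrally computed path while the controller is heard from, and fall back to a locally (distributed-style) computed path when it goes silent. Class and method names here are invented for illustration, and the heartbeat timeout is an arbitrary example value.

```python
class HybridForwarder:
    """Toy sketch of a hybrid control plane: use the controller's path
    while its heartbeats are fresh, otherwise default to the locally
    computed path."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout          # seconds before declaring the controller dead
        self.central_path = None
        self.last_heartbeat = 0.0

    def on_controller_update(self, path, now):
        # Controller pushed a fresh path; record it and the time we heard from it.
        self.central_path = path
        self.last_heartbeat = now

    def next_path(self, local_path, now):
        if self.central_path and now - self.last_heartbeat < self.timeout:
            return self.central_path    # central intelligence is alive
        return local_path               # fall back to the distributed result

fwd = HybridForwarder()
fwd.on_controller_update(["A", "B", "D"], now=100.0)
print(fwd.next_path(["A", "C", "D"], now=101.0))  # ['A', 'B', 'D']
print(fwd.next_path(["A", "C", "D"], now=110.0))  # ['A', 'C', 'D']
```

The hard part, which this sketch glosses over, is making the failover itself converge cleanly when only some switches have lost the controller.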
