Forget cool: OpenFlow and networking are now hot!

Over a year ago on this blog, I posed the question, "Can networking be made cool again?" A lot has changed since then. The pendulum has swung back, and networking is hot again. In the past year Cisco hit a major speed bump, the OpenFlow protocol burst into the popular consciousness, and a new wave of startups emerged to meet the changing requirements of the large-scale, highly virtualized data center network.

Many have been waiting for it, but this year the cracks in Cisco's dam finally gave way for all to see. The networking giant has lost $60 billion in market cap in the past year, hammered by Wall Street for its flirtations with consumer, collaboration and other noncore businesses. At the same time, Cisco flubbed the emergence of the web-scale data center customer: its UCS servers and Nexus switches are out of touch with these customers' performance requirements, laden with features they will never use yet must pay for handsomely.

The emergence of OpenFlow

In the same time period, OpenFlow has emerged from the Stanford labs and exploded on the scene. OpenFlow is a protocol that defines how external controllers communicate with switches to program their forwarding tables. The reaction of the networking industry reminds me a lot of what happens when I put a shiny new toy in front of my six-month-old daughter. First she stares at the toy intently, then bats at it a few times, ultimately succeeding in grabbing hold. Then she jams it straight into her mouth. We are still in the frantic swatting stage. You'll know the industry has jammed OpenFlow into its mouth when you see Cisco allowing third-party controllers to program its switches.
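To make that division of labor concrete, here is a minimal, purely illustrative Python sketch. It is not a real OpenFlow library, and the class and method names are invented; it only shows the core idea the protocol standardizes: an external controller installs match-to-action rules in a switch's forwarding table, and the switch itself computes nothing.

```python
class Switch:
    """Dumb forwarding element: it matches packets against rules the
    controller has installed; it computes no routes on its own."""
    def __init__(self):
        self.flow_table = []  # ordered list of (match, action) rules

    def install_flow(self, match, action):
        # In real OpenFlow this would arrive as a FLOW_MOD message
        # over the controller-to-switch channel.
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: punt to the controller


class Controller:
    """External control plane: decides policy and pushes rules down."""
    def program(self, switch, match, out_port):
        switch.install_flow(match, f"output:{out_port}")


sw = Switch()
ctrl = Controller()
ctrl.program(sw, {"dst": "10.0.0.2"}, out_port=3)
print(sw.forward({"dst": "10.0.0.2"}))  # → output:3
print(sw.forward({"dst": "10.0.0.9"}))  # → send-to-controller
```

The point of the standardized interface is that `Controller` and `Switch` can come from different vendors, which is exactly the shift discussed below.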

Despite the purported revolutionary nature of OpenFlow, the concept of separating the control plane from forwarding is nothing new. One example of this architectural practice is the use of a parallel signaling network for call setup and teardown in the plain old telephone system. An OpenFlow controller serves a conceptually similar function for packet switches. It's ironic to see this decades-old voice concept now wreaking havoc in the data world.
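The telephone analogy can be sketched in the same hedged spirit: at "call setup" a central controller computes the path over its view of the topology, installs one rule per hop, and then steps out of the way; per-packet forwarding needs no further controller involvement. The topology, switch names and flow ID below are invented for illustration.

```python
from collections import deque

# Controller's view of the network: switch -> neighbor switches.
topology = {
    "s1": ["s2", "s3"], "s2": ["s1", "s4"],
    "s3": ["s1", "s4"], "s4": ["s2", "s3"],
}

def compute_path(src, dst):
    """Control plane: BFS shortest path, run once at 'call setup'."""
    parent, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(parent[path[-1]])
            return path[::-1]
        for nbr in topology[node]:
            if nbr not in seen:
                seen.add(nbr)
                parent[nbr] = node
                queue.append(nbr)
    return None

# Install one next-hop rule per switch along the path; afterward the
# data plane follows its tables with no per-packet signaling.
path = compute_path("s1", "s4")
tables = {hop: {"flow-42": nxt} for hop, nxt in zip(path, path[1:])}
print(path)    # shortest path from s1 to s4
print(tables)  # per-switch forwarding state
```

This mirrors the phone network: the signaling plane does the expensive thinking once per call, and the data plane stays simple and fast.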

But just because the concept isn't new doesn't mean this isn't an important development. OpenFlow itself, however, is just an interface protocol. As Nicira CTO Martin Casado put it, "OpenFlow is about as exciting as USB." What matters is the value you can deliver in the controllers leveraging it and the potential change in industry structure enabled by the standardization of this interface. Standardized abstractions allow the emergence of third-party controller vendors who don't make switches. This is the truly exciting development, and the one that scares the incumbent switch vendors, no matter how much lip service they pay to embracing OpenFlow.

Don’t look for the VMware of networking

In most of the discussion around OpenFlow, the market seems infatuated with identifying the VMware of networking. This makes for splashy headlines but is a flawed analogy. What is really needed before that concept can take hold is the Linux of networking, and I am sure it won't be one of the network operating systems from the major vendors, whether Cisco's NX-OS, Juniper's JUNOS or Arista's EOS.

All of this virtualized software is great, but let's show some respect for the physical layer. Web-scale data centers still need switches: big, dense switches to connect servers. Interop was the coming-out party for OpenFlow, and the number of vendors that showed up for the OpenFlow Lab at Interop this year was impressive. Cisco and Arista were conspicuously absent, which is not surprising: the whole concept behind OpenFlow, taking the control plane and routing computation physically out of the router, is a scary thought for them.

I suspect the next shoe to drop is that the networking industry gets its Open Compute moment. It is inevitable that an open-source hardware design for the large chassis switch will be released, likely driven by a consortium of large customers. Combined with the external software control enabled by OpenFlow, this will really shake things up. Once it happens, it will be akin to the shift that took place in the server and database layers in the transition from Web 1.0 to Web 2.0: early large-scale websites were built on Sun servers and Oracle databases, while today commodity servers and open-source databases are the norm in many environments.

Hurry up and wait

While the disruptions coming to the networking industry are real, it is important not to overhype the speed at which they will arrive. These transitions take time, lots of it. Cisco and Juniper are amazing companies, full of world-class technologists, with massive installed customer bases. They are not going out of business tomorrow because of OpenFlow, nor will they sit idly on the sidelines. This isn't the first time a new technology has emerged purporting to be a major threat to Cisco's dominance: MPLS was supposed to do serious damage to Cisco, yet Cisco ultimately embraced it and came to lead the market.

It has been exciting to see networking back in the spotlight again, and I suspect the next year will be even more exciting and dynamic. Two promising startups, Big Switch and Vcider, launched this week at Structure 2011, and a number of others are still percolating in stealth mode.

Perhaps the most exciting promise of the software-defined networking movement is simply that it will increase the velocity of innovation in the industry. As the promise of these new technologies is delivered over the next 12 to 24 months, they will find use cases that are not restricted to the largest web properties but reach into other applications, including network security and potentially all the way down to the enterprise wiring closet. The movement has the potential to disrupt and reshape the industry. That means networking is cool again.

Alex Benik is a principal at Battery Ventures.