Structure 2010: Reinventing the Internet: Get Ready for Software-defined Networks

Designed decades ago to hold up amid nuclear apocalypse, the infrastructure of the Internet now faces strain from a very different source: the explosion of cloud computing. According to Nick McKeown, professor of Electrical Engineering and Computer Science at Stanford University, creating an Internet that will serve us well into the future demands a new, open approach to data flows and networking. McKeown spoke today at the GigaOM Network’s Structure conference about how and why we’ll increasingly see networks defined by software, with control gradually being “lifted up and out into a global network operating system.”

As emcee Joe Weinman, VP of Strategy and Business Development at AT&T, commented, “It might be a bit of a challenge to replace the multi-hundred-billion-dollar infrastructure of the Internet.” But McKeown emphasized that he’s not advocating the “crazy academic” idea of wiping the slate entirely clean and starting over. “The way the Internet needs to change is through a constant evolution,” he said. “And we’re trying to allow that evolution to happen.”

The networking industry is already starting to restructure, said McKeown, with what he calls “software-defined networks” separating control from the data path. McKeown offered the Stanford campus as an example of how this might work. His team (which is working with support from the National Science Foundation, the governments of China and Korea, and others) started digging a few years ago into the question of how far central control of all data flows could really go, looking at how difficult (and costly) it would be to centralize the process by which every flow is accepted and then routed in the network.

According to McKeown, it took only “about half of a PC for the entire Stanford campus,” which led him to a basic conclusion: “If you can centralize the controller in a logical sense, eventually you will. You will always prefer to do it this way if you can.” Due to the replication needed for fault tolerance and performance scaling, this logically centralized control actually functions in many ways as a distributed controller, he said.

Gradually, said McKeown, control of data flows will be lifted up and out into a global network operating system, exploiting the flow tables that already exist in today’s switches and networking gear. That system will expose an API on which networking features will sit. This will demand an open interface, as well as “at least one, and probably many” network-wide operating systems, both open and closed (“the more the merrier”). According to McKeown, “You also need a well-defined open API,” which his team expects to come along within a few years.
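The split McKeown describes can be pictured as a simple match-action flow table: a logically centralized controller installs rules, and the switch’s data path does nothing but fast lookups, punting unmatched traffic back up to the control plane. The sketch below is purely illustrative (the class and field names are hypothetical, not any real OpenFlow or controller API):

```python
class FlowTable:
    """Minimal match-action table: rules installed by a controller,
    consulted by the data path on every packet."""

    def __init__(self):
        self.rules = []  # each rule: (priority, match-fields dict, action)

    def install(self, priority, match, action):
        # The (logically centralized) controller pushes rules down
        # to the switch; highest-priority rules are checked first.
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])

    def lookup(self, packet):
        # The data path matches packet headers against installed rules.
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # Table miss: hand the packet to the controller to decide.
        return "send-to-controller"


table = FlowTable()
table.install(10, {"dst_ip": "10.0.0.2"}, "forward:port2")
table.install(5, {}, "drop")  # low-priority catch-all rule

print(table.lookup({"dst_ip": "10.0.0.2"}))  # forward:port2
print(table.lookup({"dst_ip": "10.0.0.9"}))  # drop
```

The point of the design is that the per-packet work stays dumb and fast in hardware, while all policy (which rules exist, what the table-miss behavior is) lives in software above the open interface.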
