Why the “stupid network” isn’t our destiny after all


A decade and a half ago, as Internet adoption began to accelerate, David Isenberg wrote what may well have been the manifesto for the revolution, “The Rise of the Stupid Network.” He argued that seismic shifts were shaking the very foundations of the telecommunications industry: data traffic was overtaking voice, circuit switching was giving way to packet switching, price-performance was radically improving, and customers were increasingly taking control.

The network, he contended, should be “stupid,” carrying bits from point A to point B, and not doing much else. Functionality was best delivered by intelligent endpoints interacting over a dumb network. As he foresaw, the interoperability benefits of a ubiquitous protocol like IP, which has now worked itself into our smartphones, tablets, and TVs – not to mention everything from electric meters to light bulbs – cannot be denied. And, thanks to Moore’s Law, even preschoolers can have hundreds of GigaFLOPS at their disposal for less than the price of a swing set.

Of course, 15 years is a long time, especially in the field of computing and communications. So the question is, does Isenberg’s line of thought still hold true? I would argue that, rather than stupid networks, we’re entering an era of “pervasive intelligence,” where endpoints are intelligent, but the network can be as well. Networks can be smart. Tunable. Programmable.

A simple analogy: Suppose you’d like to enjoy a tropical beach vacation, but are constrained by a fixed budget. Anywhere supporting your Vitamin D requirements would do. You might start by comparing resort prices in, say, Bali, Phuket, St. Tropez, and South Beach. But naturally you wouldn’t just factor in the price of the stay; you’d also factor in the cost of transport. The best decision then wouldn’t necessarily be the lowest cost resort or the lowest cost plane fare, but whatever led to the lowest total cost. To put it another way, you wouldn’t just optimize the endpoint or the transport, but would consider both together. Consider that when we check out at the grocery store, we don’t just select the most energetic cashier, but also consider the length of the queue.
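
The total-cost logic of the analogy reduces to a few lines of code. The destinations and prices below are invented purely for illustration:

```python
# Joint optimization in miniature: the best trip is not the cheapest
# resort, nor the cheapest flight, but the lowest combined total.
# All prices are made up for illustration.
resorts = {"Bali": 700, "Phuket": 650, "St. Tropez": 1500, "South Beach": 1200}
flights = {"Bali": 900, "Phuket": 1400, "St. Tropez": 1300, "South Beach": 500}

cheapest_resort = min(resorts, key=resorts.get)
cheapest_flight = min(flights, key=flights.get)
best_overall = min(resorts, key=lambda d: resorts[d] + flights[d])

print(cheapest_resort, cheapest_flight, best_overall)
# → Phuket South Beach Bali
```

Note that each partial optimization points somewhere different; only summing both costs per destination finds the true minimum.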

Vacation planning and grocery shopping help illustrate an experimental algorithm designed at Stanford, described by Software-Defined Networking / OpenFlow icon and Stanford Professor Nick McKeown, in a YouTube video. The experiment, run on the large scale GENI (Global Environment for Network Innovations) testbed, contrasts two approaches to load balancing, or the distribution of work across multiple servers to minimize response time and maximize throughput.

[Chart: worst-case response times under random vs. congestion-aware load balancing, a still from the YouTube video]

Source: YouTube

As can be seen from the chart above (a still taken from the YouTube video), random load balancing (the red line) has dramatically higher worst-case response times and variability than selecting a path simply based on lightest real-time network congestion (the green line). Even better results would be generated by an algorithm that also accounted for server load. As McKeown explains, “ideally, [a] request would be sent over a path which is lightly loaded to a server which is lightly loaded. In other words, we would jointly optimize the combination of the path and the server… .”
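
The gap between these policies can be seen even in a toy queueing simulation. This is not the Stanford experiment itself, just a minimal sketch of why least-loaded routing beats random assignment; the parameters are invented:

```python
import random

# Toy sketch: each step, route one arriving request to a server, and let
# each busy server finish a request with some probability. Compare random
# routing with least-loaded routing by the worst queue length observed,
# a rough stand-in for worst-case response time.
random.seed(42)

def simulate(choose, n_servers=4, steps=10_000, service_p=0.3):
    """Return the worst queue length seen under a routing policy."""
    queues = [0] * n_servers
    worst = 0
    for _ in range(steps):
        i = choose(queues)                  # pick a server for the arrival
        queues[i] += 1
        worst = max(worst, queues[i])
        for j in range(n_servers):          # busy servers may each finish one
            if queues[j] and random.random() < service_p:
                queues[j] -= 1
    return worst

random_worst = simulate(lambda q: random.randrange(len(q)))
least_loaded_worst = simulate(lambda q: q.index(min(q)))
print(random_worst, least_loaded_worst)
```

A third policy that weighed path congestion and per-server load together would approximate the joint optimization McKeown describes.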

The reason McKeown reviews this example is to illustrate the power of software-defined networks and existing testbeds to accelerate innovation. As he puts it, “The point here is…a graduate student was able to take an idea, and within a few weeks, put that into a national network, run real traffic over it, … demonstrate it to others, and then hand it to them and say here’s the code.” In addition I think this particular experiment also points the way to a world of intelligent endpoints collaborating with an intelligent network to achieve something neither can do as well alone. As McKeown deduces, joint optimization would generate the best results.

A variety of technologies that enable network smarts to contribute to overall end-to-end performance and ease-of-use are emerging. Consider HetNets – heterogeneous networks that span Wi-Fi and 4G, for example. Enabling seamless handoffs between the two benefits from network intelligence. Or consider peer-to-peer file sharing. Rather than fetching a copy of a file from a location halfway around the planet, emerging approaches such as ALTO (application-layer traffic optimization) will be able to help select a more efficient location hosting that content nearby. (Being a locavore – consuming locally – can be good not only for produce, but for information products.) Moreover, such optimization can be good for users, network service providers, and over-the-top service providers.
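
The core idea behind ALTO-style selection fits in a few lines. The peer names and cost values below are hypothetical, not part of the ALTO protocol; in practice the cost map would come from the network provider:

```python
# Sketch of network-assisted replica selection: given several peers
# hosting the same content, prefer the one the network ranks cheapest
# (e.g., by topological distance). Peers and costs are invented.
network_cost = {"peer-tokyo": 95, "peer-frankfurt": 40, "peer-local-isp": 5}

def pick_replica(replicas, cost_map):
    """Choose the replica with the lowest network-assigned cost."""
    return min(replicas, key=lambda r: cost_map.get(r, float("inf")))

best = pick_replica(list(network_cost), network_cost)
print(best)  # → peer-local-isp
```

The endpoint still decides what to fetch; the network merely contributes the cost information it alone can see.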

Ultimately then “intelligent endpoint, stupid network” vs. “stupid endpoint, intelligent network” is a false dichotomy. As the Stanford work tantalizingly suggests, the best of all possible worlds may actually be smart endpoints harmoniously coexisting with a smart network. Or perhaps even other configurations; consider the case of light bulbs and netbooks, where “stupid” endpoints access “smart” endpoints – either through today’s IP networks or tomorrow’s software-defined networks, built of “dumb” switches directed by intelligent control planes. (Or, a variety of other options with unevenly distributed intelligence that come together to best deliver some particular functionality.)

New algorithms under investigation by researchers are even moving beyond networks and endpoints into additional concerns, such as power. For instance, some approaches dynamically migrate and consolidate virtual machines within a data center to enable freed up physical hosts to be powered down; others move workloads across data centers where the instantaneous cost of power is lowest.  Some might even argue that cloud computing itself demonstrates that the “network is the computer,” where services are delivered by a distributed intelligent fabric.
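
A minimal sketch of the consolidation idea uses first-fit-decreasing bin packing; the VM demands and host capacity below are made up, and real schedulers also weigh memory, I/O, and migration cost:

```python
# Pack VMs (by CPU demand, as a fraction of one host) onto as few hosts
# as possible, so the remaining hosts can be powered down. First-fit
# decreasing: place each VM, largest first, on the first host with room.
def consolidate(vm_demands, host_capacity):
    """Return a list of hosts, each a list of VM demands packed onto it."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # no room anywhere: power up a new host
    return hosts

vms = [0.5, 0.7, 0.2, 0.4, 0.1, 0.6]   # hypothetical CPU demands
packing = consolidate(vms, host_capacity=1.0)
print(f"{len(packing)} hosts for {len(vms)} VMs")  # → 3 hosts for 6 VMs
```

Dynamic versions of this re-pack as load shifts, migrating running VMs so freed hosts can sleep until demand returns.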

Moore’s Law effects mean that the cost of intelligence is dropping. And so we may as well increasingly leverage it in today’s digital economy wherever there is a net return: in the endpoint, in the network, or both.  This suggests the fall of the stupid network, and the rise of pervasive intelligence.

Joe Weinman is a senior vice president at Telx, the author of Cloudonomics: The Business Value of Cloud Computing, and a regular guest contributor to GigaOM. You can find him on Twitter @joeweinman.

