
Summary:

In the early days of the web, David Isenberg famously predicted the rise of a so-called stupid network with smart endpoints. Joe Weinman of Telx argues that the network has instead become “pervasively intelligent” and will only get smarter.


A decade and a half ago, as Internet adoption began to accelerate, David Isenberg wrote what may well have been the manifesto for the revolution, “The Rise of the Stupid Network.” He argued that seismic shifts were shaking the very foundations of the telecommunications industry: data traffic was overtaking voice, circuit switching was succumbing to packet, price-performance was radically improving, and customers were increasingly taking control.

The network, he contended, should be “stupid,” carrying bits from point A to point B, and not doing much else. Functionality was best delivered by intelligent endpoints interacting over a dumb network. As he foresaw, the interoperability benefits of a ubiquitous protocol like IP, which has now worked itself into our smartphones, tablets, and TVs – not to mention everything from electric meters to light bulbs – cannot be denied. And, thanks to Moore’s Law, even preschoolers can have hundreds of GigaFLOPS at their disposal for less than the price of a swing set.

Of course, 15 years is a long time, especially in the field of computing and communications. So the question is, does Isenberg’s line of thought still hold true? I would argue that, rather than stupid networks, we’re entering an era of “pervasive intelligence,” where endpoints are intelligent, but the network can be as well. Networks can be smart. Tunable. Programmable.

A simple analogy: Suppose you’d like to enjoy a tropical beach vacation, but are constrained by a fixed budget. Anywhere supporting your Vitamin D requirements would do. You might start by comparing resort prices in, say, Bali, Phuket, St. Tropez, and South Beach. But naturally you wouldn’t just factor in the price of the stay; you’d also factor in the cost of transport. The best decision then wouldn’t necessarily be the lowest-cost resort or the lowest-cost plane fare, but whatever led to the lowest total cost. To put it another way, you wouldn’t just optimize the endpoint or the transport, but would consider both together. Similarly, when we check out at the grocery store, we don’t just pick the most energetic cashier; we also consider the length of the queue.
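To make the arithmetic concrete, here’s a back-of-the-envelope sketch in Python; the resort and airfare prices are invented purely for illustration. The cheapest resort and the cheapest flight belong to different destinations, yet neither is the cheapest trip overall:

```python
# Hypothetical prices, for illustration only (not real fares or room rates).
resort_cost = {"Bali": 800, "Phuket": 700, "St. Tropez": 1600, "South Beach": 1500}
flight_cost = {"Bali": 1000, "Phuket": 1200, "St. Tropez": 900, "South Beach": 400}

cheapest_resort = min(resort_cost, key=resort_cost.get)                          # Phuket
cheapest_flight = min(flight_cost, key=flight_cost.get)                          # South Beach
cheapest_trip = min(resort_cost, key=lambda d: resort_cost[d] + flight_cost[d])  # Bali

print(cheapest_resort, cheapest_flight, cheapest_trip)
```

With these made-up numbers, optimizing the room alone points to Phuket and optimizing the flight alone points to South Beach, but the lowest total cost is Bali.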

Vacation planning and grocery shopping help illustrate an experimental algorithm designed at Stanford, described by Software-Defined Networking / OpenFlow icon and Stanford Professor Nick McKeown in a YouTube video. The experiment, run on the large-scale GENI (Global Environment for Network Innovations) testbed, contrasts two approaches to load balancing, or the distribution of work across multiple servers to minimize response time and maximize throughput.

[Chart: response-time comparison of random load balancing vs. congestion-aware path selection, a still from the YouTube video. Source: YouTube]

As can be seen from the chart above (a still taken from the YouTube video), random load balancing (the red line) has dramatically higher worst-case response times and variability than selecting a path simply based on lightest real-time network congestion (the green line).  Even better results would be generated by an algorithm which also accounted for server load.  As McKeown explains, “ideally, [a] request would be sent over a path which is lightly loaded to a server which is lightly loaded. In other words, we would jointly optimize the combination of the path and the server… .”
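To be clear, what follows is not the Stanford/GENI code, just a toy Python simulation of the same idea, with randomly generated (and entirely made-up) path congestion and server load values. It compares three policies: picking a path and server at random, picking only the least congested path, and jointly picking the lightest path and the lightest server:

```python
import random

random.seed(1)

def simulate(choose, trials=10_000, n_paths=4, n_servers=4):
    """Return (worst-case, mean) response time under a selection policy.
    Response time is modeled, purely for illustration, as the congestion of
    the chosen path plus the load of the chosen server."""
    worst = total = 0.0
    for _ in range(trials):
        paths = [random.uniform(0, 1) for _ in range(n_paths)]      # path congestion
        servers = [random.uniform(0, 1) for _ in range(n_servers)]  # server load
        p, s = choose(paths, servers)
        rt = paths[p] + servers[s]
        worst = max(worst, rt)
        total += rt
    return worst, total / trials

policies = {
    "random": lambda p, s: (random.randrange(len(p)), random.randrange(len(s))),
    "least-congested path": lambda p, s: (min(range(len(p)), key=p.__getitem__),
                                          random.randrange(len(s))),
    "joint path + server": lambda p, s: (min(range(len(p)), key=p.__getitem__),
                                         min(range(len(s)), key=s.__getitem__)),
}

for name, policy in policies.items():
    worst, mean = simulate(policy)
    print(f"{name:>21}: worst = {worst:.2f}, mean = {mean:.2f}")
```

Even in this crude model, random selection shows the worst tail behavior, congestion-aware path selection helps, and the joint choice of path and server does best, which is the pattern the chart above illustrates.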

The reason McKeown reviews this example is to illustrate the power of software-defined networks and existing testbeds to accelerate innovation. As he puts it, “The point here is…a graduate student was able to take an idea, and within a few weeks, put that into a national network, run real traffic over it, … demonstrate it to others, and then hand it to them and say here’s the code.” In addition, I think this particular experiment points the way to a world of intelligent endpoints collaborating with an intelligent network to achieve something neither can do as well alone. As McKeown deduces, joint optimization would generate the best results.

A variety of technologies that enable network smarts to contribute to overall end-to-end performance and ease-of-use are emerging. Consider HetNets – heterogeneous networks that span Wi-Fi and 4G, for example. Enabling seamless handoffs between the two benefits from network intelligence. Or consider peer-to-peer file sharing. Rather than fetching a copy of a file from a location halfway around the planet, emerging approaches such as ALTO (application-layer traffic optimization) will be able to help select a more efficient location hosting that content nearby. (Being a locavore – consuming locally – can be good not only for produce, but for information products.) Moreover, such optimization can be good for users, network service providers, and over-the-top service providers.
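Here is a minimal sketch of the ALTO idea, with invented peer names and a made-up provider cost map; the real protocol involves an ALTO server and a richer cost model, but the gist is that the application asks the network how “expensive” each candidate source is and prefers the cheapest:

```python
# Hypothetical candidate peers hosting the same content, and an ALTO-style
# cost map supplied by the network provider (lower = cheaper to reach;
# the values are invented for illustration).
candidate_peers = ["peer-tokyo", "peer-frankfurt", "peer-newark", "peer-sydney"]
alto_cost_map = {"peer-tokyo": 90, "peer-frankfurt": 40, "peer-newark": 5, "peer-sydney": 110}

def pick_peer(peers, cost_map):
    """Prefer the peer the network says is cheapest to reach; peers missing
    from the cost map are treated as infinitely expensive."""
    return min(peers, key=lambda peer: cost_map.get(peer, float("inf")))

print(pick_peer(candidate_peers, alto_cost_map))  # peer-newark
```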

Ultimately, then, “intelligent endpoint, stupid network” vs. “stupid endpoint, intelligent network” is a false dichotomy. As the Stanford work tantalizingly suggests, the best of all possible worlds may actually be smart endpoints harmoniously coexisting with a smart network. Or perhaps even other configurations; consider the case of light bulbs and netbooks, where “stupid” endpoints access “smart” endpoints – either through today’s IP networks or tomorrow’s software-defined networks, built of “dumb” switches directed by intelligent control planes. (Or, a variety of other options with unevenly distributed intelligence that come together to best deliver some particular functionality.)

New algorithms under investigation by researchers are even moving beyond networks and endpoints into additional concerns, such as power. For instance, some approaches dynamically migrate and consolidate virtual machines within a data center to enable freed up physical hosts to be powered down; others move workloads across data centers where the instantaneous cost of power is lowest.  Some might even argue that cloud computing itself demonstrates that the “network is the computer,” where services are delivered by a distributed intelligent fabric.
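As a rough illustration of the first of those approaches (not any particular product’s algorithm; the hosts, VMs, and capacities below are invented), a simple greedy packer shows how virtual machines can be consolidated onto fewer physical hosts so that the remainder can be powered down:

```python
# Greedy first-fit-decreasing consolidation: pack VMs onto as few hosts as
# possible so the remaining hosts can be powered down.
HOST_CAPACITY = 16  # e.g., cores per physical host (invented)

vm_demand = {"vm-a": 8, "vm-b": 6, "vm-c": 4, "vm-d": 4, "vm-e": 2}  # cores needed

def consolidate(vms, capacity):
    hosts = []  # each host is a list of (vm, demand) placements
    for vm, need in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(d for _, d in host) + need <= capacity:
                host.append((vm, need))  # fits on an already-active host
                break
        else:
            hosts.append([(vm, need)])   # no room anywhere: keep another host on
    return hosts

active = consolidate(vm_demand, HOST_CAPACITY)
print(f"{len(active)} hosts needed for {len(vm_demand)} VMs")  # 2 hosts for 5 VMs
for i, host in enumerate(active, 1):
    print(f"  host-{i}: {[vm for vm, _ in host]}")
```

A real placement engine would also weigh migration cost, affinity rules, and headroom for load spikes, but the basic packing intuition is the same.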

Moore’s Law effects mean that the cost of intelligence is dropping. And so we may as well increasingly leverage it in today’s digital economy wherever there is a net return: in the endpoint, in the network, or both.  This suggests the fall of the stupid network, and the rise of pervasive intelligence.

Joe Weinman is a senior vice president at Telx, the author of Cloudonomics: The Business Value of Cloud Computing, and a regular guest contributor to GigaOM. You can find him on Twitter @joeweinman.

  1. The author either didn’t read or didn’t understand David Isenberg’s paper. David put a very different meaning into network intelligence.

    And the paper is still super relevant – Just ask any mobile operator what is their main challenge…

  2. Hi Michael…I did read the paper. It is an important paper, a seminal paper, and I agree it remains largely true and relevant, and is likely to remain so. For example, an open innovation ecosystem that empowers millions of endpoint and application development entities (individuals and firms) via market-based mechanisms surely trumps centralized, monolithic planning. A standard, flexible network platform (IP/Internet) accelerates this innovation, as Stanford’s Barbara van Schewick argues in Internet Architecture and Innovation.

    However, Isenberg argued for “nothing but dumb transport in the middle,” “just deliver the bits, stupid,” and “no fancy network routing.” I think that the Stanford example helps illustrate that endpoints alone can’t always determine a global optimum in a distributed computing environment. Emerging technologies can help achieve such optima.

  3. Ramaswamy (Adi) Aditya, Sunday, December 16, 2012

    Using a routing protocol which has insufficient information about constraints (end-host capacity, segment used capacity, or latency) is currently how packets get from one end of the network to another. You are suggesting that the Stanford approach says the routing protocols should be updated or augmented by others to do that. Perhaps that is just a waypoint to a place where the endpoints are given all the possible paths with all known constraints and have the endpoint select the route? (Traffic engineering by the endpoint(s).) That is of course very scary to a network operator who doesn’t want the endpoint to make such decisions, but we already encourage them to do it using DNS SRV records or other similar application-level mechanisms, which are much more resilient in the face of network problems/misconfiguration than network-mediated ones like load balancers.

    Yes, we now have a better way to disseminate further attributes about network conditions using SDN, but that doesn’t mean the network is getting more information to make decisions for the right reasons; it is doing so to allow network operators more control at a cost low enough that Isenberg’s suggestion can be set aside. The last time around, that suggestion was embraced because it allowed network operators to not spend as much (although, in hindsight, that has turned out to be the right thing to do for entirely different reasons: the end-to-end principle, etc.).

    1. Hi Adi…not sure that distributing that much information to each of billions of endpoints is as efficient as doing it in the network. Moreover, independent decisions may not lead to an optimal global decision. For example, each endpoint might simultaneously select the same least-congested path, leading to poor throughput and suboptimal transport resource allocation. Peer-to-peer coordination or mechanisms such as exponential backoff have their own issues. The basic point is that many functions are best performed by the endpoint, locally, but some functions may be best performed “in the network”: globally, regionally, at the provider edge, etc.

  4. Joe,
    Nice article. The Stupid Network and its spiritual predecessor, the End-to-End argument, should be required reading for anyone in communications. However, I think of them more as architectural maxims than implementation guidelines. In practice many devices break both: firewalls, NATs, SBCs, proxies, etc.

