16 Comments

At the FCC broadband workshop held this morning, researchers argued for a new Internet architecture built upon infrastructure currently used in large data centers that would be capable of adapting itself to deliver each individual application. Meanwhile, those associated with think tanks and the broadband industry argued that the most significant Internet-related innovation is already behind us and that we need to think about embedding more intelligence into the network we have.

It reminded me of Vanity Fair’s awesome story about the making of the web in which Bob Metcalfe relates his attempts to show some AT&T executives the precursor to the Internet:

Bob Metcalfe: Imagine a bearded grad student being handed a dozen AT&T executives, all in pin-striped suits and quite a bit older and cooler. And I’m giving them a tour. And when I say a tour, they’re standing behind me while I’m typing on one of these terminals. I’m traveling around the Arpanet showing them: Ooh, look. You can do this. And I’m in U.C.L.A. in Los Angeles now. And now I’m in San Francisco. And now I’m in Chicago. And now I’m in Cambridge, Massachusetts—isn’t this cool? And as I’m giving my demo, the damned thing crashed.

And I turned around to look at these 10, 12 AT&T suits, and they were all laughing. And it was in that moment that AT&T became my bête noire, because I realized in that moment that these sons of bitches were rooting against me.

Today’s workshop, called “The Future of The Internet,” had a similar feel, with researchers David Clark, professor at the MIT Computer Science and Artificial Intelligence Laboratory, and Taieb Znati, division director for the National Science Foundation, talking up the idea of virtualizing communications networks in order to create several networks optimized for delivering different types of applications. By the way, this focus on the ability to deliver a specific application vs. delivering a set speed is a sticky topic when it comes to defining broadband. Going forward, we’re going to be hearing a lot about it.

Scott Shenker, a professor of computer science at UC Berkeley, added that such a re-imagined network could be created by mirroring some of the wide area networks used by the likes of Amazon and Google to send information around their data centers. As he noted, today’s telecommunications networks are built atop specialized hardware with routers running proprietary software. He argued that if the Googles and Amazons of the world could take their focus on deploying commodity hardware and open-source routers to the telecommunications industry, the entire infrastructure of the Internet would change — including allowing for lower-cost networks that could be virtualized.
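
To make the virtualization idea a bit more concrete, here is a minimal sketch of carving one shared link into per-application slices. It is purely illustrative: the class names, weights and capacity figure are assumptions of mine, not anything the panelists proposed.

```python
# Purely illustrative sketch: splitting one shared physical link into
# per-application "virtual network" slices, each with a guaranteed share.
# The class names, weights, and link capacity are assumptions.
LINK_CAPACITY_MBPS = 1000  # assumed capacity of the shared physical link

slices = {                 # virtual network -> relative weight
    "realtime_video": 5,
    "voice": 2,
    "best_effort_data": 3,
}

def allocate(slices, capacity_mbps):
    """Split the link's capacity in proportion to each slice's weight."""
    total_weight = sum(slices.values())
    return {name: capacity_mbps * weight / total_weight
            for name, weight in slices.items()}

for name, mbps in allocate(slices, LINK_CAPACITY_MBPS).items():
    print(f"{name}: guaranteed {mbps:.0f} Mbps slice")
```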

As the theory moved farther outside the current telecommunications model, Robert Atkinson, president of the technology industry-funded think tank the Information Technology and Innovation Foundation, brought things back to the present by saying that the largest innovations on the web may be behind us, and that while the Internet of 2022-2023 will be different from what it is today, it won’t have gone through the evolutionary changes seen in the last decade. His wish list included more embedded intelligence in the network to help advance packets and manage flows of real-time data, as well as some type of authentication and identification for users.

The end goal seems to be figuring out how to build a network that knows what content it’s delivering and where that content came from, rather than a packet-based network focused on getting unidentified bits from machines. How this will relate to the National Broadband Plan that’s due next year is unclear, but the ideas expressed in the panel are worth listening to. So if you’re curious about what’s out on the fringes for the future of the web, check out the webcast of this panel, which, sadly, I could not embed here.

This article also appeared on BusinessWeek.com.

  1. “The end goal seems to be figuring out how to build a network that knows what content it’s delivering and where that content came from, rather than a packet-based network focused on getting unidentified bits from machines.”

    The end goal for who? Telecoms certainly. Not for anyone who cares about Network Neutrality, though. It seems to me that a network operator can spend money building ever faster dumb pipes or build ever smarter pipes that can do things more efficiently. I’m not convinced the final cost is different either way. The reason telecoms prefer intelligent networks is because their retail side is the primary customer of what is in essence a wholesale business run by their operations side. This vertical integration is the root cause of all of our problems with telecom, but that is a whole other discussion. My specific concern with the intelligent network approach is that more control means more avenues for abusive practices. The dumb-pipe operator has no reason not to be transparent and is far easier to regulate.

    1. Stacey Higginbotham Thursday, September 3, 2009

      I had that same thought initially, and still do, but if folks decide to optimize a network for a type of application they are going to have to know more about the packet as well. For example, in a wireless network where resources are more scarce, does it make sense to have that intelligence to help ensure QoS? I honestly don’t know. I don’t trust the carriers to be neutral without some external pressure, but I can’t totally disregard the fact that bandwidth, for now, isn’t unlimited.

      1. Applications come and go. The beauty of the dumb pipe is that it is agnostic to the application. We already have UDP, which can support video streaming without the ack overhead of TCP, but nobody uses it. Do we need one more protocol?

        I doubt it, but it makes great fodder for research funding.
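
        A minimal sketch of that UDP point might look like the following; the loopback address, port and payload size are arbitrary assumptions.

        ```python
        # Rough illustration: UDP needs no connection setup and no acknowledgements,
        # so each datagram is fire-and-forget (a lost video frame is simply lost).
        # The loopback address, port, and payload size are arbitrary assumptions.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no handshake
        frame = b"\x00" * 1316  # e.g., a bundle of MPEG-TS packets in one datagram
        sock.sendto(frame, ("127.0.0.1", 5004))  # sent once; never retransmitted
        sock.close()
        ```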

      2. Jesse Kopelman Friday, September 4, 2009

        Of course it makes sense to optimize for applications — for both business and engineering reasons. My concern is from a consumer perspective — that the goal of the optimization is not better service per se, but more profitable service. The reason I support Network Neutrality is because, without it, regulation becomes infeasible. I actually think a more elegant solution is to mandate structural separation between wholesale (network operations) and retail, but it’s been about 10 years since anyone broached that as a serious option . . .

        By the way, wireless bandwidth is just as unlimited as wired; it’s just a question of cell density. You want higher peak throughput, you reduce the distance between base station and user. You want higher average throughput, you reduce the number of users per cell. Both of these are obtained through the same method: reducing cell size (aka building more cell sites). Obviously there is a very real cost to building more cell sites (both capital and operational), but there are also very real costs (both capital and operational) to adding ever more sophisticated gear to perform intelligent and adaptive network management. I’ve yet to see any evidence that one method is more cost-effective than the other. I doubt anyone has ever undertaken a serious study of such.
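
        A back-of-the-envelope sketch of that cell-density argument, using entirely made-up capacity and user figures, might look like this:

        ```python
        # Back-of-envelope illustration with assumed numbers: shrinking cells raises
        # average per-user throughput because each cell's capacity is shared among
        # fewer users. The capacity and user counts below are made up.
        cell_capacity_mbps = 20.0   # assumed shared capacity of one cell
        users_in_area = 400         # assumed active users across the coverage area

        for num_cells in (1, 4, 16):  # quartering cell area roughly halves cell radius
            users_per_cell = users_in_area / num_cells
            avg_per_user_mbps = cell_capacity_mbps / users_per_cell
            print(f"{num_cells:2d} cells -> {users_per_cell:5.0f} users/cell, "
                  f"~{avg_per_user_mbps:.2f} Mbps average per user")
        ```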

      3. Stacey, all wireless protocol standards with any weight have QoS capabilities, including Wi-Fi, WiMAX, UWB, and all the cellular standards. There’s no debate among real wireless engineers that such things are necessary.

        Instead of worrying about making carriers be “neutral,” a meaningless state of affairs that will only hurt application developers, worry about making them deliver the service they promise.

        Neutrality is an effort to engage in network engineering by people who don’t know enough to even begin the task. Leave engineering to the engineers, and let regulators worry about money and service.

  2. Totally bogus. To speak of the “googles” and “amazons” is completely misleading. How many such are there?

    This is just a bunch of know-nothing academics trying to drum up research funding so they can travel to exotic places for conferences. Comparing these chumps to Bob Metcalfe is insulting.

  3. If all we expected out of the internet was the delivery of data packets between computing applications, big dumb pipes with best-effort queuing models would be sufficient. But once we start streaming real-time, high-definition television episodes and movies directly to networked 60” flat-panel HDTVs, all bets are off. Even the aforementioned computer genius showing suited telecom execs how Arpanet worked nearly four decades ago would probably admit that he did not envision this happening. And it is happening.

    Video, especially long-form, high-definition streaming, requires a network that can accommodate its peculiar QoS requirements. We can argue all we want about clever ways to meet these QoS requirements without involving the network, but in the end they usually fail to satisfy the expectations of the consumer.

    1. Jesse Kopelman Friday, September 4, 2009

      In my mind the issue is not whether a private network should have management and QoS (it should). The issue is whether service providers should be able to sell such a service as “Internet Access”, with no public information on what QoS levels are, what management methods are in practice, and what preferential deals they have in place with other private network operators. If you want to sell me a managed connection, provide me with an SLA. If you are serious about all this packet inspection and application optimization, no more hiding behind the idea of “best effort service”. I have dealt with carriers plenty, and they claim their networks are either managed or unmanaged depending solely on which answer best serves their immediate interest. It’s all about management when they don’t want to support your application and it’s all about Network Neutrality when you ask for an SLA with committed QoS levels.

  4. Googles and Amazons, eh?

    I dunno… Google’s roughly 99.95% uptime isn’t terribly impressive. Also, remember the Amazon S3 system going offline for hours a while back? What about YouTube, which can’t seem to deliver solid video streaming these days during peak periods?

    Don’t get me wrong. I shop at Amazon. I use GMail. I use services that use S3. I even watch YouTube videos. However, calling these companies out as examples of what a network should be may be a slight stretch.

    Also, about application-based communications etc., I like my internet agnostic, thank you very much. QoS to allow for realtime content streams is great, but when it comes down to nuts and bolts you’ve got 38 Mbps per channel on cable, 100 Mbps Ethernet on copper, 54 Mbps (in ideal conditions) over 802.11g and 3.1 Mbps over EvDO. Last I checked, you don’t differentiate products primarily based on artificial tiers that have absolutely nothing to do with the physical realities underlying those products, however remote. My $5 monthly unlimited backup program has exceptions; I expect to pay more for those because they cost more to my provider. Do I make sense?

  5. Back in the old days, when chips were severely limited, we had to make trade-offs between network speed and network management; networks could be dumb and fast, or they could be smart and slow. We don’t need to make that choice any more, because they can now be both smart and fast. People who demand a fat, dumb pipe are living in the past.

    Virtual networks allow you to build multiple networks, each optimized for a different application, and run them over the same infrastructure. That’s the future of networking. The Internet is a fine network for stored content, but it sucks for anything real-time. You can thank Jacobson’s Algorithm and BGP for that.

    See the ITIF written comments for illumination: http://www.itif.org/files/20090903_The%20Future_of_the_Internet_FCC.pdf

    1. Jesse Kopelman Friday, September 4, 2009

      “Virtual networks allow you to build multiple networks, each optimized for a different application, and run them over the same infrastructure.”

      I completely agree. The issue is that too many carriers want to build a network one way and sell it as something completely different. My concept of Network Neutrality is not so much about forcing carriers to build/operate their networks a certain way, but to force them to be open and honest about how they build/operate them. The excuse that this would somehow compromise their ability to compete is laughable given the tiny number of competitors, the fact that employees constantly hop between competitors and their vendors, and that most ideas about network build/operation come from the vendors and not the carriers.

  6. How about Nokia and Siemens?

  7. [...] Will Google or Cisco determine our future broadband networks? >> GigaOM [...]

  8. [...] Will Google or Cisco Determine Our Future Broadband Networks? (gigaom.com) [...]

  9. The Broadband plan needs to be clear on the outcomes it wishes to achieve.

    National goals on reducing volumes of journeys, increasing home working, and delivering more care at home will lead to one particular design, possibly an open and transparent data transport layer capable of delivering multiple assured services.

    If the goal is to deliver high-definition TV to all while leaving legacy voice services as they are, then the engineers will design something different.

    The Internet principles have served the world well, and this points to retaining an open and transparent approach to networking. This demands that the emergent operational properties (throughput, loss and delay at busy periods) are explained to users and that users get to use the throughput and distribute the available quality as they wish. Thus neutrality is a property and part of the design.

    This is not inconsistent with someone then building a virtual network on top to deliver IPTV, but the change in properties needs to be explained, and if there are centralised controls, then these need to be exposed to the user.

    The Broadband planners have a key decision on what outcomes they are planning to meet. Is IPTV a critical service? If not, it should be made clear, the sooner the better.

  10. We should be careful about adding any intelligence to the network. The reason the Internet has scaled so well over the last decade is largely because it requires so little intelligence in the network. The more the network is aware of what is going through it, the more “state” must be maintained and coordinated within the network. ATM was at one time (yes it is true) a contender to build a worldwide network like the Internet, but one of its major problems was that state had to be maintained for every connection through a switch. The least expensive IP router can have millions of connections through it because the router has no idea what IP connections are going through it, and only the endpoints maintain state information (at a higher layer of the protocol stack, e.g., TCP).

    Let’s not change what has worked so well. The Internet should provide a ubiquitous basic transport of datagrams. The endpoints of the networks, the computers, should use this basic IP transport to provide the applications that consumers are looking for.
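
    A simplified sketch of that contrast, with made-up prefixes and interface names, might look like this:

    ```python
    # Simplified sketch of the contrast above: an IP router forwards on destination
    # prefix alone and keeps no per-connection state, while a connection-oriented
    # (ATM-style) switch must hold an entry for every active circuit.
    # The prefixes and interface names are made up for illustration.
    import ipaddress

    routes = {  # stateless forwarding table: size tracks routes, not flows
        ipaddress.ip_network("10.0.0.0/8"): "if0",
        ipaddress.ip_network("192.168.0.0/16"): "if1",
    }

    def forward(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routes if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # longest-prefix match
        return routes[best]

    circuit_table = {}  # ATM-style: one (in_port, vci) -> (out_port, vci) entry per circuit

    print(forward("10.1.2.3"))  # -> "if0"; no per-flow entry was created
    ```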

