At the FCC broadband workshop held this morning, researchers argued for a new Internet architecture, built on the kind of infrastructure currently used in large data centers, that could adapt itself to the needs of each individual application. Meanwhile, panelists associated with think tanks and the broadband industry argued that the most significant Internet-related innovation is already behind us, and that we need to think about embedding more intelligence into the network we have.
It reminded me of Vanity Fair’s awesome story about the making of the web in which Bob Metcalfe relates his attempts to show some AT&T executives the precursor to the Internet:
Bob Metcalfe: Imagine a bearded grad student being handed a dozen AT&T executives, all in pin-striped suits and quite a bit older and cooler. And I’m giving them a tour. And when I say a tour, they’re standing behind me while I’m typing on one of these terminals. I’m traveling around the Arpanet showing them: Ooh, look. You can do this. And I’m in U.C.L.A. in Los Angeles now. And now I’m in San Francisco. And now I’m in Chicago. And now I’m in Cambridge, Massachusetts—isn’t this cool? And as I’m giving my demo, the damned thing crashed.
And I turned around to look at these 10, 12 AT&T suits, and they were all laughing. And it was in that moment that AT&T became my bête noire, because I realized in that moment that these sons of bitches were rooting against me.
Today’s workshop, called “The Future of The Internet,” had a similar feel, with researchers David Clark, professor at the MIT Computer Science and Artificial Intelligence Laboratory, and Taieb Znati, division director for the National Science Foundation, talking up the idea of virtualizing communications networks in order to create several networks, each optimized for delivering a different type of application. As an aside, this focus on delivering a specific application, rather than a set speed, is a sticky topic when it comes to defining broadband, and one we’re going to hear a lot about going forward.
Scott Shenker, a professor of computer science at UC Berkeley, added that such a re-imagined network could be created by mirroring some of the wide area networks used by the likes of Amazon and Google to send information around their data centers. As he noted, today’s telecommunications networks are built atop specialized hardware, with routers running proprietary software. He argued that if the Googles and Amazons of the world brought their focus on commodity hardware and open-source routing software to the telecommunications industry, the entire infrastructure of the Internet would change, opening the door to lower-cost networks that could be virtualized.
As the discussion moved farther from the current telecommunications model, Robert Atkinson, president of the technology-industry-funded think tank the Information Technology and Innovation Foundation, brought things back to the present, saying that the largest innovations on the web may be behind us, and that while the Internet of 2022-2023 would be different from what it is today, it won’t have gone through the evolutionary changes seen in the last decade. His wish list included more intelligence embedded in the network to help move packets along and manage flows of real-time data, as well as some type of authentication and identification for users.
The end goal seems to be figuring out how to build a network that knows what content it’s delivering and where that content came from, rather than a packet-based network focused on moving unidentified bits between machines. How this will relate to the National Broadband Plan that’s due next year is unclear, but the ideas expressed in the panel are worth listening to. So if you’re curious about what’s out on the fringes for the future of the web, check out the webcast of this panel, which, sadly, I could not embed here.
This article also appeared on BusinessWeek.com.