Reinventing the internet: How do we build a better network?


For many people the internet is an idea: the cloud, as it were. Or maybe it’s the web, or the apps on their phone. As more of our interactions move online, our entertainment arrives via the internet and once-unconnected devices are transformed by a web connection, the internet will likely fade even further from people’s minds as a physical entity, much like we no longer think about voltage unless we’re about to hop on a plane to another country.

But the actual internet is a physical place: thousands of them, in fact. When you want to check your email, packets travel over the coaxial cable, fiber or DSL line from the modem in your home to a box in your neighborhood. From there they travel to a bigger box containing servers and communications equipment. The request might travel still further, to a massive aggregation point owned by your internet service provider, before your ISP hands it off to one of many other networks in a data center where ISPs, content companies and transit providers all have network access and servers.

In wireless networks the process is similar, only the modem is in your phone and the data is sent as pulses of information carried on radio waves, with each megahertz of spectrum able to carry only so many bits. Those bits travel to a tower or a small cell, which hands them onto a wired network headed for those same aggregation points.
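That “only so many bits per megahertz” constraint is the Shannon limit, which is worth a quick back-of-the-envelope look: capacity grows linearly with bandwidth but only logarithmically with signal quality. The function name and numbers below are purely illustrative.

```python
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + SNR).

    With bandwidth in MHz, the result comes out in megabits per second.
    """
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# A 20 MHz channel at a healthy 20 dB signal-to-noise ratio tops out
# around 133 Mbps -- before real-world protocol overhead -- which is
# why carriers are perpetually hungry for more spectrum.
print(shannon_capacity_mbps(20, 20))  # ~133.2
```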


But wired or wireless, once the traffic leaves your last-mile ISP it may go directly to a network controlled by a Google or a Netflix, or it might pass through several hops, like the data center above, on its way to its final destination. The routing of your traffic is determined by software inside network gear, software running on your end device and software on servers controlled by the companies a consumer requests content from. It’s amazing, actually, how well it all works.

But plenty of people are concerned that it might not work for much longer. Between battles over peering, fights about network neutrality, the changing shape of content and worries about network resiliency and privacy, more people are looking at the current internet and dreaming of a redesign that takes into account society’s growing dependence on it.

While the projects below are not a complete list, they illustrate some of the big trends in how people with a stake in the internet are thinking about making it better for the long haul and for future network demands.

Push everything to the edge

The current thinking about adapting the network is really more of the same: push everything further to the edge. This lets ISPs deal with the onslaught of video content that’s causing so much trouble during prime time while avoiding any huge shift in how the internet operates. Carriers, content distribution networks like Akamai and even those in the data center sector are big fans of this model, which puts popular content in the various network aggregation points housed in data centers in cities, even those deemed second- or third-tier municipalities.

Inside a Google data center. Image courtesy of Google

In many ways, the content caching strategies of companies like Google, Amazon and Netflix (with its Open Connect boxes), and even carrier-hosted CDN efforts, are extensions of this philosophy. Newer efforts here include an IEEE standard on transparent caching and perhaps a new standards group that would include content companies.

Because that content gets pushed out once and stored in a data center near the home, the content itself, as well as the requests for it, only travels as far as the nearest aggregation point, cutting down on traffic across the rest of the network. But there is a dilemma for network architects: can pushing files out to the edge keep solving problems as demand increases for fat content like video, and as we build connected homes and cities that benefit from a more mesh-like structure where devices talk to each other as well as to the public internet?
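A back-of-the-envelope way to picture the edge model: each aggregation point keeps a cache, so a popular show crosses the backbone once and is served locally after that. The sketch below is a toy LRU cache with made-up names, not how any particular CDN is actually built.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU cache standing in for a metro aggregation point."""

    def __init__(self, origin_fetch, capacity=1000):
        self.origin_fetch = origin_fetch  # callable: content_id -> bytes
        self.capacity = capacity          # how many items the edge can hold
        self.store = OrderedDict()

    def get(self, content_id):
        if content_id in self.store:
            # Cache hit: served locally, no traffic past the edge.
            self.store.move_to_end(content_id)
            return self.store[content_id]
        # Cache miss: one trip upstream, then the popular bytes stay local.
        data = self.origin_fetch(content_id)
        self.store[content_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data
```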

Peer to peer

Pushing content closer to the edge works if you are worried about serving a huge population the same stuff. It’s like building a McDonald’s in every town instead of expecting everyone to drive to one of 20 franchise locations across the country.

But the internet isn’t just for serving content. It has always been a two-way communications mechanism, although in the last few years consumers have, well, consumed more traffic than they have created online. That’s changing as more people put up videos, network their homes, and as communities start using networks to share video content, send medical files or run other high-bandwidth applications. In some cases the data is small but sensitive to latency and distance, so sending it back to a central server doesn’t make sense.

That’s why peer-to-peer technologies are still much discussed as a way to rethink the network. Back in 2008, several ISPs and BitTorrent saw the trend of moving video files over the network and attempted to develop a new protocol called P4P that let P2P-shared content stay in-network where possible. Instead of searching for any available node to connect with, file-sharing software looked first for a node nearby on the same network. That cut traffic on networks and costs for everyone.
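The core of that idea fits in a few lines. The real P4P work defined an “iTracker” interface through which ISPs published network maps; the sketch below, with hypothetical field names, captures only the peer-selection bias toward same-network nodes.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    asn: int  # autonomous system number identifying the peer's ISP

def select_peers(candidates: list[Peer], my_asn: int, want: int = 5) -> list[Peer]:
    """Fill a swarm's peer list, preferring nodes on our own ISP's network.

    A classic tracker hands back random peers; the P4P insight was that
    ISP-published locality hints let clients exhaust in-network peers
    before reaching across expensive inter-network links.
    """
    in_network = [p for p in candidates if p.asn == my_asn]
    off_network = [p for p in candidates if p.asn != my_asn]
    return (in_network + off_network)[:want]
```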

Commotion’s community network, as shown in the company’s illustration here, allows neighbors to build an open mesh network and share internet access or locally hosted applications. Image from http://www.commotionwireless.net

Unfortunately, P4P didn’t pan out, in part because the amount of P2P traffic on the networks subsided and the problem P4P addressed in effect solved itself. And while P2P protocols from BitTorrent are still around (and Netflix has even floated using P2P technologies to deliver its traffic), the approach hasn’t taken off. The technology is showing promise, however, in open wireless mesh networks such as Commotion.

Named-data networks

Much as P2P envisions a distributed model of networking at the application layer (you run special software such as BitTorrent or Skype to build the network), there is a class of projects and research networks around the world that envisions taking this concept down into the network itself. Instead of asking servers for the address behind a URL or device, nodes request content by name, and the content itself can be stored anywhere. How content is named and the levels of encryption involved help define the different flavors of these networks.

This class started with PARC’s Content-Centric Networking (which still uses the internet protocol), but has since evolved into a clean-slate design for the internet with newly proposed protocols. The National Science Foundation calls the concept Named Data Networking and has come up with a new protocol and a new design that borrows some elements from the IP network design but is fundamentally about communications between many distributed nodes, as opposed to communication between central nodes and end devices.
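To see how name-based retrieval differs from host-based retrieval, consider a toy model. Real NDN uses pending-interest and forwarding tables plus signed data packets; this sketch (all names hypothetical) shows only the two behaviors that matter here: any node holding a named chunk can answer, and data gets cached along the return path.

```python
class NamedDataNode:
    """Toy content-addressed node: floods interests, caches replies."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.content_store = {}   # content name -> cached data
        self.neighbors = []       # other NamedDataNode objects

    def request(self, content_name, seen=None):
        seen = seen if seen is not None else set()
        if content_name in self.content_store:
            return self.content_store[content_name]  # answered locally
        seen.add(self.node_id)
        for nb in self.neighbors:
            if nb.node_id in seen:
                continue  # don't loop back through visited nodes
            data = nb.request(content_name, seen)
            if data is not None:
                # On-path caching: the next request for this name
                # never has to leave this part of the network.
                self.content_store[content_name] = data
                return data
        return None  # no reachable node holds a copy
```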

The Pursuit project in Europe is an example of such a network design, as is the EU-funded SAIL project. Each of these efforts aims to build distributed networks that could create a more secure and reliable internet, better suited for the billions of devices we’re adding to it.

The internet as a market, not a highway

So far I’ve been talking about the technical aspects of the next-generation internet, but the next two options are more about business models and economics, and would require very little new technology to put into place. The more complicated one is a model of the internet that views it not as a highway with packets whizzing from location to location, but as a trading floor where applications bid for available capacity in real time.


Martin Geddes, a telecoms consultant in the U.K., explained it as a way to meet the needs of many different types of traffic without continually overbuilding networks for certain ones, notably video streaming. He’s solidly in the camp that today’s network design can’t handle the demands of video streaming, but he’s also frustrated that applications aren’t aware of current network conditions and can’t adapt to them. For example, if a broadband connection is full of real-time voice or video traffic, a large operating system download might wait for delivery overnight, when networks see less peak demand.

Or, if the traffic is a priority, the sender or the user pays to get it to the home. The challenge with this bidding process is that it would either require customers to prioritize their traffic (something many could not realistically do, since it demands an understanding of the needs of many different traffic types) or give last-mile ISPs undue influence over setting prices on this trading floor. Given the furor over network neutrality and the lack of a competitive last-mile broadband market, the idea seems a tough sell.
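Mechanically, the trading-floor model is just an auction for capacity. This sketch (the Bid fields and prices are invented for illustration) clears one round greedily by price, which is enough to show how latency-sensitive traffic outbids deferrable bulk transfers at peak times.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    app: str
    mbps: float            # capacity requested
    price_per_mbps: float  # what the sender will pay right now

def clear_capacity_market(bids: list[Bid], capacity_mbps: float) -> dict[str, float]:
    """Greedy single-round auction: the highest price per megabit wins first.

    Deferrable traffic (say, an overnight OS download) simply bids low
    and waits for an off-peak round when capacity is cheap.
    """
    allocation: dict[str, float] = {}
    remaining = capacity_mbps
    for bid in sorted(bids, key=lambda b: b.price_per_mbps, reverse=True):
        if remaining <= 0:
            break
        granted = min(bid.mbps, remaining)
        allocation[bid.app] = granted
        remaining -= granted
    return allocation

# clear_capacity_market([Bid("video-call", 4, 0.10),
#                        Bid("os-update", 50, 0.01)], 20)
# gives the latency-sensitive call its full 4 Mbps first; the update
# takes whatever is left and finishes later.
```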

Go ahead, create a fast lane. For innovation.

Given the discussion about network neutrality in the U.S. and in Europe, this proposal is likely to make some people rage, but it’s a neat way of calling the ISPs’ bluff on the idea that some traffic should be prioritized and that refusing to do so will prevent innovation. Dean Bubley, an analyst with Disruptive Analysis, suggested that regulators allow for prioritization … of new types of traffic.

Photo by Thinkstock/wx-bradwang

So instead of Netflix or Viacom buying faster service, existing internet companies are grandfathered into the best-effort internet we have today, and carriers can sell paid prioritization only for truly new applications. Bubley imagines that an improvement to existing video streams, such as the transition from HD to 4K video, wouldn’t count as a new or innovative service, but a content company or application that translates video into Gujarati on the fly would be substantially different and could get priority.
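That grandfathering test could be written down as a simple policy check. The category list below is invented for illustration; the point is only that priority is purchasable solely for application classes that didn’t exist when the rule took effect.

```python
# Application classes that existed before the rule took effect stay
# on today's best-effort internet (illustrative list, not a standard).
GRANDFATHERED_CLASSES = {"web", "email", "voip", "video-streaming"}

def may_buy_priority(app_class: str) -> bool:
    """True only for genuinely new classes of application.

    An HD-to-4K upgrade is still "video-streaming", so it cannot pay
    for a fast lane; a brand-new class like on-the-fly dubbing could.
    """
    return app_class not in GRANDFATHERED_CLASSES

assert not may_buy_priority("video-streaming")  # 4K is still streaming
assert may_buy_priority("realtime-dubbing")     # new class, may pay
```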

Bubley believes such a model would still leave ISPs providing enough capacity for the services that compel customers to sign up for faster tiers in the first place, while giving telcos what they’re asking for: a new way to make money off their pipes. But it would also force telcos to actually innovate, whether by finding new services that need guaranteed delivery, such as a medical device monitoring system, or by setting pricing schemes that truly encourage innovation.

I’m skeptical that all ISPs could make this jump, but if they can’t, Bubley isn’t concerned. He thinks any such program should sunset after a set period of time, and when it does, it will answer the question of whether neutrality hurts innovation or helps it.

Each of these proposals deals with a different aspect of the internet, from its core architecture to how we pay for it and regulate it. It’s clear that as the internet grows in size and importance, we need to make sure it stays true to the core attributes that made it such a haven for communications and new ideas. We need our future network to scale, and we need it to remain open. The proposals above are by no means exhaustive, but they offer food for thought on some of the big issues facing the internet in the U.S. and abroad.


Images from Wx-bradwang/Thinkstock. Banner image adapted from Hong Li/Thinkstock. Logos adapted from The Noun Project: Castor and Pollux, Antsey Design, Mister Pixel and Bjorn Andersson.

