
Summary:

In this part of our special report on reinventing the internet, a look at how the growth of the internet, in terms of connected devices, content and overall importance, has researchers and analysts searching for new business models and technical ways to improve the network.


For many people the internet is an idea; the cloud, as it were. Or maybe it’s the web, or the apps on their phone. It’s quite likely that as more of our interactions happen online, our entertainment is delivered via the internet and once-unconnected devices are transformed by a web connection, the internet will fade even further from people’s minds as a physical entity, much like we no longer think about voltage unless we’re about to hop on a plane to another country.

But the actual internet is a physical place — thousands of them. When you want to check your email, packets are sent over the coaxial cable, fiber or DSL line from the modem that sits in your home to a box in your neighborhood. From there the request travels to a bigger box containing servers and communications equipment. It might travel still further to a massive aggregation point owned by your internet service provider before your ISP passes it off to one of many other networks located in a data center where ISPs, content companies and transit providers all have network access and servers.
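
You can watch that path yourself with a traceroute. The sketch below is a minimal, assumption-laden version of the technique: it sends UDP probes with increasing time-to-live values and prints the address of each router that answers. It assumes a Unix-like machine with the privileges needed to open a raw ICMP socket (usually root), and the hops it reveals are simply whatever your own ISP's network exposes.

    # A minimal traceroute-style sketch. Assumes a Unix-like host and the
    # privileges needed to open a raw ICMP socket (usually root).
    import socket

    def trace(dest_name, max_hops=30, port=33434, timeout=2.0):
        dest_addr = socket.gethostbyname(dest_name)
        print(f"tracing route to {dest_name} ({dest_addr})")
        for ttl in range(1, max_hops + 1):
            # Raw socket to catch ICMP "time exceeded" replies from routers.
            recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                      socket.getprotobyname("icmp"))
            # Plain UDP socket to send the probe with a deliberately small TTL.
            send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                                      socket.getprotobyname("udp"))
            recv_sock.settimeout(timeout)
            recv_sock.bind(("", port))
            send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            send_sock.sendto(b"", (dest_addr, port))
            try:
                # The router that decrements the TTL to zero identifies itself
                # as the source address of the ICMP reply; that is this hop.
                _, addr = recv_sock.recvfrom(512)
                hop = addr[0]
            except socket.timeout:
                hop = "*"
            finally:
                send_sock.close()
                recv_sock.close()
            print(f"{ttl:2d}  {hop}")
            if hop == dest_addr:
                break

    if __name__ == "__main__":
        trace("example.com")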

In wireless networks the process is similar, only the modem is in your phone and the data is sent as pulses of information inside radio waves, with each megahertz of spectrum able to carry only so many bits. Those are sent to a tower or a small cell, where they make it onto a wired network and travel to those same aggregation points.
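
How many bits a slice of spectrum can carry is bounded by the Shannon-Hartley limit, which depends on channel width and signal quality. The back-of-the-envelope calculation below, using an illustrative 20 MHz channel and an assumed 20 dB signal-to-noise ratio, shows why carriers obsess over both spectrum holdings and cell density.

    # Back-of-the-envelope Shannon-Hartley capacity: C = B * log2(1 + SNR).
    # The 20 MHz channel width and 20 dB SNR are illustrative assumptions.
    import math

    bandwidth_hz = 20e6          # 20 MHz channel
    snr_db = 20                  # assumed signal-to-noise ratio in decibels
    snr_linear = 10 ** (snr_db / 10)

    capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
    print(f"theoretical ceiling: {capacity_bps / 1e6:.1f} Mbps")
    print(f"per MHz of spectrum: {capacity_bps / 1e6 / 20:.1f} Mbps/MHz")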


But wired or wireless, once the traffic leaves your last-mile ISP it may go directly to a network controlled by a Google or a Netflix, or it might pass through several hops, like the data center above, on its way to its final destination. The routing of your traffic is determined by software inside network gear, software running on your end device and software on servers controlled by the companies a consumer requests content from. It’s actually amazing how well it all works.

But there are plenty of people concerned that it might not work for much longer. Between battles over peering, concerns over network neutrality, the changing shape of content and even worries about network resiliency and privacy, more people are looking at the current internet and dreaming of a change that takes into account society’s growing dependence on it.

While the projects below are not a complete list, they illustrate some of the big trends in how people with a stake in the internet are thinking about making it better for the long haul and for future network demands.

Push everything to the edge

The current thinking about adapting the network is really just more of the same: push everything further toward the edge. In this way ISPs deal with the onslaught of video content that’s causing so much trouble during prime time while avoiding any huge shift in how the internet operates. Carriers, content distribution networks like Akamai and even those in the data center sector are big fans of this model, which offers ways to put popular content in the various network aggregation points housed in data centers located in cities — even those deemed second- or third-tier municipalities.

Inside a Google data center. Image courtesy of Google

In many ways, the content caching strategies of companies like Google, Amazon and Netflix with its Open Connect boxes, as well as carrier-hosted CDN efforts, are an extension of this philosophy. New efforts here include an IEEE standard on transparent caching and perhaps a new standards group that would include content companies.

Because that content gets pushed out once and stored in a data center near the home, the content itself, as well as the requests for it, only travels as far as the nearest aggregation point, cutting down on traffic on the rest of the network. But there is a dilemma for network architects: can pushing files out to the edge continue to solve problems as demand increases for fat content like video, and also as we build connected homes and cities that benefit from a more mesh-like structure in which devices talk to each other as well as to the public internet?
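
The logic inside those caching boxes is conceptually simple: keep the most recently requested objects close to subscribers and fetch everything else from the origin. Here is a minimal sketch of that idea, assuming a least-recently-used eviction policy; the capacity, the fetch_from_origin callback and the object names are illustrative stand-ins, not any vendor's actual implementation.

    # A toy edge cache with least-recently-used eviction. The capacity,
    # fetch_from_origin callback and object names are illustrative only.
    from collections import OrderedDict

    class EdgeCache:
        def __init__(self, capacity, fetch_from_origin):
            self.capacity = capacity
            self.fetch_from_origin = fetch_from_origin  # called on a cache miss
            self.store = OrderedDict()                  # name -> object bytes

        def get(self, name):
            if name in self.store:
                # Cache hit: serve locally and mark the object as recently used.
                self.store.move_to_end(name)
                return self.store[name]
            # Cache miss: pull from the origin across the wider network once,
            # then serve later requests from the local aggregation point.
            obj = self.fetch_from_origin(name)
            self.store[name] = obj
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)          # evict the LRU object
            return obj

    # Example usage; the origin is a stand-in for a content provider's servers.
    cache = EdgeCache(capacity=2, fetch_from_origin=lambda name: f"<bytes of {name}>")
    for request in ["show-101", "show-101", "show-102", "show-103", "show-101"]:
        cache.get(request)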

Peer-to-peer

Pushing content closer to the edge works if you are worried about serving a huge population the same stuff. It’s like building thousands of McDonald’s restaurants, one in every town, as opposed to expecting everyone to drive to one of 20 franchise locations across the country.

But the internet isn’t just for serving content. It has always been a two-way communications mechanism, but in the last few years consumers have, well, consumed more traffic than they have created online. That’s changing as more people put up videos, network their homes and communities start to use networks for sharing video content, sending medical files or other high-bandwidth applications. In some cases the data can be small but sensitive to latency and distance, so sending it back to a central server doesn’t make sense.

That’s why peer-to-peer technologies are still much discussed as a way to rethink the network. Back in 2008, several ISPs and BitTorrent saw the trend of moving video files over the network and attempted to develop a new protocol called P4P that allowed P2P-shared content to stay in-network for ISPs where possible. Instead of searching for any available node to connect with, the file-sharing software searched for a nearby node on the same network. This helped cut traffic on networks and costs for everyone.
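
The core of the idea was simple: when a client has a choice of peers, prefer the ones that sit on its own ISP's network so the traffic never crosses an expensive interconnect. The sketch below approximates that preference with a plain IP-prefix match, which is only a rough stand-in for the topology information P4P actually exchanged between ISPs and trackers; the addresses and prefix length are invented for illustration.

    # A rough sketch of locality-aware peer selection in the spirit of P4P:
    # prefer peers whose addresses share a network prefix with the client.
    # Real deployments used topology data supplied by the ISP; the /16
    # prefix match here is just an illustrative approximation.
    import ipaddress

    def rank_peers(client_ip, peer_ips, prefix_len=16):
        client_net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
        def locality(peer_ip):
            # 0 sorts first: the peer is inside the client's network prefix.
            return 0 if ipaddress.ip_address(peer_ip) in client_net else 1
        return sorted(peer_ips, key=locality)

    # Hypothetical addresses purely for illustration.
    peers = ["203.0.113.7", "198.51.100.20", "203.0.113.99"]
    print(rank_peers("203.0.113.42", peers))
    # In-network peers (those sharing the client's /16 prefix) sort first.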

Commotion’s community network, as shown in the company’s illustration here, allows neighbors to build an open mesh network and share internet access or locally hosted applications. Image from www.commotionwireless.net

Unfortunately, P4P didn’t pan out, in part because the amount of P2P traffic on the networks subsided and the problem P4P addressed in effect solved itself. And while P2P protocols from BitTorrent are still around (and even Netflix has threatened to use P2P technologies in delivering its traffic), the approach hasn’t taken off so far. However, the technology is showing promise in the wireless space in open networks such as Commotion.

Named-data networks

Much as P2P envisions a distributed model of networking at the application layer (you run special software such as BitTorrent or Skype to build the network), there is a class of projects and research networks around the world that envisions taking this concept down to the network itself. Instead of asking servers for the address behind a URL or device, nodes on the network are given a name and content is stored everywhere. The way content is named and the levels of encryption involved help define the different types of these networks.

This class started with PARC’s Content-Centric Networking (which still uses the internet protocol) but has since evolved into a clean-slate design for the internet with newly proposed protocols. The National Science Foundation calls the concept Named-Data Networking and has come up with a new protocol and a new design that borrows some elements from the IP network design but is fundamentally about communications among many distributed nodes as opposed to communication between central nodes and end devices.
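
In a named-data design the request carries a content name rather than a destination address, and any node holding a copy can answer. The sketch below is a heavily simplified model of that idea under stated assumptions: names are plain strings, the "network" is just a list of nodes, and forwarding state, signatures and the real NDN packet formats are all left out.

    # A toy model of name-based retrieval: an "interest" names a piece of
    # content, the first node holding a copy answers, and nodes along the
    # way cache the data. Names, nodes and topology are illustrative
    # assumptions; real NDN adds forwarding, signatures and much more.

    class Node:
        def __init__(self, name):
            self.name = name
            self.content_store = {}              # content name -> data

        def publish(self, content_name, data):
            self.content_store[content_name] = data

    def fetch(content_name, path):
        """Walk a path of nodes; the first copy found satisfies the interest,
        and every node earlier on the path caches the data on the way back."""
        for i, node in enumerate(path):
            if content_name in node.content_store:
                data = node.content_store[content_name]
                for upstream in path[:i]:        # opportunistic caching
                    upstream.content_store[content_name] = data
                return data, node.name
        raise KeyError(f"no node on the path holds {content_name!r}")

    home, neighborhood, origin = Node("home"), Node("aggregation"), Node("origin")
    origin.publish("/videos/lecture1/segment3", b"...")
    data, served_by = fetch("/videos/lecture1/segment3", [home, neighborhood, origin])
    print(served_by)   # "origin" the first time
    data, served_by = fetch("/videos/lecture1/segment3", [home, neighborhood, origin])
    print(served_by)   # "home" afterwards, because the data was cached en route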

The Pursuit project in Europe is an example of such a network design, as is the SAIL project funded by the EU. Each of these efforts is aimed at building distributed networks that could create a more secure and reliable network better suited to the billions of devices we’re adding to it.

The internet as a market, not a highway

So far, I’ve been talking about the technical aspects of the next-generation internet, but the next two options are more about business models and economics, and would require very little new technology to put into place. The more complicated of the two is a model of the internet that views it not as a highway with packets whizzing from location to location, but as a trading floor where applications bid for available capacity in real time.


Martin Geddes, a telecoms consultant in the U.K., explained it as a way to meet the needs of many different types of traffic without continuing to overbuild communications networks for certain types of traffic — notably video streaming. He’s solidly in the camp that today’s network design can’t handle the demand of video streaming, but he’s also frustrated that applications aren’t aware of current network conditions and can’t adapt to them. For example, if a broadband connection is full of real-time voice or video traffic, a large operating system download might be able to wait for delivery overnight, when networks experience less demand.

Or, if it’s a priority, the sender or the user pays to get that traffic to the home. The challenge with this bidding process is that it would either require customers to prioritize their traffic (something many would not be able to do, because it requires understanding the needs of a variety of traffic types) or give last-mile ISPs undue influence over setting prices on this trading floor. Given the furor over network neutrality and the lack of a competitive last-mile broadband market, this idea seems a tough sell.
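
To make the trading-floor idea a little more concrete, here is a toy sketch of what such a capacity auction might look like. The flow names, bids and the 100 Mbps link are invented for illustration and reflect only the general concept Geddes describes, not any actual product or pricing scheme.

    # A toy capacity auction in the spirit of the "trading floor" model:
    # applications bid for slices of a congested link, the highest bids win,
    # and delay-tolerant traffic that loses the auction waits for off-peak.
    # Flow names, bids and the 100 Mbps link are illustrative assumptions.

    link_capacity_mbps = 100

    bids = [
        # (application, bandwidth wanted in Mbps, bid in cents per minute)
        ("video call",    5,  8.0),
        ("4K stream",    25,  3.0),
        ("OS update",    40,  0.1),   # happy to wait for the overnight lull
        ("cloud backup", 50,  0.2),
    ]

    def allocate(bids, capacity):
        admitted, deferred, remaining = [], [], capacity
        for app, mbps, price in sorted(bids, key=lambda b: b[2], reverse=True):
            if mbps <= remaining:
                admitted.append(app)
                remaining -= mbps
            else:
                deferred.append(app)  # retry when the network is quieter
        return admitted, deferred

    admitted, deferred = allocate(bids, link_capacity_mbps)
    print("admitted now:", admitted)          # latency-sensitive, higher bids
    print("deferred to off-peak:", deferred)  # the big OS download waits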

Go ahead, create a fast lane. For innovation.

Given the discussion about network neutrality in the U.S. and in Europe, this proposal is likely to cause some people to rage, but it’s a neat way of calling the ISPs’ bluff on the idea that some traffic should be prioritized and that refusing to do so will prevent innovation. Dean Bubley, an analyst with Disruptive Analysis, suggested that regulators allow for prioritization … of new types of traffic.

Photo by Thinkstock/wx-bradwang

So instead of Netflix or Viacom buying faster service, existing internet companies would be grandfathered into the best-effort internet we have today, and carriers could allow paid prioritization only for truly new applications. Bubley imagines that an improvement to existing video streams, such as the transition from HD to 4K video, wouldn’t count as a new or innovative service, but an application that translates video content into Gujarati on the fly would be substantially different and could get priority.

Bubley feels that such a model would still leave ISPs providing enough capacity for the services that compel customers to sign up for faster tiers in the first place, while giving telcos what they are asking for — a new way to make money off their pipes. But it also forces telcos to actually innovate, either by finding new services that need guaranteed delivery, such as a medical-device monitoring system, or by setting pricing schemes that truly encourage innovation.
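
As a thought experiment, Bubley's rule can be expressed as a tiny classification policy: anything that predates a cutoff, or merely upgrades an existing service, stays best-effort; anything genuinely new may negotiate priority. The sketch below is purely illustrative; the cutoff date, the service list and the queue names are invented, not anything carriers or regulators have actually specified.

    # A toy classifier for the "fast lanes for new services only" idea.
    # The cutoff date, service registry and queue names are invented here
    # purely to illustrate the policy, not any real carrier configuration.
    from datetime import date

    GRANDFATHER_CUTOFF = date(2014, 1, 1)   # assumed: anything older is best-effort

    services = [
        # (name, launch date, is it a genuinely new class of application?)
        ("existing HD video stream",    date(2008, 1, 1), False),
        ("same stream upgraded to 4K",  date(2015, 6, 1), False),  # same service, better bitrate
        ("on-the-fly Gujarati dubbing", date(2015, 6, 1), True),
        ("medical device monitoring",   date(2016, 3, 1), True),
    ]

    def queue_for(launched, genuinely_new):
        if launched < GRANDFATHER_CUTOFF or not genuinely_new:
            return "best-effort"            # grandfathered into today's internet
        return "paid-priority-eligible"     # may negotiate guaranteed delivery

    for name, launched, new in services:
        print(f"{name:30s} -> {queue_for(launched, new)}")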

I’m skeptical that all ISPs would be able to make this jump, but if they can’t, Bubley isn’t concerned. He thinks any such program should sunset after a set period of time, and when it does, it will answer the question of whether neutrality hurts innovation or helps it.

Each of these proposals deals with a different aspect of the internet, from its core architecture to how we pay for it and regulate it. It’s clear that as the internet grows in size and importance, we need to make sure it remains true to the core attributes that made it such a haven for communications and new ideas. We need our future network to scale, and we need it to remain open. The proposals above are by no means exhaustive, but they offer food for thought on some of the big issues facing the internet in the U.S. and abroad.


Images from Wx-bradwang/Thinkstock. Banner image adapted from Hong Li/Thinkstock. Logos adapted from The Noun Project: Castor and Pollux, Antsey Design, Mister Pixel and Bjorn Andersson.


  1. Stacey, you are correct in stating that intelligence needs to move to the edge, particularly if 4K VoD, 2-way HD collaboration, seamless mobile BB, and the internet of things are to develop rapidly in the next 3-5 years. All need much, much more capacity but the latter 3 need upstream capacity in particular. Also latency needs to drop dramatically, while QoS, security and redundancy need to increase and be monitored in real-time.

    Dean’s idea won’t work. Too many examples of monopoly edge access providers investing in the revenue streams they can control (fast lanes, or bundled packages) and letting the standard lanes wither on the vine. It’s simply not a long-term solution.

    The solution is this. We need more open access in layer 2 (intelligence at the edge) that will drive and scale multiple fiber/hetnet combos. Multi-modal competition can work in the last couple hundred feet given today’s technology. In return for opening up the lower (middle and last mile access/transit) layers we can have “balanced settlements” in the middle (control) layers that send price signals and clear marginal supply and demand both north-south (apps/content and infrastructure) and east-west (between networks or in the information stack).

    The IP stack does not contain price signals. Few seem to understand the implications of embracing a bill-and-keep model, which results in stagnation and perpetuates monopoly as it limits new entrants. It wasn’t “settlement-free” peering that made the internet scale; it was the low cost of transit, peering components and, importantly, flat-rate dial-up at the edge (a trojan horse the Bells built themselves). Due to divestiture in 1983, the US had this commercial foundation 10 years ahead of the rest of the world and that’s why the internet scaled rapidly here. The volume and nature of the traffic was so small it didn’t require mediation or settlements at the core (remember it also had its origins as a clubby, private peering stack).

    Competitive market-driven “settlements” will be balanced in that they recognize value sharing and provide price signals and incentives to ensure network effect occurs rapidly and simultaneously (regardless of what economic or geographic strata the end-user resides in). Markets will decide optimal originating or terminating settlements and connection costs to clear supply and demand. Pricing will reflect marginal, not average costs. (The 10GigE ports the last mile monopoly access providers are not buying to keep up with demand cost 1/4 of one cent per end-user!) Then large, managed-service VPNs that buy or reserve edge bandwidth to many end-points on multiple networks can develop for a large range of services and applications. It will be an OTT model on steroids, but one which ensures the edge and bottom layers are upgraded rapidly (just as they are in the WAN today) by competitive market forces.

    The resulting revenue model, many many times bigger than today’s model when you factor in the above 4 trends, will probably be:
    40% centrally procured or subsidized managed services
    30% advertising
    30% edge subscription

    And everything will be 99% cheaper on a per bit basis. Only as in 1983 with voice, or 1990 with data, or 1996 with wireless, or 2007 with smartphone/offload, no one can really imagine this.

    Today’s competitive WAN/internet costs for a voice minute are $0.0000004; declining 20-40% annually. The monopoly MAN/last mile costs are $0.001; declining only 5-10% annually. This has been going on for 10 years since Brand-X and (un)divestiture; hence the spread or arbitrage. Where there is competition, like GoogleFiberKC (synchronous gbs for $70), the cost is $0.00001. And that is early days, without the benefit of scale from SMB, enterprise, wireless offload and backhaul. The point being that both absolute and relative costs between WAN and MAN are not very different (maybe 1, at most 2 decimals) when fiber enters the picture and the diversity of demand at the edge also gets scaled.

    Net neutrality, interconnect, peering, etc. are all related. Comcast is trying to push the WAN/MAN demarc towards the core just like AT&T used interconnect exclusion zones 100 years ago. Same story, different terms. The issue (since 1996, and even earlier) is how to introduce competition in the last mile ensuring MAN costs are driven down and how competitive markets clear marginal consumption efficiently, which they clearly have demonstrated they can. In fact net neutrality as a concept is both a contrived notion and a farce, invented by those who choose to ignore the history of the 1980s-90s. Everyone is “just talking” and not using data and objective analytical frameworks that can be consistently applied to all types of information, applications, networks and users over the past 170 years.
