A decade ago the internet had about 1.4 terabits per second of global capacity while today it has 77 Tbps. But as the internet gets bigger, the way traffic moves back and forth across the “series of tubes” that make up the internet is changing. As a result of the growth in internet exchange points around the world and more people in more countries getting online, the internet is becoming truly global.
Instead of massive streams of data moving back and forth across entire networks each time people request a web page, a video or a digital download, data is getting sent to a content delivery network and kept at the edge of the network. Thus, when it’s called up by a user, it doesn’t have as far to go. But there are two significant things that are changing how the internet is “shaped,” for lack of a better term.
First, the growth of Internet Exchange Points (IXPs) and caches means traffic patterns look more like a river flowing downhill to a reservoir than like millions of creeks spreading out to feed each user. Internet exchange points are giant data-center-like buildings where different networks connect and exchange traffic. Content can be cached in local IXPs, or even further out at the edge of the network in specific ISPs' central offices.
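To make the "reservoir" idea concrete, here is a minimal sketch of why an edge cache shortens the path most requests travel. All the names here (`EdgeCache`, the toy `origin` dict) are hypothetical illustrations, not any real CDN's API:

```python
# Toy illustration of edge caching: only the first request for a piece of
# content crosses the whole network to the origin; every later request is
# served from a copy kept near the user.

class EdgeCache:
    """A cache sitting near users, e.g. in an IXP or an ISP's central office."""

    def __init__(self, origin):
        self.origin = origin      # the far-away source of truth
        self.store = {}           # locally cached copies
        self.origin_fetches = 0   # how often we had to go the long way

    def get(self, url):
        if url not in self.store:
            # Cache miss: traffic flows across the entire network to the origin.
            self.store[url] = self.origin[url]
            self.origin_fetches += 1
        # Cache hit: the content is already at the edge, close to the user.
        return self.store[url]

origin = {"/video.mp4": b"...bytes..."}
cache = EdgeCache(origin)

# A thousand viewers request the same video; only the first request travels
# back to the origin, so the long-haul links carry it exactly once.
for _ in range(1000):
    cache.get("/video.mp4")

print(cache.origin_fetches)  # 1
```

The same logic, scaled up across thousands of IXPs and ISP offices, is what turns "millions of creeks" into a handful of rivers feeding local reservoirs.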
Second, the growth of broadband access in the rest of the world means that places like Latin America and Africa, which used to depend on getting most of their bandwidth served from U.S. or European providers, are gradually beefing up their own supply of internet exchange points. They get content reservoirs too.
More content, but more caching as well.
Edge caching has been happening for years with content such as movies and graphic-rich web pages, but as the basic delivery of bits became commoditized, players like Akamai and Limelight, as well as newer companies like Edgecast and Fast.ly, sprang up to deliver newer types of content. Now even Facebook is getting in on edge caching, joining Google, which has had edge network servers for a couple of years.
In a paper released yesterday, UK analyst firm Analysys Mason estimates that 98 percent of internet traffic now consists of content that can be stored on servers. This combined with deeper penetration of IXPs and caching means that the way traffic flows across networks is changing too. The paper was written to persuade governments that the proposed ITU regulatory changes would hinder the growth of the web, but the report is well worth reading as a way of understanding how the web has changed over time.
Those conclusions are also backed up by similar analysis from Craig Labovitz, who documented that roughly 45 percent of internet traffic today is content from CDNs. That analysis emphasized, however, how few companies control web traffic, while the Analysys Mason report focused on how deeply the internet has penetrated different areas of the world.
As an example, the Analysys report takes a close look at how connectivity has changed for Africa:
While in 1999, 70% of bandwidth from Africa went to the US, by 2011 this had fallen to just a few percent, and nearly 90% went to Europe. This does not mean that over time Africans began to rely almost exclusively on European content, but rather that much of the content originally from the US began to be stored on servers in Europe as providers began to build out their networks. This shows how traffic can shift in response to changes in bandwidth costs and local conditions, as Europe liberalized its telecom networks and IXPs developed to host the content, and demonstrates how in future similar shifts could localize traffic in Africa to further reduce latency and costs.
To bolster this example, the Analysys report notes that bandwidth to the U.S. has fallen from over 90 percent of total international connectivity in 1999 to just over 40 percent in 2011. And as the internet becomes far more global and content spends much of its time at the edge, it changes the way we should think about and regulate the web.
The internet is like a cockroach.
The report is focused on the looming ITU regulations, but the key point is one that was raised time and time again during the worries over the SOPA and PIPA legislation in the U.S.: the internet has no borders, and governments must recognize that. It's like our monetary system, our food supply and myriad other complex ecosystems we depend on for modern life. That's why we're seeing a rise of treaties and international bodies attempting to create rules governing these systems, because regulating the web in the U.S. alone is like trying to solve a cockroach infestation by fogging a single apartment in a multitenant building.
As the internet has advanced, it’s become exactly what it was supposed to: An interconnected series of networks that have organically grown to meet demand at the edge. Like cockroaches, it can survive in hostile conditions. But unlike roaches, it’s something most people want in their lives, so news of its growing resiliency and localization should be good news.