
Summary:

Exponential increases in Internet usage by consumers, enterprises, and research and education communities are straining our current broadband infrastructure. We ignore these demands at our economic peril.


Much like the recurring debate over whether raising the US federal debt ceiling is the right choice for the country, the networking industry all too regularly debates whether the need for faster data connections is real. Broadband's significant role as an economic driver deserves to be elevated to a similar level of attention: progress and innovation are stifled when network capacity is constrained, and that doesn't bode well for consumers, businesses, research communities, or the economy as a whole.

High-speed, high-capacity networks are critical to our future because they power the world's Internet and digital economy. For the most part, networks based on 100G technology have become mainstream to address current demands, and this represents a giant leap forward from traditional network architectures in both design and scale. However, it won't be long before we need to go beyond 100G, and even 400G, and start building 1 Terabit networks.

Connected consumers

There is no doubt that demand on our networks will continue to grow. We've quickly become a "connected" world, and this is only the beginning. During the past decade, the number of Internet users worldwide grew almost sevenfold, from 360 million in 2000 to 2.4 billion in 2012, according to Internet World Stats. To ensure those 2.4 billion users have a quality experience, one might expect the underlying communication infrastructure to expand by the same factor, assuming people today use the Internet the same way we did 12 years ago.

But we all know that isn't the case.

How we use the Internet has radically changed. Facebook made its debut in 2004; YouTube came online in 2005; Netflix started streaming video in 2007; and Hulu.com joined the video streaming world in 2008. These services offer only a glimpse into how consumers' Internet consumption has shifted from simple text emails to rich media and video streaming. Add the dramatic growth in wireless access on top of existing wired connections, and we're now placing roughly a 300x (30,000 percent) capacity increase per Internet user on the underlying communications infrastructure.
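Putting those two multipliers together shows the scale of the problem. A quick back-of-envelope sketch, using only the figures cited above:

```python
# Back-of-envelope: combine the article's two growth figures.
users_2000 = 360e6         # Internet users in 2000 (Internet World Stats)
users_2012 = 2.4e9         # Internet users in 2012
per_user_multiplier = 300  # ~300x capacity demand per user, per the article

user_growth = users_2012 / users_2000  # ~6.7x more users
aggregate_growth = user_growth * per_user_multiplier

print(f"User growth: {user_growth:.1f}x")
print(f"Aggregate demand: ~{aggregate_growth:,.0f}x the traffic of 2000")
# User growth: 6.7x
# Aggregate demand: ~2,000x the traffic of 2000
```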

Clearly, the strain this places on legacy networks built for simple connectivity is enormous.

For example, according to our estimates, if only 10 percent of New York City's 8 million people wanted to stream a movie at the same time, it would require an infrastructure capacity of 1.6 Terabits per second. This type of demand would overload many of today's networks, and meeting it with existing technology can be quite costly. However, failure to provide that capacity would mean lower-quality playback, with pixelation, stuttering, and pauses as the network tries to keep up with demand, and that is something today's consumers find unacceptable.
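A minimal sketch of the arithmetic behind that estimate; the roughly 2 Mbps per-stream rate is my assumption, chosen because it is the rate implied by the article's figures:

```python
# Reconstructing the 1.6 Tbps estimate. The ~2 Mbps per-stream rate is
# an assumption (a typical standard-definition stream), not a figure
# stated in the article.
population = 8_000_000
concurrent_share = 0.10  # 10 percent streaming at the same time
stream_bps = 2_000_000   # assumed ~2 Mbps per movie stream

streams = population * concurrent_share
total_tbps = streams * stream_bps / 1e12
print(f"{streams:,.0f} streams x 2 Mbps = {total_tbps:.1f} Tbps")
# 800,000 streams x 2 Mbps = 1.6 Tbps
```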

The same demands, if not greater, are felt on the wireless side. The use of mobile devices (phones, tablets, eReaders, laptops, etc.) has been nothing short of explosive in the past decade. In fact, the annual worldwide total of mobile devices shipped is estimated to reach more than 2 billion this year, according to multiple industry sources. At that pace, mobile devices will outnumber humans by 2017, according to CCS Insight. Combine that number of devices with a growing variety of rich media content and we clearly have a dilemma.

Case in point: Recently, the VP of Information Technology for the Denver Broncos noted that, for the first time, upload traffic at games this year often eclipsed download traffic. The likely culprits: "selfies" and short videos of great plays. Recognizing this evolution of the fan experience, the San Francisco 49ers are installing dual 10 Gbps Internet connections at their new stadium next year to keep pace with demand.
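For a sense of scale, here is a rough sketch of what those links buy per fan. The crowd size is my assumption (roughly the new stadium's reported seating capacity), not a figure from the article:

```python
# What dual 10 Gbps uplinks buy per fan. The ~68,500 crowd figure is an
# assumption, not a number cited in the article.
total_bps = 2 * 10e9  # two 10 Gbps connections
fans = 68_500
per_fan_kbps = total_bps / fans / 1e3
print(f"~{per_fan_kbps:.0f} kbps per fan if everyone transmits at once")
# ~292 kbps per fan if everyone transmits at once
```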

Simply put: higher speeds and greater network capacity are required to ensure the high-quality end-user experience that consumers have come to expect.

Demands on the enterprise

Enterprises are no different. It's well established that demand for enterprise network capacity is growing thanks to IT virtualization efforts and corporate use of cloud computing, mobile, video, and bring-your-own-device (BYOD) policies. More bandwidth is required within data centers, between company data centers, and from data centers to the cloud to facilitate daily operations as well as data backup and disaster recovery efforts.

For example, healthcare institutions face the challenge of managing the explosive growth of medical data created by the digital transformation of paper records into digital files. Electronic Medical Records (EMRs) are increasingly moving online, and the images produced by Picture Archiving and Communication Systems (PACS) are extremely high resolution, creating large files that slow down a typical network. This shift is driving growth in data storage and disaster-preparedness requirements that is unprecedented in healthcare IT.

In fact, a 2012 Ponemon Institute survey found that 30 percent of the world's data storage resides in healthcare, with 45 percent of respondents saying their facilities were planning a storage upgrade of one terabyte or more in the next 12 months. And where storage demands increase, so too do bandwidth requirements.
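To translate that storage figure into bandwidth terms, here is a rough sketch of how long replicating a one-terabyte upgrade off-site would take at different sustained link speeds, assuming full link utilization (real protocol overhead would stretch these times):

```python
# Replication-window sketch: time to copy 1 TB off-site at different
# sustained link speeds. Assumes the link is fully utilized.
payload_bits = 1e12 * 8  # 1 terabyte expressed in bits
for name, bps in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    hours = payload_bits / bps / 3600
    print(f"{name}: {hours:.1f} hours")
# 100 Mbps: 22.2 hours
# 1 Gbps: 2.2 hours
# 10 Gbps: 0.2 hours
```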

Research and education connectivity

Research and Education (R&E) communities around the world work on projects that also require massive amounts of bandwidth. Consider the task of modeling the human brain, which has roughly 100 billion neurons, each with between 1,000 and 10,000 connections. Add it all up and this equates to at least 100 trillion connections in total. Performing a simulation requires high-performance computational infrastructure and database technologies, and those place a tremendous capacity burden on the network.
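The arithmetic behind that connection count, as a quick sketch:

```python
# The connection count behind the article's figure: 100 billion neurons
# with 1,000 to 10,000 connections each.
neurons = 100e9
for per_neuron in (1_000, 10_000):
    total = neurons * per_neuron
    print(f"{per_neuron:,} per neuron -> {total:.0e} connections")
# 1,000 per neuron -> 1e+14 connections   (100 trillion)
# 10,000 per neuron -> 1e+15 connections  (1 quadrillion)
```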

To illustrate: in 2010, the Canadian Brain Imaging Network (CBRAIN) demonstrated how a 100 Gigabit per second network link can support research efforts. Using the link, researchers transferred 3D and 4D video images between the CANARIE research network in Canada and the StarLight network in Chicago. The demonstration also showed how remote doctors and students can use the network to learn new methods and to observe and interact with live surgeries through a lifelike HD virtual experience.
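To see why the jump from 1G to 100G matters for this kind of work, here is an illustrative sketch of transfer times; the dataset size is hypothetical, not a figure from the CBRAIN demonstration:

```python
# Illustrative only: moving a hypothetical 10 TB imaging dataset at
# 1 Gbps versus 100 Gbps, assuming full sustained utilization.
dataset_bits = 10e12 * 8  # 10 terabytes expressed in bits
for name, bps in [("1 Gbps", 1e9), ("100 Gbps", 100e9)]:
    print(f"{name}: {dataset_bits / bps / 3600:.1f} hours")
# 1 Gbps: 22.2 hours
# 100 Gbps: 0.2 hours
```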

The network matters

Now more than ever, the underlying communication networks matter, and the need for speed is immediate. Digital demands on the underlying communication infrastructure from consumers, businesses, and the R&E communities are accelerating at unprecedented rates. Investing in advanced networks to support these increasing demands is vital: it sustains consumer appetites for new and better services, which drive corporate revenues and spur innovation through research.

Steve Alexander is senior vice president and CTO at Ciena.

Comments

  1. Agree with you: if the IoT is going to work, bandwidth and access will be the key! The question is whether the entire infrastructure itself is equipped to facilitate it and, if not, what actions are being taken?

  2. Interesting! Loaded with information. I might try condensing some of this into point form, for better reader reception. Asking people to act on something is like asking snails to pole vault! You have to get them fired up first. Just the facts leaves it kind of dry and emotionless. Not that facts aren't good things, to an extent. I hope you won't take this wrong. If you choose to take my advice, I'm positive your readership will increase…

  3. Globally, over the next several years, activity on the Internet will change, driven by a growing majority of users who want a system that supports content creation, not just content consumption.

    This will require massive adaptation of last-mile infrastructure in order to permit ultra-fast upload capacity for everyone. The Internet will need to become a two-way superhighway for all users.

    Great article.

    The optical fibre interconnects between continents, cities and regions are symmetric high-speed networks in which data travels fast in all directions; within these networks, one entity's uploads are another entity's downloads. Inside the network, no differentiation is made regarding the directionality of data; the systems are agnostic to source and destination.

    However, in the current context of most end-user connections, the short-sighted telco industry has built out the network endpoints based on a flawed mindset and worldview that always assumed data needs would be asymmetric, and that people would always want to download far more data than they would ever need to upload.

    In that worldview, the assumption was that only commercial entities need high-speed uploads as well as high-speed downloads; business infrastructure is therefore offered as a symmetric connection at a massive extra cost for the upload capacity and network capability.
    In much the same way that telcos overcharge customers for global roaming services, which cost telcos only pennies extra at the carrier-to-carrier wholesale level but have become highway robbery on most customers' retail bills, so too do telcos rob business customers for high upload speeds and the network support for such speeds. All data is just data once it is moving inside the network; additional charges for better upload capacity are a profiteering red herring.

    As the global demand for fast, ubiquitous, reliable Internet connectivity increases, there will be no differentiation between the high-speed upload demands of domestic consumers and those of commercial consumers using Internet services. The telcos won't understand this adaptation, and initially they will do everything they can to limit the availability of faster upload capacity to domestic customers, because the overcharging fraud will become self-evident very quickly when a business user compares the fee structure for domestic symmetric optical fibre service with the commercial fees that have been forced on the marketplace to date. Given that an individual business owner paying so much extra for current commercial connectivity and upload capacity will most likely also, eventually, have a domestic connection with the same rated upload capacity and allowance, billed at a far lower rate, it will not take long for customers to challenge telcos to explain the criminal billing disparity.

    Eventually these cartels of deception will become self-evident. The fact that Google Fiber is available as a one-gigabit symmetric service, with unmetered data, for a flat monthly fee ought already to be raising questions about how traditional telcos can have corruptly billed so much for commercial connectivity to date. Consider also that such high commercial telco data fees are ultimately passed on to the consumers of the goods and services provided by the business community. Telcos are therefore indirectly robbing us all, constantly, for something that costs them very little to provide at wholesale.

    Years ago, when only dial-up connectivity was available, I had the misfortune to be connected through an ISP that billed both for time connected AND for data downloaded and uploaded. A very expensive proposition.

    The individual that owned the ISP at that time is now a politician and is the minister for communications in the country I live in. This same person is doing everything in his power to stop the country I live in from upgrading any of its Internet infrastructure, because he and his telco associates know that their commercial service cash cow is about to stop laying golden eggs if all our last mile copper infrastructure gets upgraded to a ubiquitous fibre to the premises network.

    We live in interesting times, but with lunatics in charge.

  4. It happened several weeks ago, when my laptop went out of order, that I realized the power and the place the Internet has gained in our lives, and having read many articles, I must admit this one is great. Nowadays no sphere of our lives can be imagined without it. And the same question occurred to me: is the whole World Wide Web really equipped to facilitate the growing demand for it, and will it cope?

  5. Thank you very much for a great read. Depending on how exactly you define "mobile device", I am pretty sure that mobile devices already outnumber the total human population. The number of mobile subscriptions definitely does already (due to multiple-SIM ownership). The GSMA estimates that there are 3.3 billion unique mobile phone users. It would only take a small share of these owning more than one ACTIVE mobile device (compared to the four or five mobiles, laptops, and iPads you see in developed countries) for mobile devices to outnumber humans; no need to wait until 2017!

  6. For this sort of proposition to take hold, two things need to happen:

    1. Telecoms and cable operators need to realise that the "maximum profits yesterday" business model is not sustainable in the long term, and that straight market surveys are not adequate as standalone research. Ironically, the Internet has conditioned many to believe that surveys are junk, useful only as trash-can liners.

    2. The prevailing North American belief that overbuilding is bad needs to go away. GAS does not create true competition, as the smaller companies still depend on Ma Bell and the cablecos for infrastructure, and the monopolies have proven more than once that they don't care about providing good service at a reasonable price with no throttling or overage fees.
