
Summary:

In the net neutrality debate, Internet Service Providers talk about charging content providers for prioritization so they can invest in improving infrastructure. But placing a price on prioritizing content creates an inherent disincentive to expand. Professors Hsing Cheng, Shubho Bandyopadhyay and Hong Guo elaborate.

Traffic Jam

In the net neutrality debate, Internet Service Providers like AT&T and Verizon have said they need to charge content providers for prioritization so they can invest in improving infrastructure: faster Internet service for all, they say.

But placing a price on prioritizing content creates an inherent disincentive to expand infrastructure. ISPs would profit from a congested Internet in which some content providers would be more than willing to pay an additional fee for faster delivery to users. Content providers like the New York Times and Google would have little choice but to fork it over to get their information to end users. But end users would be unlikely to see the promised upgrades in speed. Those are some of the results of research we conducted on the Internet market.

Despite the fierce back-and-forth on net neutrality, there is a surprising lack of rigorous economic analysis on the topic. To change that, we built a game-theoretic economic model to address this question: Do ISPs have more incentive to expand their infrastructure capacity when net neutrality is abolished?

This is a key claim, used widely by ISPs in arguing against maintaining a net-neutral Internet. The money from fees levied on content providers, they say, would be an incentive to improve and expand infrastructure. By this argument, web surfers gain access to a faster Internet.

But our analysis shows that if net neutrality were abolished, ISPs would actually have less incentive to expand infrastructure.

Here is the intuition behind this result: Think of any road or highway you hate to drive on during rush hours. Say, I-5 in Seattle or the 495 loop in Washington, D.C. The highway is like the Internet, and the individual cars are the packets of data. The ISP is essentially the gatekeeper that controls the flow of cars on the highway.

If the ISP is allowed to snatch any car from the back of a very long line and put it in front of everybody else when the driver of that car pays a “priority delivery fee,” would the ISP have an incentive to keep the road congested, or to expand the road capacity?

In this scenario, ISPs profit more when the roads are congested — if traffic is cruising, no one would feel the need to pay for faster service.

Currently, ISPs earn profits from attracting customers — mostly end users — using their computers for things like blogging, tweeting, and downloading music and movies. For these people, speed is an asset they might be willing to pay for. That gives ISPs motivation to improve their service and better compete for users.

But in a non-neutral Internet, the dynamic would change. ISPs would be able to strike deals to give certain Web sites or services priority in reaching users. For sites and services that pay up, there’ll be less waiting when the Internet’s information superhighway gets jammed — their pages will load faster.  Those who don’t pay will be essentially forced to sit in traffic.

To see how ISPs and content providers might act under these proposed circumstances, we developed a model that describes the interactions of an ISP, multiple content providers and end users. We examined how content providers, ISPs, and consumers would fare under both the neutral and non-neutral regimes. The most unambiguous finding from the model is that the incentive for ISPs to invest in infrastructure is higher under the neutral regime than under the alternative. This is the case because the non-neutral regime allows ISPs to profit from greater congestion, undermining their return on infrastructure expansion.
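The intuition can be sketched numerically. The snippet below is our own toy illustration, not the authors' published game-theoretic model: it assumes a simple M/M/1-style delay formula, a priority fee proportional to the delay a content provider avoids, and neutral-regime revenue that rises with the speed the ISP can offer. All parameter values are made up.

```python
# Toy illustration only (not the authors' actual model).
# Hypothetical assumptions: M/M/1-style delay; willingness to pay for priority
# proportional to delay avoided; neutral-regime revenue rising with speed.

def average_delay(capacity, traffic):
    """Mean delay in an M/M/1-style queue; blows up as the link saturates."""
    return float("inf") if traffic >= capacity else 1.0 / (capacity - traffic)

def priority_fee_revenue(capacity, traffic, value_of_time=100.0):
    """Non-neutral regime (hypothetical): the fee the ISP can extract for
    letting traffic jump the queue shrinks as congestion eases."""
    return value_of_time * average_delay(capacity, traffic)

def subscription_revenue(capacity, traffic, price_per_speed=5.0):
    """Neutral regime (hypothetical): subscribers pay for a faster network,
    so revenue rises as delay falls."""
    return price_per_speed / average_delay(capacity, traffic)

traffic = 10.0  # offered load, arbitrary units
print(f"{'capacity':>9} {'delay':>7} {'priority rev':>13} {'neutral rev':>12}")
for capacity in (11.0, 12.0, 15.0, 20.0):
    print(f"{capacity:9.1f} {average_delay(capacity, traffic):7.3f} "
          f"{priority_fee_revenue(capacity, traffic):13.1f} "
          f"{subscription_revenue(capacity, traffic):12.1f}")
```

With these made-up numbers, every unit of added capacity erodes the fee a non-neutral ISP could charge for priority, while increasing what a neutral ISP can earn from offering faster service. That is the disincentive the article describes.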

Without net neutrality, ISPs will likely be better off and content providers worse off. This finding mirrors the reality of the debate, in which the two sides have squared off against each other.

If the goal of public policy is to expand broadband availability and reduce congestion, decision-makers should look beyond the immediate winners and losers and focus on the long-term consequences of their choices. Eliminating net neutrality will put a damper on investment in the Internet infrastructure that is likely to power a great deal of future innovation and growth — not exactly a recipe for maintaining the United States’ position as the global technological and economic leader.

Hsing “Kenny” Cheng and Shubho Bandyopadhyay are professors at the University of Florida, and Hong Guo is a professor at the University of Notre Dame.

Image courtesy of Flickr user epSos.de


  1. Exactly! This post is right on. Necessity is the mother of invention and ISP network traffic shaping takes the necessity of network throughput expansion away. I posted something similar just the other day: http://techonblogger.ward.pro/2011/11/anti-net-neutrality-resolution-is.html

  2. The overriding goal should be to keep the government out of regulating the internet. Imagine if the government regulated computers back in the 90s? We’d be on IBM PCs with massive arguments on how to make things better – because it would suck. Think about healthcare – the prices are high because of heavy government involvement. Oh, except for procedures like Lasik that pretty much escape the price controls.

    1. Your prejudices are clear enough. Not the most convincing argument, friend.

  3. Instead of speculating, let’s look at the track record of industries where the FCC controls access: Small competitors get squeezed out by the regs, competition and thus innovation go down, prices go up. Europe is already experiencing the downward curve of this cycle after institutionalizing access.
    The ridiculous theory expounded above ignores cycles and history, thus it is a complete waste of time. Really need to see Om prove he has not sold out to Google and the FCC by posting some articles discussing the downside of NN. Expect the Big Media players whose profits will be protected by NN to fully endorse this article.

    1. Alas, this article is yet another reminder that Om *has* sold out to Google. If its arguments held water, then FedEx and UPS would be slowing package delivery to force users to buy expedited service, when in reality they are highly competitive and now offer better tracking than ever on ground as well as air shipments. But telling the truth would not promote the regulations Google wants to cement its monopoly and forestall competition.

      1. FedEx and UPS do not have exclusive access to wireless spectrum that is necessary for many people to access the internet. FedEx and UPS do not require that municipalities grant them exclusive rights to install wiring. Any company that wants to buy a fleet of trucks and planes can compete with FedEx and UPS, but that can not be said for anyone who wants to be an ISP. In fact, several states have laws restricting the rights of towns to offer broadband service to their residents.

        There is nothing wrong with Telcos offering to host a server or even provide a pipe to the telco’s internet switch, and giving that pipe equal priority to all of the other traffic entering the switch. But when they are allowed to give priority to the traffic of their customers over other traffic traversing the Internet, then the model for the internet breaks down.

        If you want to use transportation as an analogy for net neutrality, imagine if all roads were toll roads, and private entities were allowed to set and collect tolls on all interchanges. The amount of commerce passing through the interchanges would drop, while the interchange owners increased their profits at the expense of the rest of the economy. The interchange operators would have to decide whether to invest in expanding the interchange capacity or just increase the prices of the interchange. Guess which they would choose?

        The internet is a network of networks. Everyone using it, whether it is Google or Amazon or Facebook or you, is already paying for access to it. Nobody is getting a free ride, and allowing the access providers to have control over how data flows through the interchanges (this is the net neutrality issue) will diminish the value of the internet, and the overall economy that depends on it.

  4. Another way to say it: the prescription offered to fix this perceived problem of stalled innovation will itself slow innovation down.

    …just like the more government “help” pushed into the inner cities seems to make things worse.

  5. Well done. This is the best defense I have heard for Net Neutrality.

    1. So-called “network neutrality” regulations (in truth, they are not in any way neutral) are indefensible. See my testimony before Congress at http://www.brettglass.com/testimony.pdf

      1. You know what is missing here? An explanation for why ISPs should be allowed to create their own Amazon, Netflix or Google services, and THEN DISCRIMINATE AGAINST THEIR COMPETITION.

  6. Rob (Bob) Wilcox Sunday, November 13, 2011

    I have worked in the network quality of service area for many years. The term “network neutrality” contains at least two ideas: neutrality of sites (node pairs) and neutrality of applications/protocols. Further, wireless networks lag wired (fibered) networks by 4-6 orders of magnitude in channel capacity.

    The challenge in the discussion is protocol/application neutrality because wireless networks are often congested and are without a rapid capacity expansion roadmap.

    Voice calls on systems like Skype, 4G IP voice calls, and video calls on systems like FaceTime require low delay and low loss. High compression ratios are limited by the compute time to encode and decode. Mobile gaming or virtual worlds are another use case. Forward error correction can help, but is not a panacea.

    Email, text chat, web browsing and many native apps adapt to loss and are relatively insensitive to delays measured within a few multiples of transit time. TCP rate management algorithms do an excellent job.

    The issue is real time video streaming. Users expect an experience like radio and television broadcast. Expansion of streaming is the primary driver today of application traffic, and for the foreseeable future.

    But streaming in many use cases could be technically replaced by file downloads, similar to the TiVo model.

    Other drivers are faster CPU/GPU/internal buses and larger screens (with a greater expectation of video quality) on wireless tablets and future large mobile virtual display platforms, like eyeglass displays. These systems are capable of both higher peak and sustained data rates.

    I believe the authors are correct in their analysis. However, the technical means to increase wireless capacity to support the streaming-video use case will result in a tragedy of the commons.


    Also notable is that protocol/application prioritization at backbone peering points is not generally available at any price. CDNs within the last-mile provider can help, but they have limits. Likewise, Wi-Fi offload helps.

    1. Looks like you have either been drinking the Google Kool-Aid or work for Google. The truth: no one ISP has exclusive access to all of the spectrum that can be used for Internet access. The largest holder of such spectrum, in fact, is Clearwire, which is not a cable or telephone company. And thousands of WISPs operate on unlicensed spectrum.

      Nor is it difficult to start a competitive ISP. I did, and I did not need to ask for permission from a municipality or from anyone else. The threat of unfair competition with private ISPs by municipalities (which amounts to horizontal monopoly leverage, because they already have monopolies on garbage collection and other services) is, indeed, a problem, but fortunately has been held in check by laws preventing municipalities from interfering with and harming private enterprise.

      As for the Internet’s privately held status: this has virtually always been the case. The Internet is not a public facility but rather a federation of independently owned and operated networks. Those networks never would have joined the Internet — and, in fact, it never would have happened — had there been a requirement that they give up the right to manage their own networks in any way they saw fit. Regulations which attempt to do that now, after the fact, are likely to destroy the Internet. That’s why it’s so fortunate that the courts will soon overturn the FCC’s so-called “network neutrality” regulations, which were written in secretive closed door meetings between Obama campaign contributor Google and FCC staffers. The FCC does not have authorization from Congress to regulate the Net, and ultimately the rule of law will prevail.
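To make the prioritization discussed in this thread concrete, here is a minimal, hypothetical sketch of strict-priority packet scheduling, the generic technique behind giving voice and video calls precedence over bulk traffic. The class assignments are made up, and this is not a description of any particular ISP's or vendor's implementation.

```python
import heapq
from itertools import count

# Hypothetical class assignments: lower number = served first.
TRAFFIC_CLASSES = {"voice": 0, "video_call": 0, "web": 1, "email": 2, "bulk": 2}

class StrictPriorityScheduler:
    """Minimal strict-priority link scheduler: latency-sensitive packets are
    always dequeued ahead of bulk traffic, regardless of arrival order."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within the same class

    def enqueue(self, packet, kind):
        heapq.heappush(self._heap, (TRAFFIC_CLASSES[kind], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = StrictPriorityScheduler()
sched.enqueue("email message", "email")
sched.enqueue("voice frame", "voice")
sched.enqueue("web page", "web")
print(sched.dequeue())  # -> 'voice frame' jumps ahead of the earlier arrivals
```

Whether such a scheduler sorts packets by application class or by which content provider paid a fee is exactly where the net neutrality line is drawn.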

  7. Congratulations, I’ve been making this same argument for at least 3 years in various tech blog comments. Someone needs to make sure blogger Stacy Higgenbotham reads this, as she’s clearly drunk the kool-aid. All of her arguments start with the unquestioned wisdom that it must cost more money to sling more gigabytes.

    ISPs have been pocketing their skyrocketing profits instead of investing them in better equipment and faster speeds for the customers. The cost of delivering gigabytes is almost unmeasurable. They’re just minuscule electrical impulses, infinitely recyclable. The cost is in keeping the network equipment powered up, and with newer, far faster equipment, this cost drops dramatically due to lower power consumption and space taken up.

    If everyone shut off their modems for a whole day, their ISP wouldn’t save a terribly large amount of money. The networks still need to be kept powered up and air-conditioned. Maintenance staff still need to buzz about, monitoring and fixing problems. The savings would be in a drop of apparent issues to deal with that day, since customers wouldn’t be calling in every few seconds with an issue to be resolved.

    Keep the resource scarce, and you can keep the price high. They call it “management of a scarce resource” but I call it a good old-fashioned embargo.

    The only fair yardstick to use in billing an Internet subscriber is in their bandwidth *capacity*. The size of their pipe. Everyone should enjoy unlimited, net-neutral traffic, up to the capacity of their connection.

    The situation is a bit different for ISPs that are not part of a telecom with its own Internet backbone. They’ll tell their customers that gigabytes are definitely expensive. “See, here’s our bandwidth bill from AT&T!” Righto. That’s the arbitrary price they charge you, and it was probably the cheapest you could arrange for in your service area. But there is no way on God’s green earth that those gigabytes cost your upstream provider anything like that much.

    Big telcos will argue that their bandwidth bills are high too. But they enjoy peering agreements with the other fatcats. Starting with a gentleman’s agreement to bill each other, say, $100 per gigabyte (wink wink), after credits, they come out about even. This also lets them establish a “plausibly fair” fee for the mom and pop ISPs who really can’t go elsewhere for a backhaul.

    And let’s put behind us the argument about the long term amortization of the cables, network equipment and NOC facilities. That was over a decade ago, and taxpayer money helped pay for it.

    Let’s put behind us the argument that it costs more money to expand the networks. Most of the fiber optic cables are lying in the ground unused. The telcos won’t lease them out at any price for fear of flooding the market with capacity and forcing a drop in the price of their service. Recent advances in networking technology raise the capacity of these still-unlit cables by several orders of magnitude.

    Let’s put behind us the argument that it costs more money to replace worn out network equipment. Old routers have largely been replaced with cheaper, smaller, cooler and faster ones, since, hey, that’s what they build these days. If customers were billed according to a real relationship with the costs of providing the service, then either our bandwidth capacity would be skyrocketing, our bills should be plunging, or both.

    In a non-neutral Internet, ISPs make less money providing reliable, high-quality, competitive service, and more money in charging for each “billable event” they can invent. Since most of these fees will be to get around deliberate inconveniences, and to fix things that shouldn’t have been broken, ISPs become impediments to the Internet, instead of providers. They’re well-positioned bullies who demand a toll to pass, along a road which the taxpayers paid to be built in the first place.

    1. You obviously have no idea of ISPs’ cost structure or what bandwidth, maintenance, and upgrades actually cost. The fact is that bandwidth is expensive and networks must be managed, contrary to the misinformation Google propagates through sites such as GigaOm.

      1. Oh come on, now. You don’t even offer an explanation, just an indignant denial!

        The FACT is that the movement of volumes of data doesn’t cause any expense at all. The cost is in maintaining the network infrastructure, which is rapidly dropping, even as upgrades take place. Thank you, 21st century technology.

      2. Again, bandwidth costs money. Duh. It’s common sense. If you deny it in the first place it likely means that you won’t accept any explanation anyway.

  8. The government should get the heck out of it and let the free market innovate and deliver solutions to both sides of the connection.

    For example, ISPs don’t have to deliver one speed to everyone. Through the free market they can design and deliver tiered services that offer faster speeds to customers who are willing to pay for them. Technology and traffic-shaping solutions like the Dynamic Bandwidth Shaper (http://dynamicbandwidthshaper.com) can deliver a variety of speeds without prioritizing specific protocols or websites. Thus, by definition, ISPs can still be network neutral even if the government won’t stay out of ISPs’ business.
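As a rough illustration of shaping by subscriber tier rather than by protocol or destination, here is a generic token-bucket sketch. The plan names and rates are hypothetical, and this is the textbook technique rather than a description of how the linked product works.

```python
import time

class TokenBucket:
    """Per-subscriber shaper: the rate depends only on the plan purchased,
    never on which protocol or site the traffic belongs to."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # send immediately
        return False      # queue or drop until tokens refill

# Hypothetical tiers: a faster plan simply gets a bigger bucket and refill rate.
plans = {
    "basic":   TokenBucket(rate_bytes_per_s=1_000_000,  burst_bytes=100_000),
    "premium": TokenBucket(rate_bytes_per_s=10_000_000, burst_bytes=1_000_000),
}
print(plans["basic"].allow(1500), plans["premium"].allow(1500))  # True True
```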

  9. This article has all the right avenues covered. If only businesses would stop acting negatively, there is room for development and a piece of the cake for everybody. The entire video content delivery business is in its initial stages. Once someone becomes successful, others will lament. Net neutrality will offer broad selection, and people will cross over from one selection to another, just like in the video game market’s success. Video games were one lucky segment that developed even with all the negativity in place. Imagine what video games would have accomplished by now with an incentive like net neutrality. Las Vegas did not succeed on negativity; it succeeded by enticing with the product (the lucrative bait of how little or how much a person can spend; the richest and the poorest went for entertainment, only the die-hard gamblers went to make money).

  10. BG: Nope. It costs nothing to move and deliver data. All of the cost is in building and maintaining the infrastructure. This cost drops over time.

    I note that you are following a strategy of throwing poo and then ducking behind cover. You haven’t contributed one iota to your position other than that you feel like a smarter fellow for following your ‘common sense’.

    You lose.

    1. Actually, it is both your strategy and GigaOm’s to sling $#!+ and then run for cover. GigaOm has even gone as far as to censor comments that point out that it is doing this. Can’t let anything cast doubt on the propaganda it publishes daily on behalf of Google, its main source of revenue!
