
Ed.: This is the first of a two-part post. The second post will appear on Sunday.

Exactly 20 years ago this month, Tim Berners-Lee invented the browser, HTML, and the World Wide Web, but things really took off six years later, when America Online switched from pay-by-the-minute dial-up to unlimited flat-rate plans, causing usage per subscriber to more than triple (PDF). Recently, however, wireline and wireless providers have been circling back, either trialing or instituting tiered or pay-per-use pricing, and in the world of cloud computing, pay-per-use is touted as a major benefit. Pricing plans may seem like an arcane topic best left to marketing professionals, but they are driving fundamental questions about global capital expenditures and the sustainability of the current content and network ecosystem. If pay-per-use prevails, what are the implications for industry structure and new business opportunities?

For the record, I like unlimited Internet access just as much as anyone else. However, such plans appear to be on their way out, and here’s why. As I’ve explored in “The Market for Melons” (PDF), pay-per-use is not an evil plot by greedy robber barons but a natural outcome of independent, rational consumer choice. Consider a town with an all-you-can-eat (flat-rate) buffet and an a la carte (pay-per-use) restaurant. Smart shoppers on diets will save money by patronizing the a la carte restaurant, whereas heavy eaters will save money by visiting the buffet. As patrons switch, the average consumption at the buffet will increase, driving price increases for the luncheon special and causing even more users to switch to pay-per-use.
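This self-selection spiral can be sketched in a few lines of code. What follows is a toy model of my own construction, not the one in the paper: each diner eats a fixed amount, the buffet reprices each period to break even on the average consumption of its remaining patrons, and every diner then picks the cheaper option.

```python
# Toy model of the buffet "self-selection spiral" described above.
# Illustrative assumptions: unit cost is 1, the a la carte bill equals
# units eaten, and the buffet reprices each period to break even on the
# average consumption of whoever is still eating there.

def simulate(consumption, periods=10):
    """consumption: units eaten per diner; returns buffet headcount per period."""
    buffet = set(range(len(consumption)))  # everyone starts at the buffet
    history = []
    for _ in range(periods):
        if buffet:
            flat_price = sum(consumption[i] for i in buffet) / len(buffet)
            # diners whose a la carte bill would beat the flat price defect
            buffet = {i for i in buffet if consumption[i] >= flat_price}
        history.append(len(buffet))
    return history

# Thirteen diners eating 1..13 units: the buffet hollows out period by period
print(simulate(list(range(1, 14))))  # [7, 4, 2, 1, 1, 1, 1, 1, 1, 1]
```

Each round, everyone below the current buffet price leaves, which raises the average and the price, which drives out the next tier of light eaters, until only the heaviest eater remains.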

Bottom line: it is not the proprietors driving this dynamic, but the customers themselves acting out of pure, rational self-interest—light users, by deciding not to subsidize the heavy ones, foster the vitality of the pay-per-use model. As the spread in bandwidth consumption increases between frequent digital movie streamers or videoconferencing users and lightweight occasional emailers, rational light users will want to migrate to pay-per-use. Of course, people aren’t always rational, and consumers often prefer to overpay for flat-rate (PDF) rather than save money but risk bill shock.

Under conditions where buyers are coldly rational, active decision-makers, consumption levels are dispersed, prices are a non-trivial portion of income, and the industry is highly competitive, pay-per-use will tend to dominate. Flat-rate may prevail when behavioral economics comes into play and emotions and cognitive biases are taken into account, when switching costs are high, when there are no meaningful differences in consumption levels among consumers, and/or when there is a single dominant player. And there may be a pendulum effect as marketers attempt to differentiate their offers from prevailing practices.

A large number of business models from the “over-the-top” providers of content, applications, and services have been predicated on zero marginal cost to consumers for data usage. It’s not only an access issue; consider the current Comcast / Level 3 disagreement regarding payment for core backbone bandwidth. What might happen if pricing plans, instead of being “unlimited,” become increasingly granular and usage-sensitive? In tomorrow’s post, I’ll predict possible implications, ranging from cultural changes to application and architecture shifts to industry ecosystem and business model transformation.

Joe Weinman leads Communications, Media, and Entertainment Industry Solutions for Hewlett-Packard. The views expressed herein are his own.

Image courtesy Flickr user mugley.


  1. Wow. So few paragraphs, so much wrong.

    Of course, why should light TV viewers subsidize heavy viewers? As a natural outcome of independent, rational consumer choice, all TV shows should be pay-per-view. American Idol? $1.99 per episode. It’s a rational outcome. Unless, of course, you consider that by removing the transactional cost of limiting service, you open up new areas of innovation. Example: do you think YouTube would be a success if we metered bandwidth? If mobile broadband became unlimited, we would see new industries opening up that we can’t imagine at this point.

    This plan has nothing to do with the supposed explanation of light users subsidizing power users. Proof? When industries talk to the public, they speak of cheaper plans for light users, but when they speak to shareholders, they speak of how they make the light plans so unappealing that users uniformly go for the more expensive plan.

    http://www.fiercewireless.com/story/att-has-7-million-usage-based-pricing-subscribers/2010-12-07

    Flat rate models are by far the best way to encourage growth and new business models.

  2. I cannot disagree more. This is a prime example of a pseudo-shortage created by under-utilization of existing backbone infrastructure (a.k.a. dark fiber), underinvestment in last-mile infrastructure (copper), technologies (P2P vs. streaming), and services (backbone providers and CDNs).
    This is a failure of management: both ISPs and content providers lack interest in maintaining and upgrading their infrastructure.
    Rationalizing for the ISPs by those who should be pointing out the ISPs’ disingenuous claims is troubling.

  3. Quick question: will your second post outline the cost structure of broadband too? Whether or not a pricing plan is inevitable depends entirely on the cost structure. Your example of all-you-can-eat vs. a la carte isn’t a good analogy. Ingredients are a large, perishable, and variable part of a restaurant’s cost. So is labor. The only part that isn’t variable is the lease on the building.

    In broadband the cost structure is completely different. Almost all costs are fixed. They don’t change with use. There is a certain maximum number of bits that can be sent through a pipe or a combination of pipes, after which new investments in fixed costs need to be made. Some costs in administration and helpdesk scale with the number of customers, but there are economies of scale there as well.

    So I really hope you come up with a convincing costing story that clearly shows that the average usage of end-users is robbing the network blind. Do include why most networks in the OECD have moved away from limited offers to unlimited offers. Also, if you could, explain how much extra networks can expect to make from this increase in operational complexity. For instance, I looked at the numbers for Virgin UK, and it is clear that their differentiated offers deliver only 2% extra revenue, hardly enough to pay the systems integration bill.

    Really, I’ve never heard of any network going broke on bits, certainly not with transit costs down to a couple of dollars per Mbit/s per month. With networks like NTT setting limits at 900 gigabytes a month (upload, not download), I don’t see where the panic is.

    1. Rudolf, tomorrow’s post makes 17 specific predictions of things that may come to pass if tiering/pay-per-use becomes prevalent for either wireless or wireline. The economics of network costing are complex. To your point, when networks are uncongested, the marginal cost to carry another packet is nominally zero. When they are at capacity (or a subnetwork is), the marginal cost is substantial. Airlines are the same: when seats are available, one more passenger doesn’t cost much to transport. But if not, the airline may need to buy another plane or operate another flight. Even thornier, congestion externalities are harder to model on both the cost side and the opportunity-cost side. What’s the cost of being stuck in traffic, both in terms of fuel/carbon and being late for a job interview?

      In any event, tomorrow’s post doesn’t argue for or against pay-per-use; it just explores the implications of metered wireline or wireless, and perhaps some interesting business opportunities.

  4. I knew there would be, ah, multiple perspectives on this. Looking at U.S. carrier investment from 2007-2009 (e.g., http://finance.yahoo.com/q/cf?s=T+Cash+Flow&annual), just to pick some obvious ones, AT&T has invested just over $54 billion in capex, Verizon just over $51 billion, and Comcast just over $17 billion. According to Telegeography (http://www.telegeography.com/product-info/gb/download/gb10-executive-summary.pdf), bandwidth demand is growing at 60% per year globally, with $3.1 billion in new subsea cable investment, and more new cables were built last year than at any time since 2001. If this is underinvestment, what exactly are the right investment levels, and how will they be recovered?

    In any event, the point of the article was that even though humans have cognitive biases against metering (tied to Kahneman/Tversky loss aversion), dispersion in demand levels, non-trivial investment requirements, and other factors are likely to make this trend inescapable in the near future. And the other phenomenon of interest, if you read my “Market for Melons” PDF, is that individual rational self-selection under such circumstances leads to emergent system dynamics favoring pay-per-use.

    Whether we like it or not, if you agree that this is potentially a trend, you’ll be interested in my assessment of implications tomorrow.

    1. Hi Joe,

      Thanks for responding. Many authors just post and have no interest in actually engaging their audience. While I disagree with your post, kudos to you for your responses.

      I understand your basic premise, though I strongly disagree with the idea that moving from the ‘buffet’ to the ‘a la carte’ in your metaphor is a logical reaction.

      The internet did not really take off until we moved from hourly plans to flat-rate plans. In the same way, newspapers looking for revenue models tried micropayments, where you would pay for individual articles. Complete failure.

      The world is awash in similar examples:

      1. Telephones. Decades of use with similar dispersal of demand and yet there is no move to break up your monthly telephone bill so light users no longer subsidize the heavy users.

      2. Television. Again, a lucrative business model is built up around a no limits plan. There is room for a pay per view model on top of this, but no one would seriously suggest all programs should be metered.

      3. Radio. Like television, all you can eat, with a Freemium model built on top.

      4. Mobile phones. Many plans are by the minute, but there is strong pressure to bring in flat-rate plans. The original iPhone was a big success in large part because the data plan was unlimited. In Canada, there was no unlimited plan, and that was widely viewed as a major shortcoming.

      All of these industries have the varied demand levels and strong infrastructure requirements that you bring up as proof of your thesis, and yet the fact remains: flat rates beat out pay-per-use every time.

      These plans are coming in, of that I have no doubt, but they are not coming from consumer demand but from the top down with a viewpoint of revenue generation.

      1. trapper5, this is a complex topic with strong feelings on all sides. It took me 82 pages in “The Market for Melons” just to scratch the surface, and there are professional regulatory/utility economists who have devoted their careers to these issues. There are benefits to the flat-rate model, and benefits to usage-sensitive models as well. Tipping the balance are consumer preference, usage dispersion, whether the market is “perfectly competitive,” fixed and variable cost structure, transaction costs, information/search costs, switching costs, loss aversion, and various other cognitive biases such as the “taxi meter effect” (see the Lambrecht and Skiera paper I link to, or Dan Ariely’s Wall Street Journal piece http://on.wsj.com/fWWRb8 ). I don’t agree that “flat rates beat out pay-per-use every time”: consider your electric bill, your water bill, your natural gas bill, your cloud computing bill, or your restaurant bill.

        Moreover, if bandwidth demand is growing 60% per year, there had better be a rational economic model to sustain continued investment. It isn’t as clear to me that a push to usage-sensitive plans is coming “from the top down with a viewpoint of revenue generation,” as studies show that people are willing to pay more (unnecessarily) under flat rates due to loss aversion. I see it more as rational pricing and resource allocation when transaction costs are low, demand levels are dispersed, and costs and capex are non-trivial. Moreover, an issue with flat-rate is “moral hazard/the tragedy of the commons.” When something is free (e.g., marginal use under flat-rate plans), you tend to waste it. Conversely, when you pay for something, the “endowment effect” makes you value it more highly.

        As for the validity of the migration to micropayment models, the app stores with billions of downloads, many paid, are a sufficiently valid example, I think. Telephony has swung between measured service and flat-rate plans a number of times as technology transitioned from circuit-switched voice to mobile voice to non-dispersed mobile data to highly dispersed usage levels (latest-generation smartphones, e.g., per the Mary Meeker/Morgan Stanley reports). Traditional broadcast radio and TV are, in fact, pay-per-use; it’s just that you are paying with eyeballs, not hard dollars.

        I agree with you that measured service typically reduces consumption and flat-rate increases it (although if one pre-pays for measured service, one tends to increase consumption to “get what one has paid for,” as studies show). However, success can’t be measured from just one party’s perspective. Content providers and network services providers may have a different definition of “success.”

        In any event, the point of this short post was to tee up tomorrow’s predictions / implications of broader pay-per-use adoption.

      2. Right, Joe, the points about dispersed demand levels and non-trivial costs and capex are key. People often fail to appreciate that the consumer experience of the Internet was dominated by a single application, the web, until quite recently, so demand levels have been so uniform that usage-based pricing has been unnecessary. The large technical and regulatory hassles have all been stimulated by non-web Internet applications such as VoIP, BitTorrent, and video streaming that depart considerably from the web usage model.

        There’s nothing inevitable about flat-rate pricing in a world of heterogeneous applications.

        I’m looking forward to your follow-up.

  5. Anyone who wants to understand why the shift to usage-based pricing is an inevitable side effect of the transition from cable TV to Internet TV is welcome to read my recent ITIF report, “Now Playing: Video over the Internet”: http://itif.org/publications/now-playing-video-over-internet

    The bottom line is that the bandwidth needed to support the new way of watching television is millions of times more than the capacity that broadband networks and regional Internet exchanges were designed to handle. Ultimately, we’ll need many times more Internet exchanges, and they’ll need to connect several times more points in the broadband network providers’ Regional Area Networks than they do today, and the cable networks will need some changes in their internal architecture as well.

    This isn’t simply a matter of lighting up some dark fiber here and there; it’s essentially a redesign of the system that interconnects the last-mile cable network with the Internet core.

  6. I never thought about it from this angle, but you are probably correct.

  7. Just a little perspective for the doubters: a 720p HD video stream runs at about 12 Mbps, so you can fit only about 200 720p streams in an OC-48. Your ISP is probably connected to the backbone via an OC-48.
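    That stream-count arithmetic is easy to verify. A quick sanity check (the 12 Mbps per-stream figure is the one assumed above):

```python
# Sanity check of the figures above. An OC-48 carries roughly 2.488 Gbps
# (line rate; usable payload is slightly less after SONET overhead).
OC48_GBPS = 2.488
STREAM_MBPS = 12  # assumed per-stream rate for 720p, as above

streams = OC48_GBPS * 1000 / STREAM_MBPS
print(round(streams))  # 207, i.e. roughly 200 streams per OC-48
```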

    To make cord cutting a mainstream reality, we need more last-mile bandwidth, but more than that, we need to push more and more of the content closer to the edges of the network. That means the CDNs need to grow massively, very quickly.

    1. More CDNs don’t get it done if they’re located in the existing colos, POPs, and IXPs. The bottleneck is the dearth of exchange points themselves and the scarcity of bandwidth between them and the last mile. The last mile is fine; it’s everything else that’s jammed up.

      This comes about partially because every projection of bandwidth demand is faulty. The Internet protocols (all the way from IP up to the application layer) used by content-based apps are elastic with respect to network conditions. Model the middle-mile bandwidth needed to support 200 million HDTV streams and you start to appreciate the problem.
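      A back-of-the-envelope run of that exercise (the 6 Mbps per-stream rate is an assumption on my part, not a figure from the comment):

```python
# Aggregate middle-mile demand for nationwide unicast HDTV, as a rough sketch.
STREAMS = 200_000_000    # concurrent HDTV streams, per the comment
MBPS_PER_STREAM = 6      # assumed average HD stream rate

total_tbps = STREAMS * MBPS_PER_STREAM / 1_000_000  # Mbps -> Tbps
print(total_tbps)  # 1200.0, i.e. 1.2 petabits per second in aggregate
```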

      It’s not enough to deliver streams to the ISP edge; you have to take them all the way to the DOCSIS EdgeQAM.

        1. I agree with you. The CDNs need to start (or expand) their storage caches in the MSO headends (especially for popular programming). You will end up with exabytes of storage colocated in Comcast, Time Warner, and Cox headends in every city in the country. This is not about a giant server farm somewhere; it’s about pushing as much of the popular programming as far down the network, into the neighborhoods, as you can.

        I’ve been saying for a long time now that right now is the golden era of cord cutting. Almost no one is doing it, so the networks can support it (and they’re still flat-rate). As soon as it takes off, we’ll be paying by the bit.

        2. Richard, excellent piece. To make matters even worse, over-the-top content delivery networks and network multicast approaches will only take us so far, as they assume a single object or stream being synchronously or asynchronously distributed, which can therefore either be multicast or cached at the edge. What happens when there are dozens or hundreds of real-time, video-enabled devices in your home, some user endpoints and some M2M, passively/ambiently capturing or displaying streaming unicast HDTV for business, entertainment, video surveillance/security, or social networking? One rule of thumb for 30fps 1080p HD is 5-7 megabits per second. Now bump that up to 60 or more frames per second, and migrate from 1080p (2 megapixels) to Quad HD (8 megapixels) or Ultra-HD (32 megapixels), and the magnitude of the problem becomes apparent. (See, e.g., http://gigaom.com/2010/02/07/tuning-in-to-the-big-picture-on-video/ )
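        Scaling that rule of thumb linearly with pixel count and frame rate gives a feel for the magnitudes (real codecs compress better than linearly, so treat these as rough upper bounds):

```python
# Rough bitrate scaling from the 1080p rule of thumb above: ~6 Mbps for
# 2 megapixels at 30 fps, scaled linearly with pixels and frame rate.
BASE_MBPS, BASE_MPIX, BASE_FPS = 6, 2, 30

def est_mbps(megapixels, fps):
    return BASE_MBPS * (megapixels / BASE_MPIX) * (fps / BASE_FPS)

print(est_mbps(8, 60))   # Quad HD at 60 fps:  48.0 Mbps
print(est_mbps(32, 60))  # Ultra-HD at 60 fps: 192.0 Mbps
```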

        3. More caches below the head-end (the EdgeQAM isn’t the head-end, actually) solve the problem for video streaming, but other applications, such as immersive gaming and video conferencing, are emerging that don’t respond to local acceleration, so more needs to be done to satisfy the demand they create as well. It could be that video streaming is just a fad anyhow; two or three years ago P2P was dramatically on the rise, but it has petered out already.

        4. Joe, looks like our last two comments crossed in the mail. I obviously agree that the emerging symmetrical video apps and point sources of video are the interesting and challenging cases. The Internet needs to become more deeply meshed to handle them, and that raises issues with Internet routing, which is fundamentally non-scalable because of the global and ad hoc nature of Internet addressing.

  8. No more than pay-for-flush is inevitable, though it’s not uncommon in public loos around the world.

    Tiered pricing for bandwidth consumed per month is inevitable. Net neutrality is a red herring with respect to this aspect of the debate.

  9. I thought Al Gore invented the Internet? Next you’re going to tell me that global warming is BS? (It’s in the 20s here in FL.)

  10. HP is this month’s Cisco: a great defender of its customers, the big telecom companies. At Stopthecap.com, we’ve been fighting these Internet Overcharging schemes for more than two years now, in an effort to keep America’s broadband landscape better off than what folks in Canada are enduring these days.

    Readers who want to see Joe Weinman’s vision of broadband need only look north to see how things worked out. It’s not the rosy scenario Weinman seems to promote. High prices, rationed usage (with some allowances declining), and slow speeds. Canadians pay far more for far less service. Companies like Netflix trying to make a living streaming content there are finding ubiquitous usage caps cutting into their potential subscriber base.

    Indeed, as we’ve proven in more than 1,000 articles since starting, when Big Telecom comes ringing with promises of savings from metered or capped broadband, hang up immediately.

    These plans save almost nobody money and expose consumers to dramatic overlimit fees, creating the kind of bill shock wireless phone users endure.

    The OPEC-like Internet price-fixing on offer from big players (which sign millions of dollars in contracts with Weinman’s company, BTW) delivers broadband rationing and sky-high prices, while retarding Internet innovations the providers don’t own or control.

    Consumers are forced to double-check their usage and think twice about everything they do online out of fear of being exposed to huge overlimit fees, up to $10 a gigabyte, for exceeding an arbitrary limit ranging from 5 to 250 GB.
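    The overlimit math is easy to sketch. In the example below, the $10/GB fee is the figure cited above, while the plan price and cap are hypothetical:

```python
# Bill under a capped plan with overlimit fees, using the $10/GB figure above.
def monthly_bill(base_price, cap_gb, usage_gb, overage_per_gb=10):
    overage_gb = max(0, usage_gb - cap_gb)
    return base_price + overage_gb * overage_per_gb

# Hypothetical: a $50 plan with a 40 GB cap, in a 60 GB streaming month
print(monthly_bill(50, 40, 60))  # 250, five times the base price
```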

    Americans already pay too much for Internet service, and now the providers want more of your money. The rest of the world is moving AWAY from the pricing schemes Weinman would have us embrace. It’s such a serious issue in the South Pacific that the governments of Australia and New Zealand are working to address the problem themselves.

    Providers are already earning BILLIONS in profits every quarter from their lucrative broadband businesses. Now the wallet biters are back for more, with the convenient side benefit that limiting consumption is a great way to prevent Internet-delivered TV from causing cord-cutting of cable TV packages.

    As far as consumers are concerned, and Weinman admits as much, people are happy with today’s unlimited price models. When Big Telecom complains people are overpaying for broadband, wouldn’t their shareholders be telling them to shut up and take the money? There is more to this story.

    Weinman defends the extortion proposition Big Telecom would visit on us: either give us limited use pricing or we’ll raise all of your prices.

    But as consumers have already figured out, these providers never reduce prices for anyone. When was the last time your cable bill went down unless you dropped services?

    Don’t be a sucker for Big Telecom’s “broadband shortage” or pricing myths. Broadband is not comparable to water, gas, or electric service. The closest comparison (and the one they always leave out) is telephone service, and as we’ve seen, that business is increasingly moving TOWARD flat-rate, unlimited pricing.

    Want to know what metered pricing does to the wallets of consumers? Just ask Time Warner Cable customers in Rochester, Greensboro, San Antonio, and Austin what they thought about the cable company’s “innovative” pricing experiment, which tripled the price of the same level of broadband service customers used to get for $50 a month. After the torches and pitchforks were raised over $150-a-month broadband service, Time Warner backed down.

    With or without metered pricing, the cable company raised its prices three times last year alone.

    It is also remarkable how many of the comments in this thread agreeing with the author earn their living working for dollar-a-holler “interest groups” quietly funded by Big Telecom, equipment vendors that sell to them, or other related special interests. Follow the money!

    Our group is entirely consumer-financed with absolutely zero dollars from the industry.

