


Are fewer competitors better for mobile broadband customers? Yes, according to the Phoenix Center for Advanced Legal & Economic Public Policy Studies, which says that when spectrum is used up, fewer firms lead to lower prices.

The February study “turns the conventional view of wireless competition on its head,” according to its authors. It does so, however, by seemingly ignoring trends in mobile network architecture intended to address the very capacity crunch the authors see, thus undermining the assumptions on which the theory rests.

The study raises questions.

The conventional view of more wireless providers as better for competition — and consumers — is based on the Cournot competition model, under which prices and profits intuitively decline as the number of firms increases. The authors start with this and make some tweaks for the wireless case.
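For reference, in the textbook symmetric Cournot setup with linear demand (a standard simplification, and not necessarily the paper's exact specification), equilibrium price falls monotonically toward marginal cost as firms are added:

```latex
% Textbook Cournot baseline: inverse demand P(Q) = a - bQ, n firms,
% each with marginal cost c. (Illustrative only; the paper's exact
% specification may differ.)
q^{*} = \frac{a - c}{b\,(n + 1)}, \qquad
p^{*} = \frac{a + n\,c}{n + 1} \;\longrightarrow\; c \quad \text{as } n \to \infty
```

This is the baseline intuition the authors set out to overturn for the spectrum-constrained case.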

First, they assume that capacity is not linearly related to the amount of spectrum an operator holds — capacity increases at a greater rate than spectrum is added. This can be seen as an economy of scale.
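The paper doesn't publish its capacity function, but one simple form consistent with that description is a power law; the exponent here is an illustrative assumption, not a figure from the study:

```latex
% Assumed superlinear capacity-spectrum relationship (illustrative only):
C(S) = k\,S^{\alpha}, \quad \alpha > 1
\qquad \Rightarrow \qquad
\frac{C(2S)}{C(S)} = 2^{\alpha} > 2
```

Under any such form, doubling an operator's spectrum more than doubles its capacity, which is the economy of scale the authors describe.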

Second, the authors look at what happens when all operators in a market have reached the point of so-called “spectrum exhaust” — when they’ve maxed out spectrum use and are running at maximum capacity. Under spectrum exhaust, according to the theory, the operators with the largest spectrum assignments enjoy the largest economies of scale, which become even bigger if they can get more spectrum. These economies, ideally, make their way to the customer in the form of lower prices.

Split the exhausted spectrum up among more operators and economies of scale go down, prices go up. The authors give this example in an accompanying blog post:

Say you have 100 MHz of spectrum and you divide it among 4 firms so that each gets 25 MHz. Say this generates 100 units of capacity. If instead you divided 100 MHz among two firms, so that each gets 50 MHz, then the amount of total capacity would be something like 150.

Why 150 instead of, say, 105 or 200? We aren’t told. How few operators are optimal? We aren’t told that either:

We cannot and do not reach conclusions about how many competitors is the right number under existing market conditions. What we do demonstrate is this: if it is true that there is spectrum exhaust, then the argument that more competitors leads to lower prices is not true.

Again we’re left with a question: More than what? Though they don’t reach conclusions about the right number of competitors, they present a model that happens to show two as optimal, for what they say is an arbitrary set of input assumptions. Presumably a different set of assumptions, equally arbitrary, could indicate a higher or lower number than two.
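To see how much work those unexplained numbers do, here is a minimal sketch in Python. It assumes the power-law form above (the paper gives no formula, so the exponent is simply backed out of the authors' own 100-to-150 example; every value is illustrative):

```python
# Minimal sketch, assuming a power-law capacity function C(S) = k * S**alpha.
# The Phoenix Center paper does not publish its capacity function; alpha is
# backed out of the blog post's "100 units with 4 firms, 150 with 2" example.
import math

S_TOTAL = 100.0  # MHz of spectrum divided evenly among n firms

def total_capacity(n, alpha, k=1.0):
    """Total market capacity when S_TOTAL is split evenly among n firms."""
    per_firm_spectrum = S_TOTAL / n
    return n * k * per_firm_spectrum ** alpha

# Total capacity is T(n) = k * S_TOTAL**alpha * n**(1 - alpha), so
# T(2) / T(4) = 2**(alpha - 1). The example's ratio of 150/100 = 1.5
# therefore pins alpha at 1 + log2(1.5), about 1.58.
alpha = 1 + math.log2(150 / 100)
k = 100 / total_capacity(4, alpha)  # normalize so 4 firms yield 100 units

for n in (1, 2, 3, 4, 6):
    print(f"{n} firms: total capacity = {total_capacity(n, alpha, k):6.1f}")
```

Under the implied exponent, a monopoly would hold about 225 units of capacity and six firms about 79. A different, equally arbitrary exponent shifts every one of those figures, which is the point: the example's numbers carry real weight in the argument, and we aren't told where they come from.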

What about the new wireless reality?

In light of their findings, the authors say the U.S. Department of Justice’s and the FCC’s reliance on traditional market-concentration measures, such as the Cournot model and the Herfindahl-Hirschman Index, is misplaced.

It’s encouraging to see some fresh, thought-provoking work on mobile competition analysis. One concern I have with the study, however, is its dependence on a condition of “spectrum exhaust” for the model to work. Does an operator ever reach the point where it “runs out” of spectrum? I think not.

This is because capacity — which, not spectrum, is what the operators are really selling — can be increased without new licensed spectrum through a variety of techniques, including Wi-Fi and small-cell offloading and increased antenna sectorization at the base station. What happens to this, or any other, competition analysis when the customer can access an operator’s network using no licensed spectrum, or bypass that network completely for some services?
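A back-of-the-envelope illustration (the offload fraction here is hypothetical, not a figure from the study): if a fraction f of traffic moves to Wi-Fi or small cells, the load on licensed spectrum drops proportionally, pushing any “exhaust” point further out.

```latex
% Load remaining on licensed spectrum when a fraction f of traffic is
% offloaded (f = 0.4 is an illustrative value, not from the study):
D_{\text{licensed}} = (1 - f)\,D_{\text{total}},
\qquad f = 0.4 \;\Rightarrow\; D_{\text{licensed}} = 0.6\,D_{\text{total}}
```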

Some other concerns:

  • In setting the background, the study invokes a discredited FCC technical report that uses invalid assumptions and is reported to have been disowned by the FCC staff who prepared it.
  • After relying on the Cournot model, the authors caution that it has several practical defects.
  • The authors say some analysts think there’s too much competition “today,” citing articles that are two or more years old.

I don’t doubt the authors’ belief that mobile-broadband competition analysis can be improved, but I don’t think this analysis, in its present form, is ripe for influencing policy. Perhaps the research could be extended to take into account the move toward heterogeneous networks, decoupling the notions of spectrum and capacity, and treating the issue as an optimization problem in terms of the size and location of licensed spectrum, the number of operators, and the use of unlicensed spectrum and other capacity-increasing techniques. Then we may have a better handle on the optimal number of mobile broadband operators.
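Sketched loosely, that extended analysis might be posed as a cost-minimization problem; all of the symbols below are hypothetical placeholders rather than terms from the study:

```latex
% Hypothetical formulation of the suggested optimization: choose the number
% of operators n, the licensed spectrum assignment S (size and band/location),
% and the unlicensed/offload investment x that meet forecast demand D at
% least cost.
\min_{n,\,S,\,x}\; \mathrm{Cost}(n, S, x)
\quad \text{subject to} \quad
C_{\text{licensed}}(S, n) + C_{\text{offload}}(x) \;\ge\; D
```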

In one study of 40 international markets, 36 have three operators controlling 85 percent of the market. That same study observes that this follows the Rule of Three, which holds that any mature market has three significant competitors. Maybe two, as shown in the Phoenix Center model, isn’t that far off. In the face of disruption from offloading, heterogeneous networks, and over-the-top content, however, the mobile-broadband market is losing its mature status. The Rule of Three may become less applicable as traditional notions of a mobile-broadband industry fade.

Steven Crowley is a consulting network engineer who blogs here. He can be found on Twitter @stevenjcrowley.

  1. blevinsdavidlester Sunday, March 25, 2012

    I wonder if any of these guys remember when 1200 bps was the “absolute maximum” anyone could get out of dial-up?

    They completely forgot that “necessity is the mother of invention”: engineering could almost always get more out of the spectrum, becoming more efficient and therefore charging less. Fewer competitors equals big, fat, monopolistic companies competing with lawyers.

  2. Q: Who paid for this study?

    The outcome, which you point out is dubious, implies that there will be fair competition and not the duopoly (or oligopoly) that we are headed for.

    When we launched Leap Wireless (Cricket) with a paucity of spectrum – 10 MHz in many markets – and an UNLIMITED plan no less, it changed the industry. Leap, Metro and others proved that the efficiency of new technology was a force against so-called “spectrum exhaustion”. Indeed, the very same arguments were made at the time (10 years ago): that the US was so far behind Europe that we would never catch up.

    Well, that did not happen. Rather, the US – with the iPhone and the largest LTE deployments – has leapfrogged Europe. Spectrum efficiency has soared. Yes, LTE Rel. 8 and soon Rel. 10 (LTE-Advanced) have very large channel bandwidths. But performance is so closely tied to the link budget and to having a high signal-to-noise ratio that you only get the great performance when you are “dB-wise” close to the BTS… hence the upcoming small-cell revolution, which simply takes the entire “cellular” concept of spectrum reuse – developed in the 1960s by Bell Labs – to its logical next step.

    What should be clear is that without spectrum for less well-heeled businesses, continued innovation – even simple business innovation such as what Leap/Metro brought to the wireless industry – will suffer.

    1. I didn’t see any indication of a sponsor for the study, but I wouldn’t be surprised to see this concept show up in an AT&T or Verizon Wireless filing before too long.

      I think your points on small cells are good. Get the user closer to the “base station,” whether that’s a femtocell, a larger small cell, or a Wi-Fi access point. The reduced distance increases the signal-to-noise ratio, and the short range means the total capacity of the cell is shared among fewer people. Both factors greatly increase performance.
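      To put rough numbers on that (a simple path-loss approximation with an assumed exponent of 3.5; the figures are illustrative, not measurements): halving the cell radius raises received SNR by about 2^3.5, roughly 11x or 10.5 dB, and with users spread uniformly it cuts the number of people sharing the cell by a factor of 4.

      ```latex
      % Illustrative link-budget arithmetic (path-loss exponent gamma = 3.5 assumed):
      R_{\text{user}} \approx \frac{B \log_2(1 + \mathrm{SNR})}{N_{\text{users}}},
      \qquad
      \frac{\mathrm{SNR}(r/2)}{\mathrm{SNR}(r)} = 2^{\gamma} \approx 11,
      \qquad
      \frac{N(r/2)}{N(r)} = \frac{1}{4}
      ```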

  3. I think your last two sentences are what keep the ruling cellular execs awake at night. Especially unlicensed and OTT. IMO that’s largely the reason they’re making a land grab to control the future of public Wi-Fi through their hotspot and Carrier Wi-Fi initiatives. Hopefully new contenders will put forward better models and bring some much-needed change.

  4. If fewer is better, then the ideal should be one. In many ways a pre-breakup Ma Bell might make sense: separate hardware suppliers with no cross-subsidies, a single standard for network technologies, heavy regulation to ensure net neutrality and rural coverage while preventing price gouging. It couldn’t be much worse than the mess we have now. Unfortunately, I don’t think that’s what the study’s sponsors have in mind.

  5. There is no need to reinvent the wheel here. We already have a model for the least inefficient exploitation of capital-intensive, limited-throughput resources – it’s called utilities. For utilities, inefficiency is minimised (though by no means zero) with a small number of large, very tightly regulated operators. Furthermore, utilities are not a desirable model; they are simply the least-bad model achievable under certain circumstances. If we can change those circumstances to others (namely, those in which increased competition clearly leads to lower prices), we should.

  6. Richard Bennett Tuesday, March 27, 2012

    We’ve had this discussion before, Steve, but to recap: at some point, cell splitting hits diminishing returns, and even Wi-Fi suffers overload. Going from three sectors to six is fine, but going from 64 to 128 doesn’t produce much benefit. Similarly, traffic on campus Wi-Fi networks tends to have high latency because there’s only so much you can do with 80 MHz.

    The Phoenix paper may not be the most technically astute analysis of LTE-Advanced ever written, but it does raise the ultimate question about competition, and it’s one that no one else even wants to discuss. I suspect the answer isn’t two, three, or four competitors of the same type but may be a half dozen or a dozen competitors of different types: some pre-paid, some post-paid, some MVNO, some wholesale-only, and some hybrids of licensed, unlicensed, and other. Network markets defy classification because so many players have multiple roles. Facebook, for example, is both an application and a platform that enables other applications. The best way to look at this is probably through some variation on Jon Sallet’s Value Circles concept.

    The utility model is totally bankrupt, however, as it fails to encourage technology upgrades. Look at the phone network pre-breakup or the California power market for details.

    In any event, LTE envisions a mobile architecture built out of macro cells and micro cells, not so much one of infinite sectorization. There are good technical reasons for this.

      I agree there are practical limits to cell splitting, to Wi-Fi, and to each of the techniques used to increase capacity in the heterogeneous network. Again, it becomes an optimization problem. Different types of competitors, as you suggest, should be considered in that process.

  7. Steve
    You make some great points. The idea that spectrum might ‘run out’ misses the nature of wireless capacity. A spectrum shortage just increases the intensity with which you need to deploy other solutions. As it happens, we just published a major modelling study – conducted for Ofcom in the UK in connection with the role of spectrum in mobile capacity – into just this point. We’ve looked at the whole range of network-capacity solutions – spectrum, technology upgrades, small cells and offload – and worked out, according to a range of scenarios, how a cost-optimised network looks over a long period (2012-2030!). It’s a deep and detailed modelling process which avoids making up-front assumptions about the ‘best’ approach to increasing capacity but incorporates a very granular view of demand by location, time of day, indoor/outdoor, usage, and the distribution of demand amongst users and devices… I can’t claim that we can foretell the future any better than others, but I’m not aware of any other studies which have put all of these factors together in an even-handed fashion.
    http://stakeholders.ofcom.org.uk/consultations/uhf-strategy/?utm_source=updates&utm_medium=email&utm_campaign=uhf-spectrum-condoc

  8. Steve, thanks for the careful read and comments on our paper. I have offered some responses to your ideas on my blog (http://phoenix-center.org/blog/archives/535). Perhaps we can continue the discourse — it’s an important issue.

