
I confess I have a rather odd hobby. I seek out and collect statements from broadband regulators and lobbyists that reveal a fundamental misconception about the networks they oversee. Stacey uncovered this gem in April from New York state’s CIO: Consumers should be able to “know the actual data transmission speeds” of their broadband services. And in July she noted that lobbyists, petitioning the NTIA over perceived shortcomings in the recent broadband stimulus package, complained that service providers often “advertise speeds of up to 3Mbps while refusing to guarantee those speeds.”

Many believe that broadband service providers selling, say, a 5Mbps service should be required to set aside the same amount of capacity in order to fulfill that implicit service-level agreement (SLA). In other words, if you pay for 5Mbps, it’s there when you need it. But the reality is that networks, just like hotels and airplanes, are almost always oversubscribed — the owners of these assets sell more capacity than they have available.

This is actually an economically rational thing to do. It accounts for the fact that not all requests for capacity (or seats or rooms) are used, and results in greater efficiency and lower overall costs. But whereas hotels and airlines might sell 10 percent or 20 percent more capacity than they have, broadband operators typically sell a few thousand percent more capacity than they have. This may sound egregious, but it’s really not. It’s a reflection of typical usage patterns on broadband networks and necessary to achieve the price points consumers are willing to pay.
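
To put “a few thousand percent” in concrete terms, here is a back-of-the-envelope sketch in Python (the subscriber count and link size are purely illustrative, not any particular operator’s figures):

```python
# Illustrative oversubscription arithmetic -- made-up numbers.
subscribers = 500          # homes sharing one aggregation link
advertised_mbps = 5.0      # "up to" speed each home is sold
backhaul_mbps = 100.0      # shared capacity actually provisioned

sold_mbps = subscribers * advertised_mbps   # 2,500 Mbps of promises
ratio = sold_mbps / backhaul_mbps           # 25x the real capacity

print(f"Sold {sold_mbps:.0f} Mbps against a {backhaul_mbps:.0f} Mbps link")
print(f"Oversubscription: {ratio:.0f}:1, i.e. {(ratio - 1) * 100:.0f}% more than is available")
```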

According to Pew Internet’s Home Broadband Adoption 2009, Americans pay an average of $37.60 per month for broadband, and according to Akamai’s first-quarter 2009 “State of the Internet” report, the average actual (not advertised) downstream speed in the U.S. is 4.163Mbps. This means that consumers are paying a little over 0.0009 cents per bit per month. That doesn’t sound exorbitant.

Most large carriers (e.g., AT&T, Verizon) offer broadband services that absolutely guarantee speeds. Verizon, for example, provides Internet access with guaranteed speeds up to 2.5Gbps and a written SLA. I don’t know what Verizon charges for this service, but a wholesale OC-48 (2.488Gbps) Internet access line from US Access costs $60,000 per month, or 0.0024 cents per bit per month.

The reason the latter is 2.7 times the former on a per-bit basis (and almost 1,600 times on an absolute basis) is not that one is for businesses and the other for consumers (although they clearly are); it is that one comes with dedicated bandwidth and availability and the other does not. The reason broadband operators oversubscribe their networks and do not make minimum speed guarantees is that they must deliver these “high-speed” services at very low price points. Keep in mind, it wasn’t all that long ago that 128Kbps ISDN lines ran several hundred dollars per month.
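
A quick back-of-the-envelope check of those per-bit figures, using the dollar amounts and speeds cited above (a rough Python sketch, nothing more):

```python
# Reproduce the per-bit figures from the paragraphs above.
consumer_price = 37.60    # $/month (Pew, 2009)
consumer_bps = 4.163e6    # average actual downstream speed (Akamai, Q1 2009)
oc48_price = 60_000.0     # $/month, wholesale OC-48 Internet access
oc48_bps = 2.488e9        # OC-48 line rate

consumer_cents_per_bit = consumer_price / consumer_bps * 100   # ~0.0009
oc48_cents_per_bit = oc48_price / oc48_bps * 100               # ~0.0024

print(f"Consumer: {consumer_cents_per_bit:.4f} cents per bit per month")
print(f"OC-48:    {oc48_cents_per_bit:.4f} cents per bit per month")
print(f"Per-bit ratio:  {oc48_cents_per_bit / consumer_cents_per_bit:.1f}x")  # ~2.7
print(f"Absolute ratio: {oc48_price / consumer_price:.0f}x")                  # ~1,596
```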

The regulators, legislators, lobbyists and interest groups clamoring for broadband speed guarantees ought to be cautious. Broadband operators are perfectly capable of guaranteeing speeds, but they’ll need to ask consumers for more dollars to do it — and folks might not like the price tag.

Kevin Walsh has over 25 years of telecommunications and networking industry experience and is currently an executive at Zeugma Systems.

  1. Obviously, the politicians are grandstanding about guarantees. But the carriers have brought this on themselves by loudly making speed claims in their marketing and advertising that they know are impossible to deliver in the real world.

    A better answer would be to do what they do with car mileage: make the operators equally and clearly state both the maximum speed and the speed people can expect under more normal usage conditions. That seems like a fairer and more useful answer.

  2. Excellent article. I’ve been pushing this case for years, but people don’t seem to listen.

    I had a 64Kbps ISDN line years ago and it did cost me hundreds of dollars a month. That transitioned to 128Kbps I-DSL (DSL over the ISDN dual pair) that was “only” $150 a month.

    We’re hitting higher burst speeds, and if the FCC and local municipalities would allow for more competition, you know the future is more speed, lower latency and lower costs. That’s a big if.

    No one needs the State to enforce speeds. We need the market to allow for competition, and it’s that competitive pressure that will give consumers the product they want at a price they’re willing to pay. Competitive choice also stifles false advertising because consumers WILL use review websites to knock down those who lie about their product.

    Get the State out of this; it’s a mess they’re only going to make worse. They’re the ones limiting competition, thereby limiting our choices.

  3. Well of course it makes sense to hide the truth from consumers; where I’m from we have a name for this — it’s called fraud.

    The whole point behind transparency and regulation is to ensure a level playing field. A fair marketplace doesn’t necessarily drive up prices, even though industry would have you believe so.

    1. I agree, we need some transparency here. It is obvious that other countries have nationalized their broadband systems and many are better than ours. What is it that they do that we didn’t? And why are these companies so pathologically against more transparency in their pricing methods? Almost every company can justify whatever it wants when raising prices. There are just no standards here.

      Perhaps independent studies organized by the FCC could give consumers a better idea of what is going on.

      Meanwhile, these so-called “basic truths” that support and validate broadband’s current practices raise no less distrust than the industry has already earned on its own.

  4. Well said. With fiber creeping closer to consumers, perhaps we will get the cost vs. adoption curves right and higher committed speeds for everybody. I note that a T1 still runs $300+, so the notion of ‘high availability’ will run you roughly 10x more than similar bandwidth on a business or home DSL link. Talari is doing something about that particular issue as well.

  5. You used two terrible analogies – overbooked planes, and overbooked hotels. In both of those situations there are financial penalties for overbooking. When you get bumped from a plane, you (1) get cash, and (2) get the next available plane headed in your direction. Likewise for hotels – if you have a reservation and show up to a full hotel, they’re going to find you someplace else to stay.

    Does this mean when I can’t get the advertised download speed, I get a rebate on my monthly bill? Thought not.

    Do you have an example of another area where you get sold something that might not be there when you want it and there’s no penalty?

    1. Good point; the analogy is less than perfect. But the reason is that, in the broadband case, you really weren’t sold a guarantee of x Mbps; you were sold access for which the maximum rate is x Mbps, the minimum rate is zero, and the average rate is somewhere in between (determined by the oversubscription factor and congestion).

      A better analogy might be a freeway. The posted speed limit is 55 MPH, but that doesn’t mean you can always drive at that rate. The actual rate at which you can drive is also determined by oversubscription and congestion. Traffic engineers don’t build freeways to handle peak demand because we couldn’t afford it.

      Similarly, broadband engineers don’t build networks that guarantee peak capacity is always available because consumers wouldn’t pay the price.
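
      As a rough sketch of that relationship, in Python (the capacities and user counts are illustrative only, and the equal-share model is a simplification):

      ```python
      def effective_rate_mbps(access_mbps, shared_mbps, active_users):
          """Per-user throughput when a shared link is the bottleneck.

          You never exceed your own access rate; under contention you get
          roughly an equal share of the shared capacity (simplified model).
          """
          if active_users == 0:
              return access_mbps
          return min(access_mbps, shared_mbps / active_users)

      # A 5 Mbps tier on a 100 Mbps shared link:
      print(effective_rate_mbps(5.0, 100.0, 10))   # 5.0 -- off-peak, full advertised rate
      print(effective_rate_mbps(5.0, 100.0, 100))  # 1.0 -- evening peak, well below the "up to" speed
      ```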

  6. It’s true, there’s no way you’d get away with it in other niches. Imagine if you went to fill your car up and they said we can fill it up to 15 gallons, but it might actually be only 5 or 10 gallons; we can’t tell you which, but you might be able to figure it out when your car slows down and stops.

    1. I think a better analogy is if you went to the gas station and some of the pumps were out of service and the rest were all in use. The capacity of the station is subject to operational considerations and also takes time and capital to increase. Your access to it is governed by how many other people also want service when you do.

      1. Or the companies are suggesting to their customers that the size of their gas station is larger than it actually is, like it can accommodate 50 people when in reality it only has 4 pumps.

  7. I couldn’t agree more. Oversubscription is a needed part of the economic equation for ISPs. This is reality.

    The recent renewed discussion over Net Neutrality has only gotten more ignorant legislators talking about things they don’t understand.

    For a primer on recent Net Neutrality legislation as well as the issues involved take a gander at something I wrote a couple of days ago:
    http://metafarce.com/index.php?id=24

  8. I live in Japan; not only is there no overbooking of hotels or aeroplanes, but I have fibre to the (albeit tiny) apartment for $40 per month – 21 Mbps with no caps or limits…

  9. Steve Steiner Sunday, August 9, 2009

    The ‘averages’ here are wildly misleading. One problem is that this suffers from the “average income in Bill Gates’s neighborhood” problem. What is the median bandwidth? What is the median total price? Then what is the median price per bit? You don’t get that last one by dividing the first by the second; you get it by determining every subscriber’s price per bit and picking the one in the middle. When those numbers appear, then maybe people can take your argument seriously. Otherwise most people merely need to look at the bill and the service to know it’s really a bad deal.
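
    For example, with made-up subscriber numbers (a Python sketch; only the method matters, not the data):

    ```python
    from statistics import median

    # Hypothetical (price $/month, speed Mbps) pairs -- illustrative only.
    subscribers = [(30, 1.0), (35, 8.0), (50, 3.0), (55, 20.0), (80, 6.0)]
    prices = [p for p, _ in subscribers]
    speeds = [s for _, s in subscribers]

    # Wrong: divide the median price by the median speed.
    naive = median(prices) / (median(speeds) * 1e6) * 100

    # Right: compute each subscriber's price per bit, then take the median.
    correct = median([p / (s * 1e6) * 100 for p, s in subscribers])

    print(f"median(price)/median(speed): {naive:.5f} cents per bit per month")    # 0.00083
    print(f"median(price per bit):       {correct:.5f} cents per bit per month")  # 0.00133
    ```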

  10. Unfortunately this article muddles things just as badly as the politicians it criticizes. These are the important elements that need considering:

    1. The speed of the line from the end-user to the local aggregation point (switch, cable head-end). DSL and wireless technologies are particularly crappy here, as the speed the end-user can get is a function of the distance from the DSLAM or the antenna. Unfortunately there are still telcos that sell “up to 8Mbit/s” or “up to 20Mbit/s” subscriptions that can only attain 4-8Mbit/s because of distance limitations. The really nasty ones try to upsell the customer to a top-tier 20Mbit/s line where only 4Mbit/s is achievable, so a 4Mbit/s subscription would have sufficed.
    2. The speeds that can be attained between 6pm and 10pm versus those that can be attained between 2am and 6am. These can be limited by oversubscription on:
    a. the local segment (cable and wireless),
    b. the ISP’s WAN, and
    c. the link from the ISP to the rest of the world.

    Oversubscription on the local segment is a fact of life on cable and wireless networks: it is a shared medium. This should be clear to end-users. It is not bad, it is simply how these networks work. Its effect is that between 6pm and 10pm the speeds can be erratic; between 2am and 6am the listed speeds can quite often be attained.

    Oversubscription on the WAN is part of the problem you’re describing above. Oversubscription on the WAN is not an economic necessity; it is a result of crappy network planning. It can go completely unnoticed by the end-user if the network operator builds enough bandwidth into its WAN. If the network owner and the ISP are the same entity, this shouldn’t be a problem. With traffic growing 50% per year, proper network management dictates an oversupply of bandwidth anyway. Statistics from the AMS-IX in Amsterdam show that peak traffic is about 50% higher than average and three times higher than the bottom. So next year your average is the same as today’s peak, and in about 2.7 years even the bottom is at today’s peak.

    WAN bandwidth is a problem in some countries, like the UK and the USA, where the costs of backhaul to and from smaller communities are extremely high because of regulatory and/or competitive problems. Mind you, technical and cost limitations are often not the issue here, as the cost of installing faster equipment (DWDM, 10Gbit/s Ethernet, etc.) is often not prohibitively high. This also means that you don’t have to build your network so that everyone can achieve maximum speed at the exact same moment; it just requires careful planning. Like the highway system, where we don’t expect all car drivers in the US to show up at the Brooklyn Bridge at the same moment (or to start driving at the same moment at all, regardless of location).
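
    Rough arithmetic behind those growth numbers, as a Python sketch (using the 50% annual growth and the AMS-IX peak/average/bottom ratios mentioned above):

    ```python
    from math import log

    growth = 1.5            # traffic grows ~50% per year
    peak_over_avg = 1.5     # AMS-IX: peak is ~50% above the average
    peak_over_bottom = 3.0  # AMS-IX: peak is ~3x the bottom

    # Years until today's average reaches today's peak: growth**t = peak_over_avg
    years_avg = log(peak_over_avg) / log(growth)        # 1.0
    # Years until today's bottom reaches today's peak: growth**t = peak_over_bottom
    years_bottom = log(peak_over_bottom) / log(growth)  # ~2.7

    print(f"Average reaches today's peak in {years_avg:.1f} year(s)")
    print(f"Bottom reaches today's peak in {years_bottom:.1f} years")
    ```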

    The costs of traffic from the ISP to the rest of the world are governed by the economic laws of peering and transit. For an explanation see my article on Ars Technica. Whether enough is available to the end-user depends upon how easy it is for an ISP to get peerings with the most important networks (Google, Microsoft, Yahoo, Akamai, etc.) and on the local cost of transit. Many developing nations find that their biggest problems lie here:
    – The national incumbent monopolizes transit traffic and charges outrageous amounts for it.
    – No local internet exchanges to keep local traffic local.
    – No possibility of local peerings with Google, Microsoft, Yahoo, etc., meaning that the transit link gets hit harder.
    Amsterdam, London and New York are places with low costs for transit ($4/Mbit/s/month) and many peering opportunities, so any network operating there should be able to get enough capacity for its end-users. Dave Farber once mentioned that traffic costs were only between 1% and 5% of a subscription.
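
    A quick sanity check on that last figure (Python; the per-subscriber average rate is an assumption made purely for illustration, as is the $40 retail price):

    ```python
    transit_per_mbps = 4.0   # $/Mbit/s/month in a well-connected hub
    subscription = 40.0      # assumed retail price, $/month

    # Assumed sustained average rates per subscriber, in Mbit/s.
    for avg_mbps in (0.1, 0.25, 0.5):
        transit_cost = avg_mbps * transit_per_mbps
        share = transit_cost / subscription * 100
        print(f"{avg_mbps:.2f} Mbit/s average -> ${transit_cost:.2f}/month in transit, "
              f"{share:.1f}% of the subscription")
    ```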

    So, to conclude:
    – Politicians are right to complain when listed speeds on the local loop cannot be attained because of distance problems. Providers of DSL and wireless should be put in the doghouse for this. They should inform their customers properly of what speeds can really be achieved.
    – Cable networks and wireless networks could be required to publish the mean and median speeds users can attain between 6pm and 10pm.
    – Problems on the WAN and on the interconnect to the rest of the world are either a result of underinvestment in backhaul or of regulatory and competition problems. If that is the case, the ISP should inform its customers of the situation, explain why this is so, and show how it distributes a scarce resource among all its users. A good example is the Plusnet DSL network in the UK, which is very clear about how it prioritizes network traffic between different classes of customers.
