
Summary:

An independent task force that provides recommendations on broadband policy to the FCC has made its first eight recommendations, including one that relates to the FCC's recent questions about whether megabits per second is a good metric by which to judge broadband.


An independent task force that provides recommendations on broadband policy to the FCC made its first eight recommendations, including one that suggests throughput speeds (currently measured in megabits per second) are not a good metric on which to judge broadband. The FCC has also recently asked questions on this issue, which means the industry could be preparing to shift the way it sells and advertises broadband.

The debate over how to measure broadband is an important one, because while Mbps isn't necessarily a consumer-friendly standard that indicates how fast an email is delivered versus how well a Netflix movie plays, it is a neutral, technical metric that some ISPs may be keen to obliterate as they push for broadband plans tailored to specific applications. The FCC, as it chooses to address this issue, will have to walk a line between consumers' interest in more understandable data about their web connections and potentially anti-consumer efforts to change the way the industry charges for broadband in order to discriminate against certain applications.

The current metric measures how many bits can travel over a given connection in a second. Because all files transferred via the web are made of bits, it’s a useful piece of information, but alone, it doesn’t help consumers much. It’s like knowing how many miles per hour a car can go without knowing the speed limit, how far away a location is and what conditions the roads are in.
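To make that concrete, here is a minimal sketch, my own illustration rather than anything from the FCC or the task force, that converts an advertised Mbps figure into a best-case download time for a few file sizes. The 30 Mbps plan speed and the file sizes are made-up examples.

```python
# A minimal sketch (not from the article): convert an advertised throughput
# rating into a best-case download time for a few common file sizes.
# The 30 Mbps plan speed and the file sizes below are made-up examples.

def download_seconds(file_size_mb: float, throughput_mbps: float) -> float:
    """Best-case transfer time: megabytes -> megabits, divided by link speed."""
    return (file_size_mb * 8) / throughput_mbps

if __name__ == "__main__":
    plan_mbps = 30
    for label, size_mb in [("email attachment", 5), ("music album", 100), ("HD movie", 4000)]:
        secs = download_seconds(size_mb, plan_mbps)
        print(f"{label:>16}: {size_mb:>5} MB -> ~{secs:,.0f} s at {plan_mbps} Mbps (ideal conditions)")
```

Real-world times are almost always longer than these figures, because congestion, latency, Wi-Fi and server limits all intervene, which is exactly why the raw Mbps number is an incomplete guide for consumers.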

This is why, two weeks ago, the FCC opened a public inquiry on broadband speeds, something we've touched on before. However, the task force recommendation issued Monday went quite a bit further than the consumer angle alone, asking the FCC to consider issues such as reliability, latency and other quality measures. The task force also looked ahead to where networking is heading, with plans to eventually track how multimodal a provider's network is. Given that it's connectivity that matters, and not necessarily the method by which you connect, this more holistic point of view makes sense.
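For readers who want a feel for what a latency-style measurement looks like, here is a rough, illustrative probe of my own. It simply times TCP connection setup to an arbitrary example host, which is not how the FCC or the TAC would necessarily measure quality, but it shows the kind of data such a metric could draw on.

```python
# A rough, illustrative latency probe (my sketch, not an FCC or TAC tool).
# It times TCP connection setup to a host, which approximates round-trip
# latency without needing ICMP privileges. The host and port are arbitrary.

import socket
import statistics
import time

def connect_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time (in milliseconds) to complete a TCP handshake with host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [connect_rtt_ms("example.com") for _ in range(5)]
    print(f"median RTT: {statistics.median(samples):.1f} ms, "
          f"jitter (stdev): {statistics.stdev(samples):.1f} ms")
```

The spread of those samples matters as much as the median: a connection with low but wildly variable latency can feel worse for video calls or gaming than a steadier, slightly slower one.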

The task force recommends:

New Metrics to Measure Broadband Network Quality. The TAC believes that, for some usage models, developing metrics beyond throughput speed to measure the quality of Internet Protocol (IP) broadband networks is important for helping the IP ecosystem flourish by enabling “extended” quality standards that can support the subset of applications that require not only fast, but precise, timely and reliable broadband networks. Simply measuring broadband networks by throughput speed does not provide a full picture nor set sufficient performance parameters to support uses with “extended” quality requirements such as healthcare monitoring, emergency services, alarms, etc. Although network services that meet such extended criteria may not be offered by all service providers, or included in all service plans, it would be beneficial to have common metrics for them.

Additionally, in transitioning to IP-based networks the TAC will be identifying how reliability can be characterized in a multi-modal environment where reliability is provided by having many alternate paths, means and/or modes of communications. The FCC should initiate the steps necessary for determining how this aspect of the transition will impact the basic architecture of emergency services.
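To put the reliability idea in concrete terms, here is a back-of-the-envelope sketch, my own illustration rather than anything from the TAC report, of how reliability could be expressed as availability: the share of time a connection is up, and the weekly downtime a given target implies.

```python
# A back-of-the-envelope sketch (my own illustration, not the TAC's
# methodology) of reliability expressed as availability: the share of time a
# connection is up, and the weekly downtime a given target allows.

HOURS_PER_WEEK = 24 * 7

def availability(downtime_hours_per_week: float) -> float:
    """Fraction of the week the connection is up."""
    return 1 - downtime_hours_per_week / HOURS_PER_WEEK

def allowed_downtime_minutes_per_week(target: float) -> float:
    """Weekly downtime budget implied by an availability target (e.g. 0.9999)."""
    return (1 - target) * HOURS_PER_WEEK * 60

if __name__ == "__main__":
    print(f"two one-hour outages per week -> {availability(2):.3%} availability")
    print(f"a 99.99% target allows about "
          f"{allowed_downtime_minutes_per_week(0.9999):.1f} minutes of downtime per week")
```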

Since the FCC doesn't have to listen to the 45 private citizens who make up the Technical Advisory Council, their recommendations (read them all here) have no direct bearing on policy. We'll have to see how seriously the FCC takes this and other recommendations. But for our readers, I'm interested in what metrics should matter when it comes to consumer broadband. Should those metrics be different for different types of networks? Different customers? I — and the FCC — want to know.

Comments:

  1. Reliability would seem like a good stat to measure. If you're getting 30 Mbps but your connection is down three times a week for an hour, you might be happier with 3 Mbps and 99.99 percent availability. Of course, that gets into commercial broadband services with SLAs, etc.
  2. The only meaningful performance metrics are throughput and latency, so the issue is where you put the probes. The Internet’s an end-to-end system, but your typical Speedtest.net test only measures throughput and latency to the nearest IX.

    Where else should the FCC measure performance?

