
There’s something rotten in the state of online video streaming, and the data is starting to emerge


If you’ve been having trouble with your Netflix streams lately, or, like David Raphael, director of engineering for a network security company in Texas, you’re struggling with what appears to be throttled bandwidth on Amazon Web Services, you’re not alone.

It’s an issue I’ve been reporting on for weeks, trying to discover the reasons behind what appears to be an extreme drop in broadband throughput for select U.S. internet service providers during prime time. It’s an issue that is complicated and shrouded in secrecy, but as consumer complaints show, it’s becoming increasingly important to the way video is delivered over the internet.

The problem is peering, or how the networks owned and operated by your ISP connect with networks owned and operated by content providers such as Amazon or Netflix, as well as with transit providers and content delivery networks. Peering disputes have been occurring for years, but they are getting more attention as the stakes in delivering online video rise. The challenge for regulators and consumers is that the world of peering is insular, and understanding the deals that companies have worked out in the past, or are trying to impose on the industry today, is next to impossible.

Which is why we need more data. And it’s possible that the Federal Communications Commission has that data, or at least the beginnings of it. The FCC collects data for a periodic Measuring Broadband America report, most recently issued in February 2013. In that report the FCC said it would look at data from broadband providers during September 2013 and issue a subsequent report that same year. That hasn’t happened, but the agency is preparing one, likely for late spring. The report measures how fast actual U.S. broadband speeds are relative to advertised speeds. While the initial report, published in 2011, showed that some ISPs were delivering subpar speeds versus their advertised speeds, ISPs have since improved their delivery and their FCC rankings. As a result, the report’s goals have shifted toward measuring mobile broadband and even data caps.

But the FCC’s next report is likely to contain a hidden trove of data that paints a damning picture of certain ISPs and their real-world broadband throughput. The data is provided in part by Measurement Lab (M-Lab), a consortium of organizations including Internet2, Google, Georgia Tech, Princeton and the Internet Systems Consortium. M-Lab, which gathers broadband performance data and distributes that data to the FCC, has uncovered significant slowdowns in throughput on Comcast, Time Warner Cable and AT&T. Such slowdowns could be indicative of deliberate actions taken at interconnection points by ISPs.

[Interactive chart: median download throughput for the listed ISPs, from M-Lab data]

When contacted prior to publishing this story, AT&T didn’t respond to my request for comment, and both Time Warner Cable and Comcast declined to comment. I had originally asked about data from Verizon and CenturyLink as well, but M-Lab said those companies’ data was more difficult to map.

So what are we looking at in the above chart? It shows the median broadband throughput speeds at the listed ISPs, and as you can see, certain providers have seen a noticeable decline in throughput. Measurement Lab was created in 2008 in the wake of the discovery that Comcast was blocking BitTorrent packets. Vint Cerf, who is credited as one of the creators of the internet, and Sascha Meinrath of the Open Technology Institute decided to help develop a broadband measurement testing platform that took into account the performance an end user of an actual web service like Google or Netflix might experience.

The idea was to capture data on traffic management practices by ISPs and to test against servers that are not hosted by the ISP. The consortium gives its data to the FCC as part of the agency’s Measuring Broadband America report, and provides the data under an open source license to anyone who asks for it.

The FCC also uses data from SamKnows, a U.K. firm that provides routers to customers around the country and tracks their broadband speeds, to produce its report. SamKnows did not respond to requests for comment on this story, and the FCC did not respond to my questions about the M-Lab data. So right now it’s an open question whether the upcoming Measuring Broadband America report will have M-Lab’s data incorporated into the overall results, or whether, because of the terms under which the FCC gets the M-Lab data, the agency will merely release the data without validating it.

Dueling methodologies

Ben Scott, a senior advisor to the Open Technology Institute at the New America Foundation who is working on the M-Lab data, said he and researchers at M-Lab are exploring new ways to test the data to see if they can “give more clarity about the cause or causes” of the slowdown.

While it does that, M-Lab will also have to address why its data is so different from the existing FCC data (a source at the Open Technology Institute explained that the FCC says the SamKnows data is not showing the same trends), or even from data available from Ookla, which runs the popular Speedtest.net broadband tests. Checks with other companies that monitor broadband networks also don’t show these trends. For contrast, here’s what Ookla shows for Comcast’s speeds over the same time period as the M-Lab data.

[Chart: Ookla-reported Comcast speeds over the same period as the M-Lab data]
Scott said that the goal behind M-Lab’s tests is to replicate what an average user experiences. That means measuring results not just against a carefully tuned server designed for offering bandwidth tests, but across some of the many and varied hops that a packet might take in getting from Point A to Point B. Thus, the M-Lab tests include data on throttling, latency and over 100 other variables that influence performance.
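To make the distinction concrete, here is a minimal sketch of the kind of single-number HTTP throughput check most speed tests reduce to. The function names and the choice to boil the result down to one Mbps figure are my own illustration, not M-Lab's protocol; M-Lab's actual tests record latency, retransmissions and many other TCP-level variables along the full path to a server the ISP doesn't host.

```python
import time
import urllib.request


def to_mbps(total_bytes, elapsed_seconds):
    """Convert a byte count over a duration into megabits per second."""
    return (total_bytes * 8) / (elapsed_seconds * 1_000_000)


def measure_http_throughput(url, chunk_size=65536):
    """Download `url` and report throughput in Mbps.

    A deliberately naive, single-variable measurement for illustration
    only: it says nothing about where along the path (server, transit,
    interconnect, last mile) any slowdown actually occurs.
    """
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return to_mbps(total, elapsed)
```

The point of M-Lab's design is that a number like this is only meaningful if the server sits outside the ISP's network, so the measurement is forced to cross the interconnection points in dispute.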

The servers that act as the end points for the M-Lab tests are in a variety of places, such as cloud providers, universities and research institutions, and may connect to the end user’s ISP via any number of different transit or CDN providers. For example, Level 3, Cogent, XO, Voxel, Tata and others own some of the transit networks that M-Lab’s tests traverse. Some of these companies, such as Cogent, have had well-publicized peering disputes that affected traffic on their networks.

It’s at those transit and CDN providers where the packets make those different hops, and that’s where Scott said he and his researchers are focusing.

Ookla, the company behind Speedtest.net, runs probably the most popular speed test out there, but it has a few weak points. When you run a speed test, the app sends a batch of packets to the closest server, which can be hosted at a local ISP or at a data center where interconnection points are common. There are several ways the owner of the testing server can “tune” the test so it delivers maximum speeds. From the Ookla wiki:

The Speedtest is a true measurement of HTTP throughput between the web server and the client. Packet size and all other TCP parameters are controlled by the server itself, so there is a certain amount of tuning which can be done to ensure maximum throughput.

Ookla also eliminates the fastest 10 percent and slowest 30 percent of the results it receives before averaging the rest to get a sense of the reported throughput. Critics say the ability to tune servers and use ISP-hosted servers skews the results.
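That filtering step is easy to state precisely. A sketch of the trimmed average as described above; how Ookla rounds the 10 percent and 30 percent cut-offs is not documented, so truncating toward zero here is an assumption for illustration:

```python
def ookla_style_average(throughput_samples):
    """Average throughput samples after dropping the fastest 10 percent
    and the slowest 30 percent, per the description quoted above.

    Rounding of the cut-offs is an assumption (int() truncation);
    Ookla does not publish that detail.
    """
    ordered = sorted(throughput_samples)   # slowest first
    n = len(ordered)
    drop_slowest = int(n * 0.30)
    drop_fastest = int(n * 0.10)
    kept = ordered[drop_slowest:n - drop_fastest]
    return sum(kept) / len(kept)
```

Note that trimming the slow tail three times harder than the fast tail biases the reported figure upward, which is one concrete reason critics argue the methodology flatters ISPs.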

The consumer impact is growing, or at least the complaints are

What might look like an esoteric debate over the best way to measure broadband speeds is hiding a real issue for America’s broadband networks. Sources at large content providers believe the M-Lab data shows how ISPs are interfering with the flow of traffic as it reaches their last-mile networks.

So you might get something that looks like this — as I did on Saturday night while watching a show on Amazon (I had a similar experience while watching a Hulu stream the evening before).

Time Warner Cable, my ISP, is investigating why my Blu-ray player was tracking at 1.9 Mbps when an Ookla test showed 28 Mbps down.

While I was seeing my episode of The Good Wife falter at what appeared to be 1.9 Mbps, I was able to measure connection speeds of 28 Mbps to my house using a test from Ookla. This is exactly the dichotomy that the M-Lab data is showing, and my example is not an isolated one; Comcast users have been complaining for months.

During the summer the CEO of Cogent accused Verizon of throttling traffic Cogent was delivering onto the Verizon network because Cogent wasn’t paying for interconnection. Yesterday David Raphael, the Texas engineer mentioned above, put up a blog post accusing Verizon of violating network neutrality because it appears to have admitted to throttling Amazon AWS traffic.

The real fight is over a business model for the internet

While peering disputes are not strictly a network neutrality issue, because they are not governed by the recently struck-down Open Internet Order, this is an issue of competition and of whether the last-mile ISPs are behaving like monopolies.

Wednesday’s blog post from Raphael documents a Verizon technician apparently admitting that Verizon is throttling Amazon traffic. That might be a mistaken admission by a tech (as Verizon said in a statement) but the post does a credible job of laying out exactly what many consumers are experiencing and providing traceroute documentation.

Verizon’s statement on the post emphasized that it treats all traffic equally before noting that a variety of factors could influence the actual application performance including, “that site’s servers, the way the traffic is routed over the Internet, and other considerations.” For details on the ways the application provider can fail users, Comcast’s head of network engineering provides a much more in-depth post in response to user complaints of poor quality Netflix streams.

ISPs are correct to point out where their control begins and ends. Decisions about server capacity, whether to buy content delivery services and which transit providers to use all affect the ability of content companies to deliver internet content to your TV or computer. Anyone who has tried to visit a smaller blog after a post or photo has gone viral has seen those limits in action; those 503 errors are the result of insufficient server capacity.

But pointing to Amazon, Netflix, Hulu or other internet giants and assuming they aren’t dedicating the resources to serve their customers is a hard sell. In fact, the pressure to build out that infrastructure may actually be behind some of the escalation in user complaints.

Industry watchers who count both ISPs and content companies as customers say that the decision by Netflix to create its own CDN last summer has prompted ISPs to get more aggressive in their peering negotiations, which has led to the consumer complaints. That aggression may come from not wanting to give Netflix — which increasingly competes with many ISPs’ own pay TV services — a “free ride” into the network, or it may be a grab for incremental revenue from a company that ISPs view as making bank off their pipes. Meanwhile, just this week rumors surfaced that Apple is building its own CDN business.

Colbert can’t rise above a poor connection. I pay both Hulu and Time Warner Cable, so why is there a problem?

What’s happening is that as traffic on the web has consolidated into a few large players, those players have become both a threat to ISPs’ existing video businesses and a potential source of revenue for the ISPs that control access to end consumers. As those players build out their infrastructure, the ISPs are halting them at the edge of their networks with refusals to peer, or offers to peer only for pay. The result of that “negotiation” between the two sides can be a slowdown in service, as certain CDNs or transit providers are unable to peer directly with an ISP without paying up.

As frustration mounts, intervention seems far away

In conversations with sources at ISPs who are uncomfortable with or prohibited from speaking on the record, the feeling is that content providers need to help pay for the upgrades to the last-mile network that the rise in overall traffic is causing. There is also frustration that Netflix and others are somehow “getting around paying for transit or CDN services” by building their own systems. ISPs say they don’t want to have to host half a dozen caching servers for the big content providers, with the prospect of more coming as new services grow, citing power and space constraints.

All vehemently deny throttling traffic, while pointing out that certain transit providers such as Cogent (every ISP will use Cogent as a scapegoat) are known bad actors that won’t pay to peer directly with ISPs. Unfortunately, ISPs gloss over the real debate: whether transit providers, content companies and CDNs should have to pay to peer, that is, pay for the right to deliver the traffic an ISP’s users are demanding, given that the end user has already paid the ISP to deliver the content the user asked for.

That is the heart of the debate, and issues such as the lack of broadband competition at the last mile, and the possibility that ISPs with their own pay TV businesses have an interest in blocking competing TV services, just add more complexity. The challenge is proving that such slowdowns are happening, showing where they are happening, and then having a debate about what should be done. The data from M-Lab is a start, and if the consortium can refine it to deliver proof of ISP wrongdoing, the FCC should take that into consideration.

So when the Measuring Broadband America report eventually comes out, a lot of people will be looking for the M-Lab data. Right now, I and other consumers are looking for a conversation about broadband quality that so far the agency isn’t having.

38 Responses to “There’s something rotten in the state of online video streaming, and the data is starting to emerge”

  2. It was already pointed out that the issue is insufficient peering capacity. I am on AT&T UVerse. Netflix does not perform well in the evenings and traces for Netflix streams come through Level 3 and Cogent.

    Who is responsible for capacity from arbitrary content providers to the end user? Is it the content provider’s transit ISP or the broadband ISP?

    Netflix buys the cheapest possible internet access they can. This past year it appears Cogent has gotten a lot of the bandwidth augments Netflix bought. But Cogent does not have enough capacity to the tier 1 ISPs.

    Non-tier 1 ISPs like Cablevision buy their backbone Internet transit from other providers. For them it makes sense to use Netflix Open Connect because Cablevision has to pay for the bandwidth from their transit ISP. With Netflix estimated as 30% of all Internet traffic Cablevision probably saves money with Open Connect off loading from the transit ISP (which is not true for other countries – Netflix only recently started their services in other countries outside US and Canada and the number of subs is relatively small).

    Netflix transit providers like Cogent and Level 3 (and others) have negotiated cheaper prices for bandwidth than Netflix could get from the broadband tier 1 ISP (AT&T, Verizon, TWC, Comcast). But then Cogent and Level 3 don’t have the capacity into those Tier 1s under their contracts with those tier 1s. This has been argued in news stories.

    This is a business negotiation between Netflix and their transit ISPs and the broadband ISPs. Netflix does not want to pay a ton of money to put their content on the internet. They already have to pay a ton for the content. They would love to be in the cable TV model where their content becomes so important to the broadband customers that the broadband ISPs pay the costs to get the content onto the Internet. They have publicly asked their customers to pressure their ISPs to join Open Connect.

    The transit ISPs for Netflix have the issue of selling cheap bulk Internet access and not having interconnect contracts to carry the additional load.

    As broadband customers – we don’t care how the content arrives to us.

    Business forces will sort this out.

    Not unlike the public fights Cablevision has had with the Yankees and MSG or TWC had with Fox. Those were all business negotiations.

    I am guessing that Cogent is unable to afford the additional peering capacity that is beyond their existing settlement free peering contracts. They had hoped to publicly pressure the tier 1’s into different contracts.

    But, just recently Comcast has negotiated direct connections for Internet access with Netflix. And news sites report AT&T and Verizon also negotiating the same. My guess is that Netflix did not like their deteriorating service with the tier 1 broadband providers and decided to get additional capacity directly. And the broadband providers don’t like the customer complaints and so are probably providing better prices. For Netflix it may not be as cheap as Cogent but is probably better than they could get previously from the tier 1s.

    And these market forces may hurt the growth prospects of pure transit ISPs like Cogent and Level 3. It is like the global trade – where one country tries to dump goods below market. Cogent sold Internet for cheap to Netflix hoping they could just cross the street/exchange point and dump the traffic on the tier 1 for nothing. Maybe Cogent will be happy to get rid of Netflix.

    Meanwhile, all the other small content providers don’t have the power and demand of Netflix. They get the same ISP or content hosting prices everyone else gets and suffer whatever peering capacity problems are out there.

    One bright spot: if Netflix negotiates direct connects with the tier 1s, it will offload a ton of bandwidth from the existing peering and improve performance for the content still sharing those congested connections.

    I would have to guess that a streaming service like Redbox from Verizon probably performs well relative to Netflix since it would be coming in the opposite direction of congestion into the other ISPs (Verizon being a tier 1 with most traffic currently coming in rather than out).

  3. It seems clear to me that the government should either break up Comcast and Verizon like they did with AT&T in the 1980s or else take direct control of these pipes and operate it as a public monopoly, somewhat like the interstate highway system.

    Without stretching the analogy too far, the present system is like having a private, completely unaccountable corporation arbitrarily restrict the types of vehicles permitted on the road and place arbitrary restrictions on their speeds, while charging expensive tolls to the operators of every vehicle except the ones it owns and operates itself, which can do whatever they want regardless of whether they are carrying any cargo of value to consumers.

  4. One question I have that no one seems able to answer:

    Why aren’t other countries having the same issues and problems with video?

    I have heard of no other country complaining that their networks can’t handle all the video, that people need to pay more, etc.

  5. I have had Netflix ever since it came out and never had any issue streaming video on either Verizon DSL or Comcast, since day one. Just over a year ago I upgraded to Verizon FiOS. At first I had no issue at all. I could stream HD any day, any time, same with

    Soon after Netflix started streaming Super HD to everyone, my FiOS Netflix stream became useless. YouTube was not working either, although that has improved recently.

    Netflix has not worked too well for me for over 5 months; most of the time I get 235 kbps and nothing more.

    If, however, I VPN to my work network/office a few miles down the road, I get 3,000 kbps every time within 2 seconds, going over my work network, which is not Verizon.

    Netflix and YouTube are not the only services I have had problems with lately; you can add Dropbox to that list as well.

    My speed tests show 80 Mbps down and 30 Mbps up every time.

    I also tried connecting from my friend in England to my computer at home to download some photos. The download speed was no more than 500 kbps, even though I have a fast connection and so does my friend.

    Doing traceroutes to his computer didn’t show any problems, but traceroutes from his PC to my PC show this:

    1 15 ms 27 ms 11 ms Rysiek-PC []
    2 29 ms 14 ms 12 ms []
    3 32 ms 18 ms 18 ms []
    4 33 ms 72 ms 22 ms []
    5 88 ms 89 ms 89 ms []
    6 101 ms 107 ms 106 ms TenGigE0-0-1-0.GW8.NYC4.ALTER.NET []
    7 * * * Request timed out.
    8 191 ms 191 ms 194 ms []

    Running the same photo downloads from my work office I get around 20 Mbps, not 500 kbps.
    So it’s definitely not my home network or computer.

  6. Maruis T.

    Artificial business isn’t innovation. Creating demand by throttling video to stoke consumer outrage over slow online video performance, then offering a premium service to let consumers “pay to play,” is exactly how the internet is going to be stifled by big industry.

    But that is exactly the point, isn’t it? Since the Internet is too big for one group to control, a bunch of providers colluding to slow it down is probably appreciated by government, big content, and investment groups all at once.

    Of course, there’s probably some telecom jackarse who is willing to come on here and state I’m drinking Kool-Aid of a certain color and fashioning tin-foil hats when telecom providers all want to give consumers more choices and freedom, and that I know nothing to begin with. (Straw man is still a fallacy, but commenters don’t care about rules and logic.)

  7. Netflix and Amazon are not tiny babies when it comes to looking out for their own interests, and it should not be rocket science for them to gather impartial hard evidence of ISPs throttling their services and favoring other services, and then take appropriate action. It strains credulity that they would stand around waiting for the FCC to release data that might or might not have been collected.

  8. This is why we will all eventually either pay our ISPs by the GB or “pay by the GB, minus what some sponsor paid for.”

    Once the ISPs know they will get money from “someone” per GB – they won’t care who – they will be happy to peer. Better peering means happier customers, happier customers means more data flowing through their pipes, more data flowing through their pipes means more dollars flowing into their pockets.

    As long as each GB flowing through their pipes is either a net cost to them or at best a wash, they will have incentives to game the system in their favor or the favor of their affiliated non-Internet-based content-delivery companies *coughCableTVcough*.

    As for me, I’d rather pay my ISP Xcents/GB and have advertisers who want to sponsor content reimburse me directly.

  9. I work for one of the above companies and I can tell you we do not throttle or bandwidth-choke anything. The issue is that the amount of CDN traffic chokes out certain peering partners. When you actually read a cable modem customer agreement, for most companies it guarantees the bandwidth to the CMTS; after that it is best effort. I could write another article about this article, but here is a quick summary. CDNs will offer to host their equipment on a telecom’s network, but while they provide the cache server, the telecom company has to provide power and upkeep at an exorbitant cost in a colo, so most companies do not use this method. You also have peering companies like Cogent that offer crappy network service at a high rate of transit traffic, as sometimes they are the only route to a certain CDN, and sometimes companies will choose to run low on ports and have those peering ports choked rather than pay a ridiculous amount for more ports. There is more to it than the article above, just saying.

  10. They need to stop trying to turn the internet into another TV channel. The internet is great for reading, but now everything has to be video. Employers block video but allow text. Enough with the video. I stopped going to the NBC site when they changed it.

  11. OldMayfield

    “…not wanting to give Netflix — which increasingly competes with many ISPs’ own pay TV services — a “free ride”…”.

    I don’t know about you, but Netflix gets no “free ride” to my TV. I pay Time Warner almost $200/month for TV, phone and internet. That does not seem like a “free ride” to me.

  12. Omar Sayyed

    Don’t ISPs realize that it is in their best interest to allow better content streaming? It is in their best interest to focus on technologies that provide better service for the customers who use their service. The internet without content is nothing.

    The first company that realizes this and provides a better service, at the cost of lower margins, will earn customer loyalty, which makes for a better long-term business case.

  13. What’s the traceroute supposed to prove? The numbers look pretty similar except for the first hops to (the local gateway). The business connection has as much as 18 ms delay, which indicates a lot of LAN congestion. The largest single RTT was over an internal link in the Verizon core, 67.679 ms. That’s pathological, but it only happened once. Looks like buffer bloat to me.

    I question Raphael’s assumption that Netflix streams over AWS. Netflix mostly uses AWS for reformatting, and mostly streams from its own CDN. Given that the Netflix CDN is relatively new, it wouldn’t surprise me to learn that they’re having some teething problems with it.

    In any event, the data don’t support Raphael’s guess that Verizon is deliberately screwing Netflix because the demise of the FCC’s unlawful OIO says it can. That story is a little too pat.

    The M-Lab mess just goes to show you how hard it is to measure Internet performance in a meaningful way, which is one reason certain deep-pocketed sponsors have pulled out of the M-Lab consortium. It has less to do with illuminating than with stirring up mob sentiment. Does anyone care how fast you can reach a server on some academic network from the real world?

    This story is starting to look like that fabricated Topolski story about the only man in the world using a dark piracy net for legal purposes.

    I watched Star Trek the Motion Picture from Netflix over Comcast two nights ago in HD, and had no problems at all. Never saw the appeal of the bald robot woman, but what evs.

  14. I would love to see the physical plant separated out from all the rest, e.g. a separate company (not a division or any other handwaving) needs to own and maintain the last mile. This should break the siloing of services. Every provider pays for access and no one gets a free ride.

  15. I had some bad Netflix streaming problems on Southern California AT&T starting about three months ago. HD became impossible except at odd hours. I was streaming through an LG BD390 Blu Ray Player. I recently replaced that player with a Samsung. ALL streaming problems went away. Initial buffering now takes under ten seconds. Beautiful HD all the time! The look of Netflix was also completely different. My guess is that the old player had a limited subset of CDN pathways it could use.

  16. We’re on Time Warner Cable and have been watching Netflix for years. A little over a year ago, I became frustrated with the quality of Netflix we were receiving. We were using an older Roku that did not do real-time adaptive streaming, so every time we started a video it would begin in HD and inevitably stop, buffer at a much lower quality, and start again. To go back up to the higher quality the video needed to be restarted. But you never knew when the connection had improved, so I’d have to periodically restart the video! At the time, we had 15/1 internet, so I did a test and upgraded to 50/5 internet to see if that would help. It did absolutely nothing. When I called to lower back down, I explained to the rep that I would gladly pay more for internet if it actually bought me something, but if the numbers are just made up and have no connection to reality, why would I pay Time Warner Cable $30 more a month?

    We upgraded to a Docsis 3.0 modem, and that did stabilize the connection somewhat. Ultimately upgrading to a Roku 3 was the best choice, because it allowed the picture quality to fluctuate without stopping or becoming stuck at a lower level. A few months later, Netflix stopped sporadically degrading. Now we get Super HD 95% of the time during peak hours. Frankly, that’s all that really matters. I don’t think 7 Mbps is too much to ask for when we’re currently paying for 20.

    People complain about bandwidth caps, but I think the best way to improve broadband speed is usage based pricing. It creates a strong incentive for the ISP to offer better internet so that people use it more and pay more. Otherwise, ISPs are just interested in offering fake download numbers. They need to be incentivized to improve service.

    • Hear, hear on usage. Treat it like a utility, and if some people use many times more than I do with BitTorrent or whatever, let them see the rate per bit and decide if they want to keep using it. I’m tired of subsidizing the people who use oodles more than I do.

      • Jerry Leichter

        The ISP’s have been using the “evil bit torrent users” as an excuse to institute data caps – and then to move to “usage based” pricing, with their own or favored services excepted, for years.

        It’s difficult to know just how many BitTorrent users there actually are and what their impact is, since all the data comes from the ISPs, who use it for their own purposes. But let’s assume there are a whole bunch of people who download much more data a month than you do. How do they get there? If you look at the numbers, there’s only one way: by downloading for many, many hours a day, often around the clock. How do their downloads at 3AM affect your streaming at 9PM, or the ISP’s costs? Not at all. Your streaming only needs the network while you’re actually doing it, and the network has to be there all the time anyway. It’s not as if the ISP can shut down the power and roll up the lines to save money overnight!

        Meanwhile, at 9PM you’re entitled to the bandwidth you paid for for your streaming, and the BitTorrent user is equally entitled to the bandwidth *he* paid for for his downloading. If he potentially uses more of the 20 Mbps he paid for than you do, why is that your business? The ISP offered 20 Mbps, and you and he are equally entitled to it. Complain instead that they don’t offer you a discount for a 7 Mbps link, which may be all you need. And complain even more that the offer they are making is bogus: it’s “up to 20 Mbps,” which they can satisfy by delivering nothing at all. More fool you for accepting it, not that you have a choice, since the ISP is either an effective monopolist or one half of a duopoly.

  17. At the very least: if a content company (or CDN) is delivering data all the way to the local market of the customer, the customer’s last-mile ISP should accept that data and deliver it to the customer (at his/her contracted data rate) without congestion or a usage cap. If the last-mile ISP is not willing to do that, then what exactly is the customer purchasing from the last-mile ISP?

    • txpatriot

      But suppose the CDN is delivering a fire hose worth of data to a user who has subscribed to only a garden-hose-sized pipe. Are you blaming the last-mile ISP because the garden hose can’t deliver a fire hose worth of data?

      • This is a silly argument. A user subscribing to a garden hose isn’t asking for the fire hose of content that arrives at the ISP network edge. It is hundreds of paying ISP customers, each asking for a garden hose, who add up to that fire hose. That’s how networks work, and the ISP knows that and builds for it. If not, then yes, I am blaming my ISP, because I pay it $75 a month to deliver a garden hose. And if the garden hose fluctuates a bit because of oversubscription, that’s fine. But if my 55 Mbps connection dries up to a cocktail-straw-sized 1 Mbps, that’s not acceptable network architecture. Yet that’s the argument ISPs are making.

        • Jason Kennedy

          I think it’s pretty fair to say that the majority of the people complaining loudly about slow Netflix know how much bandwidth they’re supposed to have, and how much is necessary to run Netflix. It’s simple math after that, and it’s not adding up.

          Stacey hits the nail on the head: if you pay z bucks for x internet speed and you can consistently verify that you’re actually getting x-y due to some sort of throttling, that’s a problem. I’m not sure about any of you, but I certainly didn’t sign an agreement that said my ISP was going to throttle certain types of traffic.

          This needs more light on it.

          Also, simply changing your ISP isn’t a fair answer. In rural communities, there’s often a single ISP servicing the area, unless you want to use satellite.

          Companies need to give you what you’re paying for or come clean about what they’re doing and deal with those consequences.

          • Omar Sayyed

            @Jason, agreed. I don’t think the average reader here is at all unaware of what Mbps they’re paying for vs. what they’re actually getting. I’m lucky if I get 2 Mbps during peak hours even though I get charged for 5 Mbps.

            The issue really is the throttling of Wi-Fi. At my house we don’t have anything that is plugged in. I’m sure I get throttled.

  18. M-Lab doesn’t really measure broadband ISPs… What they are measuring is THEIR ISP.
    M-Lab primarily used Cogent, which has a long history of peering and performance problems, as its sales tactic is to sell at the lowest price while maintaining the lowest quality. Cogent oversells its capabilities and under-delivers on performance.

    Now add that Netflix recently moved to its own CDN and uses Cogent as a primary supplier, and you have what has actually changed and negatively impacted its service (not your broadband) and the M-Lab measurements.

    I’m surprised technical reporters don’t put together the obvious variables instead of jumping to conclusions about what the problems are and who is causing them.

    I’m also surprised Netflix uses them as one of their primary ISPs.

    I get great bandwidth, but crappy Netflix. Try another video service and compare.

    • I asked M-Labs about Cogent being their primary transit provider. They vehemently deny that and offer up a list of transit companies that the tests traverse. They include: Level3, Cogent, XO, Voxel, ISC, Telia, GRNET, HEanet, Tinet, Altibox, TOPIX, RTR, go6, REANNZ, AARNET, Vocus, Victoria University of Wellington, Tata, WIDE, National Chi Nan University, SEANET. I know at least two of those providers already pay for peering with companies, so they aren’t relying on Cogent to take down ISPs.

      I, however, am not surprised that an anonymous guest commenter whose IP address is from Verizon would wave the Cogent flag in an attempt to discredit a legitimate and real issue for consumers.

      • Stacey, ask for the specific M-Lab transit ISP being used to reach problem destinations (it’s not a long list). Then ask Netflix what path it is using for the same. Also ask Netflix when it started moving most of its traffic to these transit paths.

        The M-Lab data is interesting, but it doesn’t say that ISPs are making this happen.

          • The problem with data is when you use selective data and make assumptions to prove your point… like I assumed guest accounts were about anonymity and that administrative privileges were not used for gotcha posts.

            That said, the data proved to be inaccurate in both cases. I AM NOT a Verizon employee; the IP address that was investigated is a customer address.

            Like any data sources, you need to look at a balanced view. In this case the data shows a mix in performance. In most Internet applications performance is good. In some measurements it is bad.

            This suggests that the problems lie upstream, in the application’s selection of which transit ISP to use. The problems are TRANSIT SELECTIONS, not broadband ISPs. If it were Verizon, it would show across many services.

            P.S. Please let me know if anonymity is no longer part of GigaOM comments. Also, perhaps Netflix employees should be identified by IP address as well. They have a stake in framing the gotcha data to their advantage (hence the blame game).

        • I agree with you. The data I published doesn’t show throttling at all. That’s what the M-Lab researchers are trying to develop. All this data shows is that there is a marked slowdown in throughput at select ISPs. However, the people I talked to suspect that this data looks this way because of congestion at peering points. And there’s no denying that there is congestion at select transit providers because of peering negotiations and that the end result is problematic for the consumer.

  19. David Laubner

    Thank you, thank you, thank you.

    I have been having a debate with both Comcast and Netflix for months, with no resolution, about the cause of sometimes-poor service. Let’s face it: video is a bandwidth hog, and I realize that these services are still trying to adapt to a reality their networks were not meant to deal with. However, I would expect, and respect, a bit more openness and honesty from an ISP.