Cable guys to FCC: ISPs aren’t the bottleneck, Google is!

You would think that the National Cable and Telecommunications Association would be thrilled that cable performed well in the latest broadband quality report released today, and that it wouldn’t resort to attacking the content companies that make its broadband service so compelling to end customers. You would be wrong.

Even at a supposed high point for the cable guys, they just can’t let their beef with Google(s goog) and Netflix (s nflx) go. Today’s dig comes courtesy of this blog post, which lauds cable’s achievements in today’s FCC report, and then turns around to blame the web world for delivering slow-loading sites and services that can’t make use of the TOTALLY AWESOME speeds cable provides.

From the post:

With two successful tests of wireline broadband providers under its belt, it may be time for the Commission to turn its attention elsewhere. For example, as described in a recent article in the Boston Globe, slow speeds on content provider websites often prevent consumers from receiving the full benefits of the “last mile” broadband access service they have purchased. Consequently, to obtain a fuller picture of the performance consumers are experiencing, the Commission may want to solicit the participation of popular content and application providers, such as Netflix and YouTube, in developing a voluntary testing regime for application providers.

In other words, cable’s service is amazeballs but those lunks in Silicon Valley are gumming up the works, so the FCC should totally stroll over to Mountain View and track how well Google’s site loads. If you click over to the Boston Globe article, you’ll see that the cable guys are distorting the problem (surprise!) that the newspaper is discussing. The Globe’s story focuses on Verizon’s new 300 Mbps service and covers two issues. The first is that customers may not find value in faster speeds because there aren’t a lot of web services out there to take advantage of them, and the second is the idea that those speeds are irrelevant because data centers on the back end can’t ship content at 300 Mbps. From the Globe article:

The problem is that most of the Internet isn’t transmitting data fast enough to take advantage of such rapid broadband speeds, [Roger Entner, an Internet analyst for Recon Analytics LLC] said. If a server computer transmits an Internet video at, say, 20 million bits per second, having a 300-million-bits-a-second connection won’t make any difference. “The website you are connecting to is the bottleneck,” he said.

Hold on there, Entner! The idea that a 300 Mbps connection is pointless because Google isn’t pumping out YouTube streams at 300 Mbps is laughable. Then add NCTA’s idea that the people at the FCC should investigate the contents of Netflix’s data centers in order to ensure that the over-the-top streaming company is not somehow scamming customers (or interfering with Big Cable’s ability to make money selling faster pipes), and you have a straw man the size of the Empire State Building.

Between your computer and Google’s servers are a lot of steps.

A lot happens between Google’s servers and your router when you request a YouTube video. There are connections between the cores on the chip processing your request, connections between servers in the data center that look up the video you asked for, and then the possibility of multiple hops between different providers to get the packets that make up that video from Google to your screen (including that hop on your home wireless network, which may also be constrained). And the important thing to realize is that at every one of those points there are multiple providers who compete to deliver the fastest possible speeds while optimizing for cost and quality.
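The chain of hops described above is why Entner’s 20 Mbps example only tells part of the story: end-to-end throughput is set by the slowest link in the chain, wherever it sits. Here’s a minimal sketch of that idea; the per-hop capacities are hypothetical numbers chosen to mirror the Globe’s example (a 20 Mbps server egress behind a 300 Mbps last mile), not measurements of any real network.

```python
# A connection is only as fast as its slowest link. These per-hop
# capacities are illustrative assumptions, not real measurements.
hops_mbps = {
    "server egress": 20,
    "backbone transit": 10_000,
    "ISP last mile": 300,
    "home Wi-Fi": 54,
}

# End-to-end throughput is bounded by the minimum-capacity hop.
bottleneck = min(hops_mbps.values())
print(f"effective throughput: {bottleneck} Mbps")  # 20 Mbps
```

Swap in different numbers and the bottleneck moves: cap the last mile at 5 Mbps and suddenly the ISP, not the server, is the limiting hop.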

The primary point is that there isn’t a lot of competition in the last mile, where the packets hop onto a cable, DSL or fiber network. And that’s why the FCC needs to keep its eye on cable, DSL and fiber providers. Because the truth is that as long as ISPs cap their services and drag their feet when it comes to speed upgrades (Time Warner’s(s twc) transition to DOCSIS 3.0 was a long time coming in AT&T-dominated markets), some services, such as 4K video streams that require at least a 12 Mbps connection and can consume several gigabytes of data per movie, won’t launch. It’s hard to push the fast and fat apps before the broadband cart.
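The back-of-the-envelope math behind that 4K claim is worth spelling out, because it shows why caps bite: a sustained 12 Mbps stream adds up fast. The bitrate comes from the paragraph above; the two-hour runtime is an illustrative assumption.

```python
# Rough check of the article's numbers: a 12 Mbps stream sustained
# over a two-hour movie. (Runtime is an illustrative assumption.)
bitrate_mbps = 12
runtime_seconds = 2 * 60 * 60               # 7,200 s

megabits = bitrate_mbps * runtime_seconds   # 86,400 Mb
gigabytes = megabits / 8 / 1000             # bits -> bytes -> GB

print(f"{gigabytes:.1f} GB per movie")      # 10.8 GB per movie
```

Roughly 11 GB per movie, so a household on a 250 GB monthly cap burns through its allowance in a couple dozen films, before counting anything else on the connection.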

Don’t miss the big picture

So the big picture on the FCC’s broadband quality report has nothing to do with Google’s servers and everything to do with Google, Netflix and others trying to serve their customers in a market where broadband resources are constrained by caps or where operators refuse to invest in their networks. If you doubt me, look at Netflix begrudgingly lowering the quality of its streams in Canada or Google building out its own fiber network just to get people thinking about what apps a superfast gigabit network could enable.

Only in a market where their access to the end customers is interrupted by a monopoly would it make sense for Netflix to deliver a lower-quality product or for Google to spend billions working around that provider. So yeah, cable, congratulate yourselves on being better than DSL, but don’t try to get all high and mighty trashing the companies that make your product worth buying.

17 Responses to “Cable guys to FCC: ISPs aren’t the bottleneck, Google is!”

  1. Virtuous

    The ISPs try to pretend they have competition. If the Verizon – cable deal goes through there will be even less competition. Verizon and Comcast are essentially monopolies.

  2. Now that’s what I call in your face reporting. You got a 10 for 10 from me on this. So don’t listen to the “Oh noessss the SkyNet is falling” hater I see here. They are usually just that Orwellian Doublethink Trolls just doing what those AntiGOOG voices in their heads tells them to do!!!

  3. fgoodwin

    “Between your computer and Google’s servers are a lot of steps. A lot happens between Google’s servers and your router when you request a YouTube video.”

    As far as I’m concerned, those two statements are the crux of the entire issue, and why the FCC’s speed measurements are so pointless. When ISPs sell me access, it’s just that: ACCESS. By definition, they CANNOT ensure the speed or quality of my connection to an infinite number of end-points on the world wide web.

    If the FCC wants to truly test the ISP’s speed claims, they need to measure the speed of MY access connection (or yours) and NOTHING MORE because that is all the ISP is providing me.

    My ISP cannot be responsible for how quickly (or slowly) a Netflix server responds to my request, so what is the point in measuring that and holding ISPs accountable for that?

    It’s ridiculous.

  4. So, just for kicks, I ran the Verizon speedtest thingy from here (UK). It told me my download speed is 1.55Mbps. My actual connection speed is around 40Mbps,* which I’ve maxed out when downloading files from my (tiny and US-based) VPS. So who’s got the slow server now?

    (Yeah, I know this test proves nothing, but that’s kinda the point.)

    * Which costs, incidentally, about half what Verizon would charge for an equivalent connection. And the UK’s broadband provision is really not that great.

  5. Madlyb

    One piece missing from this article, and completely missing from the strawman constructed by the telcos, is that the modern, connected home is typically accessing multiple points of content at the same time. While I would love YouTube to stream at 300Mbps, it is much more important to me that I can conduct a business call over Skype while my kids are watching videos and playing online games, and that means the last mile needs to be a lot thicker than the size of a single content stream.

    • Thanks for introducing some sanity and insight into this discussion. The author has written a lot of good stuff that makes sense and is well thought out, but this piece was not in that category. How in the world can you argue that something is wrong with the logic behind the statement that you don’t need 300 Mbps to deliver 20 Mbps streams? How are the fact that web services don’t run at 300 Mbps and the fact that web data centers (the implementation of the web services) don’t run at 300 Mbps two different issues? Even though most readers aren’t smart enough to juggle two different concepts at once, there really are two things going on here and Ms. Higgenbotham is conflating the two. The world can’t consume any single application at hundreds of megabits because there aren’t any single services producing that or compelling applications identified that need that. I’ve spent lots of time working with ISPs who would love to identify such an application to drive demand and so far they’ve struck out. Simultaneously many of the services looking for single digit megabits to the end-station can’t get it because access networks are being massively oversubscribed (under-provisioned) by cheap/greedy ILEC/ISPs. The major thesis, that local access providers are (yet again) disingenuously deflecting scrutiny of their anti-consumer practices, is important to document, but the supporting arguments are confusing and ill-considered. Also lacking is exactly the kind of use case you provided that actually captures a real-world scenario where the content consumer might need massive aggregate bandwidth to the home that is not driven by any single service. Throw in typos and errors that suggest this was never proof-read and the result is a product that is not worthy of the author’s talent and reputation.

      • By their own admission, they “self-edit” the blog posts here, and nobody on the staff can validate them for accuracy because none of them have ever been educated or employed in any facet of networking. It’s just someone’s opinion, and yes, it’s full of false information and ill conceived conclusions. 

        If the writer is so smart on the subject and has the answers to the world’s network challenges, she should welcome testing end to end performance instead of spreading FUD and making ad hominem attacks against those suggesting it.

        The NCTA PR actually committed a major gaffe by naming YouTube, for a bunch of complicated reasons that would be way out of her league to understand. 

        If you seek medical or financial advice from bloggers who have no practical experience in medicine or finance, then you’re at home here trying to understand complex subjects like packet networks. 

      • DRL, I’m glad you brought that up. My intention wasn’t to conflate the two issues, but I didn’t address them in depth to avoid making the article an epic. Perhaps with another paragraph and some careful linking I could have gone deeper. However, I’m glad you brought it up in the comments. Although I could have done without the dig at my editing :)

  6. Charles

    Rock on! Stacy, you make a good argument. Talk about the pot calling the kettle black….Big Cable should be supporting Mountain View websites that make great content so Americans would continue to purchase increasingly speedy Internet service.

  7. Jack N Fran Farrell

    Typical telco nonsense. Verizon will never make a nickel on my tablet. It is WiFi only. If IKEA, Costco, or O’Hare Airport want to do business with my shoptomized tablet, they’d better support my Google Maps while I walk around their place of business.

  8. Kalan Petty

    .. and on top of all this, many Cable companies are actually trying to put limits on your bandwidth usage.

    Which, IMO, should not be allowed.