
Summary:

Video isn’t breaking the web; the way the web’s biggest players are trying to optimize their costs at the expense of the consumer experience is.

[Photo: peering equipment and carrier fiber cables at an Equinix facility. Photo: Jordan Novet]

The web can’t handle video, goes the common refrain. Comcast, for example, estimates that if people watched the television content they now watch on its pay TV service over the web instead, each home would consume 648 gigabytes per month. But we’re not nearing some technical tipping point where Netflix, YouTube, Hulu or even pay TV’s on-demand applications are about to break the web.
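
That 648-gigabyte figure is easy to sanity-check. A back-of-envelope sketch (the stream bitrate and viewing hours below are assumptions that happen to reproduce the number, not Comcast’s published inputs):

```python
# Back-of-envelope check of Comcast's 648 GB/month estimate.
# Assumed inputs (not from Comcast): a ~6 Mbps HD stream,
# 8 hours of viewing per day, a 30-day month.
stream_mbps = 6          # assumed average stream bitrate, megabits/second
hours_per_day = 8        # assumed household viewing time
days = 30

seconds = hours_per_day * 3600 * days             # 864,000 s of viewing
gigabytes = stream_mbps * 1e6 * seconds / 8 / 1e9
print(f"{gigabytes:.0f} GB per month")            # -> 648 GB
```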

No, it’s worse. We’re at the point where the web giants, ISPs, backbone providers and content companies are all making their own rational decisions about delivering video, each trying to keep its servers and network assets from sitting idle while also avoiding over-investment. As a result, consumers are stuck in the middle with no way to know what’s wrong or who’s at fault when their online video stream sucks. So whether it’s a peering battle between Verizon and Cogent that causes a poor Netflix experience or an inability to get a high-quality YouTube stream at lunchtime, the consumer experience can sometimes feel like an afterthought.

[Diagram: the path a packet travels from content provider to consumer]

The problem is that video delivery on the web is fragmented. A Netflix video starts out in Amazon’s cloud but might be delivered via Level 3, Akamai or Cogent to an ISP’s network, or it may be cached at an ISP’s data center or on a Netflix Open Connect box. Once the movie stream reaches the last mile, it must traverse the ISP’s network before hitting your home, where a flaky Wi-Fi network or merely a congested pipe to the house could cause further problems.
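
One way to see why that fragmentation hurts: even when every leg of the chain is individually reliable, the odds of a clean end-to-end stream are the product of all of them. A toy sketch, with hop names and availability figures that are illustrative assumptions only:

```python
# Toy model: end-to-end stream quality is the product of each leg's
# reliability. Hops and probabilities are illustrative assumptions.
delivery_chain = {
    "origin (e.g. cloud host)":   0.999,
    "transit/CDN (middle mile)":  0.99,
    "peering/interconnect port":  0.97,  # the congestion point at issue
    "ISP last mile":              0.98,
    "home Wi-Fi":                 0.95,
}

end_to_end = 1.0
for hop, availability in delivery_chain.items():
    end_to_end *= availability

print(f"chance of a clean stream: {end_to_end:.1%}")  # ~89%
```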

There are multiple points of failure, and a report out Monday from Sandvine finally makes them clear. Sandvine, which makes deep packet inspection gear for ISPs, has spent years tracking traffic and internet trends, but with this report it decided to tackle the opaque world of peering and interconnects. The goal is to show where the web is failing and why.

The why, as it turns out, is pretty easy: it’s business. Sometimes problems arise because a content company doesn’t have enough servers to meet the simultaneous demand for its content, something Sandvine says happens to YouTube during the lunchtime rush (see chart below).

[Chart: Sandvine data showing YouTube traffic during the lunchtime rush]
At other times, the problems are caused by congested ports where a middle-mile provider like Cogent or Level 3 meets an ISP’s network. The report tries to explain how the internet works and where those points of interconnection are, while also detailing some technical truths that illustrate how it’s not always the ISP behaving badly when it comes to network congestion.

For example, the report explains how companies like Netflix or Google can save on their content-serving costs by delivering their packets in big bursts. But that bursting behavior tends to drive the last-mile network to operate at its peak, causing problems for latency-sensitive traffic such as VoIP and driving up peak-driven network capital costs. From the report:

“Streaming servers are more efficient when they do less context-switching. This in turn incents a content-provider (either through their hosting provider or their CDN) to burst towards the user and then switch to another user. This ‘pulse-width-modulation’ bandwidth reduces context switches on the server, and causes no additional bandwidth to be served from the hosting facility. However, when the other end of the stream reaches the consumer, these bursts can cause considerable additional cost to the access provider.”
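
To make that ‘pulse-width-modulation’ point concrete, here is a minimal sketch, with all rates chosen purely for illustration, comparing a server that bursts a video chunk at line rate against one that paces it at the playback rate. The bytes delivered are identical; only the peak differs:

```python
# Illustrative burst vs. paced delivery of one video chunk.
# All rates are assumptions chosen for readability, not measurements.
CHUNK_SECONDS = 4          # one segment's worth of video
STREAM_MBPS = 5            # what playback needs on average
LINE_RATE_MBPS = 100       # what the server can push in a burst

chunk_megabits = CHUNK_SECONDS * STREAM_MBPS       # 20 Mb either way

# Bursting: blast the chunk at line rate, then go idle.
burst_seconds = chunk_megabits / LINE_RATE_MBPS    # 0.2 s on the wire
burst_peak = LINE_RATE_MBPS                        # link briefly at 100 Mbps

# Pacing: trickle the chunk out over its playback window.
paced_peak = STREAM_MBPS                           # link never exceeds 5 Mbps

print(f"burst peak: {burst_peak} Mbps for {burst_seconds:.1f} s")
print(f"paced peak: {paced_peak} Mbps over {CHUNK_SECONDS} s")
# Same average bandwidth, very different peaks -- and last-mile capacity
# (and VoIP latency) is driven by peaks, not averages.
```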

The report also accuses the sites that use adaptive bit rate streaming, like Netflix or Hulu, of degrading the performance of other traffic on the user’s network by “pushing aside” non-adaptive content such as gaming, VoIP, and HTTP. The conclusion is that each player on the network is being selfish at the cost of a quality internet experience for all.
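
That “pushing aside” falls out of how adaptive players work: they keep re-measuring throughput and climbing to the highest bitrate the measurement supports. A minimal sketch of throughput-based rung selection (the bitrate ladder and safety margin are illustrative assumptions, not Netflix’s or Hulu’s actual logic):

```python
# Minimal throughput-based adaptive bitrate selection. The ladder and
# safety margin are illustrative; real players use more sophisticated,
# buffer-aware logic.
LADDER_KBPS = [235, 560, 1050, 1750, 3000, 5800]  # assumed rungs
SAFETY = 0.85  # keep some headroom below measured throughput

def pick_bitrate(measured_kbps: float) -> int:
    budget = measured_kbps * SAFETY
    fits = [rung for rung in LADDER_KBPS if rung <= budget]
    return max(fits) if fits else LADDER_KBPS[0]

# Re-measured after every segment, the player climbs whenever throughput
# allows -- soaking up capacity that fixed-rate traffic like VoIP and
# gaming cannot fight for.
for throughput in (800, 2400, 7000):
    print(throughput, "kbps measured ->", pick_bitrate(throughput), "kbps")
```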

Don Bowman, the CTO at Sandvine, recommends that the solution to this selfishness isn’t regulation but transparency. He would like to see data on cost and quality published at every interconnection point. That would help consumers and regulators know where to place the blame, and it might incent all players to act, if not altruistically, at least more fairly. The secrecy around peering arrangements, and the many conflicts of interest within and between the players, create the perfect opportunity for speculation and accusation but little chance for constructive change.
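
What might published interconnect data look like? One hypothetical, machine-readable shape for it; the fields below are a guess at what “publish cost and quality” could mean in practice, not anything specified in the Sandvine report:

```python
# Hypothetical schema for a published per-interconnect report. None of
# these fields come from Sandvine; they are one guess at "transparency."
from dataclasses import dataclass

@dataclass
class InterconnectReport:
    isp: str                      # e.g. "ExampleISP"
    peer: str                     # e.g. "ExampleTransit"
    location: str                 # e.g. "Equinix Ashburn"
    capacity_gbps: float          # provisioned port capacity
    peak_utilization_pct: float   # 95th-percentile utilization
    packet_loss_pct: float        # loss measured across the port
    settlement: str               # "settlement-free", "paid", "disputed"

report = InterconnectReport(
    isp="ExampleISP", peer="ExampleTransit", location="Equinix Ashburn",
    capacity_gbps=100.0, peak_utilization_pct=97.5,
    packet_loss_pct=1.8, settlement="disputed",
)
print(report)  # a record regulators and consumers could actually compare
```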

  1. The best comment I can make is in French:

    Quelle surprise! Quel dommage! (What a surprise! What a shame!)

    :-(

  2. Regulation to force transparency is what’s needed.

  3. People seem to forget on the one hand that the IP protocol stack evolved out of a close-knit, trusting institutional community of users and into the mass-market data bypass protocol of choice for getting around the byzantine, over-regulated, and over-priced PSTN. In the beginning, IP, particularly when the data-app world was mostly 1-way store and forward, was cheap and simple, especially once layer 2 was commoditized in the WAN and flat-rate dial-up made edge access very affordable. Credit for that goes not to the internet players who believe they created net neutrality, but rather to Bill McGowan, who broke up AT&T in 1983, and to the FCC, which sheltered nascent ISPs in the dial-up world of the early 1990s.

    That clearly is not the case anymore. We need to critically assess the PSTN (analog, circuit) and IP (digital, packet) business models and understand their respective strengths and weaknesses. The biggest weakness with IP is the lack of settlements. Market-driven settlement exchanges (scaled out of ad exchanges, big data and the current telco settlement exchanges), whose transaction fees reflect marginal cost (very, very cheap), will solve a lot of headaches, including the ones raised in the article.

    Bill-and-keep actually fosters monopoly and stifles new service introduction and infrastructure investment. Balanced settlements (not the 2-sided settlements the incumbents are piggishly calling for because they can as “closed” layer 2 monopolies) will lead to rapid service introduction, greater privacy and security, and greater investment in the lower layers. A good deal of this is because centralized procurement of 2-way HD VPNs (far bigger bandwidth hogs than 1-way video) for telework, telemedicine, tele-education, etc. will drive edge investment.

    Just to point out how ill-prepared our networks are, imagine a world (just 10 years away) of 8k video for sporting events with 30 different camera angles. The average person might want to watch 3-4 different angles simultaneously, each session requiring 500 megs (and that’s compressed from the actual 48 gig signal). And here we are fretting over 2 meg streams?????
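
    And the raw-signal math checks out, as a rough sketch (assuming 7680x4320 pixels, 24-bit color and 60 frames per second; the ~100:1 compression ratio is implied by the numbers above):

    ```python
    # Rough check of the 8K figures above. Assumptions: 7680x4320, 24-bit
    # color, 60 fps; compression ratio inferred from 48 Gbps -> 500 Mbps.
    width, height, bits_per_pixel, fps = 7680, 4320, 24, 60

    raw_gbps = width * height * bits_per_pixel * fps / 1e9
    print(f"uncompressed signal: {raw_gbps:.0f} Gbps")       # ~48 Gbps

    compressed_mbps = raw_gbps * 1000 / 100                  # ~100:1 ratio
    print(f"compressed stream: {compressed_mbps:.0f} Mbps")  # ~478 Mbps

    # 3-4 simultaneous angles at ~500 Mbps each is 1.5-2 Gbps per home,
    # roughly a thousand times today's 2 Mbps streams.
    ```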

    We need to change our business models quickly. A grand compromise would be one where we trade off open access in the lower layers for balanced settlements in the middle layers. Everyone wins, and the business models adjust quickly with little government intervention.

  4. How, exactly, do you get transparency without regulation?

    1. The threat of regulation? Enough industry awareness that such transparency benefits everyone? We can’t even get the FCC to collect data on special access, so I feel like the industry has to come to this conclusion because regulation isn’t likely. Also, on the international stage, I’m not sure who would regulate it.

  5. Ever consider that other countries are dealing with these problems? Korea and Hong Kong, for example. IPTV is a reality there and hasn’t caused the sky to fall. Check out Now TV or Hkbn.net to see.

  6. Great article and comments, thx. My solution, as always, is laser internet from space.

  7. Although I largely agree with Sandvine, I do believe that they have avoided the main issue of “settlement” which Elling discussed in his post. Moreover, the business model for settlement is broken. You see, if settlement is enforced at the current cost/bit, then video services would be uneconomic. So the critical issue is to lower the cost per bit for video settlements; but given that video requires an order of magnitude more bandwidth than non-video traffic, that would require a cost/bit that tends to zero, and that is not feasible given the “bit factory” architectures and business cases that exist. What we need is to completely decouple traffic from cost, and that requires new architectural and business models. Furthermore, it is important to recognize that selling video requires lots of “trailers,” which demand even more traffic/bits. But one thing’s for sure: “bit factory” models are dead. The implications are profound; new business cases are required to pay for the IP routing fabric all traffic depends upon. There is little doubt that “inventory” and “stock” business models will have to come into play.


Comments have been disabled for this post