
Summary:

When it comes to broadband we spend a lot of time talking about how much data we can transfer quickly, but applications from video chat to gaming require a different type of network speed. If latency doesn’t matter to you yet, it soon will.


When it comes to broadband we spend a lot of time talking about how much data we can transfer in a set amount of time (measured in Mbps). But what about latency? Applications from video chat to gaming need networks where packets get to their destination quickly, which requires a different type of network speed.

Plus if we’re going to stream everything from video to music in addition to our latency-sensitive apps, we’re going to have to add some metrics to our definition of broadband and some intelligence to our home networks. The question is how carriers can respond to both the threat and the opportunity this development offers and what other companies might play in this space.

What is latency and when does it matter?

Currently, consumers think about their broadband connections in terms of how fast they are. But delivering a 30 Mbps connection means only that the pipe is fat enough to deliver 30 megabits every second. That matters greatly when you’re trying to download a large file, but for real-time applications a different kind of speed matters. Latency measures how long packets take to get from the originating server to your device, and it is measured in milliseconds.
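To see why the two kinds of speed diverge, compare where the time goes for a large download versus a small one. A back-of-the-envelope sketch (the numbers are illustrative, not measurements):

```python
# Back-of-the-envelope: total fetch time for one object is one round
# trip plus the time to push the bits through the pipe. Numbers below
# are illustrative, not measurements.

def transfer_time_ms(size_bytes, bandwidth_mbps, rtt_ms):
    """Rough fetch time: latency plus transmission time."""
    transmit_ms = (size_bytes * 8) / (bandwidth_mbps * 1_000_000) * 1000
    return rtt_ms + transmit_ms

# On a 30 Mbps pipe with 25 ms of latency:
big = transfer_time_ms(100_000_000, 30, 25)  # a 100 MB download
small = transfer_time_ms(10_000, 30, 25)     # a 10 KB script

print(f"100 MB file: {big:,.0f} ms -- latency is a rounding error")
print(f"10 KB script: {small:.1f} ms -- almost all of it is latency")
```

Bumping the pipe from 30 Mbps to 100 Mbps barely moves the second number; only lowering the round-trip time does.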

The average home network ideally has a latency of less than 25ms, but everything from distance to the number of gateways between your device and the originating server to network congestion can affect latency on your home network. And the effects are noticeable in more of the applications we use today. Gaming has always been a popular example, since waiting even a few milliseconds longer to dodge a bullet or throw a punch can mean the difference between progressing to the next level and starting over.


But now most people are engaged in video chats on both home and mobile networks, and nothing introduces a user to the concept of latency better than paying for a 30 Mbps connection and still dealing with pixelated or choppy video chats. Latency also adds to the wait time when downloading objects and scripts from a website, making load times feel sluggish. A StrangeLoop survey this month found that latency at the desktop was between 65ms and 145ms. An entire industry around front-end optimization has been built to help reduce such performance issues, and many of those startups have already been acquired, such as Blaze.io, which was bought by Akamai.
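The sluggishness compounds because a modern page is dozens of objects, each of which can cost a round trip. A rough sketch of the arithmetic (the request count and connection limit are illustrative assumptions, and payload size is ignored to isolate the latency cost):

```python
import math

def page_load_ms(num_requests, rtt_ms, parallel_connections=6):
    """Lower bound on page load time when every request costs one
    round trip and the browser keeps a limited number of connections
    busy at once. Transmission time is ignored to isolate latency."""
    rounds = math.ceil(num_requests / parallel_connections)
    return rounds * rtt_ms

# 50 objects at 100 ms round-trip latency over 6 connections:
print(page_load_ms(50, 100))  # 900 ms spent just waiting on round trips
```

This is why front-end optimization (fewer, bundled requests) can do more for perceived speed than a fatter pipe.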

On mobile networks it’s worse, even with next-generation LTE networks coming online. Michael Thelander, the president of Signals Research, said Verizon and AT&T had network latencies of greater than 40ms, about 20-30 ms longer than some of their European peers. But with video chat on mobile, plus our general impatience when using those networks (say, trying to find a restaurant while walking toward it), improving latency on mobile is also a real issue.

“The importance of latency can’t be overlooked for applications, such as social media and web browsing that involve small transfers of data. As part of my testing I basically proved that higher throughput becomes irrelevant at a certain point …. However, latency can never be too low and in that regard many of the US operators have their work cut out for them,” he said via an email message.

Who ya gonna call?

The issue of latency is known in the industry — the FCC has even asked whether the traditional Mbps figure is the best metric to stick with, or whether latency and other elements should matter too. But as consumers spend more time in real-time applications, latency will become a more noticeable problem.

“You can’t exactly call up a carrier and ask them to improve your latency,” said Sean McCann of Killer Technologies, a Qualcomm Atheros company that makes latency-reducing network cards for gamers. Short of putting in new fiber or dedicating network resources, this isn’t an easy problem, he added. As a side note, banks and other firms engaged in high-frequency trading DO pay a lot for dedicated low-latency networks, and they place their IT next to the trading floors to lower the latency caused by distance.

But your average consumer doesn’t have or need that level of dedicated bandwidth, and couldn’t afford it either. That doesn’t mean we should keep paying for fatter pipes in the home and still put up with bad gaming or video experiences, though. Simple fixes like easier home network management software will help. Killer Technologies is working on the problem, although McCann couldn’t go into a lot of detail about it. Alcatel-Lucent also sells a service to carriers that can be used to help manage the home network.

Such software can help consumers prioritize network traffic in their own homes, and eventually it could proactively inspect traffic and give it more room on the pipe. For example, a software update downloading to your computer can wait, while packets from Netflix have to hit the box in a hurry. Right now it’s a free-for-all on most networks, because nothing tells your router that the software update can cool its heels for a millisecond while the Netflix packet has to hit your Wii right away, lest your stream get downgraded to standard definition.
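At its core, that kind of prioritization is just a priority queue in the router: latency-sensitive packets drain first, bulk traffic waits. A toy sketch (the traffic classes and rankings here are invented for illustration):

```python
import heapq

# Toy priority map: lower number drains first. The classes and
# rankings are invented for illustration.
PRIORITY = {"video": 0, "voip": 0, "web": 1, "update": 2}

class Scheduler:
    """Tiny priority-queue packet scheduler."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

s = Scheduler()
s.enqueue("update", "OS patch chunk")  # arrived first...
s.enqueue("video", "Netflix frame")    # ...but this one drains first
print(s.dequeue())  # prints: Netflix frame
```

Real routers do this with QoS queues in the forwarding path rather than in software like this, but the ordering logic is the same.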

The question is how consumers will implement this type of software. Will you visit a carrier web site to prioritize your traffic as it hits your ISP’s modem? Will consumers buy a box that plugs into the router with a simple UI that lets them press video to give their Hulu streams a “speed boost?” Perhaps router makers will just bake something into their products and let people manage it from a web site.

Either way, devices in the home are becoming more numerous and chattier while the services and applications we’re consuming are more sensitive to packet delays. You don’t have to be a network engineer to understand that something has to be done.

  1. I’m disappointed to see no mention of bandwidth-delay product in this article (http://en.wikipedia.org/wiki/Bandwidth-delay_product). The capacity of a link is severely impacted by its latency. A long fat pipe, like a satellite link or undersea cable, can have gigabits of capacity, but the average user seeing 200-600 milliseconds of latency will only be able to pull a few megabits per second with the stock TCP stack on most operating systems. No one reports on this, but there are new TCP stacks (HSTCP, H-TCP, TCP Vegas, etc.) which minimize the BDP implications of high latency, yet Microsoft and Apple are still using TCP stacks written in the 1980s! This is critical as bandwidth to the home continues to expand, but at 40ms to 60ms average round-trip time to most websites I can only pull a few megabits and don’t come close to saturating my pipe.

    This article does do justice to the idea that web browsing speed is affected by latency. Rather than pushing network providers to lower latency (which is hard), we should be pushing software providers, on both the server and client side, and operating system manufacturers to implement new options like SPDY. These are far more achievable methods of improving browsing speeds.

    1. Good point about the B-D product of TCP connections. Back in 2004-2006 I had designed and implemented a TCP flow control algorithm that sets the sender-side window adaptively as a function of estimated bottleneck bandwidth in the connection route. Because this magic accelerator box of ours sat at the edge of the wireless access network, we could nicely keep a real-time estimate of the bandwidth between our box and the wireless client. What resulted was an accelerated TCP connection whose throughput was purely a function of the bandwidth instead of the bandwidth-delay product.

      By the way, SPDY is only for HTTP traffic. It won’t improve latency for games and other non-HTTP apps. The DiffServ and RSVP standards from IETF were supposed to solve the latency problem but never got deployed end-to-end due to scalability concerns in routers.

  2. Prioritizing bandwidth has already been built into most routers for a while now… it’s called QoS (Quality of Service).

    1. Exactly. I just got a new DSL modem+router and set it up to prioritize my Ethernet, which goes over Powerline to deliver 4 ports to my Roku and laptop. I prioritized my mobile phones as well. I figure I’m paying for the service, so I should have priority over my roommates, and I use more bandwidth-intense apps that need high speed and low latency.

      QoS settings have definitely helped!

  3. Vishwas Manral Friday, April 6, 2012

    In my view as we move to the Cloud we need a service aware network, and latency is one of the concerns. http://h30507.www3.hp.com/t5/HP-Networking/Enterprise-networking-reaching-for-the-clouds-4-steps-to-service/ba-p/102671
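The ceiling the first commenter describes falls out of simple arithmetic: a TCP sender can keep at most one window of unacknowledged data in flight per round trip, so throughput tops out at window size divided by RTT. A rough sketch (assuming the classic 64 KB window with no window scaling):

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """TCP can have at most one window of unacknowledged data in
    flight per round trip, so throughput tops out at window / RTT
    no matter how fat the link is."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# Classic 64 KB window on a 200 ms satellite link:
print(f"{max_tcp_throughput_mbps(65_535, 200):.1f} Mbps")  # ~2.6 Mbps
```

A few megabits per second, exactly as the commenter observed, even on a multi-gigabit link; halving the RTT doubles the ceiling.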
