Summary:

Could distributed computing be the key to scaling out the internet and meeting our growing demand for broadband? The CEO of BitTorrent argues it has a place in next-generation architectures.

If we persist in thinking of the internet as an information superhighway, then we’ll continue to handle congestion by adding more lanes, via expensive upgrades in the core network, at the edge and at the last mile. The end result of our love affair with connectivity is a losing proposition for ISPs who are forced to upgrade their networks to meet the ongoing demand for broadband without taking enough of a share from the growing internet economy to meet their margins.

Or so writes Eric Klinker in the Harvard Business Review blog, in a solid post about how we’re going to manage the growth of the internet. While Klinker sounds like many a telco-funded astroturfer in his worries about ISP profits, he’s actually the CEO of file-sharing company BitTorrent, and his arguments are worth listening to on both sides of the internet divide: the ISPs and the content companies looking to ride those pipes.

In the post, which is similar in spirit to one he wrote for GigaOM in 2011, he argues that the problem on the internet is congestion, and that there are far more ways to address congestion than just adding more lanes. And of course, as the CEO of BitTorrent, whose peer-to-peer file transfer system is built out of masses of distributed computers, his main idea is distributed computing. From the article:

Distributed computing systems work with unprecedented efficiency. You don’t need to build server farms, or new networks, to bring an application to life. Each computer acts as its own server; leveraging existing network connections distributed across the entirety of the internet. BitTorrent is a primary example of distributed computing systems at work. Each month, via BitTorrent, millions of machines work together to deliver petabytes of data across the web, to millions of users, at zero cost. And BitTorrent isn’t the only example of distributed technology at work today. Skype uses distributed computing systems to deliver calls. Spotify uses distributed computing systems to deliver music.
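
To make the swarm idea concrete, here is a minimal sketch in Python. It is not BitTorrent’s actual protocol, just a toy model of the principle Klinker describes: every peer serves the pieces it already holds while fetching the ones it lacks, so delivery capacity grows with the swarm. The piece and peer counts are arbitrary.

```python
import random

# Toy model of swarm-style distribution (not the real BitTorrent protocol):
# every peer serves the pieces it already holds while fetching the ones it
# lacks, so serving capacity grows with the number of participants.
# NUM_PIECES and NUM_PEERS are arbitrary illustration values.

NUM_PIECES = 64                    # the "file", split into 64 pieces
NUM_PEERS = 20                     # one original seeder plus 19 downloaders

# peers[i] is the set of piece indices that peer i currently holds
peers = [set(range(NUM_PIECES))] + [set() for _ in range(NUM_PEERS - 1)]

rounds = 0
while any(len(have) < NUM_PIECES for have in peers):
    rounds += 1
    for have in peers:
        missing = set(range(NUM_PIECES)) - have
        if not missing:
            continue                       # this peer is already complete
        # Ask one randomly chosen peer for one piece we lack. Real clients
        # add rarest-first selection, choking, and trackers/DHT on top.
        donor = random.choice(peers)
        available = donor & missing
        if available:
            have.add(random.choice(sorted(available)))

print(f"{NUM_PEERS} peers finished after {rounds} rounds of exchange")
```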

The challenges associated with this are obvious. Customers have to download client software in order to use such networks, and peer-to-peer traffic still hits the end user’s connection at the last mile, or over the airwaves and at cell sites on mobile networks. So these systems can still tax ISP networks (although they can be optimized to ease that load). But with video a huge driver of congestion on the consumer side, it’s a solution that could work, since people will download software in order to watch TV. Even ISPs have tested distributed delivery, when they tried out the P4P network protocol way back in 2008.
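
The “they can be optimized” point is roughly what P4P was about: if clients prefer peers inside their own ISP, most swarm traffic stays off expensive transit and interconnect links. Below is a hedged sketch of that selection policy in Python; the Peer type, ISP labels, and addresses are stand-ins, not the actual P4P interface.

```python
import random
from dataclasses import dataclass

# Sketch of locality-aware peer selection in the spirit of P4P: prefer peers
# inside the same ISP so swarm traffic stays off expensive transit links.
# The Peer type, ISP labels, and addresses are placeholders, not the real
# P4P interface.

@dataclass
class Peer:
    address: str
    isp: str  # in a real deployment this mapping would come from the ISP

def select_peers(candidates, my_isp, want=4):
    """Return up to `want` peers, filling from the local ISP first."""
    local = [p for p in candidates if p.isp == my_isp]
    remote = [p for p in candidates if p.isp != my_isp]
    random.shuffle(local)
    random.shuffle(remote)
    return (local + remote)[:want]

candidates = [
    Peer("203.0.113.10", "isp-a"), Peer("203.0.113.20", "isp-a"),
    Peer("198.51.100.5", "isp-b"), Peer("192.0.2.7", "isp-c"),
    Peer("203.0.113.30", "isp-a"), Peer("198.51.100.9", "isp-b"),
]
for peer in select_peers(candidates, my_isp="isp-a"):
    print(peer.address, peer.isp)
```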

Distributed computing would force many popular web services to reconsider how they build their applications and stream their files, which could have a large effect on big web sites such as Facebook or Google, as well as on content companies and content delivery networks. Another option, and one we’re inching toward, is smart routers and prioritization schemes that let users set their own network parameters to make the best use of the bandwidth they have available. Software-defined networks will also make such prioritization easier and cheaper to manage inside the core telco network.
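
As a rough illustration of what user-set prioritization means in practice, the toy Python model below drains a queue of pending transfers strictly by a priority the user assigns, so a voice call goes out before a bulk backup. A real smart router or SDN controller enforces the same idea on packets rather than on whole transfers; the traffic classes and numbers here are made up.

```python
import heapq

# Toy model of user-set prioritization: queued transfers are drained in the
# order of a priority the user assigns, so latency-sensitive traffic goes out
# ahead of bulk transfers. Real routers and SDN controllers apply the same
# policy to packets; the traffic classes and numbers here are illustrative.

USER_PRIORITIES = {"voip_call": 0, "video_stream": 1, "web": 2, "bulk": 3}

pending = [
    ("nightly backup", "bulk"),
    ("Skype call", "voip_call"),
    ("browsing", "web"),
    ("movie stream", "video_stream"),
]

queue = []
for name, traffic_class in pending:
    heapq.heappush(queue, (USER_PRIORITIES[traffic_class], name))

while queue:
    priority, name = heapq.heappop(queue)
    print(f"sending {name} (priority {priority})")
```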

There’s also the more controversial idea of ISPs charging more for broadband during peak times, as opposed to today’s data caps, which limit people regardless of whether they download at 2AM or during prime time. True congestion pricing would force users to bear the cost of overburdening the ISP network, although ISPs would then have to be open about how often their networks are congested, and they would risk consumers losing their appetite for broadband. My hunch is that neither the ISPs nor the content companies want that to happen, although it’s still far from clear that upgrades are the death knell for the cable and telco companies, as opposed to a painful shift in their margin profiles.
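
For a feel of how the two billing models differ, here is a back-of-the-envelope comparison in Python. All rates and usage figures are invented for illustration: a flat cap charges on total gigabytes no matter when they move, while congestion pricing only surcharges the gigabytes that land in the peak window.

```python
# Back-of-the-envelope comparison of a flat data cap versus congestion-based
# (time-of-day) pricing. Every price and usage figure below is invented
# purely for illustration.

usage_gb = {"peak": 60, "off_peak": 140}      # hypothetical monthly usage

# Flat-cap plan: fixed fee plus overage on total usage, regardless of timing.
CAP_GB, BASE_FEE, OVERAGE_PER_GB = 150, 50.00, 1.00
total_gb = sum(usage_gb.values())
flat_bill = BASE_FEE + max(0, total_gb - CAP_GB) * OVERAGE_PER_GB

# Congestion pricing: only peak-hour gigabytes carry a meaningful surcharge.
PEAK_PER_GB, OFF_PEAK_PER_GB = 0.50, 0.05
congestion_bill = (BASE_FEE
                   + usage_gb["peak"] * PEAK_PER_GB
                   + usage_gb["off_peak"] * OFF_PEAK_PER_GB)

print(f"flat-cap bill:          ${flat_bill:.2f}")
print(f"congestion-priced bill: ${congestion_bill:.2f}")
```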

Regardless, we’re only going to ask for more broadband and more internet services, so Klinker’s article is a welcome reminder that none of that will come for free.

  1. Why not use something like http://www.open-mesh.com/ (found as first result when googling “low cost mesh networks”) and create a solar-powered mesh network on the roofs and on the high-tension power transmission lines? Use radio/microwave as a backup, not as a primary.

    It’s already proven cheaper in places without large wired installations to use large networks of repeaters rather than installing thousands of miles of wires. And they’re very resilient to accidental damage.

    If there’s congestion, just add a few more of these and you’re good to go!

  2. H. Murchison Monday, March 25, 2013

    Just reduce the amount of data that you’re sending. The problem isn’t adding more lanes; it’s reducing the “fat”. Funny that the same issue affects human biology (overconsumption of fat).

  3. 1) ISPs in the US already have very high profit margins.
    2) Adding bandwidth to the fiber backbone is easy and cheap. Just add more wavelengths. This is a minor cost at the ends.
    3) Cable (like wireless) is shared. Coaxial cable is old and due for an upgrade anyway.
    4) So the big problem is upgrading the last mile. ISPs don’t want to do that because of #1 above. Until it happens, we get asymmetric service, which means BitTorrent isn’t much help.

    1. And to the extent that the last mile is the bottleneck, distributed technologies like BitTorrent are silly. It reminds me of some of the early parallel-computing mainframes. Sure, you can get a job done 10x faster by using 10 processors simultaneously, but on a time-sharing system it’s kind of silly to have 10 jobs taking turns using all 10 processors.

  4. James T Fletcher Tuesday, March 26, 2013

    Isn’t this reinventing the wheel of the content delivery network? Caching content closer to users gives a better user experience. The problem is not the ‘congestion’ of the internet but the delivery mechanism; most sophisticated webmasters know that one data center can’t serve a global audience.

    That’s why users who utilise CDNs like CDN.net, CDN77.com, Dediserve etc. are not only improving the performance of their sites but also helping to make the web faster for everyone.
