
Today’s telecoms, networking vendors, and cloud providers can learn a few things from the past by studying how Intel and AMD responded when their processors evolved so quickly that they couldn’t get data off of them fast enough. Cloud vendors facing a similar problem on a larger scale are discovering that WAN optimization, a form of Application Delivery Networking (ADN), will be required for clouds to be efficient and usable.

Back in the 1980s, Sun Microsystems coined the phrase “The network is the computer.” Although ahead of its time, the company was right: take into account today’s relative abundance of bandwidth and the emergence of huge cloud computing operators, and the entire Internet starts to look like one big computer.

Rewind the TiVo about 15 years. Intel and AMD were following Moore’s law, doubling performance every 18 months. Even though processors were getting faster, it was getting hard to move data on and off of them. In microprocessor architecture, the system bus is responsible for getting data into and out of the processor. At the time, the aging ISA system bus couldn’t grow in performance as quickly as processors did. In other words, there was a bottleneck that prevented consumers from making full use of the increasingly fast processors.

It’s not a stretch to picture a cloud compute-enabled data center as a giant processor plugged into a giant system bus — the Internet, which varies dramatically in capacity. Cloud computing providers today laugh at Moore’s law because they don’t wait 18 months to double capacity. If they want to (only) double capacity, they simply double the number of servers in the cloud. Oh, and by the way, they still get the benefit of faster processors developed over time.

But these cloud compute providers, liberated from the shackles of Moore’s law, can’t grow network speeds as quickly as they can add servers, creating exactly the same problem that CPU vendors faced when their CPUs outgrew the system bus. It’s getting worse, too: according to the lesser-known Nielsen’s Law, Internet bandwidth grows at an annual rate of 50 percent, while compute capacity grows at roughly 60 percent, meaning that over a 10-year period, computer power grows 100X but bandwidth grows only 57X. Ouch.
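As a quick sanity check on those numbers, a few lines of arithmetic reproduce the 10-year gap from the growth rates quoted above (illustrative only; the 18-month doubling is the common paraphrase of Moore’s law):

```python
# Nielsen's Law vs. Moore's Law over a 10-year horizon.
years = 10
bandwidth_growth = 1.50           # Nielsen: bandwidth grows 50% per year
compute_growth = 2 ** (1 / 1.5)   # Moore (paraphrased): doubling every 18 months

bandwidth_factor = bandwidth_growth ** years  # ~57x over 10 years
compute_factor = compute_growth ** years      # ~100x over 10 years

print(f"Bandwidth: {bandwidth_factor:.1f}x, compute: {compute_factor:.1f}x")
print(f"Compute outgrows bandwidth by {compute_factor / bandwidth_factor:.1f}x")
```

The absolute rates matter less than the compounding: a 10-point annual difference in growth nearly doubles the gap every decade.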

So what did Intel and AMD do when faced with the same problem? They looked for a fix they could apply quickly. The quick fix was to add a cache to the processor, which allowed the CPU to run at full speed and store results in temporary memory until they could move across the slower system bus. It also allowed them to keep selling faster processors while they tackled the longer-term project of improving standards for bus speeds.
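The caching idea can be sketched in a few lines of Python. The names and the dict-based cache here are purely illustrative, not how any real CPU cache or WAN optimizer is built, but they show why a cache decouples fast local work from a slow link:

```python
# Minimal sketch: serve repeated requests from fast local storage
# instead of crossing the slow "bus" every time (hypothetical names).
slow_fetches = 0

def fetch_over_slow_bus(key):
    """Stands in for an expensive transfer across the bottleneck."""
    global slow_fetches
    slow_fetches += 1
    return f"data-for-{key}"

cache = {}

def read(key):
    if key not in cache:                # miss: pay the slow-bus cost once
        cache[key] = fetch_over_slow_bus(key)
    return cache[key]                   # hit: full speed, no bus traffic

for key in ["a", "b", "a", "a", "b"]:   # 5 reads, only 2 unique keys
    read(key)
print(slow_fetches)  # 2 slow transfers instead of 5
```

The same principle drives WAN optimization: repeated or redundant data is served from an appliance near the user rather than re-crossing the wide-area link.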

Cloud computing vendors need to take the same approach. It will take a long time to increase the speed of the “system bus” — every hop on the Internet between a data center and an end user — so they need to start working on shorter-term solutions. The most obvious one is WAN optimization. When cloud compute providers roll out ADN equipment as a part of their offerings, cloud consumers will instantly see much faster access to cloud compute resources using less bandwidth, which will increase cloud usage and unlock the value of the cloud for real enterprise computing. (Full disclosure: I work for Blue Coat Systems, the market leader in the ADN space.)

The future of cloud networking, and the only way to unlock the full value of cloud compute cycles, is WAN optimization. The caching strategy worked well for Intel and AMD, and it ought to work for Amazon EC2, Microsoft Azure, Rackspace Mosso, and even Google.

If the cloud vendors try to skip the WAN optimization piece, the potent combination of Moore’s law and expanding data center deployments will hopelessly outpace our ability to deploy (and pay for) new broadband infrastructure.

Dave Asprey has focused his career on finding better ways to use data centers, virtualization, cloud computing, and networking. When he’s not busy as VP of Technology and Corporate Development for Blue Coat Systems, you’ll find him at an anti-aging conference.

  1. Dave, this is a really informative and timely piece, thanks for sharing. For the enterprise, I understand how WAN optimization could do a lot to boost availability of mission-critical and other important apps. But what about the consumer? As telcos/MSOs start to see a larger share of their overall traffic coming in the form of IP packets (thanks to Hulu, Xbox Live, and other bandwidth-heavy Internet services), it could really have an effect on the role of QoS for services outside of voice. Add the rise of thin clients (obviously on mobile, but also on the desktop and potentially in the living room), and there could be an endless number of consumer-facing applications demanding prioritization to the end user as they start to replace their traditional counterparts. Do you see WAN optimization or similar solutions playing a role in helping the end-user experience?

  2. Not sure I would completely agree with this.

    Network demand will certainly come from services to the user like video streaming, but my experience of cloud computing’s effect on network bandwidth is that it actually reduces demand. Whereas client-server programs would transmit data across the network, with cloud computing only browser pages are transmitted, which is far more efficient. Also, once more data is processed in the cloud, more of it will stay in the cloud, which again is more efficient.

  3. [...] 14 Jun 2009 18:36:00 GMTWhat Intel Can Teach Google About the Cloud – Gigaom.comToday’s telecoms, networking vendors, and cloud providers can learn a few things from the past by [...]

  4. I must say, I’m a bit of a techie myself, and quite fascinated with technological trends and Moore’s law, even if I can’t follow all the statistics that closely. Your article sure got me thinking about a few things, and I agree that technologies can be leveraged together for a long-term result that benefits everyone, rather than settling for instant fixes most of the time.
    Again, neat stuff!

  5. [...] read Dave Asprey’s “What Intel Can Teach Google about the Cloud.” I was surprised. Mr. Asprey wrote: But these cloud compute providers, liberated from the shackles [...]

  6. [...] What Intel can teach Google about the cloud [...]

  7. [...] of dedicated mirrored resources surrounded by a cloud of content and application delivery and/or WAN acceleration to reduce latency for key applications. An optimal architecture is hybrid: dedicated and shared, [...]

  8. As an ISV that provides cloud archive services (full disclosure on my part: I work for RainStor), we’ve addressed the problem of shipping large datasets into the cloud through data compression: http://tinyurl.com/nyt65a. We provide a client-side VM to compress (40x) and encrypt data before sending it to cloud storage. I’d argue that data compression will become an increasingly important technique for addressing bandwidth issues and fuelling cloud adoption.

