The public Internet and the cloud shouldn’t mix, according to a paper out today from Joe Weinman of HP. Cisco seems to agree, if Tuesday’s announcement of its CloudVerse suite of products is any indication. A growing number of endpoints, the multiple services built within web applications, and the infinite variety of demands made on any web-based service mean the network can’t be trusted to run over the top.
The network is the cloud, so it needs to be agile, smart and billed based on usage.
Instead, the industry will need to move to pay-per-use, dynamic networks where possible to improve the economic benefits of cloud scenarios and deliver defined quality-of-service for applications that will require low latency, argues Weinman. Weinman, who moved over to HP from AT&T last year, is a deep thinker on the economics of cloud computing. He also argues that bandwidth will eventually be charged on a pay-per-use model for both consumers and enterprises.
He makes a good case for the importance of a smarter network in the context of delivering cloud services, something Cisco’s CloudVerse announcement Tuesday also supports. CloudVerse basically organizes Cisco’s existing networking products for the data center and links them back to the networking gear already in carrier and service provider networks, with the idea being that an intelligent network can take the fuzziness out of managing applications in the cloud.
Complex apps and infinite endpoints make quality of service more important.
It’s true that applications are growing more complex and relying on more protocols to deliver a variety of services over the web. Take, for example, an application like Google+. There are real-time streaming elements, a video conferencing set-up and document sharing. Each different element requires different levels of network quality, which is why Weinman argues for networks that run faster, not just on a megabit-per-second basis, but also with less latency. From the paper:
Human performance studies show that 200 to 250 milliseconds is acceptable for multimedia conferencing and collaboration applications. However, interactive tasks such as keystrokes and mousedowns must be responded to within about 150 milliseconds, and emerging online games require even lower latencies.
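Those thresholds amount to per-application latency budgets the network would have to honor. A minimal sketch of that idea, using the figures quoted above (the gaming number is an assumption, since the paper only says "even lower"):

```python
# Per-application latency budgets in milliseconds, taken from the
# human-performance figures quoted above. The gaming value is an
# assumed illustration, not a number from the paper.
LATENCY_BUDGET_MS = {
    "conferencing": 250,   # multimedia conferencing and collaboration
    "interactive": 150,    # keystrokes, mousedowns
    "gaming": 100,         # "even lower latencies" -- assumed figure
}

def meets_budget(app_class: str, measured_rtt_ms: float) -> bool:
    """Return True if a measured round-trip time fits the app's budget."""
    return measured_rtt_ms <= LATENCY_BUDGET_MS[app_class]

print(meets_budget("interactive", 120))   # under the 150 ms budget
print(meets_budget("conferencing", 300))  # over the 250 ms budget
```

A network that knows which class a flow belongs to can route or queue it to keep the measurement under budget; one that doesn't can only treat every packet the same.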
Add in the complexity at the endpoint, in terms of the number of devices connecting to the network, and it gets worse. Sensor networks, more devices per person and more concurrent streams per device (as with personal video recorders) require more bandwidth. They also require more intelligent bandwidth that can allocate resources and handle emergent effects such as in-office or in-home congestion and odd traffic spikes from unexpected events. For example, a pipe bursting in a sensor-equipped home in the middle of the afternoon, when the house is empty, could create a sudden spike in traffic as humidity sensors activate, power gets shut off in certain areas and you check in via a home camera system to see why your home network is going crazy. But because that's an unexpected spike at a normally dull time, will your service provider have the bandwidth capacity to meet that event?
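One way a smarter access network could handle a spike like that is simple priority-based allocation: when demand exceeds the uplink, higher-priority flows get served first. A minimal sketch, with flow names and numbers invented for illustration:

```python
def allocate(capacity_mbps, flows):
    """Greedy priority allocation: serve flows in priority order
    (lower number = higher priority) until capacity runs out."""
    grants = {}
    remaining = capacity_mbps
    for name, demand, priority in sorted(flows, key=lambda f: f[2]):
        grant = min(demand, remaining)
        grants[name] = grant
        remaining -= grant
    return grants

# The pipe-burst scenario: sensor alarms and the home camera suddenly
# compete with a background backup on a hypothetical 10 Mbps uplink.
flows = [
    ("sensor_alarms", 1, 0),   # tiny but urgent
    ("home_camera",   6, 1),   # you, checking in remotely
    ("cloud_backup",  8, 2),   # can wait
]
print(allocate(10, flows))
# {'sensor_alarms': 1, 'home_camera': 6, 'cloud_backup': 3}
```

The backup gets squeezed to whatever is left over, which is exactly the behavior a dumb, first-come-first-served pipe cannot offer.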
Of course, there’s something on cloudbursting and software-defined networks.
Weinman also offers the Holy Grail of true cloudbursting as an example where adding network intelligence makes it easier to scale a workload from one data center to another in times of peak demand. He lists five ways of doing this, beginning with the simplest idea of dividing up tasks between various clouds, which requires little to no network intelligence. He concludes with a network that can push a huge amount of data as needed and very quickly, but which would require infinite bandwidth. Since this last approach is impractical, he suggests providing pay-per-use bandwidth as the easiest way to instantly replicate data while keeping costs in line.
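The simplest of those approaches, dividing work between clouds once the primary fills up, can be sketched as a threshold-based dispatcher (the capacities and the two-cloud setup are assumptions for illustration):

```python
class Burster:
    """Send work to the primary data center until it reaches capacity,
    then 'burst' the overflow to a pay-per-use secondary cloud."""

    def __init__(self, primary_capacity):
        self.primary_capacity = primary_capacity
        self.primary_load = 0

    def dispatch(self, job_size):
        if self.primary_load + job_size <= self.primary_capacity:
            self.primary_load += job_size
            return "primary"
        return "burst"  # overflow to the external cloud, billed per use

b = Burster(primary_capacity=100)
print(b.dispatch(60))  # primary
print(b.dispatch(30))  # primary
print(b.dispatch(30))  # burst -- primary would exceed capacity
```

The hard part Weinman identifies is not this routing decision but the data behind it: the burst jobs are only useful if their data is already replicated, or can be moved quickly, which is where the pay-per-use bandwidth comes in.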
To help deliver the type of fine-grained control that intelligent networks will need, Weinman believes software-defined networks, such as those built using protocols like OpenFlow, are a way to add intelligence and flexibility. Using open protocols to build these networks is also a good way to make sure the added intelligence doesn't become a means of locking in users. Weinman covers additional topics that will require research on the way to these new networks for cloud computing, and I highly recommend folks check out his paper.
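The core OpenFlow idea is that a software controller installs match-and-action rules in switch flow tables, so forwarding policy lives in code rather than in boxes. A toy illustration of that model (the real protocol matches on many more header fields than this sketch does):

```python
# A toy flow table in the OpenFlow spirit: the controller installs
# (match, action) rules and the switch applies the first rule that
# matches an incoming packet's headers.
flow_table = [
    ({"dst_port": 5060}, "queue_low_latency"),  # VoIP signaling -> priority queue
    ({"dst_port": 80},   "forward_normal"),     # web traffic -> best effort
]

def handle_packet(packet, table, default="send_to_controller"):
    for match, action in table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    # No rule matched: in OpenFlow the packet is punted to the
    # controller, which can decide and install a new rule.
    return default

print(handle_packet({"dst_port": 5060, "src": "10.0.0.1"}, flow_table))
# queue_low_latency
```

Because the rules are just data pushed down by a controller, the same switches can enforce the per-application quality-of-service policies discussed above without anyone touching the hardware.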
This sounds great; so how do we co-opt it to sell products?
So what does this have to do with Cisco’s marketing effort around CloudVerse? Essentially, with the suite of products that wrap data center networking in with the networks of service providers for wireline and mobile broadband, Cisco is recognizing that a holistic, intelligent network could be a huge selling point for those concerned about piecing together their own fragmented network elements to deliver web services and cloud services. A quote from the Cisco release sums up the news nicely:
“Until now cloud technology resided in silos, making it harder to build and manage clouds, and to interconnect multiple clouds, posing critical challenges for many organizations,” said Padmasree Warrior, Cisco senior vice president of engineering and chief technology officer. “Cisco uniquely enables the world of many clouds – connecting people, communities and organizations with a business-class cloud user experience for the next-generation Internet.”
Cisco and Weinman are not alone. Alcatel-Lucent recently outlined its vision of a service provider cloud that adds intelligence to the network in a way that many enterprise and business customers will find appealing.
Could someone build a fully functioning network without resorting to all-Cisco gear, or even without Weinman's vision of the intelligent network? Yes, but it takes the kind of skill and dedication that webscale operators such as Google, Yahoo and Facebook have, and that other companies just don't seem to want to bother with.