
Summary:

The current public cloud computing providers have done an excellent job in bringing innovation and cloud computing technology to the masses. Cloud computing, however, is not yet a fully evolved technology and may take another decade to grow up and deliver on its full potential.


Cloud computing is no longer just the “next big thing.” It has arrived in the consciousness of the mainstream with industry buzz, TV commercials showcasing its power, and the real promise of revolutionizing computing as we know it. But for those of us dancing on the ground trying to make clouds appear out of the clear blue sky, the next generation public cloud is still just over the horizon.

The current public cloud computing providers have done an excellent job in bringing innovation and cloud computing technology to the masses. We have seen numerous examples where the public cloud has scaled exceptionally well and provided unparalleled compute power to customers. Cloud computing, however, is not yet a fully evolved technology and may take another decade to grow up and deliver on its full potential.

There are a number of features that the next generation of cloud computing should introduce, such as data portability, stricter security measures and more infrastructure transparency. However, I’d like to focus on the fundamental requirements that we should aspire to, so the public cloud can transform the industry: higher availability, an order of magnitude more scalability and more computing flexibility.

The next generation of public clouds will be more available. Infrastructure and services that accept failures measured in hours per year do not live up to the full potential of cloud computing. In an ideal scenario, each server, storage system, or service in a public cloud needs to be available 99.999 percent of the time (that's roughly five and a quarter minutes of downtime per year). While this availability goal is lofty – and some say fundamentally unattainable at reasonable expense – if cloud computing is going to provide the basic infrastructure and services for all compute on the planet, the industry needs to start thinking about a transformational change.
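
To put that "five nines" target in perspective, a quick back-of-the-envelope calculation helps. This is a minimal sketch; the availability tiers are the standard industry figures, not numbers from any particular provider:

```python
# Downtime per year implied by common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for label, availability in [("two nines", 0.99),
                            ("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: ~{downtime_minutes:,.1f} minutes of downtime per year")
```

Five nines works out to about 5.3 minutes a year, while the three nines typical of many services today allows nearly nine hours.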

Cloud providers should aim for this availability goal at all levels of the cloud – networking, compute, storage and applications. The cloud needs to be available as close to 100 percent of the time as possible. If that means building N+3 redundant power systems, fault-tolerant networks with sub-millisecond convergence, servers deployed in high-availability clusters, local and network-attached storage with RAID 10, and applications deployed across multiple continents, then let's embrace this goal as an industry.
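
The arithmetic behind that redundancy is worth sketching. Assuming components fail independently (a simplification that real data centers only approximate), a parallel system is down only when every component is down at once, so each extra unit multiplies the residual failure probability. The 99 percent feed availability below is an illustrative number, not a measured one:

```python
# Availability of n independent, redundant components in parallel:
# the system fails only if all n components fail at the same time.
def parallel_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

# A single 99%-available power feed is nowhere near five nines,
# but each additional independent feed adds roughly two more nines.
for n in range(1, 5):
    print(f"{n} feed(s) at 99%: {parallel_availability(0.99, n):.8f}")
```

One feed gives 0.99; four feeds (roughly the N+3 case) give 0.99999999, which is why providers build redundancy they hope never to use.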

Beyond being nearly always on, the next generation of public clouds will need to scale compute capacity by at least one order of magnitude. As of today, there are few public clouds that can scale to handle the largest enterprise IT or web company workloads. Let’s scale public clouds from the purported hundreds of thousands of processors to millions or tens of millions.

This could be achieved by building larger-scale clouds, by adding more processors under the public cloud providers' own roofs, or through other innovative approaches. For example, echoing the grid computing of the past decade, public clouds could harvest distributed compute power the way projects like SETI@home do, tapping spare processing capacity connected to the Internet. Imagine leaving your computer on at night and getting paid by a public cloud provider for your extra processor cycles, as with the solution from Plura Processing. Regardless of the implementation details, public clouds should have grander scaling aspirations and never let compute capacity limit their adoption and use.
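
For illustration, a volunteer-compute worker in the SETI@home spirit can be sketched in a few lines. Everything here is hypothetical – the coordinator URL, the endpoints and the work-unit format are invented, and the workload is a stand-in – but it shows the basic fetch/compute/report loop such a scheme relies on:

```python
# Hypothetical volunteer-compute worker: poll a coordinator for work
# units, burn spare CPU cycles on them, and report results back.
import json
import time
import urllib.request

COORDINATOR = "https://example.com/api"  # invented endpoint, not a real service

def fetch_work() -> dict:
    """Ask the coordinator for a work unit, e.g. {"id": 42, "numbers": [...]}."""
    with urllib.request.urlopen(f"{COORDINATOR}/work") as resp:
        return json.load(resp)

def compute(unit: dict) -> int:
    # Stand-in workload; any CPU-bound task would slot in here.
    return sum(x * x for x in unit["numbers"])

def report(unit_id: int, result: int) -> None:
    payload = json.dumps({"id": unit_id, "result": result}).encode()
    req = urllib.request.Request(f"{COORDINATOR}/result", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    while True:
        unit = fetch_work()
        report(unit["id"], compute(unit))
        time.sleep(1)  # throttle politely between work units
```

The hard parts in practice are the ones the sketch skips: verifying untrusted results, metering cycles for payment, and keeping work units small enough to tolerate churn.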

Lastly, the next generation of cloud computing should be flexible enough to run any operating system, application, database, or storage system that users require. Today's public cloud is flexible, but not as flexible an environment as a compute server bought from a vendor such as Dell, IBM or HP. On your own server you can run nearly any program you want – MySQL, IBM DB2, Oracle, SAP, Microsoft CIFS file shares, raw partitions, NFS-mounted disks. However, on many public clouds you are restricted to a specific set of operating systems, programming languages, databases, storage file systems and networking capabilities. For many public cloud computing users, these environments work just fine, but they do place restrictions on the applications that can run in the public cloud. Greater flexibility is critical for cloud computing to reach its full potential as a transformational technology.

So, picture this: your public cloud provider almost never has a failure, scales processing power beyond anything you will ever need, and allows you to run any service or application. Public cloud 2.0 might look like a bright sunny day after all.

Allan Leinwand is CTO – Infrastructure at Zynga, Inc. and is the founder of Vyatta.

  1. Matthew Hardy Saturday, April 30, 2011

    Or, both the term and the concept ‘cloud’ may become a pejorative.

    I wouldn’t yet bet against the popularity of decentralization, privacy and control. Interoperability across all platforms and networks for every data/file type is no longer a technical hurdle; it is a policy decision.

  2. I tend to agree with both the author and Matthew.

  3. “However, on many public clouds you are restricted to a specific set of operating systems, programming languages, databases, storage file systems and networking capabilities” – so not that much different from any normal well-run corporate IT infrastructure…?

    1. Allan Leinwand Monday, May 2, 2011

      Agreed – but a well-run corporate IT infrastructure can often flex with the needs of a specific business, while public clouds tend to build for the masses.

  4. shenoyjoseph Sunday, May 1, 2011

    Cloud 2.0 has better security features.

  5. Avi Kapuya Monday, May 2, 2011

    I think it is now up to the industry to take the basic concept of the cloud and the infrastructures that exist, such as Rackspace, Amazon, etc., and to enrich them with all the right products. This effort is too big, and also should not be addressed by one giant (see Google's efforts, which have not proven successful). An open ecosystem and the right motivation will bring this vision to fulfillment. Xeround, for example, is an independent player in the cloud database market, and as such it can run on any cloud. Many other companies are at different stages of product releases. Cloud 2.0 is the cloud ecosystem, in my view.

  6. If you haven’t checked out Wuala yet, you should. It is public online cloud storage.

    http://www.wuala.com/referral/7MJ4AGA6C6NJMACFHH4F

    For Dropbox users there is a learning curve: http://www.gadberry.com/aaron/2011/04/29/wuala-for-dropbox-users/

  7. Paul Calento Monday, May 2, 2011

    Is public cloud 2.0 a pragmatic cloud… one where there’s a local and/or alternative failover option? Will being tethered to the Internet ever be 100% viable?

  8. This is from a guy who can’t make FarmVille work… please!!
