
Summary:

When it comes to cloud computing, the main value proposition for the user — better economics — is pretty straightforward. But it needs to be economically feasible for the providers, too.

The main value proposition of cloud computing is better economics: it’s cheaper to rent hardware, software platforms and applications (via a per-usage or subscription model) than it is to buy, build and maintain them in the corporate data center. But if we expect that cloud computing is here to stay, and not just a passing fad, it must be economically feasible for the cloud providers themselves. So how do they do it?

They do it by leveraging economies of scale. Put simply, the idea is that one very large organization can build and operate its infrastructure more efficiently than many small firms can on their own. To better understand this, let’s break down some of the financial advantages leveraged in cloud computing:

Specialization: Specialization is also known as division of labor, a term coined by the father of modern economics, Adam Smith. A company for whom running a large-scale data center is a core part of its business will do so much more cost-effectively than a company for whom it’s merely one aspect. The former will hire the best experts in the world, and will have the management attention required to continuously innovate, optimize and improve operations. And the overhead costs of doing so will be spread thinly across massive usage. Case in point: Since it needed hundreds of thousands of servers, it was worthwhile for Google to build its own, homegrown devices to fit its exact power-supply and fault-tolerance needs.

Although in software, anyone can build anything with enough people, time and money (as my old boss used to say, “It’s all ones and zeros”), it makes no sense for individual companies to develop capabilities such as dynamic provisioning, linear scalability and in-memory data partitioning when they’re readily available from off-the-shelf products.

Purchasing Power: Large organizations buy in bulk, which they can leverage to negotiate lower prices. So presumably the cloud provider can acquire lower-cost servers, networks, operating systems and virtualization software. Furthermore, it can negotiate better interest rates, insurance premiums and other contracts.

Utilization: This is perhaps the most important one and what I like to call the Kindergarten Principle, or “sharing is good.” In computing, tremendous savings can be achieved by having multiple companies share the same IT infrastructure.

Experts estimate average data center utilization rates range from 15 percent to 20 percent. If you include the processing, memory and storage capacity available on company-owned laptops and desktops as well, utilization rates may be as low as 5 percent. That’s a lot of waste. Imagine if this were the case in the hospitality industry. In most cases, a hotel with even 50 percent average occupancy rates would quickly go out of business.

So why is this happening with corporate IT?

Application loads are volatile; they experience peaks and troughs based on time of day, day of the week or month, seasons and so on. To avoid hitting the “scalability wall,” companies need to overprovision. So if a company expects a certain daily peak volume (for example, the opening of the trading day for an e-trading application), it will provision enough hardware so that utilization rates at the peak reach no more than 70 percent (leaving some room for unexpected loads – hey, Steve Jobs may announce the next iPhone today). But at other times utilization rates could go as low as 10 percent, with the average somewhere in between.
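The arithmetic behind this overprovisioning can be sketched in a few lines of Python. All numbers here are illustrative assumptions chosen to match the 70 percent peak and 10 percent trough figures above, not measurements:

```python
# Overprovisioning sketch: a company sizes its hardware for the daily
# peak, leaving headroom for surprises, then runs mostly idle the rest
# of the day. All numbers are made-up illustrative assumptions.

PEAK_LOAD = 7000         # requests/sec at the daily peak
OFF_PEAK_LOAD = 1000     # requests/sec during the trough
SERVER_CAPACITY = 100    # requests/sec one server can handle
TARGET_PEAK_UTIL = 0.70  # provision so the peak uses only 70%

# Servers needed so that peak utilization stays at the 70% target.
servers = PEAK_LOAD / (SERVER_CAPACITY * TARGET_PEAK_UTIL)

peak_util = PEAK_LOAD / (servers * SERVER_CAPACITY)
trough_util = OFF_PEAK_LOAD / (servers * SERVER_CAPACITY)

print(f"servers provisioned: {servers:.0f}")   # 100
print(f"peak utilization:    {peak_util:.0%}")  # 70%
print(f"trough utilization:  {trough_util:.0%}")  # 10%
```

With these assumed numbers, 100 servers are bought to absorb a peak that lasts a small part of the day, and for long stretches 90 percent of that capacity sits unused.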

So the difference between peak loads and average loads drives overprovisioning and a high rate of unused computing capacity. But if we aggregate the activities of several companies, we will not face such volatility in application loads. Let’s see why.

Follow the Sun: In many cases, peaks and troughs in application volumes can largely be attributed to the time of day. Human-facing applications are active during daytime and face very low activity during the night. When New York experiences the opening bell trading spike, London is in the midday lull and Tokyo is going to bed. Same goes for e-commerce sites, social networking sites, gaming sites and others, though these types of applications might experience peaks after business hours as well.

If companies around the globe and in different industries share the same resources on the cloud, the cloud provider will achieve higher utilization rates, lowering its costs – savings that it can turn around and pass on to its customers. This model of shared resources even addresses the need to overprovision for unexpected peaks, as it is unlikely that all cloud users, in all geographical regions and all industries, will face peaks at the same time. This is similar to the notion of a bank not holding all of the cash reserves necessary to meet its commitments to all customers at the same time (is there an equivalent to a bank run in cloud computing?).
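The smoothing effect of aggregation can be illustrated with a toy example. The hour-by-hour loads below are made-up numbers (in units of “servers’ worth” of demand, sampled every three hours) for three regions with staggered business-hour peaks:

```python
# Follow-the-sun sketch: three regions whose daily peaks are offset
# in time. All load figures are illustrative assumptions.

new_york = [1, 1, 2, 6, 9, 8, 5, 2]  # peaks during its business hours
london   = [2, 6, 9, 8, 5, 2, 1, 1]  # same shape, shifted earlier
tokyo    = [9, 8, 5, 2, 1, 1, 2, 6]  # shifted again

# Provisioned separately, each region must size for its own peak.
separate = max(new_york) + max(london) + max(tokyo)

# Shared on one cloud, capacity need only cover the combined peak,
# and the troughs of one region absorb the peaks of another.
combined = [a + b + c for a, b, c in zip(new_york, london, tokyo)]
shared = max(combined)

print(f"separate provisioning: {separate} units")  # 27
print(f"shared provisioning:   {shared} units")    # 16
```

In this toy case the shared pool needs 16 units of capacity instead of 27, roughly a 40 percent reduction, purely because the regional peaks do not coincide.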

Follow the Moon: And with so much focus on energy costs, data center power consumption and cooling (not to mention the environment), there’s also a cloud computing approach known as Follow the Moon. It posits that a cloud provider with physical data centers in several different geographical locations can run the applications that are active from the day side of the world in centers on the night side of the world, taking advantage of lower power and cooling costs.

Cloud computing, therefore, is an economically feasible strategy. Over time, the cost savings will be too compelling for all but the very largest companies to ignore.

Geva Perry is the chief marketing officer of GigaSpaces.

  1. Geva, you missed one obvious economic influence on computing: the legal and regulatory environment. Several days ago, Nick Carr referenced a post by Bill Thompson highlighting the elephant in the room: that cloud computing runs on hardware that is physically located in some geography with its own political and legislative realities. This led me to explore a new theory: Follow the Law computing.

    The theory goes like this: I think many organizations (and “organizations”) are going to look at whether there are strong economic incentives to move computing load to wherever they can get the most favorable legal system. The banking industry’s key clearing house for international inter-bank transactions, SWIFT, has already moved its computing to Switzerland in order to escape the hazards of the Patriot Act. We all know that gambling sites have almost entirely moved offshore. With Canada refusing to allow public applications to run in the US, it seems like computing is starting to follow the law well before follow-the-sun or follow-the-moon become a reality.

  2. James — Thanks for this. I actually intentionally left this issue out, as I am planning a separate post on it (I had a tight word limit). I agree with your observation and I think there are some other interesting angles to the legal/political/compliance issue, which I hope to write about soon.

  3. Geva: We need to factor in the cabling required to make the global cloud work. Using resources from a distant part of the world will mean a lot of latency. A “global” cloud is also very vulnerable to threats. We can’t even protect ourselves against bots, spam and other “simpler” nasties. How are we ever going to protect this cloud against hackers, terrorists, anarchists and just about anyone who wants to take a pick-axe and hack some cable?

    Good thought experiments, but it ain’t gonna happen.

  4. Vix –

    Not only will it happen, it’s already happening, with the exception of follow-the-moon, which as far as I know is still just an idea. However, given power and cooling costs these days, I think it’s an idea that will be seriously looked at.

    I agree with you that bandwidth is an issue to consider. Some applications are very sensitive to latency; some are not sensitive at all. For example, many applications already being deployed on clouds are batch computational applications, which are not sensitive to latency. You upload the data and the computational model once, run it on the cloud (for 40 minutes or 40 hours) and get the results at the end.

  5. Geva,

    The ideas you introduce are not new to a very old industry – the electric utilities. They use centralized power generation plants which meet the dynamic demands of several users.

    However, I have always wondered how cloud computing is different from utility computing which seems to have disappeared into oblivion? Grid computing is another concept which has fallen by the wayside.

    Fortunately there are some real cloud computing successes demonstrated by Google’s email and search services as well as Amazon’s elastic cloud. Interestingly, the pay-per-use model has not yet been fully embraced by software providers.

    Eventually, economics and user willingness to change will decide the fate of cloud computing.

    Ranjit Nayak

