
If your enterprise has access to the same things — virtualization, automation, performance management, ITIL, skilled IT resources, etc. — as cloud service providers, would clouds provide any real and sustainable benefit?

Public utility cloud services differ from traditional data center environments — and private enterprise clouds — in three fundamental ways. First, they provide true on-demand services, by multiplexing demand from numerous enterprises into a common pool of dynamically allocated resources. Second, large cloud providers operate at a scale much greater than even the largest private enterprises. Third, while enterprise data centers are naturally driven to reduce cost via consolidation and concentration, clouds — whether content, application or infrastructure — benefit from dispersion. These three key differences in turn enable the sustainable strategic competitive advantage of clouds through what I’ll call the 10 Laws of Cloudonomics.

Cloudonomics Law #1: Utility services cost less even though they cost more.
An on-demand service provider typically charges a utility premium — a higher cost per unit time for a resource than if it were owned, financed or leased. However, although utilities cost more when they are used, they cost nothing when they are not. Consequently, customers save money by replacing fixed infrastructure with clouds when workloads are spiky, specifically when the peak-to-average ratio is greater than the utility premium.
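
To make the break-even concrete, here is a minimal sketch; all demand figures and prices are hypothetical, chosen only to illustrate why a utility wins whenever the peak-to-average ratio exceeds the premium:

```python
# Hypothetical numbers for illustration only.
HOURS = 24 * 365

peak_demand = 100        # capacity units needed at peak
avg_demand = 20          # average demand across the year
owned_rate = 1.0         # cost per unit-hour of owned capacity
utility_premium = 2.0    # utility charges 2x the owned rate per unit-hour used

# Owned infrastructure must be sized for the peak and paid for all year.
owned_cost = peak_demand * owned_rate * HOURS

# Utility capacity tracks actual demand, but at a premium rate.
utility_cost = avg_demand * owned_rate * utility_premium * HOURS

peak_to_average = peak_demand / avg_demand
# Here peak_to_average (5.0) exceeds utility_premium (2.0), so the utility
# is cheaper overall even though each unit-hour costs twice as much.
print(f"owned: {owned_cost:,.0f}  utility: {utility_cost:,.0f}")
```

In this model the ratio of the two bills is (peak/average) divided by the premium, so the utility comes out ahead exactly when the peak-to-average ratio exceeds the premium.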

Cloudonomics Law #2: On-demand trumps forecasting.
The ability to rapidly provision capacity means that any unexpected demand can be serviced and the associated revenue captured. The ability to rapidly de-provision capacity means that companies don’t need to pay good money for non-productive assets. Forecasting is often wrong, especially for black swans, so the ability to react instantaneously means higher revenues and lower costs.

Cloudonomics Law #3: The peak of the sum is never greater than the sum of the peaks.
Enterprises deploy capacity to handle their peak demands – a tax firm worries about April 15th, a retailer about Black Friday, an online sports broadcaster about Super Sunday. Under this strategy, the total capacity deployed is the sum of these individual peaks. However, since clouds can reallocate resources across many enterprises with different peak periods, a cloud needs to deploy less capacity.
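
The inequality is easy to check empirically. This sketch uses randomly generated, purely hypothetical hourly demand for three businesses:

```python
import random

random.seed(42)  # deterministic demo

# Hypothetical hourly demand over one week for three businesses
# whose peaks need not coincide.
week = range(168)
tax_firm = [random.randint(1, 10) for _ in week]
retailer = [random.randint(1, 10) for _ in week]
broadcaster = [random.randint(1, 10) for _ in week]

# Separate silos must each be provisioned for their own peak...
sum_of_peaks = max(tax_firm) + max(retailer) + max(broadcaster)
# ...while a shared pool only needs the peak of the combined demand.
peak_of_sum = max(t + r + b
                  for t, r, b in zip(tax_firm, retailer, broadcaster))

print(sum_of_peaks, peak_of_sum)
```

The assertion `peak_of_sum <= sum_of_peaks` holds for any demand series whatsoever, since the combined series can never exceed all three individual peaks simultaneously in every hour they occur.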

Cloudonomics Law #4: Aggregate demand is smoother than individual.
Aggregating demand from multiple customers tends to smooth out variation. Specifically, for independent demands, the “coefficient of variation” (the standard deviation divided by the mean) of the sum is lower than that of the individual demands; for n customers with similar, independent demand, it falls by a factor of roughly √n. Therefore, clouds get higher utilization, enabling better economics.
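
A quick simulation shows the smoothing effect; the demand series below are made up and independent by construction:

```python
import random
import statistics

def cv(xs):
    """Coefficient of variation: standard deviation relative to the mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

random.seed(7)
customers, hours = 50, 1000

# Independent, hypothetical demand series, one per customer.
demands = [[random.uniform(0, 10) for _ in range(hours)]
           for _ in range(customers)]

# The provider sees only the hour-by-hour aggregate.
aggregate = [sum(hour) for hour in zip(*demands)]

# The aggregate is far smoother than any single customer's demand.
print(f"individual CV ~ {cv(demands[0]):.3f}, aggregate CV ~ {cv(aggregate):.3f}")
```

With 50 similar independent customers, the aggregate’s coefficient of variation drops by roughly √50 ≈ 7x, which is what lets the provider run at high utilization.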

Cloudonomics Law #5: Average unit costs are reduced by distributing fixed costs over more units of output.
While large enterprises benefit from economies of scale, larger cloud service providers can benefit from even greater economies of scale, such as volume purchasing, network bandwidth, operations, administration and maintenance tooling.

Cloudonomics Law #6: Superiority in numbers is the most important factor in the result of a combat (Clausewitz).
The classic military strategist Carl von Clausewitz argued that, above all, numerical superiority was key to winning battles. In the cloud theater, battles are waged between botnets and DDoS defenses. A botnet of 100,000 servers, each with a megabit per second of uplink bandwidth, can launch 100 gigabits per second of attack bandwidth. An enterprise IT shop would be overwhelmed by such an attack, whereas a large cloud service provider — especially one that is also an integrated network service provider — has the scale to repel it.

Cloudonomics Law #7: Space-time is a continuum (Einstein/Minkowski).
A real-time enterprise derives competitive advantage from responding to changing business conditions and opportunities faster than the competition. Often, decision-making depends on computing, e.g., business intelligence, risk analysis, portfolio optimization and so forth. Assuming the compute job is amenable to parallel processing, such tasks can often trade off space and time: a batch job may run on one server for a thousand hours or on a thousand servers for one hour, and a query on Google is fast because its processing is divided among numerous CPUs. Since an ideal cloud provides effectively unbounded on-demand scalability at the same total cost, a business can accelerate its decision-making.
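
Under the idealized assumptions of perfect parallel speedup and linear per-server-hour pricing (both simplifications), the total bill is invariant while elapsed time shrinks:

```python
# Idealized model: perfect parallel speedup, linear per-server-hour pricing.
WORK = 1000.0   # total job size in server-hours (hypothetical)
RATE = 0.10     # price per server-hour (hypothetical)

for servers in (1, 10, 100, 1000):
    elapsed_hours = WORK / servers
    bill = servers * elapsed_hours * RATE
    print(f"{servers:>5} servers -> {elapsed_hours:>7.1f} h, ${bill:.2f}")
# The bill is $100.00 in every case; only the elapsed time changes,
# from 1000 hours down to a single hour.
```

Real jobs parallelize imperfectly (Amdahl’s law) and real pricing has tiers, so this is the best case, but it captures why on-demand scale converts money-neutral capacity into speed.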

Cloudonomics Law #8: Dispersion is the inverse square of latency.
Reduced latency — the delay between making a request and getting a response — is increasingly essential to delivering a range of services, among them rich Internet applications, online gaming, remote virtualized desktops, and interactive collaboration such as video conferencing. However, cutting latency in half requires not twice as many service nodes, but four times as many. For example, growing from one node to dozens can cut worst-case global latency (e.g., New York to Hong Kong) from 150 milliseconds to below 20; shaving the next 15 milliseconds, however, requires a thousand more nodes. There is thus a natural sweet spot for latency-driven dispersion of a few dozen nodes: more than an enterprise would want to deploy, especially given the lower utilization described above.
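
This inverse-square relationship can be sketched with a toy model: if nodes are spread evenly over a service area, the distance to the nearest node, and hence latency, scales as 1/√n. The 150 ms single-node calibration comes from the example above; the model itself is a deliberate simplification.

```python
import math

def worst_case_latency_ms(nodes, baseline_ms=150.0):
    """Toy model: latency to the nearest of n evenly spread nodes
    falls as 1/sqrt(n), calibrated to 150 ms for a single node."""
    return baseline_ms / math.sqrt(nodes)

for n in (1, 4, 16, 64):
    print(n, worst_case_latency_ms(n))
# Quadrupling the node count halves latency: 150.0, 75.0, 37.5, 18.75 ms.
# Halving again, to under 10 ms, would take 256 nodes; hence the sweet spot.
```
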

Cloudonomics Law #9: Don’t put all your eggs in one basket.
The reliability of a system with n redundant components, each with reliability r, is 1-(1-r)^n. So if the reliability of a single data center is 99 percent, two data centers provide four nines (99.99 percent) and three data centers provide six nines (99.9999 percent). While no finite number of data centers will ever provide 100 percent reliability, a handful gets extremely close. And if a cloud provider wants to offer highly available services globally for latency-sensitive applications, it needs a few data centers in each region.
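
A minimal check of the formula and the nines, assuming data centers fail independently (the formula only holds under that assumption):

```python
def system_reliability(r, n):
    """Reliability of n redundant components, each with reliability r,
    assuming independent failures: the system fails only if all n fail."""
    return 1 - (1 - r) ** n

# One 99%-reliable data center, then two (four nines), then three (six nines).
for n in (1, 2, 3):
    print(n, system_reliability(0.99, n))
```

Correlated failures (a regional power outage, a bad software push to every site) break the independence assumption, which is why real availability gains from redundancy are usually smaller than this formula suggests.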

Cloudonomics Law #10: An object at rest tends to stay at rest (Newton).
A data center is a very, very large object. While in theory any company can site data centers in globally optimal locations, on a core network backbone with cheap access to power, cooling and acreage, few do. Instead, they remain where the company or an acquired unit was founded, or where they got a good deal on distressed but conditioned space. A cloud service provider can locate greenfield sites optimally.

Joe Weinman is Strategic Solutions Sales VP for AT&T Global Business Services. The views expressed herein are his own and do not necessarily reflect the views of AT&T.

This post also appeared on BusinessWeek.com.

  1. First, it was called economics
    Then, it became freakonomics
    later, it was wikinomics
    Today, I hear cloudonomics..

    Next is what??

    Please…not samsung mobil .. ;)

  2. [...] Moving a firm’s digital resources onto the internet has quite a few advantages. AT&T Vice President of Strategic Solutions Joe Weinman, guesting on the GigaOM blog, has combined them into the “ten laws of cloudonomics”: [...]

  3. formula for #9 should be 1-(1-r)^n, not *n

  4. Mark,

    You are correct. The superscript on the exponent n got removed during the conversion to HTML. The numbers and conclusion are correct however – for finite n, the total system reliability will be < 100%, although asymptotically approaches 100% in the limit.

    Joe

  5. [...] of the 10 Laws of Cloudonomics can be found at BusinessWeek, or as originally posted on the GigaOM [...]

  6. Thanks Joe. A question on the big picture for you. Telcos are seeing the success and potential of AWS and getting into the game, Google is getting into the game, and Microsoft will be there soon (and already is, under some definitions). Don’t the telcos have a huge disadvantage from a cost and technical perspective? They have to transition their IT estate to IP, and the entire estate if they want e2e. AWS is building from scratch, and it will be secure, it will be scalable and it will be run by good engineers. Yes, your customers want it, but why you instead of AWS?

  7. I think the important thing to focus on re: point #2 is — how many clouds are going to be truly ‘on demand’ in the sense that they will have infinite capacity (i.e. can scale up to meet demand of any single customer)?

    IMHO the only folks who will be able to build clouds of this scale (economically) are Google, Amazon, HP/EDS, IBM, and maybe a few others (telcos?)

    Mid-sized (and smaller) cloud providers are still going to have to behave like legacy hosting providers in that when a big customer is expecting a large traffic spike, that cloud provider will need to go and buy new hardware to support the load – and then figure out what to do with the extra capacity after the spike has ended.

    Maybe this creates a market for overflow cloud services (provided by other clouds with ‘infinite’ capacity) where hosting providers can dump excess expected or unexpected traffic.

  8. [...] 10 Laws of Cloudonomics The reliability of a system with n redundant components, each with reliability r, is 1-(1-r)^n. So if the reliability of a single data center is 99 percent, two data centers provide four nines (99.99 percent) and three data centers provide six nines (99.9999 percent). (tags: cloudcomputing) [...]

  9. Joe,

    A very good perspective on the economics of running a cloud infrastructure indeed. I do, however, find the analysis mostly from a hardware infrastructure perspective. But what about the software? What about the cost of operations management software, and the cost of rearchitecting software to run in a multi-tenant environment? What about utility pricing of business services as opposed to utility pricing of hardware resources? Google sells business services, not hardware. Amazon is closer to selling hardware infrastructure, and in fact at one point was calling the newly christened “cloud” hardware as a service.

    Ranjit Nayak

  10. Great theory as to why businesses should buy cloud services! The only question now is, are they profitable to sell? e.g. Why are salesforce.com’s margins so thin, on $1B annualized revenue?

