
The 10 Laws of Cloudonomics

If your enterprise has access to the same things — virtualization, automation, performance management, ITIL, skilled IT resources, etc. — as cloud service providers, would clouds provide any real and sustainable benefit?

Public utility cloud services differ from traditional data center environments — and private enterprise clouds — in three fundamental ways. First, they provide true on-demand services, by multiplexing demand from numerous enterprises into a common pool of dynamically allocated resources. Second, large cloud providers operate at a scale much greater than even the largest private enterprises. Third, while enterprise data centers are naturally driven to reduce cost via consolidation and concentration, clouds — whether content, application or infrastructure — benefit from dispersion. These three key differences in turn enable the sustainable strategic competitive advantage of clouds through what I’ll call the 10 Laws of Cloudonomics.

Cloudonomics Law #1: Utility services cost less even though they cost more.
An on-demand service provider typically charges a utility premium — a higher cost per unit time for a resource than if it were owned, financed or leased. However, although utilities cost more when they are used, they cost nothing when they are not. Consequently, customers save money by replacing fixed infrastructure with clouds when workloads are spiky, specifically when the peak-to-average ratio is greater than the utility premium.
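
Law #1's break-even condition is easy to sanity-check with a toy calculation. The numbers below (unit cost, hours, a 2x utility premium, a peak-to-average ratio of 5) are illustrative assumptions, not figures from the post: owned infrastructure pays for peak capacity around the clock, while the utility bills only for average usage at a premium rate.

```python
# Illustrative numbers only; unit costs, hours, and the premium are assumed.
def owned_cost(peak_units, hours, unit_cost=1.0):
    # Fixed infrastructure is sized for peak demand and paid for 24/7.
    return peak_units * hours * unit_cost

def utility_cost(avg_units, hours, premium, unit_cost=1.0):
    # Pay-per-use: bill only average usage, at a premium-inflated rate.
    return avg_units * hours * premium * unit_cost

peak, avg, hours, premium = 100, 20, 720, 2.0   # peak-to-average = 5 > premium = 2
print(utility_cost(avg, hours, premium))   # 28800.0 cloud bill for the month
print(owned_cost(peak, hours))             # 72000.0 cost of owning peak capacity
```

With a peak-to-average ratio of 5 and a premium of only 2, the utility comes out well ahead; push the premium above 5 and ownership wins, exactly as the law states.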

Cloudonomics Law #2: On-demand trumps forecasting.
The ability to rapidly provision capacity means that any unexpected demand can be serviced, and the revenue associated with it captured. The ability to rapidly de-provision capacity means that companies don’t need to pay good money for non-productive assets. Forecasting is often wrong, especially for black swans, so the ability to react instantaneously means higher revenues, and lower costs.

Cloudonomics Law #3: The peak of the sum is never greater than the sum of the peaks.
Enterprises deploy capacity to handle their peak demands – a tax firm worries about April 15th, a retailer about Black Friday, an online sports broadcaster about Super Sunday. Under this strategy, the total capacity deployed is the sum of these individual peaks. However, since clouds can reallocate resources across many enterprises with different peak periods, a cloud needs to deploy less capacity.
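
The inequality in Law #3 can be demonstrated with a quick simulation. The demand model below is invented for illustration: several firms with staggered demand spikes, comparing the capacity each would build alone against what a shared pool needs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented workloads: 5 firms with baseline Poisson demand, each given a
# spike in a different window (tax season, Black Friday, Super Sunday...).
demand = rng.poisson(lam=10, size=(5, 8760)).astype(float)
for i in range(5):
    demand[i, i * 1500 : i * 1500 + 100] *= 10   # staggered peaks

sum_of_peaks = demand.max(axis=1).sum()  # capacity if every firm builds for its own peak
peak_of_sum = demand.sum(axis=0).max()   # capacity a shared cloud must deploy
print(peak_of_sum, "<=", sum_of_peaks)   # the pooled peak is far smaller
```

Because the spikes do not coincide, the pooled peak here is a small fraction of the sum of the individual peaks; in the worst case (all peaks simultaneous) the two are equal, which is why the law says "never greater."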

Cloudonomics Law #4: Aggregate demand is smoother than individual.
Aggregating demand from multiple customers tends to smooth out variation. Specifically, the “coefficient of variation” of a sum of random variables is always less than or equal to that of any of the individual variables. Therefore, clouds get higher utilization, enabling better economics.
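
For the clean case of independent, identically distributed demands, the smoothing is easy to verify numerically: aggregating n i.i.d. streams cuts the coefficient of variation by roughly a factor of the square root of n. The exponential demand model below is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented demand model: 100 independent exponential demand streams.
streams = rng.exponential(scale=5.0, size=(100, 10_000))

def cv(x):
    # Coefficient of variation: standard deviation divided by the mean.
    return x.std() / x.mean()

individual = cv(streams[0])
aggregated = cv(streams.sum(axis=0))
print(individual, aggregated)   # aggregate CV is roughly sqrt(100) = 10x smaller
```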

Cloudonomics Law #5: Average unit costs are reduced by distributing fixed costs over more units of output.
While large enterprises benefit from economies of scale, larger cloud service providers can benefit from even greater economies of scale, such as volume purchasing, network bandwidth, operations, administration and maintenance tooling.

Cloudonomics Law #6: Superiority in numbers is the most important factor in the result of a combat (Clausewitz).
The classic military strategist Carl von Clausewitz argued that, above all, numerical superiority was key to winning battles. In the cloud theater, battles are waged between botnets and DDoS defenses. A botnet of 100,000 servers, each with a megabit per second of uplink bandwidth, can launch 100 gigabits per second of attack bandwidth. An enterprise IT shop would be overwhelmed by such an attack, whereas a large cloud service provider — especially one that is also an integrated network service provider — has the scale to repel it.

Cloudonomics Law #7: Space-time is a continuum (Einstein/Minkowski).
A real-time enterprise derives competitive advantage from responding to changing business conditions and opportunities faster than the competition. Often, decision-making depends on computing, e.g., business intelligence, risk analysis, portfolio optimization and so forth. Assuming the job is amenable to parallel processing, such computing tasks can often trade off space and time: a batch job may run on one server for a thousand hours or on a thousand servers for one hour, and a query on Google is fast because its processing is divided among numerous CPUs. Since an ideal cloud provides effectively unbounded on-demand scalability, a business can accelerate its decision-making for the same cost.
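
The space-time trade-off rests on simple arithmetic: under pure pay-per-use pricing, n servers for 1/n of the time costs the same as one server for the whole duration. A toy check, with an assumed hourly rate:

```python
rate = 0.25   # assumed price per server-hour; any flat rate gives the same result
one_server = 1 * 1000 * rate      # one server for a thousand hours
thousand = 1000 * 1 * rate        # a thousand servers for one hour
print(one_server == thousand)     # True: same spend, answer 1000x sooner
```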

Cloudonomics Law #8: Dispersion is the inverse square of latency.
Reduced latency — the delay between making a request and getting a response — is increasingly essential to delivering a range of services, among them rich Internet applications, online gaming, remote virtualized desktops, and interactive collaboration such as video conferencing. However, cutting latency in half requires not twice as many nodes, but four times as many. For example, growing from one service node to dozens can cut global latency (e.g., New York to Hong Kong) from 150 milliseconds to below 20. Shaving the next 15 milliseconds, however, requires a thousand more nodes. There is thus a natural sweet spot for dispersion aimed at latency reduction: a few dozen nodes, more than an enterprise would want to deploy, especially given the lower utilization described above.
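
The stated scaling (halving latency takes four times the nodes) is equivalent to latency falling as the inverse square root of the node count. A small sketch, taking the post's 150-millisecond single-node figure as the baseline:

```python
import math

BASE_MS = 150.0   # the post's single-node New York to Hong Kong figure

def latency_ms(nodes):
    # Halving latency takes 4x the nodes, i.e. latency ∝ 1/sqrt(nodes).
    return BASE_MS / math.sqrt(nodes)

def nodes_needed(target_ms):
    return math.ceil((BASE_MS / target_ms) ** 2)

print(nodes_needed(20))   # a few dozen nodes gets under 20 ms
print(nodes_needed(5))    # shaving the next 15 ms takes hundreds more
```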

Cloudonomics Law #9: Don’t put all your eggs in one basket.
The reliability of a system with n redundant components, each with reliability r, is 1-(1-r)^n. So if the reliability of a single data center is 99 percent, two data centers provide four nines (99.99 percent) and three data centers provide six nines (99.9999 percent). While no finite quantity of data centers will ever provide 100 percent reliability, we can come very close to an extremely high reliability architecture with only a few data centers. If a cloud provider wants to provide highly available services globally for latency-sensitive applications, there must be a few data centers in each region.

Cloudonomics Law #10: An object at rest tends to stay at rest (Newton).
A data center is a very, very large object. While in theory any company can site data centers in globally optimal locations, on a core network backbone with cheap access to power, cooling and acreage, few do. Instead, they remain in locations for reasons such as where the company or an acquired unit was founded, or where they got a good deal on distressed but conditioned space. A cloud service provider can locate greenfield sites optimally.

Joe Weinman is Strategic Solutions Sales VP for AT&T Global Business Services. The views expressed herein are his own and do not necessarily reflect the views of AT&T.


40 Responses to “The 10 Laws of Cloudonomics”

  1. The current cloud model is still missing a piece of critical infrastructure, and that is a correct security model. All major service providers today have devices with security, except the Internet. Your phone has strong hardware-based security that assures the device is allowed and that the service can't be easily messed with. Set-top boxes have security, and changing channels is easy. Credit cards don't have strong security, and look at the mess we are in. The cloud needs two core security components.

    Identity: This is not user ID and password as we know it, but strong access control and ultimately strong federation. This is critical so that cloud operations can be performed anonymously.
    An example would be location-based service calculations. There is no reason that Google Latitude needs to know who you are; it just needs to process data. Ideally this data is processed in a manner such that if the feds showed up with a subpoena they would not be able to track an individual. Perhaps a user can send an invitation with strong crypto that would enable another device or user to see where you are, but Google does not know who it is. To do this well we need the ability to hold strong tamper-resistant and theft-resistant keys in every device, and we need to store hundreds of them. I believe the Trusted Platform Module that is now on 300 million PCs would provide that role.

    The second key thing that is needed is strong bulk encryption. This is so that we can store a backup of my hard drive in the cloud, but only I know the data. On my machine it is in the clear; in the cloud it is encrypted. I believe that in time we will see the emergence of cloud-client computing models that are the result of central infrastructure in the cloud but local computing to keep the information secure. It is not necessary that we trust the cloud, and we need to be very careful about how the law views service providers versus internal data. Encryption, and the key management that goes with it, can really drive utility computing to a whole new model.

    The future is an enterprise without walls, with all devices on the network. This software-configurable enterprise will be based on keys and identities. The hardware technology is much further along than the big-picture thinking, and applications like medical records and joint government development projects are beginning to understand that every PC deployed has a common security chip on the motherboard.

    Steven Sprague
    Wave Systems Corp

  2. Minor point perhaps, but the statement in rule #4:
    ‘Specifically, the “coefficient of variation” of a sum of random variables is always less than or equal to that of any of the individual variables.’ is not true.

  3. First of all: Joe did a good job of defining those laws. Laws (in less aggressive wording: policies and guidelines) are definitely necessary in the current IT industry, which sometimes still looks like "wild west cowboy territory".

    For those wondering how this all should be realized: the IT industry could learn a lot from the strategies and tactics of another utility world: electrical energy. Massive amounts of electrical energy are very expensive to store, so this industry lives and breathes on the concept of keeping demand and supply in sync. This cannot be achieved without neutral coordination between a wide range of suppliers and an even wider range of consumers. As such, it is not a world where there is only room for the big players.

    Once every consuming organization (large or small) starts to realize that maintaining its own, often oversized, IT environment is not smart in the long run, I foresee that brokerage between supply and demand will be able to make cloud services a reality. In order for brokerage to be effective, we still have a long way to go on standardization of interfaces, protocols and agreements. And that will be hard in an industry which raked in huge profits on proprietary products.

    So, those in favor of cloud services: start blowing the horn on standardization of interfaces. A standard virtual machine would be a good place to start. Sure, this is not all, but we have to begin where it is needed most.

  4. John Lazzaro

    When I read through these laws, I can’t help but see an analogy between cloud utilities and mortgage loan securitization … similar fallacies, similar pitfalls … let’s hope the two movies don’t have the same ending.

  5. Nice post Joe, but I think you also need to discuss the reliability of the connections to get a picture of total systemic reliability, and imho that's where the weak link is, as most people don't have redundancy in that area (though no doubt AT&T could supply it ;0 )

  6. Great theory as to why businesses should buy cloud services! The only question now is, are they profitable to sell? e.g. Why are’s margins so thin, on $1B annualized revenue?

  7. Joe,

    A very good perspective on the economics of running a cloud infrastructure indeed. I do, however, find the analysis mostly from a hardware infrastructure perspective. But what about the software? What about the cost of operations management software, and the cost of rearchitecting software to run in a multi-tenant environment? What about utility pricing of business services as opposed to utility pricing of hardware resources? Google sells business services, not hardware. Amazon is closer to selling hardware infrastructure, and in fact at one point was calling the newly christened "cloud" hardware as a service.

    Ranjit Nayak

  8. jgannonwp

    I think the important thing to focus on re: point #2 is — how many clouds are going to be truly ‘on demand’ in the sense that they will have infinite capacity (i.e. can scale up to meet demand of any single customer) ?

    IMHO the only folks who will be able to build clouds of this scale (economically) are Google, Amazon, HP/EDS, IBM, and maybe a few others (telcos?)

    Mid-sized (and smaller) cloud providers are still going to have to behave like legacy hosting providers in that when a big customer is expecting a large traffic spike, that cloud provider will need to go and buy new hardware to support the load – and then figure out what to do with the extra capacity after the spike has ended.

    Maybe this creates a market for overflow cloud services (provided by other clouds with ‘infinite’ capacity) where hosting providers can dump excess expected or unexpected traffic.

  9. Thanks Joe. A question on the big picture for you. Telcos are seeing the success/potential of AWS and getting into the game, Google is getting into the game, and Microsoft will be there soon (and already is, under some definitions). Don't the telcos have a huge disadvantage from a cost and technical perspective? They have to transition their IT estate to IP, and the entire estate if they want e2e. AWS is building from scratch, and it will be secure, it will be scalable, and it will be run by good engineers. Yes, your customers want it, but why you instead of AWS?

  10. Mark,

    You are correct. The superscript on the exponent n got removed during the conversion to HTML. The numbers and conclusion are correct, however: for finite n, the total system reliability will be < 100%, although it asymptotically approaches 100% in the limit.