Bezos’s law signals it’s time to ditch the data center


Credit: Gigaom Illustration/Steve Jurvetson

Wherever you stand on the debate over which cloud giant will reign supreme, it’s clear that the economic forces shaping the market are evolving quickly. After nearly three decades helping companies move their enterprise applications into the modern era, whether to new servers, operating systems or clouds, I’ve seen the cycle before: innovation leads to rapid expansion, which leads to consolidation, shake-out and more innovation. We’ve been down this road before.

Now comes new cloud computing data based on Total Cost of Infrastructure Ownership (TCIO), showing that cloud providers are innovating and reducing costs in areas beyond hardware. The result is a more compelling case for cloud as a far cheaper platform than a build-your-own data center. Further, the economic gap favoring the cloud provider platform will only widen over time.

In many ways, cloud computing is bringing to the enterprise world what Henry Ford brought to the automobile. Ford designed a manufacturing method that steadily reduced the cost of building the Model T, allowing him to keep lowering the price of his car. The result was a decline in the number of US auto manufacturers from more than 200 in the 1920s to just eight in 1940. This astounding 96 percent reduction in manufacturers over 20 years foreshadowed what could happen to enterprises running their own data centers in the not too distant future.

Previously, I posited that the future of cloud computing is the availability of more computing power at a much lower cost. I termed this “Bezos’s law” and defined it as the observation that, over the history of cloud, the price of a unit of computing power is reduced by 50 percent approximately every three years.

Bezos’s law measures the cost of a given unit of cloud computing over time, whereas Moore’s law tracks the number of transistors on integrated circuits over time. While Moore’s law measures the rate of change of the CPU, a small fraction of the cost of a data center or cloud, Bezos’s law measures the rate of change of Total Cost of Infrastructure Ownership (TCIO).
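Stated as a formula, the law is simple exponential decay: a price that halves every 36 months. Here is a minimal sketch in Python (the $1.00 starting price and the comparison horizon are illustrative assumptions, not figures from the article) showing how that pace compares with an 18-month, Moore’s-law-style halving:

```python
# Bezos's law as stated above: the price of a unit of cloud computing
# halves roughly every 36 months. For contrast, a Moore's-law-style
# curve halves roughly every 18 months.

def halving_price(start_price: float, months: float, halving_period: float) -> float:
    """Price after `months` have elapsed, halving every `halving_period` months."""
    return start_price * 0.5 ** (months / halving_period)

start = 1.00  # hypothetical $1.00 per unit of computing today
for years in (3, 6, 9, 12):
    bezos = halving_price(start, years * 12, 36)  # Bezos's-law pace
    moore = halving_price(start, years * 12, 18)  # Moore's-law pace
    print(f"{years:>2} yrs: Bezos's law ${bezos:.3f}, Moore's pace ${moore:.4f}")
```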

Why is TCIO so relevant?

The team from IBM SoftLayer commissioned McKinsey to do a study of TCIO. The comprehensive analysis highlighted the following about total costs:

Slide courtesy of IBM.

When considering the rate of Bezos’s law in light of IBM’s analysis, it is clear cloud providers are innovating and reducing cost in areas beyond hardware.

A chart of overall monthly data center costs from Amazon engineer James Hamilton’s blog (which does not include the cost of building a data center or operational labor) points to power as 31 percent of the monthly total. And that was back in 2010.

There have been many articles written about the power usage effectiveness (PUE) of the at-scale clouds built by Facebook and Google. Enterprises can only keep pace with the TCIO of cloud providers if they innovate and drive cost out of data center operations beyond reductions in hardware.

But a glimpse at data center efficiency indicates that enterprises are not improving on the PUE metric and report much higher figures (1.7 self-reported) than cloud providers. People in an enterprise rarely lose their jobs if the data center is not that efficient. At a cloud provider, a power-inefficient product costs jobs and puts the whole business in jeopardy.
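For context, PUE is total facility power divided by the power that actually reaches the IT equipment, so everything above 1.0 is overhead. A quick sketch of what that gap means, using the self-reported 1.7 above against an assumed hyperscale figure of roughly 1.1 (Google and Facebook have publicly reported fleet-wide numbers in that neighborhood):

```python
# PUE = total facility power / IT equipment power.
# Everything above 1.0 is power spent on something other than computing.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT load (cooling, power distribution, lighting) implied by a PUE."""
    return it_load_kw * (pue - 1.0)

it_load = 1000.0  # hypothetical 1 MW of IT equipment
for label, pue in (("enterprise (self-reported)", 1.7), ("cloud provider (assumed)", 1.1)):
    print(f"{label}: PUE {pue} -> {overhead_kw(it_load, pue):.0f} kW of overhead")

# 700 kW versus 100 kW of overhead: a sevenfold difference in facility
# power spent on everything other than the servers themselves.
```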

There are obvious drivers ensuring the compounding trend described in Bezos’s law will continue for many decades.

  • Scale: Every day, Amazon, Google, IBM and Microsoft are adding huge amounts of capacity, enough to run most Fortune 1000 companies.
  • Innovation: The cloud market is competitive with innovative approaches and services being brought to market quickly.
  • Competition and price transparency: While the base IaaS service varies among providers, they are close enough for customers to easily compare offerings.

Let’s assume that on average the Fortune 5000 each run seven data centers, for a total of 35,000. Bezos’s law will drive (think Henry Ford’s Model T) a similar titanic shift away from data centers to the cloud, resulting in a roughly 90 percent reduction (approximately 30,000 facilities) in enterprise-owned and operated data centers by 2030. Given Gartner’s prognostication that public cloud services will hit $131B by 2017, this seems obvious. (There are likely to be new businesses dedicated to repurposing data centers as retirement homes or newfangled dance clubs.)
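For what it’s worth, here is the arithmetic behind that projection with the assumptions spelled out; the company count, per-company average and reduction rate are the article’s round estimates, not measured data:

```python
# Back-of-the-envelope version of the projection above.
companies = 5000       # assumed number of large enterprises
dcs_per_company = 7    # assumed average data centers per company
reduction = 0.90       # assumed share migrated to cloud by 2030

total_dcs = companies * dcs_per_company   # 35,000 today
retired = round(total_dcs * reduction)    # 31,500, rounded in the text to ~30,000
remaining = total_dcs - retired           # ~3,500 still enterprise-run
print(f"{total_dcs:,} data centers -> {retired:,} retired, {remaining:,} remaining")
```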

Just as people first thought automobiles were toys, early critics said the cloud would be useful only in limited cases: test/development environments and spiky workloads. Now the consensus is that the cloud can be used for almost all applications. Early cars were expensive and unreliable too, but a compelling reduction in total cost of ownership put the whole country on wheels. It may be the end of the road for the data center, but the economic forces shaping the cloud signal it’s the beginning of a better idea for the enterprise.

Greg O’Connor is CEO of AppZero.

20 Comments

Tatersolid

This ignores a simple reality: big, complex systems fail in unpredictable ways. We’ve been slowly transitioning to the cloud from our on-premises DC to avoid a networking and power upgrade cycle. But the frequent widespread outages at Amazon, Azure, the telcos, etc. give us pause.

It’s hard to beat the reliability of the short strands of fiber currently connecting most of our employees to their critical applications.

As long as the US ISP market is dominated by companies like Verizon and Comcast, betting your business on WAN/internet connectivity seems stupid. SLA credits cannot come close to the costs of downtime. Our main access-to-DC connectivity has gone down once in 15 years due to a Cisco bug, but all of our various WAN/internet circuits have multiple outages per year.
Fix the internet and the cloud might conquer, but good luck taking down the big telcos, Mr. Bezos. Amazon can’t even deploy IPv6 in 2014!

Brian McAuliffe

Regarding:

“The result was a decline in the number of US auto manufacturers from more than 200 in the 1920s to just eight in 1940. This astounding 96 percent reduction in manufacturers over 20 years foreshadowed what could happen to enterprises running their own data centers in the not too distant future.”

I could see the comparison to other cloud computing companies, i.e. automakers to cloud makers if you will, but I fail to see the correlation to companies running their own data centers. I’ll generally agree the trend is to move into the cloud, but to say a company will cease to be competitive or will fail because it decided to run its own DCs is a stretch. It might produce a competitive disadvantage, but not necessarily a fatal one. In a world with ever-increasing regulation, from SOX to HIPAA, and data breaches galore, cloud companies still need to make the compelling argument that their particular infrastructure is resilient and hardened to the point that breaches and violations will not occur. A CTO might not find much longevity if his or her only answer to a data breach was ‘it was the cloud company, not us’…

The flip side of that argument is that the cloud company, by virtue of its eminent cloudiness, will provide better protections than a company can in its own DCs. That’s a nice fantasy; the reality is most of them have agreements that grant them little liability for such breaches.

Brian

Alex

This is such rubbish.

At *any* scale (i.e. you need more than a couple of servers), Amazon is simply not cost competitive. Amazon does not generate anything; it packages what exists and resells it to the not-very-bright crowd that cannot do fundamental algebra.

If Amazon buys something for $X then you can buy it for $X + 1% any day. Amazon sells it to you for 5x to 10x.

John Booth

This article is full of “cloudburp.” There are very few cloud providers with their own data centres (Amazon, Google and Microsoft, to name a few); most cloud providers reside in colocation facilities, and colocation facilities are not that energy efficient.
An energy efficient data centre is one that adopts as many as possible of the best practices contained in the EU Code of Conduct for Data Centres (Energy Efficiency) or the DCEP programme, and this includes moving into the software stack and eliminating bad code.
The author should read “Data Centre 2.0: The Sustainable Data Centre” by Rien Dijkstra for some real insight into how data centres can become more efficient.

grego1

John – The point is not about whether I know how to build an efficient data center. The point is that most enterprises cannot build, operate and maintain a data center at anything close to the cost that AWS, Microsoft, IBM and Google can.

Thomas

I will not argue with the article – I find it quite true. But (there is always a but) efficient use of the cloud depends on very fast internet. And unless internet speeds are 5G-like for every device everywhere (or at least in highly populated areas), cloud computing will not be popular, and thus it will remain at high cost, because users will be limited to large companies with optical links.
And looking at the current market, the large telecoms are totally milking internet users. Oh, you get “unlimited,” but it’s not unlimited; after 1–3 GB the connection gets slow… what cloud computing then? An optical connection in Montana, Utah or Maine? Forget it in the low-population states. Cloud computing will be suffocated if things remain as they are, and it seems they will not change soon.

monty

He’s not taking into account the government/spy factor.
Companies like their proprietary information private.
As data centers grow, the government puts their copying machines in them.
Companies do NOT LIKE THAT.

dongateley

Bezos will buy up all those empty data centers he is displacing and use them as warehouses and depots for his same day retail delivery. Then and only then will he turn on the profit spigot by bumping prices.

quiviran

A data center is a data center, whether you build your own or rent part of someone else’s. So for a provider of data center rentals (“the cloud”) to promote a law that concludes you should buy from him, or someone like him, seems disingenuous at best. The clear technology trend is decentralization. Computing, refrigeration, power generation, you name it. And while there are many entrants into an emerging field, that doesn’t foretell that the field will survive at all. Try buying a mini-computer today. Long term, the cloud must fail due to security and privacy issues. No wise CEO is going to let the crown jewels reside in a location outside their control. It would be like trying to conduct your business using Google Docs and SketchUp. You’re OK until the provider decides he doesn’t want to do that anymore, or decides his business objectives outweigh your privacy needs.

Tor Björn Minde

@ased Maybe they calculate including a hardware refresh cycle of 3–4 years? That means you cannot use the latest IC improvements because you still have your “old” hardware. Taking IDC’s prediction that data will grow 40 times between now and 2020, the computing efficiency improvement from Moore’s law would cover it all, and we shouldn’t see any new data centers at all.

Darrell O'Donnell, P.Eng.

Reblogged this on and commented:
As more and more operational personnel (e.g. first responders, security professionals, and emergency managers) consider the cloud, this article should be used as a reference.

David Mytton

It’s true that building your own data center now is probably a poor choice, given the amount of time and money the big providers have invested in efficiency and cost; just look at the innovations from Google, Amazon and Microsoft across cooling, power, networking and hardware.

But the number of people who are actually considering building their own facilities has to be tiny, which makes stressing this point mostly irrelevant. The decision is not cloud vs. build-your-own data center; it’s cloud vs. running your own equipment in someone else’s data center, i.e. colocation. And it’s this comparison that matters, because once you hit scale (some say a monthly spend of around $200k), it is almost always better to run your own, from the perspective of support, cost and control. Flexibility is where public cloud can be compelling.

So, talking about building your own data center is a smoke screen meant to distract. The real discussion has to be around colo vs cloud.

Todd Shaffer

David,

The total cost measurement can just as easily be applied to the colo environment. You’re still paying for lag time, labor, electricity, real estate, operating systems, networking equipment, servers, storage (SAN, fiber), failures of the aforementioned, opportunity cost, software/hardware maintenance contracts and geographic vulnerability.

What he’s pointing out is that there is far more to calculating the cost of operating a datacenter/colocated environment than just the monthly out-the-door check to the facility. As for building your own real datacenter: the smaller niche competitors out there are either going to turn into the Fords and GMs or they’re going to die.

[email protected]

Who is McKenzie… Any relationship to McKinsey?

ased

Hmm, it’s probably because I don’t know enough about how cloud computing costs are counted, but I can’t quite wrap my head around Bezos’s law, or mainly its link to Moore’s law.

Does Bezos’s law state that these cloud providers rip us off, since it takes twice as long for the price of “a unit of computing power” to be cut in half as Moore’s law says it should? Or are you saying that the same “unit of computing power” means an actual new server that is now four times as powerful (after 36 months), and still the price drops to half because of scale, innovation, competition etc., in which case it would be a great deal?

The following is the same calculation everyone has made, so you miss nothing by skipping it.

So according to Moore’s law, the number of transistors on an IC doubles every 18 months. Can we also assume this applies to the number of operations we can do with the same amount of energy? That would mean a server uses the same amount of energy even though it has double the power.

If a server takes the same amount of maintenance, energy and space while its computational power doubles, shouldn’t the cost of ownership (TCIO) per computation unit (e.g. a megaflop, or more practically an hour of use of an instance with the same virtual processor power and memory) be cut in half? If we get this doubling every 18 months, shouldn’t the price of a computation unit halve every 18 months instead of 36? Maybe even faster, due to the things you mentioned: scale, innovation and competition.

vexxed72

The use of “unit of computing power” is misleading, but I believe the author is talking about a complete machine and the cost of running, maintaining, etc. that machine, not just the raw processing power of the CPU.

The CPU is the only component that follows Moore’s law, and these days it’s often not even the most important scaling point for cloud-hosted services (network, memory and data storage are often the biggest pain points).
