
Summary:

In this era of cheap-and-reliable rent-a-data centers run by Amazon, Rackspace, and others, does it make sense for a company to build a new data center on its own? Unsurprisingly, Amazon’s own James Hamilton doesn’t think so. More surprisingly, other IT pros agree.


In this era of cheap-and-reliable rent-a-data centers, does it make sense for a company to build a new data center on its own anymore?

Amazon’s data center guru James Hamilton is pretty clear that he sees no reason for most companies to keep constructing new data centers from scratch, but if they have a huge compute load and really have to, they should build way more capacity than they need and sell off the excess a la Amazon itself.

While Hamilton has a vested interest in people moving their compute loads to Amazon’s infrastructure, his “build big or don’t build at all” mantra resonates with several other IT experts. The consensus: it makes sense for most companies to trust their data center needs to the real experts in data centers — the companies that build and run data centers as a business. More companies will start moving more of their new compute loads — though not necessarily all the mission-critical stuff — to the big cloud operators. That roster includes the aforementioned players as well as Google, Microsoft, IBM, Hewlett-Packard, Oracle and others that are building out more of their own data center capacity for use by customers.

And for startup companies, the decision not to build is a no-brainer. Connectivity to the cloud is the real issue for these companies. “If I was starting a greenfield company, the data center would be the size of my bathroom; there wouldn’t necessarily even be a server, maybe a series of switches — all my backoffice apps, my sales force automation, my storage would be handled in the cloud,” said David Nichols, CIO Services Leader for Ernst & Young, the global IT consultancy.

David Ohara, GigaPRO analyst and co-founder of Greenm3, holds a more nuanced view. (Data center size is typically described in terms of megawatts, or MW, of electricity consumption.) Companies with mid-sized loads really have to think things through, he said. “Once you get to the 5MW to 7.5MW data center, that’s just big enough to be super complex, but the economics are weird. At that point you should probably build a 15MW data center and sell off the other 7.5MW to someone else, or partner with Digital Realty Trust or some other company to share costs,” Ohara said.

It’s in that 5MW to 7.5MW range that a company starts having to know about the niceties of chillers and power systems, he said.

“When you break through the 10,000-server barrier — that’s when you start needing 3MW to 5MW of power, and now you’re getting into major facility costs where you have to have multiple diesel generators and complex power and cooling systems. And it’s in that 10,000-to-100,000-server zone where costs soar. At that point, there aren’t many companies on the planet that can achieve the scale of an Amazon, a Rackspace, a Google, or a Microsoft. So why not trust your loads to the experts?”

There will always be pushback on this point, but attitudes are starting to change. Asked what type of data or task should not be entrusted to a cloud provider, the CIO of one big company said, “The formula for Coke. But that’s about it.”

Chris Perretta, CIO of State Street Bank, admits he may be an outlier here, but his company — which manages $23 trillion of other people’s assets — is going to hold onto its own data centers for the foreseeable future. “We want our own data centers,” he said. “Right now, I’d have a hard time convincing customers or even myself to use someone else’s.”

Database guru Michael Stonebraker, co-founder and CTO of VoltDB, fully backs Hamilton’s thesis. There is simply no way for more than a handful of huge companies to achieve Amazon’s data center scale — the same low electricity costs, the same experience standing up data centers. As long as those companies are okay with running in the public cloud, their decision is simple. “Sooner or later, if you’re a small guy, there will be huge incentive to move to the public cloud. You’ve either got to be really big or run on someone else’s data center,” he said.

Photo courtesy of Flickr user jphilipg.

  1. This article could use some additional clarification. I personally would never recommend “generically” that a company build more than they need so they can sell/lease it. The focus of any company should be on maximizing their ability to create, sell, and be efficient in their area of expertise. Becoming a data center provider is not a core element the average company should be attempting to build. Also, there is no magic cutoff point for when generators or redundancy are needed. The fact is, if you’re running critical load in your facility, then you need generators and other facets of redundancy. If you can’t afford to do it, then you should be paying someone else to.

    Thanks,
    Mark Thiele

  2. The article should mention AWS service level agreements.

    http://aws.amazon.com/ec2-sla/
    “If the Annual Uptime Percentage for a customer drops below 99.95% for the Service Year, that customer is eligible to receive a Service Credit equal to 10% of their bill (excluding one-time payments made for Reserved Instances) for the Eligible Credit Period. To file a claim, a customer does not have to wait 365 days from the day they started using the service or 365 days from their last successful claim. A customer can file a claim any time their Annual Uptime Percentage over the trailing 365 days drops below 99.95%.”

    60 min × 24 hr × 365 days = 525,600 min

    525,600 min × 0.0005 = 262.8 min of downtime a year, or about 4.38 hours.

    If your internal app has an SLA of 99.999%, you will still need to build it in-house.

    Good luck.

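The SLA arithmetic in the comment above generalizes to any uptime target. A minimal sketch (the function and constant names are illustrative, not part of the AWS SLA):

```python
# Convert an uptime percentage into the downtime it permits per year.
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600 minutes

def allowed_downtime_minutes(uptime_percent):
    """Minutes of downtime per year permitted by a given uptime %."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

print(round(allowed_downtime_minutes(99.95), 1))   # 262.8 min (about 4.38 hours)
print(round(allowed_downtime_minutes(99.999), 2))  # 5.26 min
```

The gap between those two numbers — roughly 4.4 hours versus about 5 minutes a year — is the commenter’s point: a five-nines internal requirement is far stricter than the 99.95% the EC2 SLA credits against.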
