In this era of cheap, reliable rent-a-data-center services, does it make sense for a company to build a new data center of its own anymore?
Amazon’s data center guru James Hamilton is pretty clear: he sees no reason for most companies to keep constructing new data centers from scratch. If they have a huge compute load and really must build, they should build far more capacity than they need and sell off the excess, à la Amazon itself.
While Hamilton has a vested interest in people moving their compute loads to Amazon’s infrastructure, his build-big-or-don’t-build-at-all mantra resonates with several other IT experts. The consensus: it makes sense for most companies to trust their data center needs to the real experts, the companies that build and run data centers as a business. More companies will start moving more of their new compute loads, though not necessarily all the mission-critical stuff, to the big cloud operators. That roster includes Amazon as well as Google, Microsoft, IBM, Hewlett-Packard, Oracle and others that are building out more of their own data center capacity for use by customers.
And for startups, the decision not to build is a no-brainer; connectivity to the cloud is the real issue for these companies. “If I was starting a greenfield company, the data center would be the size of my bathroom; there wouldn’t necessarily even be a server, maybe a series of switches — all my backoffice apps, my sales force automation, my storage would be handled in the cloud,” said David Nichols, CIO Services Leader at Ernst & Young, the global IT consultancy.
David Ohara, GigaPRO analyst and co-founder of Greenm3, holds a more nuanced view. Companies with mid-sized loads really have to think things through, he said. (Data center size is typically described in terms of megawatts, or MW, of electricity consumption.) “Once you get to the 5MW to 7.5MW data center, that’s just big enough to be super complex but the economics are weird. At that point you should probably build a 15MW data center and sell off the other 7.5MW to someone else, or partner with Digital Realty Trust or some other company to share costs,” Ohara said.
It’s in that 5MW-to-7.5MW range, he said, where a company starts having to master the niceties of chillers and power systems.
“When you break through the 10,000-server barrier, that’s when you start needing 3MW to 5MW of power, and now you’re getting into major facility costs where you have to have multiple diesel generators and complex power and cooling systems,” he said. And it’s in that 10,000-to-100,000-server zone where costs soar. At that point, there aren’t many companies on the planet that can achieve the scale of an Amazon, a Rackspace, a Google or a Microsoft. So why not trust your loads to the experts?
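The server-count-to-megawatt figures quoted above check out on the back of an envelope. A quick sketch, assuming a typical server draws roughly 300 to 500 watts at the wall (that per-server figure is our assumption, not one from the sources quoted here, and it ignores cooling and power-distribution overhead):

```python
def it_load_mw(servers, watts_per_server):
    """Raw IT load in megawatts for a fleet of identical servers.

    Ignores facility overhead (cooling, power distribution), which
    would push the real draw higher.
    """
    return servers * watts_per_server / 1_000_000

# Assumed range: 300-500 W per server (illustrative, not from the article).
low = it_load_mw(10_000, 300)   # 3.0 MW
high = it_load_mw(10_000, 500)  # 5.0 MW
print(f"10,000 servers: {low:.1f} MW to {high:.1f} MW")
```

At 100,000 servers the same arithmetic lands in the tens of megawatts, which is why the cost curve in that zone bends so sharply toward the hyperscale operators.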
There will always be pushback on this point, but attitudes are starting to change. Asked what type of data or task should not be entrusted to a cloud provider, the CIO of one big company said, “The formula for Coke. But that’s about it.”
Chris Perretta, CIO of State Street Bank, admits he may be an outlier here, but his company, which manages $23 trillion of other people’s assets, is going to hold onto its own data centers for the foreseeable future. “We want our own data centers,” he said. “Right now, I’d have a hard time convincing customers or even myself to use someone else’s.”
Database guru Michael Stonebraker, co-founder and CTO of VoltDB, fully backs Hamilton’s thesis. Only a handful of huge companies can hope to match Amazon’s data center scale, its low electricity costs or its experience standing up data centers. As long as companies are okay with running in the public cloud, the decision is simple. “Sooner or later, if you’re a small guy, there will be huge incentive to move to the public cloud. You’ve either got to be really big or run on someone else’s data center,” he said.