Sometimes, it’s hard to tell when a novelty becomes a trend or a trend becomes the new normal. This is not one of those times. The era of the on-premises server is clearly behind us, with the turning point literally on our calendars.
In just the past week, we’ve seen significant server-shedding events and announcements from Google, Box and Amazon Web Services. Even Microsoft finally seems to get it: enabling people to work from anywhere is more important than keeping them leashed to a platform going nowhere.
This is no longer just a matter of doing things a little more cheaply, or a little less painfully, by doing them in the cloud. It’s bigger than that. Today, running your business on private servers is about as odd as carrying scuba tanks to provide your own private air supply. Does it give you complete control over exactly what you breathe? Certainly, but can you make a business case for all that excess weight? Going forward, the notion of owning your own server farm looks equally eccentric.
Servers were a cost-effective stopgap during the period when processing power got cheaper much more quickly than our planet-wide networks became pervasive and interoperable. The 1971 debut of Intel’s first microprocessor, the 4004, came three years before the word “internet” was introduced; the birth of the modern Internet’s TCP/IP protocol was still eleven years away, a span of roughly seven cycles of Moore’s-Law processor improvement. Having your own room full of servers made sense, like having your own air dome on the surface of Mars: inside our little habitats we had breathable pockets of computation and connectivity, but outside it was still a near-vacuum.
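The “seven cycles” figure is back-of-the-envelope arithmetic, and it checks out. A minimal sketch, assuming the commonly cited 18-month doubling period and taking the dates as the paragraph above gives them:

```python
# Checking the Moore's-Law arithmetic above.
# Assumption: the commonly cited 18-month (1.5-year) doubling period.

MICROPROCESSOR_DEBUT = 1971   # Intel 4004
TCP_IP_BIRTH = 1982           # "still eleven years away," per the text

years_elapsed = TCP_IP_BIRTH - MICROPROCESSOR_DEBUT  # 11 years
doubling_period_years = 1.5

cycles = years_elapsed / doubling_period_years       # ~7.3 doublings
growth = 2 ** round(cycles)                          # ~128x transistor count

print(f"{cycles:.1f} doubling cycles, roughly {growth}x more transistors")
```

In other words, by the time the network protocol that could stitch those server rooms together even existed, the chips inside them had already improved by about two orders of magnitude.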
As early settlers on Planet Computing, we all got pretty skilled at charging up our air tanks and patching our space suits for those dangerous walks across the landscape from one dome of air to the next. Remember your brief episodes of “going online” with your dial-up modem? Over time, though, we’ve made the atmosphere of connectivity and shared processing power much more breathable. We’ve even leapfrogged ourselves: as one CEO recently asked his facility manager, “We spent how much to put state-of-the-art Wi-Fi in this building? And I get more bandwidth on my 4G LTE smartphone?”
We’ve also changed, in a fundamental way, the kinds of workloads we perform. When business computing was the automation of internal record management, data originated at a predictable pace and analysis was needed on a regular schedule. Owning a battery of servers that can handle today’s world of bursty, externally driven data would be like carrying tanks big enough to get you through a marathon run at a sprinter’s speed. Even if you can afford the cost, you rightly dread the unproductive burden.
Furthermore, so far we’ve only talked about gross computational capacity. We haven’t even begun to discuss the more complicated tasks that we want to be able to perform, like real-time collaboration. When you own your own servers, collaboration requires you to simulate the sharing that a cloud makes completely straightforward. If something happens on Department X servers in one building, and something related happens on Department Y servers elsewhere, it takes a ziggurat of middleware to make it look as if a shared process is taking place in a shared space. This is more expensive, more failure-prone, and can’t possibly be more effective than the real thing: an actual, single, concurrently accessible work product on the shared foundation of a cloud service provider. That could be any of several reputable innovators, but it almost certainly will not be someone who’d rather sell you software to run on your own machines.
Imagine a day when colonists on Planet Computing wake up to the news that the air is now dense enough to breathe. Would some people say, “I don’t know: I’m not sure I trust it”? Would some people strap on their air tanks, out of long-established habit, and say, “You go ahead and try breathing that stuff. I’m sticking with what I know”? Of course. Every new capability has its late adopters.
But will those resolute tank-breathers be late to work, tired by their load, and preoccupied with watching their air gauges while others are sprinting ahead? Absolutely, and that’s what we see today in business, education, government, health care and every other institution. Those who shed the dead weight first get more done, sooner and better and at less cost.
We live on a fully connected planet, surrounded by tasks that increasingly demand immediate, scalable access to rich processing power, mediated by ubiquitous networks. No new company starts out with a budget line item of “buy servers”; even in the largest enterprises, few today would want to risk joining the hall of shame for managers who build over-budget and under-performing server farms instead of marshalling modern cloud services to solve the problem.
RIP, the server. You were what we needed when there was no alternative. You’re now a relic of a time that’s almost unrecognizable today; you represent a cost that’s unaffordable, and unreasonable, for all time to come.
Peter Coffee is VP for strategic research at salesforce.com, where he serves as a liaison with the IT and business community to define the opportunity and clarify customers’ requirements on the company’s evolving Salesforce1 Platform.