Despite the inexorable move to the cloud, some companies cling to the idea that the risks outweigh the benefits. Dave Girouard, former President of Enterprise for Google, argues that the logic these skeptics use is, well, “insane.”

photo: John Wollwerth/Shutterstock.com

In the years that I led the Google Apps team, I heard every imaginable objection to cloud computing. Back in 2007, those arguments may have had more merit, given the immaturity of most services and the limited track record of the providers.

But over time, it became clear to me that those who rejected cloud computing (typically in favor of that unicorn of technology: the private cloud) were experiencing a form of insanity that, if left untreated, would put the very existence of their companies at risk.

When I left Google last year (to found Upstart), I jumped to the other side of the table and became a consumer of the cloud. As CEO of a tech company that does not even own a computer, tablet, or phone, I now get to experience the cloud fully from a customer’s perspective. So before I get into any specifics about the myths of the cloud-averse, allow me to recount a couple of anecdotes to give a little context.

I’m entirely obsessed with Google Analytics’ real-time dashboard, so it was with much dismay that on the morning of Jan. 16 of this year I saw our traffic at Upstart drop to zero.

Zilch. Nada. Zippo.

Checking quickly with our engineers, I learned that Heroku had gone down, and since we’re hosted on Heroku, we got taken down with it. Hard. Because Heroku is an application platform that runs on Amazon Web Services, I didn’t know whom to blame. To me, it didn’t matter – we were down for 40 minutes or so, and that sucked. I checked Heroku’s status page and figured out what had happened and what they were doing to fix it.

But all I could really say to our users was “we’re waiting as fast as we can!”

That is one of the chief conundrums of cloud computing: you are powerless to fix a problem, and entirely dependent on somebody you can’t see, hear, or yell at to fix it for you. People hate that.

I was on the other side of this sort of panic many times during my years at Google. Despite the BS about the “end of email,” it’s still the most broadly and voraciously consumed business application in the world. So I occupied an elite circle in Hell when our services failed to deliver. In short, when Gmail went down, pandemonium ensued – particularly in Silicon Valley. In fact, the outcry from a sizable Gmail crash was enough to bring down Twitter, too.

I recall a particularly terrible outage that happened three or four years ago. I was in a hotel in Philadelphia when my own email stopped working, and my Twitter feed lit up like a roman candle. Gmail was down, and I’m not talking about one of those outages that affects less than 1 percent of users (you know, like a few million people). I’m talking about a big one.

I called a couple of engineers who I knew were close to the situation and were working to resolve it. But I got off the phone quickly, because I knew talking to me wasn’t helping anything. (To the contrary, I was wasting their time.) So I went out for a run, just praying that Gmail would be back up by the time I returned (which it was).  So even as President of Google Enterprise, I was powerless to do more than ensure that the best people were working on resolving the outage.

And this, in fact, is the essence of the cloud. As a consumer or corporate buyer of cloud computing, your task is ridiculously simple: Make sure the best people are working on it. And in fact, those engineers working on Gmail are so good that sizable outages are extremely rare these days. Yet whether it’s service reliability, data protection, or regulatory issues, there remains to this day an insane resistance to cloud computing that is quickly becoming the “Darwinian litmus test” for companies in every industry.

This insanity has three pervasive dimensions to it:

Insanity #1: These big outages mean we should keep things in house

I have news for you: a big public outage is actually a sign of success for a cloud vendor – after all, it means tons of customers are relying on its service, no? (When was the last time you read about IBM experiencing a hosted Lotus Notes outage?) But underlying Insanity #1 is the presumption that a cloud outage implies your in-house IT organization could do better.

In reality, outages merely provide your IT department with excuses to protect its kingdom. The fact is that Gmail uptime is in the range of 99.99 percent – meaning the average user experiences about four minutes of downtime per month – and Amazon targets 99.95 percent for AWS. So, can your team beat that?
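The downtime arithmetic behind those uptime figures is easy to check for yourself – a quick sketch (the helper function here is illustrative, not any vendor’s API, and assumes a 30-day month):

```python
def downtime_minutes_per_month(uptime_pct, days_in_month=30):
    """Minutes of downtime implied by an uptime percentage over one month."""
    total_minutes = days_in_month * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - uptime_pct / 100)

print(round(downtime_minutes_per_month(99.99), 1))  # Gmail's figure: ~4.3 minutes
print(round(downtime_minutes_per_month(99.95), 1))  # AWS target: ~21.6 minutes
```

Three nines and change works out to roughly four minutes a month – that is the bar an in-house team would have to clear.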

Further, this confused IT leader thinks his team can manage a service more reliably than a company whose entire existence depends on its ability to do so. To put it bluntly, Google has assembled the greatest collection of computer science talent in the world. Similarly, Amazon has a multi-year lead in delivering compute power by the drop, which it’s happy to provide to you at the single-digit gross margins of a successful retailer. Your IT organization simply doesn’t rate at this level.

Insanity #2: I need somebody to talk to when a service interruption occurs

I’ve never understood why IT departments seem to care deeply about how important they are to their vendors. It’s a dysfunctional need that can only result in you paying far more to your vendors than is necessary, so that they can afford to show you the love.

Needing to talk to a cloud vendor when there’s an outage is a striking example of this: Would you rather have your cloud provider spend millions on account managers to call you when something goes wrong, or spend that money on world-class engineers working to fix the problem?

You can’t have both (unless you want to pay a lot more for your service). By this time, any credible cloud vendor has mastered the art of providing online status updates (just as Heroku did for us here). It’s no small challenge to do this right (we worked on it for years at Google), but it’s critical to serving customers well in the cloud. So why are so many large companies turned off by the idea of getting updates via a website or RSS? Because it doesn’t make them feel special.

As a consumer of cloud computing, your goal is to be as unimportant to your cloud vendor as possible – to ride the curve of innovation and cost reductions that result from their efforts to serve an enormous and diverse customer base.

Insanity #3: Cloud is OK for non-critical applications with non-sensitive data

If you believe that cloud vendors are just plain better, faster and cheaper at delivering IT services, then it’s another level of insanity (and illogic) to limit the use of these services to inconsequential applications that aren’t critical to your organization. This is the status quo’s last stand – “this data is too sensitive to enable Company X to manage it for us!” (A similar concern is for those who find perverse comfort in actually knowing where their data physically resides.)

This thinking is backwards. If you care about the reliability, security, and the protection of your data, then you should entrust it to those who are most capable of managing it. If you believe you can match the capabilities and rigor of Google’s Security Operations team, I wish you well.

Of course, the objection to the cloud heard more often than any other is “it’s not me, it’s them.” In this case, “them” means the boss, the lawyers, the executive team, or the board of directors. And just as frequently, “them” is a varied assortment of regulators whose statutes invariably fail to give clear guidance on whether cloud computing is, in fact, legal. Strangely, even when you speak with these regulators, you will hear the same thing: It’s not me, it’s them.

Ultimately, the spoils of cloud computing will accrue to those organizations that break through the insanity, that resolutely fight through these distractions and ambiguities to drive this radically better approach to computing throughout their organizations.

Dave Girouard is founder and CEO of Upstart. Previously he was President of Enterprise at Google. Follow him on Twitter @davegirouard.


  1. While the points you mention are some of the reasons businesses give, you failed to bring up the #1 reason: security/privacy.
    In fact, Google’s own press releases showing how many times information was released to various agencies underscore this perfectly. I’m not blaming Google for this; they are abiding by the law. Everyone knows that getting the same information from the data owner would require a higher standard of probable cause.

    1. Thank you, that is spot on. And then there is the not-too-far-fetched possibility that, e.g., your Google account gets closed without any explanation: you don’t have access to any of your data, your mails, your photos, your calendars, your Android apps (worth several hundred dollars), etc. You enter a Kafkaesque nightmare of automated replies that tell you nothing can be done (especially if you don’t speak English and don’t live in the U.S.).
      The cloud is fine and dandy as an additional third or fourth backup storage, but rely on it, trusting them with my business, my livelihood? Never. I would not trust Google (or Amazon etc.) as far as I can sling a piano.

    2. Spot on! Dave Girouard’s view borders on the arrogant – kind of “those who don’t get cloud are server-hugging retards who are just protecting their dinosaur turf”. The Googles and AWSes of this world still don’t get enterprise-type Service Level Agreements and compliance/regulations. This is what is preventing 99% of enterprises from adopting cloud computing in a massive way. Most CIOs “get” the cloud, and they can’t wait to use it. It’s the vendors who need to get their enterprise act together. Dave’s display of his ignorance of this reality is, well, not helpful.

  2. Hmm, so I guess in the event critical e-mail couldn’t be delivered for 40 minutes, nobody remembered how to use a phone? Or are these the same people who, when you call them numerous times, only ever get you voicemail and then respond (or not) by e-mail? Oh yes, they forgot that device in their hand also allows you to speak. They built this dysfunctional new information-communication mutant of a drunk monkey with open head wounds. If this author thinks I felt his pain, I hope he’s in the hell he helped to build.

  3. I think you contradict yourself here a little bit with regard to communication and Myth #2. As president of Google Enterprise, your first reaction was to call for insight about the problem. Why should paying customers deserve less? Like you, they’re also responsible to their clients.

    1. Yes, that’s at least partially true. But it was my team working on it on behalf of millions of others, so I think it’s somewhat different. I was in a position to understand what was happening and knew the people involved in resolving it. Note that I didn’t try to call Heroku a couple of weeks ago.

      1. Matthew G Trifiro Sunday, January 27, 2013

        Google sells premium support packages, as does Heroku, which makes cloud an even more robust offering for businesses that need that extra level of direct contact. Some of these support packages include dedicated phone numbers and even direct routes to specific individuals. Having a full range of options allows you to build and scale your cloud infrastructure according to your needs – including when you need to get somebody on the phone now.

        Support as an add-on is very different from the historical business-software sale. If you buy, say, a big SQL database from a non-cloud vendor, you will surely pay a mandatory +20% for support on top of your license, whether you need it or not. With today’s cloud vendors, extra levels of support can be bought and sold a la carte. The a la carte model lets you tailor your spend according to your needs and ability to pay.

        With cloud, not only do you get a lot of these services more efficiently (especially from a TCO perspective), it also opens up top-tier technology to a new segment of users: smaller businesses and new businesses (even new projects within larger businesses) now have a low-cost entry path to enterprise-grade tools.

  4. _I have news for you: a big public outage is actually a sign of success for a cloud vendor – after all, it means tons of customers are relying on its service, no?_

    That means Microsoft’s Office 365 is a huge success?

    1. “a big public outage is actually a sign of success for a cloud vendor … it means tons of customers are relying on its service”

      Actually, it means tons of customers are without service. But this is “success” somehow. OK.

    2. I’m sorry Swapnil …did you mean to say “a big public outRage”?

  5. While these comments are spot on with regard to commercial enterprises, the last point takes on a different tone for education or government customers.

    Their ability to accept the perceived risks is often far more limited, and much more career limiting.

    Our company focuses on educational institutions running Google Apps and over the last few years we’ve seen a huge shift in attitudes, driven by a combination of natural (or accelerated) attrition of mgmt stalwarts AND availability of outcomes data.

    The outcomes data makes clear the opportunity cost of the status quo. “We could have a huge impact on student literacy, or we could keep the data right here in the server room.”

    The bigger point is not really about moving to the cloud, it’s about moving to better tools.

  6. Bah, #3 addresses non-issues. Having critical data hosted outside of direct control of the enterprise is still a no-no.

    Critical data for an enterprise is by definition not only data that must not be damaged or lost, but data that must also remain protected and must be guaranteed to be destroyed. With a cloud or any other outsourcing company, this is tricky at best and for the majority of enterprises simply a show-stopper.

    While I can agree that preventing damage or loss is something that Google and others do well, what really happens to data that needs to be “deleted” isn’t at all clear. How many times did Google have an “oops” when it found wireless data it wasn’t supposed to have any more? I agree that this is an apples-and-oranges comparison, but really: how do I know and ensure that destroyed data is actually destroyed and won’t come back to haunt me? Obsessed with where my data resides? I sure am, just as I’m obsessed with ensuring that I follow my local law, which Google or some other random cloud provider doesn’t even have a clue about.

    The same goes for protection. While I don’t think the risk of somebody malicious gaining access to data is any bigger in the cloud as opposed to a “private cloud,” there still remains the serious threat of lawful legal requests, where Google can be compelled both to release the data and to stay quiet about the fact that somebody else’s data was released. And this can be data from App Engine, e-mail, or IM services, and can include both content and metadata.

    If the enterprise were in sole possession of that data, there would be no risk of it ever getting out without the enterprise being able to challenge the subpoena or warrant before the data is handed over, to ensure that only the minimum required is given, and to be completely aware of what was given, when, to whom, why, and how.

    While the last point isn’t something a cloud provider can influence – it is a limitation of legislation in the digital age – it still means that for certain kinds of data, outside hosting simply doesn’t cut it, cloud or otherwise. Recommending that such data be stored or even processed in the cloud simply isn’t a good recommendation. While nobody really cares about Netflix movies or petabytes of funniest home videos, enterprises care about internal communications, business plans, customer data, and possibly other critical information as well.

    It would be insanity to give out this kind of data before thinking long and hard with your legal team about what could happen if somebody obtains it without your ability to challenge the order for its release. A fluffy blog post about “how great our technical team is” simply doesn’t cut it, because technology isn’t the issue here. Legislation is.

    Oh, by the way: it doesn’t matter where my data resides. If a CA judge issues a warrant for data and a gag order, Google will have to comply. The technical detail of where the data is stored, as long as it is in Google’s possession, is irrelevant. EU law, which deals with the territory where data is processed, is absurd on its own, but that doesn’t change the fact that there is no law ensuring that possession and ownership in the digital realm always match the way they do in the physical realm. Ignoring that fact and staking your business on it is a special form of insanity. One that many enterprises are guilty of.

    1. Anthony Porcano Sunday, January 27, 2013

      “Bah, #3 addresses non-issues. Having critical data hosted outside of direct control of the enterprise is still a no-no.”

      There is a valid point here. If I want to prove that an AWS EBS volume was destroyed, I can’t do that. There is no way (now) to receive confirmation that data was wiped according to AWS’s policies. You just have to trust that the processes and procedures they are audited on are being executed in all cases. That said, I expect this is a feature we’ll see in the future. In the interim, it’s not stopping large enterprises and governments from using AWS.

      1. Of course you can. Always store the data encrypted (do that regardless of any of these arguments; it’s easy enough). Don’t keep the key inside AWS except in host memory, where it can’t be (reasonably) extracted.

        To delete, delete the 20 bytes that form the key. The data is gone, and nobody – not Google, not Amazon, no CA judge, not Barack Obama himself – can order it restored.

  7. I think IT is mostly reluctant to the cloud because it makes their job increasingly irrelevant.

    1. Alexander Conroy Saturday, January 26, 2013

      “I think IT is mostly reluctant to the cloud because it makes their job increasingly irrelevant.”

      You hit the nail on the head with that one. The more we outsource our data to companies that automate everything the less technical jobs their are. By outsource I don’t mean moving business overseas, but moving it to another non affiliated company.

      I agree with everything in this post as to why the cloud is better and more efficient, but it is not good for economy at all.

      Cloud companies specialize in cutting costs for both the client and themselves; reducing the number of people it takes to support IT needs is the downside of this.

      I <3 the cloud, love my Google Drive account and being able to share it with my co-workers, but I can see how, if I had a large department and just paid a cloud company to deal with my data and servers, I might skip hiring an IT manager or two.

      The problem with tech in general is that it is turning jobs into automated processes. I wish I had an answer for how to make a beneficial economic situation out of this inevitable process.

      1. Alexander, your comment is an example of what economists call the “Broken Window Fallacy”, the idea that make-work is good for the economy — and it’s a fallacy. You might not see the IT work of managing those private servers as make-work, but if there’s a way it can be accomplished as well (or better) without those jobs… it is.

        While it is hard on individuals who need to retool and retrain, and it can be difficult for the economy if changes occur faster than people can adapt, in the long run if all of those services can be provided with a tiny fraction of the staff, the economy is energized. All of the capital and labor freed up can then go out to attack new problems and create new services. Unless you believe there are no more problems to solve and no more products to build.

      2. Alexander Conroy Saturday, January 26, 2013

        The Broken Window fallacy does make sense. I understand the point, and I think it is mostly the speed of change to automation that concerns me. It’s not that I don’t see a long term solution along with automation, but I see a very painful interim.

        Also with the tendency for large companies to sit on large sums of money, the economy isn’t always immediately energized by these new found efficiencies. If most of corporate profits were used, I would wholeheartedly agree that automation could energize the economy.

        I am very interested to see new problems to tackle, and I do not, by far, believe we have run out of problems to solve or products to build. I appreciate your opinion on this matter.

      3. Just the other day, I mentioned in passing on a technical forum that I was the Google Apps admin for my school. I was ridiculed by more traditional IT folks for that not being a real IT job. My response? It’s not. I am a teacher, and I manage GApps, schedule training, and try to script solutions for our school in my very little spare time. I put in a few hours a week, and this is sufficient to not only keep the system working, but to improve its performance every quarter.

        I _have_ done more traditional IT work, and this system is much better for our organization. Do we have downtime? Of course. The government-supplied Internet connection goes down or Google is unresponsive for a few minutes. In my experience with school IT, it’s definitely less downtime than we would see by running it in house. I get to focus on new solutions instead of grinding to keep us up and running.

      4. “I agree with everything in this post as to why the cloud is better and more efficient, but it is not good for economy at all.”

        It still takes in-house guys to utilize hosted services (I know very few product managers capable of setting up a functional EC2 instance); they just get to work smarter rather than harder.

        Also, don’t overlook the armies of engineers that it takes to run these hosted services.

      5. Conroy, you’re right: “the less technical jobs their are,” the fewer technically qualified critical thinkers remain in the “technical gene pool” to help hold all of this up – and running. When stuck on “The Information Highway,” better to know mechanics than to wait for a qualified mechanic to eventually show up… whenever he decides to get there!

      7. This whole thing reminds me a lot of the “mainframe” versus “PC” discussions. Academics and huge companies were firmly on the side of the mainframe (which, like the cloud today, was a vendor-managed datacenter – it was even drawn in schematics exactly the same way: a little cloud). The PC fought on the other side. People could actually do things with PCs, while vendors just sold their pre-made, inflexible applications.

        There is no question in my mind as to who will win this time around, same as last time. People should look at it in two ways. First there’s a nice opportunity forming for migrating companies into the cloud, and you can focus on that. Certainly lots of cloud migrations will happen in the near future.

        The huge opportunity, of course, is to found the company that will be to Google what Microsoft was to IBM. It will be a lot easier if you do it while everybody still has windows laptops.

        Specialized custom applications versus generic, vendor-controlled, huge applications. When it comes to cost-saving generic solutions, the cloud wins. When it comes to adapting to changing circumstances, small, custom, distributed applications will, once again, eat the cloud alive.

  8. tl;dr : Disagreement != insanity. Your unwillingness to accept possibilities you haven’t encountered or avoided doesn’t mean those possibilities don’t exist or can always be avoided.

    Point #1: Bandwidth + power is often cheaper than bandwidth + CPU-cycle costs. If you have a massive spike, you may not worry about downtime, but you will worry about the bill. There are plenty of occasions where the cost is unacceptable by comparison – and this is even after the initial investment.

    Point #2: If you stick to a generic storage or CPU package that needs no modification, then yes, go with the cloud. But when you need to switch priority for some CPU threads without jumping through hoops, or dynamically allocate high-priority data to primary search spaces and low-priority data to archives, you’ll suddenly find it comforting to know you have physical access to the servers and primary control interfaces.

    Point #3: Evaporates quite quickly when the U.S. government says cloud data isn’t technically stored by you, so they can do whatever they please with it. Since you’re not the cloud provider, and it’s off-site, they can claim “it’s not your data” and use it as evidence. Read: Megaupload.

    Don’t laugh, that’s an actual argument used often by them during court cases.

    You store your data in the cloud and if you want it to remain private, it better be encrypted.

    Bottom line: Any extra “Stuff” you add to the stack is another layer that can potentially fail. More “stuff” doesn’t make a better platform, but BETTER “stuff” does.

    There’s something very comforting about saying “Hey, Mary. We just lost power and we’re running backups. Could you make sure the generators kick in?” and hearing back, “Sure, I’m on it.”

    1. Anthony Porcano Sunday, January 27, 2013

      “Since you’re not the cloud provider, and it’s off-site, they can claim “it’s not your data” and use it as evidence. Read : Megaupload.”

      These guys were in colo space. There are very few companies that can afford to build out their own data centers, and even where you do it doesn’t stop federal prosecutors when they show up with a warrant.

      1. The US actually doesn’t care about warrants or whether your data is outside the country. Warrants can be challenged in court, but it’s harder when it’s not obvious the data is yours. My point is that all of this is quite a bit harder when you own your own metal, i.e., you own the servers on your native soil.

  9. If your technology is not part of the core functions of your organization, then dump it to the “cloud.” But if it is, and you can afford it, you are still crazy not to have the expertise in-house.

    In my opinion, the “cloud” is much like Costco. You can probably get everything you need there, food-wise, to survive, but just because you can doesn’t mean you should. Many are fine relying on whatever Costco brings in to sell, and they work their diet around those limits.

    For others, it is a great place to load up on toilet paper; they can cook their own food and may not want the portions they would otherwise have to deal with.

    1. Alexander Conroy Saturday, January 26, 2013

      Love the Costco analogy. Exactly right about the cloud with that. This is why I have a junk Dropbox account :)

  10. “A similar concern is for those who find perverse comfort in actually knowing where their data physically resides.”

    Your point is generally well taken; however, there are probably some (edge) cases where legalese could put a company in a bad situation with respect to where the data is actually stored. Admittedly, the vast majority of prominent applications in the cloud are with one of the major providers, but imagine that (for whatever reason) a company decided to use a provider whose country did not have laws making it liable for any misuse or mistreatment of data. If that provider did something negligent, intentionally or not, that led to your data being accessed, exposed, deleted, etc., you would likely not be able to hold the provider accountable and would instead just end up losing customer/client confidence (and business).

    Obviously this really depends on where the company is versus where the data physically resides, but it’s not unreasonable to simply be aware of where said data is and potentially use that as one of the sanity checks for whether a cloud provider is viable (setting aside, of course, other things like “will the location of this data cause unacceptable latency” or, conversely, “I don’t have to care about transfer overhead because my application isn’t hitting the metal with every operation”).
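Several commenters above converge on the same mitigation: encrypt before you upload, and treat destruction of the key as deletion of the data (“crypto-shredding”). A minimal, standard-library-only sketch of that idea – using a toy one-time pad purely as a stand-in for a real cipher such as AES-GCM, which is what you would actually deploy:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR: the same operation encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"sensitive business records"
key = secrets.token_bytes(len(plaintext))   # key stays on-premises, never uploaded
ciphertext = xor_cipher(plaintext, key)     # only the ciphertext goes to the cloud

assert xor_cipher(ciphertext, key) == plaintext  # recoverable while the key exists
key = None  # "delete the bytes that form the key": the ciphertext is now noise
```

In practice you would use an authenticated cipher and proper key management rather than a one-time pad, but the property the commenters want follows either way: a subpoena served on the cloud provider yields only ciphertext, because the key never left your possession.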

