Summary:

Google open sourced a Docker-centric tool called Kubernetes that lets its cloud computing customers automate their resource management much the way Google does internally. It’s part of a sustained approach to prove Google’s chops as a cloud provider by pushing its vision of computing.

Inside a Google data center. Image courtesy of Google

When Google announced the Kubernetes Docker-management system on Tuesday, it wasn’t open sourcing its cloud computing “secret weapon” as much as it was open sourcing its viewpoint on how applications should be built and deployed. Google, the cloud computing provider, will always be compared with Amazon Web Services, but pushing technologies reminiscent of Google’s own vaunted Omega system could become a strong point of distinction and a major draw for developers.

This has been Google’s approach to cloud computing all along, if you’ll recall. The promise of App Engine, its now 6-year-old platform-as-a-service offering, is being able to deploy web applications on Google’s infrastructure stack and being confident they’ll keep running and scale as needed with little handholding. Not long after Google opened up its Compute Engine infrastructure-as-a-service offering, it announced a feature called live migration that moves running virtual machines onto other host machines when Google is doing scheduled maintenance on the original hardware. It also offers container-optimized VMs.

Kubernetes seems to split these approaches down the middle. Docker and its container-based approach have already stolen some of the thunder from early PaaS efforts by giving developers an easy way to build applications and also to deploy them across various environments (Google, in fact, now also supports Docker within App Engine). Google was already using containers heavily internally, all managed by Omega to move them from place to place and ensure its services keep running. It wasn’t too big a leap (one could argue it was a no-brainer, in fact) to essentially rebuild a less-sophisticated version of Google’s system specifically for Docker containers.
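To make that concrete, here is a minimal sketch of the kind of declarative definition Kubernetes works with: a “pod,” the basic unit it schedules, wrapping one Docker container. The exact schema was still evolving at the time of release, and the names and image below are hypothetical placeholders, not taken from Google’s announcement.

```yaml
# Hypothetical pod definition: one Docker container that Kubernetes
# will place on a machine in the cluster and keep running.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend        # placeholder name
  labels:
    app: web                # labels let controllers find and manage pods
spec:
  containers:
    - name: frontend
      image: example/web:1.0   # hypothetical Docker image
      ports:
        - containerPort: 80    # port the container serves on
```

The point of the declarative style is that the operator describes the desired state and Kubernetes, like Omega before it, does the work of placing containers on machines and restarting them when something fails.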

A diagram and explanation of Docker. Source: Docker

In theory, developers, operations staff and Google should all be happy. Google’s cloud customers get to build applications and cloud-based systems that run like Google’s do, and they don’t have to give up too much control, give up on the tools they like or do too much heavy lifting to get there. Google gets to prove the awesomeness of its cloud (and its engineering smarts) by getting those customers doing things the way Google thinks they should be done. Aside from higher resiliency to server failures, the Docker-plus-Kubernetes combination should result in lower bills, because better resource utilization means fewer Compute Engine instances are required.

Open sourcing Kubernetes is the icing on the cake — albeit some very critical icing. If someone can fork it to run on environments other than Google Compute Engine (which is what the code Google released is built for), Kubernetes acts as a stick with which to beat the still bigger and badder AWS. Companies fear being locked into a cloud platform, but a Kubernetes that can run on virtual machines, bare metal or even (gasp!) AWS would mean Google’s approach to computing now travels well — something that can’t presently be said about AWS’s approach to computing.

Google wants users to run like it does, minus all the servers. Source: Google

All of this ties nicely into one of the undying themes of our Structure conference, which kicks off a week from today (on June 18) in San Francisco. That theme, which I have written about very recently, is the osmosis of web infrastructure through the corporate filters and into the mainstream. It’s not just Google that doesn’t particularly care about server virtualization or the idea of individual machines at all, but a growing number of large web companies as well. Some very smart folks from Google (Urs Hölzle), Facebook (Jay Parikh), Twitter (Raffi Krikorian) and Airbnb (Mike Curtis) will be presenting at Structure and speaking about how they design systems capable of functioning at web scale.

Twitter and Airbnb will likely mention Mesos, an open source technology originally created at the University of California, Berkeley, that was inspired by Google’s cluster-management systems and currently underpins infrastructure at those companies as well as many others. Mesos also supports Docker — eBay, in fact, has published a lengthy blog post detailing its Docker-on-Mesos environment — and can turn any collection of Linux servers into a pool of shared resources. Here’s a handy presentation illustrating some Mesos deployments at various companies.

A diagram of the Docker on Mesos architecture at eBay.

Florian Leibert, a former engineer at Twitter and Airbnb, and founder and CEO of a startup called Mesosphere, has referred to the Mesos stack as being like a PaaS for your data center. Or, in the case of Airbnb and HubSpot, for your AWS resources. Like Google’s Omega and its offspring Kubernetes, Mesos and a related technology called Marathon (or, in Twitter’s case, a system called Aurora) simplify the deployment of applications and services and automate the process of ensuring each has the resources it needs to run.
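For comparison, a Marathon app definition follows the same declarative pattern: describe the service and how many copies should run, and Marathon keeps that many running on the Mesos cluster. The sketch below uses field names from Marathon’s REST API; the app id and command are hypothetical examples, not drawn from any of the deployments mentioned above.

```json
{
  "id": "hello-service",
  "cmd": "python -m SimpleHTTPServer 8080",
  "instances": 3,
  "cpus": 0.25,
  "mem": 128
}
```

Posting a definition like this to Marathon asks it to run three instances of the command, each with a quarter of a CPU and 128 MB of memory; if an instance (or its machine) dies, Marathon starts a replacement elsewhere in the cluster, which is exactly the automation the article describes.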

With Kubernetes, it seems like Google is (once again) wisely trying to position itself as the cloud provider that will let its users actually operate like a cloud provider. As Docker, Mesos and similar approaches rise above the early adopter set and into the mainstream, Google wants to be the cloud provider that can stand apart from the crowd and say with sincerity that it was built for this kind of computing.


5 Comments

  1. Thanks for the update.
    Leslie

  2. I’m unsure how much of a real problem lock-in actually is. It’s certainly a theoretical problem, but are cloud users actually caring enough to do something about it? The default choice is to use AWS for infrastructure; the only reasons someone wouldn’t use them are features and cost. There are a lot of people willing to build on their APIs and their products, including proprietary ones, without worrying about lock-in.

    Businesses have been “locked” into software for years and that doesn’t prevent the purchase of commercial (or even open source) products.

    So lock-in can certainly be touted as a marketing benefit for this, and it probably works well too, but I wonder how many people will actually use it for that and instead will use it because it’s a good tool for deployments.

    1. I almost totally agree. But I’ve come across some smart people who use AWS but avoid certain services b/c they can’t easily replicate that environment elsewhere.

  3. Good information and many things to think about as we move forward in search engine technologies.

  4. Alex Simonelis Thursday, August 28, 2014

    Are the multiple layers of software worth the benefits? And are they as secure as a vm? Is this unduly complex?