It doesn’t have an enticing name like cloud computing or the appconomy, but cluster management really is some sexy stuff. Important, too: Done right, it’s the thing that makes the web run by letting companies including Google, Facebook and Twitter scale to billions of users without spending every spare dollar and every spare second of engineer time managing their servers.
And now that [company]Google[/company] is in the business of selling IT, it wants everyone to know this and experience it themselves. I explained this in June when Google announced its open source container-management technology called Kubernetes, and again last month when Google signed up a list of big-name partners to support it. On Monday, Google took things a step further by announcing a partnership with Mesosphere that will let Google Compute Engine users spin up a self-managing cluster in a few clicks.
Mesosphere is a startup (read more about it here and here) that’s built on top of the Apache Mesos technology. Mesos is essentially an open source version of the system that Google uses to automate its data centers, with the end result being that many applications and services can share the same set of resources simultaneously because the system ensures that each gets everything it needs in order to run optimally. [company]Mesosphere[/company] makes it easier to deploy Mesos and achieve those benefits, and also adds some tooling on top of it.
In addition to the new Mesosphere cluster-deployment features, the two companies also worked together to integrate Kubernetes and Mesos, giving joint users the option to manage their Docker containers with Kubernetes and manage the whole cluster (Docker containers included) with Mesos. To borrow an analogy from Docker creator Solomon Hykes in a recent podcast interview, if a Docker application is a Lego brick, Kubernetes would be like a kit for building the Millennium Falcon and the Mesos cluster would be like a whole Star Wars universe made of Legos.
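For readers curious what "managing Docker containers with Kubernetes" looks like in practice, here is a minimal sketch of a Kubernetes pod manifest; the names, labels and image are hypothetical placeholders, and the exact fields are illustrative rather than a definitive reflection of the API as it existed at the time of this integration.

```yaml
# Illustrative Kubernetes pod definition: Kubernetes reads a spec
# like this and keeps the listed Docker containers running on the
# cluster. All names and the image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend     # hypothetical pod name
  labels:
    app: web             # hypothetical label used for grouping
spec:
  containers:
    - name: nginx
      image: nginx:latest   # any Docker image could go here
      ports:
        - containerPort: 80
```

In the joint setup described above, a spec along these lines would be handed to Kubernetes to run the containers, while Mesos continues to allocate the underlying cluster resources beneath it.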
In an interview about the Mesosphere partnership, Google Cloud product manager Craig McLuckie described the evolution of Google’s systems from requiring an “inordinate” amount of effort to manage into the epitome of automation they are today. It was the move to containers, and then to Borg as the data center operating system, if you will, that really made the difference.
“The number of services we were able to maintain massively increased, and we were able to focus on other parts of the organization,” he said. Urs Hölzle, Google’s senior vice president for technical infrastructure, explained this evolution in more detail at our Structure conference in June.
That’s the same pitch cloud computing providers have been making for years, though few (save for those pushing platform-as-a-service offerings) have really had an end-to-end automation story to speak of. The cloud has always made it drastically easier to procure resources and launch applications, but infrastructure as a service did not mean distributed architectures, high availability and pooled resources as a service. In many instances, those things still require some real effort to achieve (see, e.g., what Netflix has built for itself atop Amazon Web Services).
And although there’s no guarantee the world will buy into Mesosphere’s approach, or even into Google’s push around containers and Kubernetes, the cat is out of the bag when it comes to cluster management. Being able to scale like Google is cute, but being able to run like Google is sexy.
Companies offering cloud computing services or private-cloud software are going to have to figure out a strategy for providing this type of capability, or be left looking like yesterday’s news.