Google launches Andromeda, a software defined network underlying its cloud

Updated throughout with new information from Google.

For everyone saying that software-defined networking is a pipe dream, Google is about to prove you wrong. The search engine giant and cloud provider said it has made its Andromeda software-defined network platform available in two of its Compute Engine zones, with the rest of its zones transitioning to Andromeda in the coming weeks.

Companies using Google’s us-central1-b and europe-west1-a zones can take advantage of a truly virtualized network environment today.

The basic promise is that virtualizing the network lets it scale. In the cloud, a network that scales adds agility while lowering operational costs. There are plenty of debates over how to implement software-defined networks, but the implementation is something Amazon, Facebook and other large cloud and webscale companies are all working on.

Google has been at the forefront of the software-defined networking revolution, first implementing an OpenFlow-based software-defined network to support communications back in 2012. Now it is going live with Andromeda, the underlying software-defined networking architecture that will enable Google’s services to scale better, more cheaply and more quickly. It has the added benefit of making the network faster as well.

What is Andromeda?

Google describes Andromeda as its newly integrated networking stack with the diagram below and via a blog post:

Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls.


Andromeda is the enabler behind Google’s SDN efforts, so the better question isn’t what it is, but what it allows Google, or the end customer of Compute Engine, to do. Like the hypervisor on a server, it is destined to become a commodity. Google has built load-balancing, security and firewall services on top of Andromeda that it can now offer to customers on demand. And as a customer uses more compute, the networking required to support the services on that additional compute expands with it.

No one has to plug new cables into ports or manually add firewalls to new VMs via a dashboard. Andromeda has also improved networking performance, according to Amin Vahdat, a distinguished engineer at Google who presented on Andromeda last month at the Open Networking Summit and wrote today’s blog post.
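To make the idea concrete, consider what “adding a firewall to a new VM” means when the network function lives in software: attaching rules becomes a table update in the hypervisor’s dataplane rather than a cabling or appliance change. The sketch below is purely illustrative — the class and rule names are invented for this example, not Andromeda’s actual code:

```python
# Hypothetical sketch of a software dataplane applying per-VM firewall
# rules. Names (Rule, VirtualNIC, admit) are invented for illustration;
# this is not Google's Andromeda implementation.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    protocol: str   # e.g. "tcp"
    dst_port: int
    action: str     # "allow" or "deny"

@dataclass
class VirtualNIC:
    vm_id: str
    rules: list = field(default_factory=list)

    def admit(self, protocol: str, dst_port: int) -> bool:
        """First matching rule wins; default deny."""
        for rule in self.rules:
            if rule.protocol == protocol and rule.dst_port == dst_port:
                return rule.action == "allow"
        return False

# "Attaching" a firewall to a new VM is just a data update:
nic = VirtualNIC("vm-1234")
nic.rules.append(Rule("tcp", 443, "allow"))

assert nic.admit("tcp", 443) is True   # HTTPS allowed
assert nic.admit("tcp", 22) is False   # SSH denied by default
```

Because the rule set is just data, it can be provisioned automatically whenever a VM is created, which is the property the article describes: the network functions scale with the compute.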

Another interesting new service that SDN and Andromeda enable is the oxymoronic-sounding isolated multi-tenancy. Basically, by controlling network flows Google can make sure traffic from one customer’s VMs stays within a defined cloud, isolating that customer’s data and compute jobs without restricting them to particular physical machines. Such a network can also be used to migrate virtual machines in the case of maintenance or downtime. Those services are not yet available to Compute Engine customers, but they are possible.
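The isolation mechanism can be sketched in a few lines: an SDN controller that refuses to install a forwarding flow between two endpoints unless they belong to the same tenant ties isolation to a tenant table rather than to physical placement. Again, this is a hypothetical illustration — the tenant names, IPs and function are invented, not Andromeda’s design:

```python
# Hypothetical sketch: flow-level tenant isolation in an SDN controller.
# The tenant table and endpoint addresses below are invented examples.

tenant_of = {
    "10.0.1.5": "acme",
    "10.0.1.6": "acme",
    "10.0.2.9": "globex",
}

def install_flow(src_ip: str, dst_ip: str) -> bool:
    """Install a forwarding entry only if both endpoints share a tenant."""
    src_tenant = tenant_of.get(src_ip)
    dst_tenant = tenant_of.get(dst_ip)
    if src_tenant is None or dst_tenant is None:
        return False                  # unknown endpoint: drop
    return src_tenant == dst_tenant   # cross-tenant traffic never flows

assert install_flow("10.0.1.5", "10.0.1.6") is True    # same tenant
assert install_flow("10.0.1.5", "10.0.2.9") is False   # isolated
```

Because the check keys on logical tenant identity rather than machine identity, a VM can be live-migrated to different hardware and keep exactly the same isolation, which is why migration and isolation come from the same underlying capability.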

Vahdat is working to make them not only available to Compute Engine customers but, in the case of VM migration, automatic. The customer shouldn’t have to do anything. He explained that Google is already isolating certain jobs on its hardware using Andromeda and will make that available to customers in time. When asked whether Google planned to open source any of the software that makes up Andromeda, he said the best way to get the functionality is through Google’s cloud offerings.

As for the architecture of Andromeda, Vahdat explained that portions of it use OpenFlow, but he was clear that SDN doesn’t require OpenFlow. He also said that the underlying gear wasn’t all replaced to build this functionality, and that everything was done in software. But this wasn’t a trivial undertaking, and he said companies aren’t likely to be able to build this type of infrastructure alone. For Google that’s sort of the point: if customers want this flexibility, they should try Compute Engine.

Overall, this is a pretty significant announcement for Google’s customers, although the Andromeda network only supports IPv4 today. It’s also a technical and economic advantage for Google over providers who don’t have the same underlying technology. Google can now allocate network resources easily and cheaply to deliver faster compute and data transfer rates between virtual machines. That makes its cloud faster, allocates its resources more efficiently and eliminates the networking bottlenecks that have slowed the promise of virtualization.

We’ll discuss Andromeda and more onstage with Urs Hölzle, SVP of Technical Infrastructure and a Google Fellow, at our Structure Conference in June.

4 Responses to “Google launches Andromeda, a software defined network underlying its cloud”

  1. cruxServers

    You guys should look at CrossBow in Solaris. A fully virtualized network, probably created before Google even conceived of Andromeda. We use it to power VMs and different containers, on the fly. CrossBow not only includes a virtualized network, but also load balancing, a basic firewall, throttling and accounting.

  2. Cloudy Times

    Advanced SDN has been underpinning AWS for many years now. That is how features like security groups and VPCs work. I don’t think that Google’s Andromeda differentiates it from AWS. AWS has had more advanced SDN working and proven under scale for many years already. It is good to see someone that is now credibly challenging AWS technically (finally). However, AWS is so far ahead that from a market perspective it remains to be seen whether Google can slow down the AWS juggernaut.

  3. I have been testing the new networking and there are some noticeable throughput improvements when testing in the us-central1-a vs us-central1-b zones. Given how commoditised pure compute and storage are now, the cloud vendors are going to have to differentiate in other areas, and networking is certainly one of them. This is a good step for Google but Amazon’s VPC product is incredibly advanced (and quite complex).

    Now 3 of the big 4 players have optimised networking – Amazon, Google and Rackspace all have specifically network optimised instance types available, although it looks like Amazon and Google are ahead with how customisable and detailed their SDN efforts go. The only one missing is Softlayer, who have no concept of optimised instances in their cloud.

    That said, Softlayer possibly wins on operational simplicity because of its VLAN spanning across data centers and its lack of network costs, which means you can treat all of your instances in any of their worldwide data centers as being on the same network, with transit at no cost. This is quite a big cost advantage.