Summary:

VMware has made two big acquisitions this month, both focusing in part on the ability to work with competitive hypervisor and cloud computing software. The company seems to get that the future is in making it easy for customers to choose.

Three weeks, two acquisitions — DynamicOps and Nicira — and a lot of talk about freedom of choice. What gives, VMware?

The answer is simple: VMware sees the writing on the wall. It knows acting like a dictator won’t work in an IT society that craves democracy. Half of the story around VMware’s rumored cloud computing spin-out focused on the need for the company to refocus on its core virtualization business in order to fend off advances from the likes of Microsoft, Citrix, OpenStack and others. Most experts agree that embracing those competitors is VMware’s best chance to blunt their attacks.

On July 2, VMware announced it was buying virtualization- and cloud-management vendor DynamicOps, which VMware rationalized in its press release thusly:

VMware believes that customers will benefit most by a standardized architecture, but will build solutions that make it easy for customers to choose the model that best works for their needs, including heterogeneous environments/management. … DynamicOps builds on the capabilities of vCloud Director by enabling customers to consume multi-cloud resources (e.g., physical environments, Hyper-V- and Xen-based hypervisors, and Amazon EC2).

On Monday, it was software-defined networking golden boy Nicira. In his blog post explaining the acquisition, VMware CTO Steve Herrod goes out of his way to talk about openness:

They [Nicira] are major contributors to the networking capabilities of other hypervisors (via the Open vSwitch community) as well as to the “Quantum Project”, one of the key subsystems of OpenStack.

I can imagine skepticism as to whether we will continue this substantial embrace of non-VMware hypervisors and clouds. Let me be clear in this blog… we are absolutely committed to maintaining Nicira’s openness and bringing additional value and choices to the OpenStack, CloudStack, and other cloud-related communities.

VMware’s Herrod and I talking software-defined data centers at Structure.

It might sound counterintuitive that VMware would embrace the hypervisor and cloud-management competition, but it’s actually common sense. If VMware is going to position itself as the provider of intelligence in software-defined data centers, hypervisors have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.

If we’re talking about supporting multiple cloud environments, such as OpenStack and CloudStack, I have to assume VMware will simply claim its superiority. So it might not prevent people from using OpenStack for test-dev workloads and web sites, but it will make the case that customers will want to use VMware for mission-critical apps.

Supporting multiple clouds doesn’t mean encouraging their use. It’s the same logic that underpins VMware’s efforts with CloudFoundry — developers are free to use whatever components they want, but writing in Java and using Spring means VMware support and access to the gamut of SpringSource tools such as Spring Hadoop and everything falling under the vFabric banner. Want to run Hadoop on your VMs? You might be interested in VMware’s efforts to make Hadoop run on its vSphere hypervisor.

But don’t mistake VMware’s newfound love of openness for a sign of incredible vision. If anything, it’s reactionary. In a world where the alternatives — even Microsoft — all play increasingly nice with each other, VMware can’t afford to be the odd man out. And in this case, what’s good for VMware should be good for customers, too.

Image courtesy of Shutterstock user Chris Howey.

  1. I can see this working inside the datacenter, but can someone in the know please explain how SDN will change how service providers deliver service on the WAN?

  2. Keith Townsend Monday, July 23, 2012

    @twospruces the way I understand it, in theory the abstraction of the network will continue down the stack to the WAN, so you can build applications that are even less reliant on the access layer. Similar to how ATM gave way to MPLS when it comes to QoS. In theory you should have CoS/QoS built in and programmable from the application layer.

    I think we are a long way from it.

  3. @Keith, thanks. WAN bandwidth today is generally leased, and the more you want, the more you pay. Is there a default assumption that pay-per-drink underpins SDN on the WAN? Imagine an application wanting to consume some WAN — does the service provider need to reconfigure the service to enable the desired throughput? Or does the static service model continue?
    What’s your view?
    Cheers.
