The virtualization of systems allows for efficient use of server resources and is clearly a trend that many enterprises are embracing. Systems engineers see virtualization as the next generation of tools that can help scale their servers, while network engineers see the virtualization trend headed in their direction as well. Unfortunately, it seems that server virtualization also helps foster trench warfare between the two.
I found myself witness to one small skirmish in this battle today, when I met with a startup looking for funding. The startup is building enterprise services, and for its next generation it plans to make heavy use of XenSource’s XenMotion functionality to manage virtual machines on about 50 physical servers. This functionality, which is similar to VMware’s VMotion, promises to seamlessly move a virtual machine from one physical server to another. The startup’s service product could be running in one virtual machine on a server, and if that server comes under too much load or suffers a failure, XenMotion could move the virtual machine to another server without any downtime. For an enterprise services startup, avoiding downtime is a good idea.
I asked some questions about the network and systems architecture and found that the systems engineers had made the assumption that in the new service, any virtual machine could be allocated to any physical server. The network engineers, unfortunately, had not taken this into account. Based on the physical network topology — a classic three-tier architecture — the network engineers had set up firewall rules and access-control lists to appropriately protect the infrastructure. For example, not every server could be accessed from the Internet and only certain physical servers had permission to mount storage area network resources. If using XenMotion meant every server was expected to house any virtual machine at a moment’s notice, these were clearly issues that needed to be resolved.
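The conflict can be made concrete with a small sketch. The host names, tiers, and capability labels below are purely illustrative, not taken from the startup's actual design; the point is simply that tier-based firewall rules and ACLs shrink the set of hosts a given virtual machine can legally migrate to.

```python
# Hypothetical capability map: which network permissions each tier's
# firewall rules and ACLs actually grant to its physical hosts.
HOST_CAPS = {
    "web-01": {"internet"},
    "web-02": {"internet"},
    "app-01": {"app"},
    "db-01":  {"san", "app"},  # only the database tier may mount SAN storage
}

def migration_targets(vm_needs, hosts=HOST_CAPS):
    """Return the hosts whose ACLs satisfy all of a VM's network needs."""
    return sorted(h for h, caps in hosts.items() if vm_needs <= caps)

# A VM that mounts SAN resources can land on only one of the four hosts,
# not on "any physical server" as the systems engineers assumed.
print(migration_targets({"san"}))       # ['db-01']
print(migration_targets({"internet"}))  # ['web-01', 'web-02']
```

Under a topology like this, seamless XenMotion across all 50 servers would require either flattening the ACLs (the systems engineers' preference) or constraining placement to the hosts each VM is permitted on (closer to the network engineers' position).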
The systems engineers’ expectation of being able to move any virtual machine to any physical server in the infrastructure meant a complete redesign of the network topology was required. And that is when the skirmish ensued. The systems engineers insisted that the network topology be set up to allow XenMotion to work seamlessly. The network engineers argued that their network topology was necessary for scalability and security. As far as I was concerned, they were both right, so before continuing my due diligence on their business, I sent them off to settle their skirmish amongst themselves.
But it got me thinking: Has server virtualization added an abstraction layer that further separates systems engineers and network engineers from the physical reality of their environments? Do we need a new engineer — a virtualization engineer — who understands how virtual machines are allocated across physical servers and networks, and who can act as a liaison between the two factions?