Creating the Virtualization and Cloud Management Puzzle


The advent of virtualization and cloud computing infrastructure has made conventional, rules-based approaches to systems management obsolete. Managing the volume, speed, and complexity of IT data being generated in virtual and heterogeneous environments now defies human analysis. But a new breed of performance management technology, which leverages powerful IT analytics and is tailor-made for virtualization and the cloud, is being proven by early adopters in financial services and telecommunications. I expect 2011 to be a breakout year for end-to-end management in the cloud. But to understand where cloud monitoring and management is going, one must look at where it came from, and what virtualization has wrought.

Flash back to 2008 … large enterprises are virtualizing their servers as part of IT consolidation efforts, and the promise of cloud computing is fueling dreams of measurably lower IT costs, greater flexibility and operational efficiency. CTOs are signing up for aggressive objectives based on the premise that virtualization and consolidation will deliver benefits, and the eventual goal is to move infrastructure and applications into private and/or hybrid cloud environments.

A typical scenario involved initially virtualizing basic infrastructure elements, such as print servers and other less critical applications. But to achieve the full economic benefits originally mandated, a company needed to virtualize roughly 50 to 75 percent of its applications, including those deemed critical. At that point, application performance came under increasing scrutiny, creating friction between IT and the line-of-business owners whose crown-jewel applications were being road-mapped for the new, virtualized environment.

At issue was the conventional approach to systems management: manually setting rules and thresholds. In a virtualized environment, that approach quickly became more unwieldy and complex than even the best engineer could resolve. Gaining visibility into resource utilization and performing root-cause analysis was far more difficult, which made traditional baselining and rules-based approaches to systems management obsolete.
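To make the contrast concrete, here is a minimal, hypothetical sketch (not any vendor's actual implementation) of the two approaches: a classic hand-tuned threshold rule versus a simple self-adjusting statistical baseline. All function names and numbers are illustrative assumptions. When a virtualized workload's normal utilization legitimately shifts, the static rule floods the engineer with false alarms, while the adaptive check flags only the genuine anomaly.

```python
# Illustrative sketch only: static thresholds vs. an adaptive baseline.
import statistics

def static_alerts(samples, threshold=80.0):
    """Rules-based check: alert whenever utilization crosses a
    hand-tuned threshold, regardless of workload context."""
    return [i for i, v in enumerate(samples) if v > threshold]

def adaptive_alerts(samples, window=10, k=3.0):
    """Analytics-style check: alert only when a sample deviates more
    than k standard deviations from its own recent rolling baseline."""
    alerts = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent)
        if stdev > 0 and abs(samples[i] - mean) > k * stdev:
            alerts.append(i)
    return alerts

# A VM whose normal CPU load legitimately climbs after consolidation:
cpu = [40.0 + i for i in range(50)]  # steady ramp from 40% to 89%
cpu.append(150.0)                    # one genuine anomaly spike

print(len(static_alerts(cpu)))  # repeated alarms once the ramp crosses 80%
print(adaptive_alerts(cpu))     # flags only the real spike at the end
```

The static rule fires on every sample above 80 percent, even though the ramp is expected behavior; the rolling baseline absorbs the gradual shift and reacts only to the abrupt deviation. Real IT-analytics products use far richer statistical models, but the failure mode of fixed thresholds is the same.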

That was then; now the issues associated with rules-based systems management have become more urgent. Today, enterprises have shiny new infrastructures that are roughly 30 percent virtualized but are still struggling with a lack of visibility and control, especially in large, heterogeneous environments, and that is preventing the migration of higher-criticality applications. Engineers are spending more time diagnosing real-time performance problems, understanding capacity issues, and making sense of a myriad of ever-changing alert thresholds and policies than focusing on core business drivers.

Sound familiar? Now what?

This is Part 1 of a three-part series. The second post will run Saturday.

Nicola Sanna is chief executive officer of Netuitive, and has held the role since 2002. Netuitive enables enterprises to proactively manage the performance and capacity of their IT infrastructures – physical, virtual and cloud.

Image courtesy of Flickr user Adam_T4.

