Summary:

Software defined networking may not be mainstream yet, but when it is deployed, how will we manage the new networks and ensure performance?

The adoption of Software Defined Networking (SDN) has not progressed as rapidly as some expected, with less than 10 percent of enterprises running significant production traffic through virtualized networks. While there’s no denying the business benefit of the technology, many hesitate to implement it. What are the obstacles, and how will the eventual move to SDN redefine the role of performance management applications for enterprises and service providers?

What’s the holdup?

Many SDN architectures imply that a flat layer 3 network exists, which is historically not how people designed and built data centers. There isn’t an easy transition from your current data center to one that gets the most out of a software defined network.

What’s more, many organizations still need to depreciate their existing data centers down to almost nothing before they’re willing to go through a major forklift upgrade. There are some exceptions here, like HP enterprise Ethernet switches that are upgradable to support OpenFlow, but generally there is significant capital invested in legacy architectures that businesses need to justify replacing.

Programmable networks could mean less downtime.

It’s also a matter of trust. Most people are unsure how a new SDN might break. It’s like the new carbon-fiber Boeing 787. History helps us understand how metal planes fall apart and what is required to fix them, but what happens when you build an aircraft out of plastic and there’s a fire on board? The same goes for SDN. No one denies the business value, but we also need to have faith that the centralized controller model will work, and work consistently. If it degrades, you can screw up your entire infrastructure in a heartbeat.

SDN … not so simple, my friend

There seems to be an implicit assumption that with software-defined networks you’re getting something necessarily simpler. The truth is, all you’re doing is masking the complexity. In some ways SDNs do simplify the architecture of the network, especially when you think about replacing all of those dedicated special-purpose appliances in the middle with software services. But the ease with which you can create virtual structures to meet any kind of business requirement will, over time, result in significantly more complexity.

By definition, software-defined networks introduce layers of abstraction between the physical underpinnings of the infrastructure and the resources that consume them. Those layers are great until you have to ferret down through them to troubleshoot a performance degradation. Understanding dependencies and constraints in dynamic, virtualized environments is a supreme challenge.
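
To make that concrete, here is a minimal sketch of the problem, with an entirely hypothetical topology and naming scheme; a real tool would populate this map from controller and inventory APIs. Localizing a slowdown on a virtual machine means first resolving everything underneath it.

    # Hypothetical dependency map from virtual resources down to the
    # physical underlay. Every name here is invented for illustration.
    DEPENDS_ON = {
        "app-frontend-vm": ["vswitch-7", "hypervisor-3"],
        "vswitch-7": ["vxlan-tunnel-12"],
        "vxlan-tunnel-12": ["leaf-switch-2", "spine-switch-1"],
        "hypervisor-3": ["leaf-switch-2"],
    }

    def underlay_of(resource, seen=None):
        # Recursively resolve every component a resource ultimately
        # depends on, across however many abstraction layers exist.
        seen = set() if seen is None else seen
        for dep in DEPENDS_ON.get(resource, []):
            if dep not in seen:
                seen.add(dep)
                underlay_of(dep, seen)
        return seen

    # A degradation on the VM implicates this whole candidate set:
    print(sorted(underlay_of("app-frontend-vm")))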

Redefining the role of performance management

The inherent complexity of software-defined networks raises the question, “How will this impact my ability to manage the performance of my network?” Today, the focus is on assuring IT service availability and continuity. Is everything functioning properly, or did the new architecture completely mess up the user experience?

But day two is where the more exciting advancements happen. There is a shift taking place from availability to optimization. You have service level objectives to achieve, so how do you optimize all components – compute, storage, and networks? You’re certainly not going to get all the insight you need to make informed decisions from the controllers.

To maximize its value in the SDN environment, performance management software needs to make recommendations to the controllers. In doing so, it must be able to interpret metrics that are inherently abstract.

The way performance management software baselines the performance of your infrastructure today doesn’t apply well in SDNs; it’s much more complex. That software can’t look at just one variable. It has to understand the topological underpinnings and how many variables interact with each other in order to establish dependency relationships between components. That’s the crux of performance management in SDNs: taking data that may seem irrelevant by itself and extrapolating information from it, so you know whether a virtual machine is about to have a problem and where it needs to be placed.
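
As a purely illustrative sketch, and nothing more, here is what correlating deviations across two metrics on the same dependency chain might look like. The metric names, sample values, and three-sigma threshold are all invented; a real platform would learn these relationships from topology and history rather than hard-coding them.

    from statistics import mean, stdev

    def zscore(series, value):
        # Standard deviations between a new sample and the metric's
        # historical baseline.
        return (value - mean(series)) / stdev(series)

    # Invented history for two metrics on one dependency chain:
    # tunnel throughput (Mbps) and hypervisor CPU ready time (ms).
    history = {
        "tunnel_mbps": [940, 955, 948, 950, 962, 945],
        "cpu_ready_ms": [12, 15, 11, 14, 13, 12],
    }
    latest = {"tunnel_mbps": 610, "cpu_ready_ms": 55}

    # Neither metric alone tells the story; simultaneous deviation on
    # related metrics is what flags the chain, and hence the VM.
    deviations = {m: zscore(history[m], latest[m]) for m in history}
    if all(abs(z) > 3 for z in deviations.values()):
        print("correlated anomaly on this dependency chain:", deviations)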

Will performance monitoring and control systems merge?

cloud merge2
Performance monitoring and SDN control systems should remain separate, but aligned. The reality is that no one is ripping and replacing their existing data center networks overnight. We’ll have hybrid infrastructures for at least the next decade, and you’ll need to monitor metrics from both the physical and the virtual on a single screen.

A monolithic SDN controller/reporter that attempts to bring together metrics from the physical and the logical, across multiple vendor platforms, would be an enormous undertaking, and one destined to be brittle and hard to manage. OpenFlow, in particular, grows out of the UNIX ‘Do One Thing Well’ philosophy, so it really requires (pardon the business cliché) a ‘best of breed’ approach. The controllers simply aren’t designed to collect the breadth of data from your environment that a dedicated performance management platform can, especially since most controllers are pure virtual plays: they communicate with devices via OpenFlow or live entirely within the hypervisors, so their knowledge of the physical underpinnings is quite limited.

The future of performance management in SDN

The Holy Grail of performance management would be complete vendor neutrality: leveraging a set of APIs for a real-time view of rapidly changing topologies, and seamlessly combining metrics from physical switches, like bytes in and bytes out, with metrics from the controllers. If the industry is going to shift from simple performance monitoring to more advanced performance management, these applications must understand the resource dependencies and constraints in a virtualized network environment and be able to work in tandem with the controllers to recommend optimal utilization.
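
As a hedged sketch of that ideal, the fragment below joins physical port counters with the logical flows riding on them. Both data-gathering functions are stand-ins: no particular SNMP poller or controller API is implied, and every name and value is invented.

    def poll_switch_counters(switch):
        # Stand-in for an SNMP walk of interface counters such as
        # ifInOctets/ifOutOctets; returns canned values here.
        return {"bytes_in": 8_200_000, "bytes_out": 7_900_000}

    def query_controller_flows(controller_url):
        # Stand-in for a controller API that reports which logical
        # flows currently traverse each physical switch.
        return {"leaf-switch-2": ["tenant-a/web-tier", "tenant-b/db-tier"]}

    def merged_view(switch, controller_url):
        # Join physical counters with the logical flows on top of them,
        # so the per-tenant impact of a congested port becomes visible.
        counters = poll_switch_counters(switch)
        flows = query_controller_flows(controller_url).get(switch, [])
        return {"switch": switch, "counters": counters, "logical_flows": flows}

    print(merged_view("leaf-switch-2", "https://controller.example/api"))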

If we get to that model, performance management really becomes another critical network service, like DNS or DHCP.

Vess Bakalov is Senior Vice President and Chief Technology Officer of SevOne.

1 Comment

  1. Your points about collecting data from disparate sources and making that available to some unknown set of data services are spot on. I doubt, however, that it translates into an ubercontroller with master visibility over all data. Rather, what we need is a data services engine that various services can subscribe to and pull the data they want when they want it.

    This would decouple the data from the controller. The controller becomes just another consumer of data (presumably used to optimize as you point out). But you could have applications, compute, storage, whatever also consume the data.

    -Mike Bushong @mbushong
    Plexxi
