Summary:

Jeda Networks is another company riding the shift in how data centers run in a virtualized world. The startup wants to make storage area networks obsolete with its controller.

So much storage, never enough time.

As we muddle our way to a software-defined infrastructure that is capable of handling scale-out web deployments with the click of a mouse or the swipe of a finger on a graphical user interface, more and more startups are coming out of the woodwork. Jeda Networks is another one.

The Newport Beach, Calif.-based company has raised an undisclosed sum from US Venture Partners and Miramar Venture Partners to build what it dubs the software-defined storage network. I’m not going to call it that, not only because I’m sick unto death of software-defined storage, software-defined security networks and the software-defined data center, but also because it doesn’t really help people understand what Jeda purports to do.

The company, which was founded by executives hailing from Emulex, wants to take virtualized networks and use that abstraction to help push data to storage without worrying about exactly which machine it ends up on or which path it takes to get there. Unlike traditional storage area networks, where information is shunted off to some distant hard drive in a specialized array, the next generation of storage is abstracted much like our servers and, eventually, our networks.

When virtualization hit servers, it untethered applications from the physical processors they ran on, which allowed for flexibility but also caused a bit of a mess for all of the physical infrastructure attached to the servers. One might run 10 different virtual machines on a physical server, but those 10 virtual machines were stuck sharing the same networking cables and access to storage. It’s almost like housing 10 people in a home that still has only two bathrooms.

Those bathrooms are going to get crowded and cause delays in getting those 10 people out the door if they all need to leave by 8 in the morning. Adding more bathrooms isn’t the solution for the scaled-out world, because who wants to buy even more equipment for demand that only occurs for an hour a day? The solution on the networking side was to create an abstraction layer and then allocate resources via software. In the bathroom example, it would be like reserving your neighbor’s extra guest bathroom during the 7 AM to 8 AM time frame simply by clicking a button.
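
To make that concrete, here is a minimal sketch, in Python, of what allocating shared capacity through a software layer might look like. The class and method names are entirely hypothetical and don’t come from any actual SDN product; the point is only that the borrowing happens in software rather than by buying more hardware.

    # Hypothetical sketch: reserving shared capacity through a software
    # abstraction layer instead of buying more equipment. Names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        name: str
        reservations: dict = field(default_factory=dict)  # hour -> tenant

        def reserve(self, tenant: str, hour: int) -> bool:
            """Grant the hour to the tenant if nobody else holds it."""
            if hour in self.reservations:
                return False
            self.reservations[hour] = tenant
            return True

    class Controller:
        """Software layer that hands out idle capacity across the whole pool."""
        def __init__(self, resources):
            self.resources = resources

        def allocate(self, tenant: str, hour: int):
            for res in self.resources:
                if res.reserve(tenant, hour):
                    return res.name
            return None

    # The guest-bathroom case: borrow a neighbor's idle capacity for the 7-8 AM rush.
    pool = Controller([Resource("our-bathroom"), Resource("neighbors-guest-bathroom")])
    pool.allocate("household", 7)   # -> "our-bathroom"
    pool.allocate("household", 7)   # -> "neighbors-guest-bathroom"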

On the storage side, the solution is to pool all the memory and storage resources available to a server and create pools of storage that any virtual machine on the network can write to. Startups like ScaleIO and Convergent.io are aiming at this market, while larger firms such as Fusion-io, Microsoft and even VMware, in a half-hearted way, are all saying that they can create pools of storage.
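
As a rough illustration of the pooling idea (none of the names below come from ScaleIO, VMware or anyone else’s product), a storage pool amounts to putting one logical write path in front of many physical devices, so the virtual machine never has to care which box its blocks land on:

    # Illustrative only: a naive "storage pool" that lets a VM write a block
    # without knowing which physical device ends up holding it.
    import hashlib

    class StoragePool:
        def __init__(self, devices):
            self.devices = devices                    # e.g. disks spread across servers
            self.blocks = {dev: {} for dev in devices}

        def write(self, key: str, data: bytes) -> str:
            # Pick a device by hashing the key; real systems layer replication,
            # rebalancing and failure handling on top of this.
            idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.devices)
            dev = self.devices[idx]
            self.blocks[dev][key] = data
            return dev

        def read(self, key: str) -> bytes:
            for store in self.blocks.values():
                if key in store:
                    return store[key]
            raise KeyError(key)

    pool = StoragePool(["server-a:ssd0", "server-b:ssd0", "server-c:hdd1"])
    pool.write("vm42/disk/block-0001", b"...")   # any VM writes to the pool,
    pool.read("vm42/disk/block-0001")            # not to a specific array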

So back to Jeda and its ambitions. Essentially, it wants to build a controller for storage networks that enables this pooling, and it sees the ability to replace a storage area network as one of the so-called killer apps for software-defined networks. Once a network is virtualized, you can use the Jeda Networks controller to also direct data to the optimal storage option. Other companies do this via the hypervisor or some still-secret method, but Jeda is building a controller. For existing networks that aren’t yet virtualized, it offers an application as well.
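
Jeda hasn’t published how its controller makes that choice, but conceptually a storage-network controller just ranks the available targets and steers the write there. Here’s a hypothetical sketch, with made-up names and a simple capacity-then-latency heuristic, of what “directing data to the optimal storage option” could look like:

    # Conceptual sketch of a storage-network controller picking a target.
    # This is not Jeda's API; it only illustrates steering writes over a
    # virtualized network to whichever pool looks best at that moment.
    from dataclasses import dataclass

    @dataclass
    class StorageTarget:
        name: str
        free_gb: int
        latency_ms: float

    class StorageController:
        def __init__(self, targets):
            self.targets = targets

        def place(self, size_gb: int) -> str:
            # Keep targets with enough room, then prefer the lowest-latency one.
            candidates = [t for t in self.targets if t.free_gb >= size_gb]
            if not candidates:
                raise RuntimeError("no target has enough capacity")
            best = min(candidates, key=lambda t: t.latency_ms)
            best.free_gb -= size_gb
            return best.name   # the network layer would then program this path

    ctrl = StorageController([
        StorageTarget("rack1-pool", free_gb=500, latency_ms=0.4),
        StorageTarget("rack7-pool", free_gb=2000, latency_ms=1.2),
    ])
    ctrl.place(100)   # -> "rack1-pool"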

The company plans to offer its product in the first half of this year. So far, 2013 is shaping up to be a crucial year for storage and software-defined networking. Wonder when we’ll see the first billion-dollar software-defined storage acquisition?

  1. One of the critical problems we had back when we were hosted on the Terremark Enterprise Cloud was that storage was a big black box: we got to assign specific-sized volumes, and they were hosted on a 3Par array. We had no control over the actual storage device, and that is terrible in a shared environment – you don’t get to design for redundancy, you can’t understand performance, and you have no idea what is happening when things go wrong. It’s the same with Amazon EBS.

    We switched to physical servers that we control right down to the physical hardware. We can see and control individual boxes (if necessary) and design around failures ourselves, so it doesn’t matter if it takes a few hours for a technician to fix something.

    Because when something goes wrong, it’s usually some horribly complex event: http://blog.serverdensity.com/cloud-storage-failures-the-perfect-storm/
