Next Up: I/O Virtualization?


Austin, Texas-based startup NextIO has scored $18.8 million in a third round of funding, bringing the I/O virtualization systems maker to almost $40 million in total venture capital raised since 2003. It’s attacking one of those nitty-gritty technical problems in data centers and tossing around today’s favorite buzzword to do it.

NextIO makes chips and software to tackle I/O virtualization for the PCI Express protocol. Virtualized I/O allows a group of servers and/or storage systems to run Ethernet, Fibre Channel, InfiniBand or whatever other flavor of interconnect technology through one box. This makes it easier to manage a network without having to map each server to a set endpoint. NextIO expects to ship products this year to OEMs that make servers and storage systems.

It’s not alone in attacking aspects of I/O virtualization. Others include Neterion, NetXen, 3Leaf and Xsigo, and even some 10Gig-E players that have added I/O virtualization capabilities, such as Solarflare Communications, are trying to bring virtualization across the data center.



The very cool part of I/O virtualization isn’t the fact that it interoperates with all of the other (redundant?) transfer mechanisms, or that it makes it “easy to connect the VM” to different resources… although that’s part of it.

The thing that’s valuable is that when one provisions a piece of software to a piece of hardware (whether or not it’s been virtualized), one can provision a “virtual NIC” at the same time.

So picture the future: software, I/O, and presumably storage can be allocated on the fly as compute demands dictate. Very much an automated “cloud” scenario.
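To make that commenter's point concrete, here is a minimal sketch of what provisioning software and a virtual NIC in one step might look like. All of the names here (`IOVirtualizer`, `provision`, the fabric strings) are hypothetical illustrations, not NextIO's actual API:

```python
# Illustrative sketch only: a shared I/O box hands out virtual NICs
# on the fly, rather than mapping each server to a fixed physical endpoint.
from dataclasses import dataclass, field

@dataclass
class VirtualNIC:
    vm_name: str
    fabric: str  # e.g. "ethernet", "fibre_channel", "infiniband"

@dataclass
class IOVirtualizer:
    # One box fronting the physical uplinks for many servers.
    nics: list = field(default_factory=list)

    def provision(self, vm_name: str, fabric: str) -> VirtualNIC:
        # Allocate a virtual NIC at the same time the workload is placed,
        # so compute and I/O are provisioned together as demand dictates.
        nic = VirtualNIC(vm_name, fabric)
        self.nics.append(nic)
        return nic

# Usage: place a VM and its I/O in one step.
box = IOVirtualizer()
nic = box.provision("web-vm-01", "ethernet")
```

The point of the sketch is only the shape of the operation: the workload and its I/O path are allocated in a single call, which is what makes the automated "cloud" scenario above possible.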


The summary in the 2nd paragraph is confusing, and makes it sound as if this is something like a SAN technology – as far as I can tell from the links, this is simply a way for multiple PCI Express cards in a single system to be allocated to separate virtual machines. The I/O virtualization part presumably makes it easy to connect a VM to different I/O resources, although SANs already do that, so it’s hard to see the value of this.


Great to see GigaOm covering more data center infrastructure!

In addition to the folks mentioned delivering sophisticated path management, there are also companies trying to minimize the impact of application I/O requests by applying memory at the server, network, and storage layers, and delivering a “virtual I/O” response.

But more importantly, how the heck did we go from:
1. We have Ethernet to
2. We have Ethernet and Fibre Channel to
3. We have Ethernet, Fibre Channel and InfiniBand to
4. We can put everything back on 10Gig Ethernet.

Is it just me, or does this seem like we could have saved a few billion dollars over the last few years and skipped a couple of steps?
