
Summary:

Mellanox, a maker of InfiniBand interconnects and switches, has doubled its sales in the last two quarters. What is behind its recent success, and what does that say about Mellanox, InfiniBand and the current state of scale-out data center networking?


A lot has been written lately about networking, including the software-defined variety, but one of the below-the-radar stories has been the rebirth of InfiniBand. Mention InfiniBand amongst venture capitalists and you will likely get some dirty looks.

The party line is likely to be that it was an overhyped technology that was supposed to displace Ethernet in the early 2000s but flamed out. VCs lost piles on it, and quickly forgot. A few may recall Cisco’s acquisition of Topspin, or know that Mellanox has gone on to dominate the InfiniBand market, from network interconnect cards to switching silicon and switches.

As time marched on, InfiniBand established itself as an interconnect of choice in many high performance computing applications, replacing proprietary technologies like Quadrics, Myrinet, etc. InfiniBand even recently passed Ethernet as the most widely used interconnect in the Top 500 supercomputer list (see chart).

Riding the growth in HPC, Mellanox became a thriving public company, essentially a proxy for the entire InfiniBand market. Recognizing the limited size of that market, Mellanox also added Ethernet to its portfolio a few years back.

But over the past two quarters things have changed as sales have accelerated dramatically at Mellanox. The chart below shows the rapid growth in InfiniBand revenue at Mellanox. Mellanox stock has soared accordingly.

The first hint that something interesting was afoot was Oracle’s major investment in Mellanox in October 2010. Oracle uses InfiniBand in its Exadata products, among others. InfiniBand is a popular choice for low-latency networking in storage back ends. But storage doesn’t seem to be what is driving the growth.

On its most recent earnings call Mellanox indicated that only 15 percent to 20 percent of its revenue was from storage. This means that while HPC and storage are steady and growing, cloud and Web 2.0 applications are driving the large uptick in Mellanox’s sales.

So why after all this time is InfiniBand breaking out of its HPC niche into broader commercial applications? Who is bucking the conventional wisdom of never betting against Ethernet and IP? Are the traditional Layer 2 and Layer 3 data center network architectures so broken that web scale commercial customers are jumping to InfiniBand? It seems so, at least for a few large customers.

InfiniBand does have some very interesting advantages over Ethernet. It’s faster — the current generation runs at 56Gbps — and offers lower latency than Ethernet. Current InfiniBand switches are also denser than the competing Ethernet options. But these factors have been true for most of the life of this technology, so why is InfiniBand taking off now?

My guess is that this is mostly a temporary phenomenon, driven by continued delays in the availability of cost-effective 40 Gigabit Ethernet and 100GigE and by the tremendous cost of building lightly oversubscribed Layer 3 data center networks with the traditional Ethernet/IP vendors. It also highlights the broader recognition that as applications scale out and become more distributed, network latency has a meaningful impact on application performance. Looks like counting microseconds isn’t just for high-frequency trading anymore.
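To make the “counting microseconds” point concrete, here is a rough back-of-envelope sketch. The round-trip latency figures are my own illustrative assumptions for fabrics of that era, not vendor specifications or measurements; the point is only how fabric latency compounds when a user request fans out into chains of dependent remote calls:

```python
# Illustrative sketch: how fabric latency compounds in a scale-out application.
# The round-trip times below are assumed, ballpark figures -- not measured data.

ETHERNET_RTT_US = 50.0    # assumed round trip through a traditional L2/L3 Ethernet fabric
INFINIBAND_RTT_US = 2.0   # assumed round trip over a low-latency InfiniBand fabric

def request_time_us(sequential_rpcs, rtt_us, compute_us=10.0):
    """Total time for a user request that issues a chain of dependent
    (sequential) RPCs, each doing a little compute on the far end."""
    return sequential_rpcs * (rtt_us + compute_us)

for rpcs in (1, 10, 100):
    eth = request_time_us(rpcs, ETHERNET_RTT_US)
    ib = request_time_us(rpcs, INFINIBAND_RTT_US)
    print(f"{rpcs:>3} dependent RPCs: Ethernet {eth:>7.0f} us, "
          f"InfiniBand {ib:>6.0f} us ({eth / ib:.1f}x)")
```

Under these assumptions, a request that chains 100 dependent calls spends milliseconds in the network on Ethernet but stays around a millisecond on InfiniBand — which is why latency starts to matter once applications become highly distributed.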

But Mellanox isn’t the only company to recognize the rising importance of InfiniBand, the massive changes in the networking fabric, and the lessons that can be learned from HPC. Intel has seen that data center networking is getting in the way of its customers fully utilizing its powerful CPUs. To attempt to address this problem, Intel has made three acquisitions in this market: Fulcrum Microsystems, the InfiniBand assets of QLogic, and the interconnect technology from Cray. Intel buying InfiniBand technology is especially ironic because it was an early proponent of InfiniBand but dropped support in 2002, which was seen as a major setback for the technology at the time.

Whether this is a temporary blip or InfiniBand is primed to take on a bigger role in the data center, it is fascinating to see how the networking and big data architectures from the HPC world, which used to appear so foreign to the commercial market, are beginning to make an impact. Even as VCs pour money into the networking sector, there are still opportunities for greater disruption by taking lessons from the world’s most demanding users in HPC and applying them in the commercial markets.

Alex Benik is a principal at Battery Ventures who invests in enterprise and web infrastructure start-ups. He doesn’t hold stock in either Intel or Mellanox. You can find him on Twitter at @abenik.


  1. Oracle’s major investment in Mellanox may also help explain its purchase of Xsigo. Xsigo’s premier solution leverages InfiniBand to deliver 40/80Gbps bandwidth to virtual hosts. The more Xsigo Oracle sells, the more InfiniBand Mellanox sells.

    Win/win for Oracle. Also, marketing Xsigo as an SDN play helped promote the brand to potential customers, as it got a lot of press that it otherwise wouldn’t have without the SDN tag.

    1. Totally agree. Xsigo’s roots are in IB. A bit of a stretch to call Xsigo SDN but that’s marketing.

  2. Shannon Rentner Tuesday, September 4, 2012

    Wow – this did indeed seem like the next “big” thing. For a brief overview of when InfiniBand was hot, check out this article from the Austin Business Journal: http://www.bizjournals.com/austin/stories/2001/02/26/focus1.html?page=all

  3. Teradata is also using Mellanox (I think) in its latest Appliance series. That is BYNET over InfiniBand.
