Summary:

In introducing Exalogic, Oracle has made a big bet on tightly integrated solutions, thinking that enterprises don’t want to be in the systems integration business, and has made networking technology choices that favor performance over interoperability and broad data center applicability.

In introducing Exalogic, Oracle has made a big bet on tightly integrated solutions. The underlying premise is that enterprises don’t want to be in the systems integration business, and that delivering pre-configured application and hardware bundles will ultimately save companies money. To that end, Oracle has made networking technology choices that favor performance over interoperability and broad data center applicability.

Basically, Oracle thinks enterprises want a ready-made meal, and that customers don’t care what’s inside as long as it tastes good. In many respects this makes sense, but it also runs counter to a tide of “rack ‘em and stack ‘em,” in which companies and service providers buy inexpensive servers and connect them together with Gigabit Ethernet. If you buy into the Oracle Exalogic vision, you’ll be using a server interconnect called InfiniBand, which boasts honking speeds but in no way has the broad adoption and reach of networking technologies like IP and Ethernet.

But boy, is that InfiniBand stuff fast. Oracle is pulling out all the stops with dual 40-Gigabit connections for each server within Exalogic, and claims a 2.8-to-3x improvement over 10-Gigabit Ethernet. By contrast, 10-Gigabit Ethernet is still prevalent mainly in switches, and not widely deployed at the endpoints of the system, or more specifically, in the servers themselves. InfiniBand originally took hold as a high-speed interconnect for compute clusters and made lots of headway in the high-performance computing market, but it had never seen broad enterprise adoption, or even recognition, until now.

The benefit of using InfiniBand is the ability to avoid extra networking layers like TCP/IP, which, while incredibly helpful for Internet connections, can sometimes get in the way of high-speed data center connectivity. The downside is that InfiniBand is not Ethernet. Though there’s overlap among the cabling, transceivers, and silicon behind InfiniBand and Ethernet, they are fundamentally different approaches. That means a different set of networking switches, a different set of expertise, and no easy way to share network infrastructure between the InfiniBand gear and the rest of the enterprise network. Yes, there are technologies to package InfiniBand traffic within Ethernet, but who wants to do networking acrobatics?
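To make that expertise gap concrete, here’s a minimal sketch, my own illustration rather than anything from Oracle, written in C against libibverbs, the verbs library in the standard OFED/rdma-core stack for talking to InfiniBand hardware. All it does is list the InfiniBand devices on a host, but it already shows that InfiniBand programming goes through the verbs API rather than the familiar BSD sockets and the TCP/IP stack:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;

        /* The verbs API, not a TCP/IP socket, is the doorway to InfiniBand. */
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++)
            printf("Found InfiniBand device: %s\n",
                   ibv_get_device_name(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }

Compile it with gcc -libverbs on a machine with InfiniBand drivers installed. An actual RDMA transfer would go on to create queue pairs and register memory regions, none of which resembles socket programming, and that is precisely the separate skill set I’m talking about.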

Cisco, for example, has taken a pure-play Ethernet approach and chosen not to go down the InfiniBand path with its Unified Computing System. The company has also worked hard on Data Center Ethernet, an initiative to bring a lossless Ethernet fabric to the enterprise.

As a self-proclaimed fan of IP, Ethernet, motherhood and apple pie, I have a hard time swallowing the InfiniBand approach. But Oracle, unlike many of its data center competitors, is a software company simply trying to find the fastest platform on which to tout its chest-pumping benchmarks for Java applications. If it finds that InfiniBand, or any other technology, will get it closer, perhaps that is the only thing that counts.

Gary Orenstein is the host of The Cloud Computing Show.
