Summary:

Data centers have been hobbled by using optical interconnections designed and optimized for telecom applications, but what's next for interconnect technology designed for scale-out computing?


Since their inception, data centers have been hobbled by using optical interconnections designed and optimized for telecom applications. Yet, given their rapidly increasing slice of the optics pie, data center equipment vendors are wising up and demanding a solution that more closely fits their needs.

As a quick recap of the first part of this problem, the cheapest way of interconnecting servers via switches today is copper Gigabit Ethernet. The cheapest higher-speed connection is arguably an aggregation of 10-gigabit lanes delivered via optics (QSFP). And there's a wide chasm between the two that the industry is desperately trying to bridge.

All hail silicon photonics

Enter the silicon-photonics hype. While critics are quick to point out that the products from current silicon-photonics startups do not actually create photons from their silicon, they are missing the point. Silicon photonics is not hot because of vanilla-sky dreams of building optical circuits side by side with electronic circuits on 300mm silicon wafers. The reasoning is much more mundane.

Silicon photonics allows a single laser to be shared across a parallel interconnect. Since the laser diode is still a large portion of optical transceiver cost, the fewer lasers, the lower the cost.
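The cost argument can be sketched with back-of-the-envelope arithmetic. All dollar figures below are made-up placeholders chosen only to illustrate the shape of the math, not vendor pricing:

```python
# Illustrative cost comparison: one discrete laser per lane vs. a single
# laser shared across lanes via silicon photonics. All prices are
# hypothetical placeholders, not real component costs.

LANES = 4                   # e.g. 4 x 25 Gbps lanes in a 100 GbE module
LASER_COST = 40.0           # hypothetical cost of one laser diode
OTHER_COST_PER_LANE = 15.0  # hypothetical modulator/driver/receiver cost

def module_cost(num_lasers: int) -> float:
    """Total optics cost for one transceiver module."""
    return num_lasers * LASER_COST + LANES * OTHER_COST_PER_LANE

conventional = module_cost(num_lasers=LANES)   # one laser per lane
silicon_photonics = module_cost(num_lasers=1)  # one shared laser

print(conventional)        # 4*40 + 4*15 = 220.0
print(silicon_photonics)   # 1*40 + 4*15 = 100.0
```

With these placeholder numbers the single-laser module costs less than half as much, and the gap widens as lane counts grow, which is the whole appeal.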

Cisco’s acquisition of Lightwire was supposed to herald a new age of low-cost silicon-photonic optical interconnects. At last Cisco had made a shrewd move, analysts opined, as Lightwire had dirt-cheap optical transceivers running 100 GbE. So imagine the puzzled reaction when, earlier this year, Cisco instead launched a proprietary module that is neither small, low power nor cheap.

Another ray of hope shone brightly at Open Compute, where silicon photonics was mentioned as the technology behind low-cost 100 GbE links carrying traffic between the open-sourced elements. Supposedly this effort leveraged technologies developed for Light Peak before it morphed into the copper-based Thunderbolt. Yet, digging deeper, the optical specification released at the same time had little to no technical detail, and rumors quickly spread that the technology was not yet ready for prime time. The jury is still out on this optical link, and I await more than circumstantial evidence before passing judgment.

Taking a step back to an earlier era

An entirely different approach was recently announced by Arista Networks. Instead of silicon photonics, Arista is using old-school VCSEL parallel optics similar to those used in supercomputer clusters. This is surprising, given Arista founder Andy Bechtolsheim has not been shy about extolling the virtues of silicon photonics in data centers.

Arista boasts a linecard that can be reconfigured to support 144 10 GbE ports, 36 40 GbE ports or 12 100 GbE ports, which is definitely a step in the right direction. Yet a closer look yields a few caveats. The links use 24-fiber ribbon cable, which is difficult to route through structured cabling. The link is proprietary, so it's really only useful between Arista products. And, finally, the optics are not pluggable, but something known as on-board optics.
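As a quick sanity check on those port counts, the aggregate per-direction capacity of each configuration can be tallied. This is simple arithmetic on the figures above, ignoring encoding overhead:

```python
# Aggregate line-rate arithmetic for the three port configurations:
# (port count, Gbps per port) for each mode of the linecard.

configs = {
    "10 GbE":  (144, 10),
    "40 GbE":  (36, 40),
    "100 GbE": (12, 100),
}

totals = {name: ports * rate for name, (ports, rate) in configs.items()}

for name, total in totals.items():
    print(f"{name}: {total} Gbps aggregate")
# 10 GbE and 40 GbE modes both total 1,440 Gbps, while the 12-port
# 100 GbE mode fills only 1,200 Gbps of that capacity.
```

Note that the 100 GbE configuration leaves 240 Gbps of the card's 1.44 Tbps on the table, a common side effect of mapping 100 GbE onto hardware organized around 10 Gbps lanes.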

In the latest episode of “What’s Old Is New Again,” there has been a resurgence of interest in on-board optics (OBO). After two decades of pluggable-optics development, the supposed answer to all interconnect woes is now to permanently fix the optoelectronics on the host board and run fibers directly to the front faceplate.

Those not suffering from technology amnesia might remember there were reasons pluggable optics were invented in the first place. Optics have a higher failure rate than electronics, and with OBO a single laser failure means the entire board must get replaced. The optics also tend to be the most expensive part of a linecard, and OBO forces a customer to pay for all the connections up front, rather than buying optical modules as needed.

Pluggable optics also allow for different cable lengths to be installed, so the link from top of rack to server can be a 20-meter variant, while the link to the end of row can be a 100-meter optic.
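The pay-up-front penalty of OBO can be illustrated with a toy model. Port counts and prices here are hypothetical, chosen only to make the trade-off concrete:

```python
# Toy comparison of up-front optics spend: on-board optics (OBO) pay
# for every port at purchase time, while pluggable modules are bought
# only as links are actually lit. All prices are hypothetical.

PORTS = 36           # ports on the linecard
OPTIC_COST = 500.0   # hypothetical cost per optical port
initially_lit = 12   # ports actually cabled on day one

obo_upfront = PORTS * OPTIC_COST                 # every port paid up front
pluggable_upfront = initially_lit * OPTIC_COST   # buy modules as needed

print(obo_upfront)        # 36 * 500 = 18000.0
print(pluggable_upfront)  # 12 * 500 = 6000.0
```

The gap is simply (unlit ports) x (optic cost), and it compounds with the failure-rate argument above: a pluggable failure costs one module, while an OBO laser failure costs the board.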

While it is heartening to see industry attention finally shifting to data center interconnection needs, the early attempts are near misses, at best. In the meantime, data centers keep installing more and more 1 GigE copper links. I can’t really blame them, given the economic realities.

For now, I continue to wait for the first vendor to get it right. Given the leaps in technology that have occurred in the 14 years since Gigabit Ethernet debuted, surely someone is bound to find the right formula.

Jim Theodoras is senior director of technical marketing at ADVA Optical Networking, working on Optical+Ethernet transport products.

  1. We use in a datacenter what makes sense. If you need the BW, you will typically use 10 Gbps; 1 Gbps is for everything else, and there are still some legacy 100 Mbps implementations. 40 and 100 Gbps are used as interconnects and uplinks, rarely to connect storage or hypervisors. Keep in mind that lots of connections are used for out-of-band management with low BW needs.

    10 Gbps is often implemented with cheap copper DAC cables. In their passive variant, they are nothing more than an electrical extension of the SFP+ interface bus, and as such are much cheaper than a 1 Gbps SX optic.

    You said that ribbon cable is proprietary and hard to route; this is not the case. Ribbon cables have long been the standard in structured cabling, mostly used to fan out LC connectors on either end. Using MTP connectors for 40 and 100 Gbps is relatively new, but the pinout is standardized among vendors for 12 and 24 fibers. MTP can carry up to 72 fibers in a connector the size of an RJ45. We typically use them for datacenter fabric uplinks, replacing proprietary connections which were used in the past for matrices and virtual chassis. 40/100 Gbps is getting more attention on provider links, specifically for fiber to the home, to connect feeders.

    The trend is away from proprietary connections toward inter-vendor connectivity, which brings down prices for end users. Whether to use single fibers with multiple colors or multiple fibers is a matter of cost. It is cheaper to run a 24-fiber ribbon cable with single-color 10 Gbps links over a short distance than a single fiber pair with four 25 Gbps colors.

    Disclaimer: I am a design architect working for one of the large vendors. The above reflects what I am seeing in my daily work.
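    The ribbon-versus-wavelengths comparison in the comment above can be sketched numerically. The fiber and wavelength counts come from the comment itself; costs are omitted since they vary:

    ```python
    # Capacity comparison: a 24-fiber ribbon carrying duplex 10 Gbps
    # links vs. a single fiber pair carrying four 25 Gbps wavelengths.

    ribbon_fibers = 24
    duplex_links = ribbon_fibers // 2    # each link uses a TX + RX fiber
    ribbon_capacity = duplex_links * 10  # Gbps, one wavelength per fiber

    wdm_lambdas = 4
    wdm_capacity = wdm_lambdas * 25      # Gbps on one fiber pair

    print(ribbon_capacity)  # 12 * 10 = 120
    print(wdm_capacity)     # 4 * 25 = 100
    ```

    The ribbon actually delivers slightly more aggregate capacity, so for short runs the choice comes down to the cost of 12 cheap transceivers versus 4 WDM ones, exactly the trade-off the comment describes.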

  2. What do you make of the Cisco ASR 9000 series cards which purport to give you 100G per card?

    http://www.cisco.com/en/US/prod/collateral/routers/ps9853/data_sheet_c78-712041.html

    Blaine Bateman
    EAF LLC

