Summary:

Networking is becoming the bottleneck in scale-out data centers. Kotura thinks fiber is the answer, so it’s offering a transceiver that starts at 100 gigabits per second and can scale up to deliver a terabit per second between servers.


When we talk about the flood of data and digital information on the internet, we spend a lot of time thinking about the bytes — as in how much can we store and where can we put those exabytes of data we create every day. But of increasing importance will be the bits per second — the metric that dictates how many bits (there are eight bits in a byte) we can deliver over a network connection.
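To make the bytes-versus-bits distinction concrete, here is a minimal sketch (the function name and figures are illustrative, not from Kotura) of converting a stored size in bytes into the time needed to move it over a link rated in bits per second:

```python
# Illustrative only: storage is measured in bytes, links in bits per second.

def transfer_time_seconds(size_bytes: int, link_bps: int) -> float:
    """Time to move size_bytes over a link rated in bits per second."""
    BITS_PER_BYTE = 8
    return (size_bytes * BITS_PER_BYTE) / link_bps

# Moving 1 GB (10**9 bytes) over a 10 Gbps (10**10 bits/s) link:
print(transfer_time_seconds(10**9, 10**10))  # 0.8 seconds
```

The factor of eight is why a "10 gig" network link moves far fewer gigabytes per second than the name might suggest.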

In the data center, the flood of information traveling across networks has changed in three ways. First, there is more of it. Second, more of the traffic now flows between servers inside the data center, rather than a server simply fielding an outside request and serving up the data. This is the so-called East-West traffic explosion. Instead of a server sending information up to a switch that sends it out of the data center, the server now sends requests to a switch that connects to other servers within the data center. Thus one request can involve a few switches and several servers sending traffic back and forth across the network before it ever leaves the data center.

The third change is that cost pressures and the need to scale are pushing data center operators to flatten out the network so more servers (or virtual machines) talk to the switch. The solution here so far appears to be a networking fabric, but may in time graduate to a truly distributed and virtualized network.

The net result of these changes is that network pipes need to be fatter while network processors need to be faster. But because this is a data center, the components that enable this also have to be relatively cheap. This is where Kotura comes in. The startup, based in Monterey Park, Calif., offers a fiber-based transceiver that can deliver 100 gigabits per second inside the data center. The transceiver could live on a board next to the CPU or inside a switch, and could eventually expand to deliver a terabit per second (Tbps).

While 1 Tbps is crazy fast when you consider that many data centers are currently upgrading to 10 gigabit Ethernet between servers, it’s going to be necessary. Arlon Martin, VP of Marketing, Government Contracts & Industry Relations at Kotura, tells me that customers are building products not only for the high-performance computing sector but also for real-time data processing. The goal is to bring a low-power, less expensive optical part into a rack of servers, one able to scale up to terabit-per-second capacities.

If you have a rack of servers, each with multiple 10 GigE ports, suddenly the top-of-rack switch — or whatever fabric is stitching those servers together — needs a lot of bandwidth. This is something Cisco has noticed, which is why it purchased Lightwire earlier this year. Kotura provides that bandwidth: because its chip stuffs 25 Gbps into a single wavelength in a strand of fiber with up to 40 available wavelengths, customers can light up the remaining wavelengths as needed to reach up to a terabit of capacity.
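The capacity math above is simple multiplication; here is a back-of-the-envelope sketch (the function name is hypothetical, but the 25 Gbps-per-wavelength and 40-wavelength figures come from the article):

```python
# Figures from the article: 25 Gbps per wavelength, up to 40 wavelengths
# on a single strand of fiber.
GBPS_PER_WAVELENGTH = 25
MAX_WAVELENGTHS = 40

def fiber_capacity_gbps(lit_wavelengths: int) -> int:
    """Total capacity when a customer lights up a given number of wavelengths."""
    if not 0 <= lit_wavelengths <= MAX_WAVELENGTHS:
        raise ValueError("wavelength count out of range")
    return lit_wavelengths * GBPS_PER_WAVELENGTH

print(fiber_capacity_gbps(4))   # 100 Gbps -- the starting configuration
print(fiber_capacity_gbps(40))  # 1000 Gbps, i.e. a full terabit per second
```

This is what lets a customer start at 100 Gbps and scale to a terabit without swapping hardware: the unlit wavelengths are already in the fiber.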

Some other silicon photonics vendors require more chips or line cards to upgrade, which means new equipment. Much like Plexxi, which is bringing fiber-based gear to the data center, Kotura is betting that scaled-out networks have a need for speed (and capacity) that only fiber can provide. However, Plexxi wouldn’t compete with Kotura, but would likely buy a chip from it (or another transceiver vendor) to power its gear.

Kotura has 75 employees and has raised undisclosed millions from ARCH Venture Partners, Fuse Capital, GF Private Equity and others. It has an established customer base in the telecommunications business where it has sold product since 2006. But now it’s moving into the data center in the hopes of solving looming problems in the networking sector with cheaper, low-power optical chips that can deliver a lot of capacity between servers. The new data center is going to have a lot of dense computing, low-cost fast storage, and soon, high-capacity low-latency networks connecting everything.
