Summary:

The solution for better, faster storage may lie with DSSD, a stealthy chip startup backed by Andy Bechtolsheim that counts several members of Sun’s ZFS team among its founders.

Arista’s Andy Bechtolsheim at GigaOM RoadMap 2011 (photo: Pinar Ozger © 2011 GigaOM)

For almost three years, many of the creators of Sun’s Zettabyte File System (ZFS) have been slaving away in a Menlo Park, Calif., building, trying to build a chip that would improve the performance and reliability of flash memory for high-performance computing, newer data analytics and networking. Funded by Andy Bechtolsheim, the startup is called DSSD, and a recent hiring campaign plus the publication of several patents offer some clues as to what this stealthy startup is about.

DSSD was founded in 2010 by Jeff Bonwick and Bill Moore, both among the select few engineers with experience building storage operating systems. With the backing of Bechtolsheim, a Silicon Valley rock star who co-founded Sun Microsystems, backed Google early on and co-founded switch startup Arista, the company has some of the smartest people in the Valley working for it. No one from the company would comment for this story.

My sources tell me the startup is building a new type of chip (they say it’s really a module, not a chip) that combines a small amount of processing power with a lot of densely packed memory. The module runs a pared-down version of Linux designed for storing information on flash memory, and it is aimed at big data and other workloads where reading and writing information to disk bogs down the application.

This fits with the team’s expertise, but it is a problem others are trying to solve as well, with faster and cheaper SSDs and with targeted software that optimizes the flow of bits to a database. The proposal here, though, appears to be an operating system that takes advantage of the differences between flash memory and hard drives to boost I/O.

For example, an old disk drive stores a group of bits as if they were sequential, but in reality those bits may land anywhere on the drive. After regular use, when you delete a file, a tombstone marker is placed on the “deleted” data; to reuse that space, the drive has to find the tombstone, write only as much data as fits in that gap, and then hunt for more space for the rest. So the data ends up scattered everywhere.
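To picture the fragmentation, here is a toy sketch in Python of tombstone-style block reuse; the names and layout are my own hypothetical illustration of the general problem, not any particular drive’s firmware:

```python
# Toy model of block reuse on a disk: deleting a file tombstones its
# blocks, and the next file is split across whatever freed gaps happen
# to fit, plus fresh space at the end. Hypothetical illustration only.

disk = []        # (owner, physical_block) tuples, in physical order
free_gaps = []   # indexes of tombstoned, reusable blocks

def write(owner, num_blocks):
    placed = []
    # Fill tombstoned gaps first, wherever they happen to sit...
    while free_gaps and num_blocks:
        i = free_gaps.pop(0)
        disk[i] = (owner, i)
        placed.append(i)
        num_blocks -= 1
    # ...then append whatever is left at the end of the disk.
    for _ in range(num_blocks):
        disk.append((owner, len(disk)))
        placed.append(len(disk) - 1)
    return placed  # physical locations, often non-contiguous

def delete(owner):
    for i, (o, _) in enumerate(disk):
        if o == owner:
            disk[i] = (None, i)   # tombstone: space is reusable
            free_gaps.append(i)

write("a", 4)          # file a occupies blocks 0-3
write("b", 4)          # file b occupies blocks 4-7
delete("a")            # blocks 0-3 tombstoned
print(write("c", 6))   # [0, 1, 2, 3, 8, 9]: file c is fragmented
```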

But the DSSD system sounds like it treats a file not as a series of bits but as an object that gets a name. That name is the file’s address, and it stays the same for the life of the file. The result is that no central index stands between sending data to storage and storing it, so people can write in parallel without worrying about overwrites. It is both faster and able to scale out.
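A rough way to picture name-as-address storage is the sketch below, where the address is derived from the name itself, so writers never contend on a shared index. Everything here (the hash choice, the dictionary standing in for flash) is my own hypothetical illustration, not DSSD’s interface:

```python
# Minimal sketch of name-addressed storage: an object's name maps
# deterministically to its address, which never changes for the life
# of the object, so no central index sits between a writer and the
# medium. Hypothetical illustration only -- not DSSD's design.
import hashlib
from concurrent.futures import ThreadPoolExecutor

flash = {}  # stands in for the flash medium, keyed by address

def address_of(name: str) -> str:
    # The name *is* the address: no lookup table, no lock to contend on.
    return hashlib.sha256(name.encode()).hexdigest()

def put(name: str, data: bytes) -> None:
    flash[address_of(name)] = data

def get(name: str) -> bytes:
    return flash[address_of(name)]

# Writers to different names touch different addresses, so they can
# run in parallel without stepping on each other.
with ThreadPoolExecutor(max_workers=8) as pool:
    for i in range(100):
        pool.submit(put, f"object-{i}", f"payload-{i}".encode())

print(get("object-42"))  # b'payload-42'
```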

For more details, we can turn to the six patents that DSSD has filed. In mid-March, Storage Mojo unearthed patents affiliated with the company that imply it is building a faster, more durable type of object-level storage on flash. From the Storage Mojo article:

So what are they building? They are taking a radically different approach to the problem of high-performance transaction processing storage. The use of flash is a given in TP, and the extra durability, scalability and guaranteed read latency would be very attractive in large TP applications.

The most surprising piece is the object storage-like characteristics suggested by the patents. But handling billions of small objects at high-speed in a flat namespace would make it easy to distribute object indexes among hundreds of users, reducing file system I/O latency. The 3D RAID could eliminate the encoding overhead inherent in advanced erasure codes while providing similar robustness, enabling way-beyond-RAID6 availability.
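What the “3D RAID” in that quote actually does isn’t public, but the contrast with erasure codes is easy to illustrate: plain XOR parity (RAID-4/5 style) rebuilds a lost chunk with cheap XORs, while Reed-Solomon-style erasure codes pay for their extra robustness with heavier Galois-field math. A toy sketch of the cheap end of that trade-off, not DSSD’s method:

```python
# Plain XOR parity, RAID-4/5 style: losing any single data chunk can
# be repaired by XOR-ing the survivors with the parity chunk. This is
# a baseline illustration, not DSSD's "3D RAID".
def xor_parity(chunks):
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return parity

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Pretend data[1] is lost; XOR the survivors with the parity to rebuild:
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
print(rebuilt)  # b'BBBB'
```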

For those who aren’t storage or computing buffs, the problem was well explained in a fireside chat between Bechtolsheim and my colleague Om Malik at our Structure:Data 2011 conference. Around the six-minute mark, Bechtolsheim outlines the bottleneck the network creates for access to big data, and the need to build new interfaces that can take advantage of the parallelism inside flash chips, something hard disks lack. If you do that, you can expand the capabilities of flash beyond just density, because you can write data to it faster, meaning the network no longer gums up the works.
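The gist of the parallelism argument can be shown with a toy timing comparison: many flash channels can be programmed at once, while a disk head writes one thing at a time. The channel count and latencies below are made-up numbers for illustration:

```python
# Toy comparison: serial writes (one disk head) versus writes spread
# across parallel flash channels. Latencies and channel count are
# invented for illustration.
import time
from concurrent.futures import ThreadPoolExecutor

CHANNELS = 8
PAGE_WRITE_S = 0.002  # pretend each page program takes 2 ms

def write_page(channel: int) -> None:
    time.sleep(PAGE_WRITE_S)  # stand-in for a page-program operation

pages = [p % CHANNELS for p in range(64)]

start = time.time()
for p in pages:                        # serial, disk-style
    write_page(p)
serial = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=CHANNELS) as pool:
    list(pool.map(write_page, pages))  # one worker per channel
parallel = time.time() - start

print(f"serial: {serial*1000:.0f} ms, parallel: {parallel*1000:.0f} ms")
```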

Of course, when it comes to using flash in more places, there is always the question of whether this architecture will offer enough of a performance gain to justify flash’s higher price per gigabyte compared with a hard drive. For that answer, we’ll just have to wait.
