Blog Post

Because Hadoop isn’t perfect: 8 ways to replace HDFS


Hadoop is on its way to becoming the de facto platform for the next generation of data-based applications, but it’s not without flaws. Ironically, one of Hadoop’s biggest shortcomings now is also one of its biggest strengths going forward: the Hadoop Distributed File System (HDFS).

Within the Apache Software Foundation, HDFS is always improving in terms of performance and availability. Honestly, it’s probably fine for the majority of Hadoop workloads running in pilot projects, skunkworks efforts or generally non-demanding environments. And technologies such as HBase that are built atop HDFS speak to its versatility as a storage system even for non-MapReduce applications.

But if the growing number of options for replacing HDFS signifies anything, it’s that HDFS isn’t quite where it needs to be. Some Hadoop users have strict demands around performance, availability and enterprise-grade features, while others aren’t keen on its direct-attached storage (DAS) architecture. Concerns around availability might be especially valid for anyone (read “almost everyone”) who’s using an older version of Hadoop without the High Availability NameNode. Here are eight products and projects whose proprietors argue they can deliver what HDFS can’t:

Cassandra (DataStax)

Not a file system at all but an open source, NoSQL key-value store, Cassandra has become a viable alternative to HDFS for web applications that rely on fast data access. DataStax, a startup commercializing the Cassandra database, has fused Hadoop atop Cassandra to provide web applications fast access to data processed by Hadoop, and Hadoop fast access to data streaming into Cassandra from web users.


Ceph (Inktank)

Ceph is an open source, multi-pronged storage system that was recently commercialized by a startup called Inktank. Among its features is a high-performance parallel file system that some think makes it a candidate for replacing HDFS (and then some) in Hadoop environments. Indeed, some researchers started looking at this possibility as far back as 2010.

Dispersed Storage Network (Cleversafe)

Cleversafe got into the HDFS-replacement business on Monday, announcing a product that fuses Hadoop MapReduce with the company’s Dispersed Storage Network system. By fully distributing metadata across the cluster (instead of relying on a single NameNode) and by not relying on replication, Cleversafe says its system is much faster, more reliable and more scalable than HDFS.
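The dispersal approach Cleversafe describes is based on erasure coding rather than whole-copy replication. A toy calculation shows the raw-storage trade-off between the two; the 10-of-16 scheme below is illustrative, not Cleversafe’s actual layout:

```python
# Toy comparison of raw storage cost: n-way replication (HDFS defaults
# to 3 copies) versus an erasure-coded "dispersal" layout in which data
# is cut into k slices and expanded to n coded slices.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per byte of user data under n-way replication."""
    return float(copies)

def dispersal_overhead(data_slices: int, total_slices: int) -> float:
    """Raw bytes stored per byte of user data under a k-of-n scheme;
    any `data_slices` of the `total_slices` pieces can rebuild the data."""
    return total_slices / data_slices

if __name__ == "__main__":
    # 3-way replication: 3.0 bytes stored per user byte, survives 2 lost copies.
    print(replication_overhead(3))
    # Hypothetical 10-of-16 dispersal: 1.6 bytes per user byte, survives 6 lost slices.
    print(dispersal_overhead(10, 16))
```

The dispersal scheme tolerates more failures than triple replication while storing roughly half as many raw bytes, which is the basis for the cost and scalability claims.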


General Parallel File System (IBM)

IBM has been selling its General Parallel File System (GPFS) to high-performance computing customers for years (including within some of the world’s fastest supercomputers), and in 2010 it tuned GPFS for Hadoop. IBM claims the GPFS-SNC (Shared Nothing Cluster) edition is so much faster than HDFS in part because it runs at the kernel level rather than atop the OS, as HDFS does.

Isilon (EMC)

EMC has offered its own Hadoop distributions for more than a year, but in January 2012 it unveiled a new method for making HDFS enterprise-class: replace it with EMC Isilon’s OneFS file system. Technically, as EMC’s Chuck Hollis explained at the time, because Isilon speaks the NFS, CIFS and HDFS protocols, a single Isilon NAS system can serve to intake, process and analyze data.
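Because the filer speaks the HDFS wire protocol itself, swapping it in is less a code change than a configuration change. A minimal sketch of the idea in core-site.xml, using the Hadoop 1.x-era property name; the hostname is a placeholder:

```xml
<!-- core-site.xml: point the cluster's default filesystem at an
     external HDFS-speaking NAS instead of a local NameNode.
     The hostname below is hypothetical. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://isilon.example.com:8020</value>
  </property>
</configuration>
```

In a setup like this the NAS answers both NameNode-style and DataNode-style requests, so MapReduce workers talk to it over the same protocol they would use with HDFS.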


Lustre

Lustre is an open source, high-performance file system that some claim can make for an HDFS alternative where performance is a major concern. Truth be told, I haven’t heard of this combination running anywhere in the wild, but HPC storage provider Xyratex wrote a paper on the combination in 2011, claiming a Lustre-based cluster (even with InfiniBand) will be faster and cheaper than an HDFS-based cluster.

MapR File System

The MapR File System is probably the best-known HDFS alternative, as it’s the basis of MapR’s increasingly popular — and well-funded — Hadoop distribution. Not only does MapR claim its file system is two to five times faster than HDFS on average (and up to 20 times faster in some cases), but it also has features such as mirroring, snapshots and high availability that enterprise customers love.

NetApp Open Solution for Hadoop

OK, the NetApp Open Solution for Hadoop isn’t so much an HDFS replacement as it is an HDFS improvement, according to NetApp and early partner Cloudera. The offering still relies on HDFS, but it re-envisions the physical Hadoop architecture by putting HDFS on a RAID array. This, NetApp claims, means faster, more reliable and more secure Hadoop jobs.

This might be a good place to say rest in peace to two other HDFS alternatives that are effectively no longer with us — KosmosFS (aka CloudStore) and Appistry CloudIQ Storage. The former was created by Kosmix (since bought by @WalmartLabs) and released to the open source world in 2007, but no longer has an active community. The latter was an attempt by Appistry in 2010 to get a piece of the Hadoop pie with its computational storage technology, but the company has since switched its focus from selling the technology to providing high-performance computing services based on it.

Feature image courtesy of Shutterstock user Panos Karapanagiotis.

14 Responses to “Because Hadoop isn’t perfect: 8 ways to replace HDFS”

  1. Cameron

    The ParaScale distributed storage and computation platform perhaps deserves a mention here too as one of the early vendors that integrated Hadoop into the platform and submitted the filesystem plugin to the open source community back in the 2009/2010 time frame. The ParaScale filesystem provided fast parallel ingest via the standard NFS protocol, full read/write POSIX semantics, a distributed and replicated object store, a global namespace, and many other enterprise features. Hitachi Data Systems acquired ParaScale in 2010.


  2. gengstrand

    I think that it is disingenuous to propose expensive proprietary solutions as alternatives to an open source solution. Perhaps a more valuable article would be to propose open source alternatives instead. At Zoosk, we used Apache Solr instead of Hadoop for the implementation of our activity feed. The Apache Solr project is our open source alternative to Hadoop.

    • Derrick Harris

      I think it depends on your use case. If you’re a Fortune 500 company, you might want to pay for performance, reliability, etc. And some alternatives, such as Gluster (mentioned in the comments), are open source but vendor-backed.

  3. Looks like most of the alternatives being promoted are highly expensive proprietary hardware based storage vendors who are feeling threatened by the RISE OF HDFS and crying wolf. Everyone knows that HDFS has its flaws but the benefits outweigh the drawbacks. It has got huge traction and in a few years will be the most dominant technology for data crunching. It is important for the storage vendors to re-invent themselves and not fight for a piece of the hadoop pie. That will mean slow decay for them.

    • Derrick Harris

      I saw your post, and Charles’ (from Cloudera) post. I think you’re both probably right that HDFS wins out in the end. By percentage, it’s probably very dominant now. But there will always be people looking for alternatives.

  4. Bill McColl

    Good post. At Cloudscale we tried using HDFS a while back. Given that we wanted to run realtime analytics, graph analytics, MPI, and bulk synchronous parallelism (BSP), as well as Hadoop, in a unified big data architecture, HDFS was a non-starter. For our purposes, Lustre has proved to be a far better option as a super-fast and super-scalable file system foundation for a unified big data platform. Running Lustre on AWS clusters is new. We are probably the only company currently doing it. To help others get started, we posted how to do it, if you want to try out Lustre on cloud clusters for yourself.

  5. Hi Derrick,

    GlusterFS, and by extension Red Hat Storage, has a drop-in compatibility library today that allows you to augment or replace HDFS. The win for customers is that they have a unified data backend they can access either via Hadoop operations or via traditional NAS methods, i.e., NFS or the GlusterFS client.
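A sketch of how such a drop-in library is typically wired up, using property names from the open source glusterfs-hadoop plugin; treat the exact class name and URI scheme as assumptions, and the server below as a placeholder:

```xml
<!-- core-site.xml: register the GlusterFS Hadoop shim and make it the
     default filesystem. Hostname and port are hypothetical. -->
<configuration>
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>glusterfs://gluster.example.com:9000</value>
  </property>
</configuration>
```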

  6. Adam Bane

    OpenStack Object Storage deserves a mention in this list. The Hadoop implementation for OpenStack removes the NameNode requirement and streams data directly from the object-store to compute memory (no staging on local disk required).

    The real significance here is that OpenStack Storage has already been implemented at HP, Rackspace, and SoftLayer (among other IaaS providers) and can be deployed privately in the enterprise. Hadoop projects that are started at small scale in these public clouds can easily be migrated in house. Similarly, an in-house implementation can easily be extended to these providers for easy access to additional storage and compute resources.
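For a concrete sense of what pointing Hadoop at an object store looks like, here is a sketch using property names from the open source hadoop-openstack Swift connector; the service name, auth endpoint and credentials are all placeholders:

```xml
<!-- core-site.xml: register a Swift filesystem and one named service.
     Jobs can then reference swift://mycontainer.myprovider/path URIs.
     Every value below is hypothetical. -->
<configuration>
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.myprovider.auth.url</name>
    <value>https://auth.example.com/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.myprovider.username</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>fs.swift.service.myprovider.password</name>
    <value>secret</value>
  </property>
</configuration>
```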