
Nutanix Gets $13.2M for Google-like Storage Architecture


Nutanix, a San Francisco-based storage hardware maker, has raised $13.2 million in Series A funding from Lightspeed Venture Partners and Blumberg Capital. The company is developing an appliance that combines computing and storage on the same server nodes, a story that should resonate with customers concerned about scalability and performance.

When it comes to scale-out architectures and appliances, Nutanix’s founding team knows whereof it speaks. Co-founder, President and CEO Dheeraj Pandey was VP of engineering at Aster Data Systems and, before that, designed the storage systems for Oracle (s orcl) Database and the Exadata appliance. CTO Mohit Aron was a lead architect at Aster Data after spending time at Google (s goog) designing the Google File System. Co-Founder and Chief Products Officer Ajeet Singh also worked at Aster Data and previously helped develop Oracle’s cloud computing strategy.

Pandey compares the Nutanix appliance to Google’s architecture in that computing and storage are both housed in the same nodes. This is different from many traditional application architectures, where computing is housed on one set of servers and storage is either network-attached or housed in a separate SAN. With the Nutanix approach, Pandey explains, storage and computing scale simultaneously, which can lead to massively parallel processing and storage. A big differentiator from other scale-out storage products, he said, is that many are file systems, whereas Nutanix is not so limited. The Nutanix appliance includes data management software that is akin to “bringing all of NetApp” to the system.
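To make the Google-style converged idea concrete, here is a minimal, purely illustrative sketch (hypothetical class names, not Nutanix’s actual software): each node stores data locally, and work is scheduled onto the node that already holds the data, so compute and storage capacity grow together as nodes are added.

```python
# Hypothetical sketch of a converged compute+storage cluster: data lives
# on the nodes themselves, and processing is scheduled where the data
# is, rather than pulling it across the network from a SAN.

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block_id -> bytes (local storage)

    def store(self, block_id, data):
        self.blocks[block_id] = data

    def process(self, block_id, fn):
        # Compute runs where the data lives -- no hop to external storage.
        return fn(self.blocks[block_id])

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, block_id, data):
        # Naive placement: hash the block id onto a node.
        node = self.nodes[hash(block_id) % len(self.nodes)]
        node.store(block_id, data)

    def map(self, block_id, fn):
        # Locality-aware scheduling: run on the node holding the block.
        for node in self.nodes:
            if block_id in node.blocks:
                return node.process(block_id, fn)
        raise KeyError(block_id)

cluster = Cluster([Node(f"node{i}") for i in range(4)])
cluster.write("log-0001", b"error warn info error")
count = cluster.map("log-0001", lambda d: d.count(b"error"))
print(count)  # -> 2
```

Adding a node to such a cluster adds both disks and CPUs at once, which is the scaling symmetry Pandey is describing; the real system is, of course, far more sophisticated than this toy placement scheme.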

The key to Nutanix is virtualization, which provides the abstraction and the additional storage connections necessary to give Nutanix the performance edge it claims. The company is big on solid-state drives for performance and consolidation, but Pandey says legacy storage systems are limited in the number of SSDs they can handle. With a virtualized computing layer, however, each virtual server and each physical node provides the housing and connectivity for an additional SSD. The Nutanix appliance combines SSDs and hard disk drives to achieve maximum levels of performance and affordability, Pandey said.
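A common way to combine a small fast tier with a large cheap one is hot/cold tiering. The sketch below is an assumption-laden illustration of that general pattern (a simple LRU policy; not Nutanix’s actual tiering algorithm): recently used data stays on the SSD tier, and cold data spills to disk.

```python
# Hypothetical SSD+HDD tiering sketch: a small, fast SSD tier fronts a
# large, cheap HDD tier. Least-recently-used data is evicted to disk.
from collections import OrderedDict

class HybridStore:
    def __init__(self, ssd_capacity):
        self.ssd = OrderedDict()   # fast tier, kept in LRU order
        self.hdd = {}              # capacity tier
        self.ssd_capacity = ssd_capacity

    def write(self, key, value):
        self._promote(key, value)

    def read(self, key):
        if key in self.ssd:
            self.ssd.move_to_end(key)     # refresh LRU position
            return self.ssd[key]
        value = self.hdd.pop(key)         # cold read: promote to SSD
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.ssd[key] = value
        self.ssd.move_to_end(key)
        while len(self.ssd) > self.ssd_capacity:
            cold_key, cold_val = self.ssd.popitem(last=False)  # evict LRU
            self.hdd[cold_key] = cold_val

store = HybridStore(ssd_capacity=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)
print(sorted(store.ssd))   # -> ['b', 'c']  ("a" spilled to HDD)
print(store.read("a"))     # -> 1 ("a" promoted back; "b" spills out)
```

The affordability argument falls out of the capacity split: the SSD tier only needs to be large enough to hold the working set, while the bulk of the data sits on inexpensive spinning disk.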

Despite the known difficulty of selling appliances versus software alone — something we’ve seen played out recently by both Schooner and Cirtas Systems — Pandey is confident in Nutanix’s chances. For one, he noted, converged infrastructure is hot right now thanks to products such as Cisco’s (s csco) Unified Computing System, the Virtual Computing Environment’s Vblocks, HP’s (s hpq) BladeSystem Matrix and Dell’s (s dell) vStart. However, explained Singh, those products involve separate storage components that customers could buy on their own; the systems are really just enclosures. Pandey says it comes down to a choice between the iPhone (s aapl) approach of tight integration and the Android approach of software running on multiple devices, and Nutanix chose the former.

6 Responses to “Nutanix Gets $13.2M for Google-like Storage Architecture”

  1. Mukesh Aggarwal

    One of the key issues is: how many companies need to scale processing at the same speed as data? Here is my take:
    1) Usually storage grows faster than processing needs. Not every piece of data needs to be processed by a CPU.
    2) There is the complication of making your existing applications take advantage of more CPU nodes.
    3) Appliances take up more space in a data center than an additional drive.

    I do see value for companies that are into data crunching (more CPUs and more data at the same rate); not sure if it is for the masses, though.

    • @Mukesh

      Very good questions – would love to get into the details after we unveil our product later this year and discuss how we can independently scale data from compute for virtual machines. For now, I’ll drop a couple of hints in response to your questions:

      1. Virtualization (hypervisor) provides us a nice container that can help compute and storage grow and shrink independently for VMs.
      2. A single server-attached SSD can replace hundreds, if not thousands, of disk drives in the data center. Imagine what they can do in a scale-out cluster.

  2. Hi Joe,

    Thanks for your comments on the appliance approach. We are focused on making virtualized data centers very easy to deploy and manage, and the appliance experience is part of that. The other key aspect is high availability. As you pointed out, the traditional architecture mandates the use of network storage to keep data safe even if compute nodes fail. Over the last decade or so, though, distributed computing software written at companies like Google, Yahoo, Amazon and now Microsoft (Windows Azure) has changed the game. This kind of technology enables data centers that combine compute and storage in a scale-out architecture and are extremely fault-tolerant to disk and server failures. Data is replicated across the cluster and is always fully protected. Our system architecture follows similar distributed computing principles and provides advanced data protection capabilities in a converged architecture.

    Ajeet Singh
    Chief Products Officer

  3. Selling appliances is a smart move. With each new bit of hardware that infrastructure software must run upon, there are always new problems that arise from dealing with BIOS quirks, controllers, network adapters, drives, chip architecture… the list goes on.

    Beyond those small details, systems like these are usually built and tuned with a particular architecture in mind (the ratio of SSDs to spinning disks, for example). With software-only solutions, customers can play with these settings and get caught in unexpected use cases for the software. Control the full stack, from software to hardware, and you know what customers are doing.

    The challenge with blending compute and storage that I’d love to see answered is this: compute and storage have different durability properties. It’s acceptable to occasionally lose a compute node/virtual machine/instance. It’s not acceptable to lose customer data. Does blending compute with storage add any additional complexity to the system that may put durability at risk?