I recently happened upon Apple’s classic “1984” commercial. It had been years since I’d last seen the ad, but I was struck by its symbolism and timelessness all the same.
The ad, in which a runner heaves a hammer into a giant screen where a Big Brother-like figure speaks ominously to a room full of drone-like workers, dramatizes a classic case of technological disruption. Apple, playing the disruptor, was introducing the Macintosh and personal computing to the market and forever changing the face of computing in the process.
Since the commercial aired, computing technology has evolved at an astronomical rate. Personal computing spurred the rise of network computing, which in turn has spurred the rise of modern technologies like virtualization and cloud computing.
While the computing side of history is well known, the storage side remains hidden from common view, like the bulk of an iceberg. It’s interesting to look at the state of storage today and compare it with the radically different environment that existed back in 1984, during the rise of the personal computer.
NFS: The dawn of the modern (storage) era
Storage as it is known today did not even come into being until the mid-1980s, when Sun Microsystems introduced the Network File System (NFS) protocol. Before NFS, servers simply consisted of direct-attached disks that connected to a general-purpose computer.
NFS was a huge step that both enabled and accelerated network computing. While it was initially met with a healthy dose of skepticism, NFS quickly gained traction in the enterprise. As it became increasingly clear that computers needed to actively share information and interact with each other, networked file systems became a central tenet of storage.
Just like that, the next generation of storage architecture was born.
During this time, I was a Ph.D. student in the computer science department at UC Berkeley, working with a talented team to perfect RAID, which is the basis of today’s multi-billion dollar storage industry. Vendors offering purpose-built systems for managing storage based on RAID technology, notably EMC and NetApp, grew rapidly. As enterprises embraced network computing, purpose-built storage systems based on RAID were quickly recognized as necessities rather than niche products.
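The core RAID idea can be sketched in a few lines: stripe data blocks across disks and store an XOR parity block, so that any single failed disk can be reconstructed from the survivors. This is only an illustration of the parity scheme (as in RAID 5), with made-up block values, not any vendor's implementation:

```python
def parity(blocks):
    """XOR a list of equal-sized blocks together byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three hypothetical "disks", each holding one block of a stripe.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])  # parity block stored on a fourth disk

# If the disk holding d1 fails, its contents are recovered by
# XOR-ing the surviving data blocks with the parity block.
recovered = parity([d0, d2, p])
assert recovered == d1
```

Because XOR is its own inverse, the reconstruction works for whichever single block is lost; this is what lets a RAID array survive a disk failure without keeping a full second copy of the data.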
Virtualization demands more than general-purpose storage can offer
Twenty years later, we are in the midst of the single most significant evolution in IT since the rise of network computing: the rise of virtualized computing. While virtualization offers unprecedented benefits to the server side, including server consolidation, agility, flexibility and manageability, the full-scale adoption of virtualization has been stalled by the complexity and cost of storage. Today, storage is the single most complex and expensive component in the virtualized data center. As Tintri CEO Kieran Harty noted in a recent blog post, storage can account for up to 60 percent of the cost of virtualization deployments.
In fact, it is becoming increasingly clear that general-purpose storage is not sufficient for supporting broad virtual deployments due to fundamental limitations in its architecture. Because general-purpose storage must support a wide range of workloads, it is inefficient and difficult to configure and manage for virtualized environments. With these systems, it is hard to see what is happening at the level of individual VMs, impossible to perform storage management operations on individual VMs, and difficult to automate storage for virtualized environments.
Moreover, existing general-purpose storage systems were designed before the advent of new technologies such as flash-based SSDs, multicore CPUs, and 10 Gigabit Ethernet. Although most provide support for such technologies, their legacy architectures cannot take full advantage of them.
Storage is about to be disrupted
This has created a disruptive opportunity in the storage field that hasn’t been seen in over two decades. It’s not a surprise that there’s been such a flurry of venture-backed startups entering the market over the past year. Entrepreneurs — many of them coming from established tech giants — are capitalizing on technologies like flash to introduce more efficient storage solutions for today’s data center.
In the virtualized data centers 20 years from today, all aspects of computing will be virtualized, including servers, networks and storage. Virtual machines will be freed from the constraints of the underlying physical resources and will run with the same level of functionality and service wherever they are most efficient. Full virtualization will be achieved not only by an accumulation of new features, but by designing an architecture that eliminates everything that cannot be efficiently virtualized. Just as network computing spurred the need for an entirely new storage architecture, so too does virtualization.
Ed Lee is lead architect at Tintri, Inc., which has developed a storage system for virtual machines. Prior to Tintri, Ed was principal systems architect at Data Domain.