During the first decade of this century, we shifted fairly quickly from a relatively simple set of data storage technologies and implementations to a very large spectrum of options, with vendors proposing solutions for all kinds of data and applications. They are responding to an increasing demand for storage specialization, driven by the need for overall efficiency, cost reduction, and today’s large capacities.
In such an evolved scenario, the Swiss army knife approach no longer works. Cloud, IoT, Big Data Analytics, Virtualization, Containers, and Active archives, as well as laws, regulations, security concerns, and policies, are the driving forces that lead to reconsidering all aspects of data and how it is saved. Furthermore, each application has different workload characteristics and must be addressed accordingly in order to stay competitive and respond quickly and adequately to new challenges. Data is now accessed locally as well as remotely, adding another layer of complexity when the same piece of information needs to be shared quickly and safely with a multitude of devices distributed across the globe.
A two-tier storage strategy is essential now for covering both latency-sensitive and capacity-driven workloads in an appropriate way.
This sector roadmap examines an expanding segment of the market—object storage for secondary and capacity-driven workloads in the enterprise—by reviewing major vendors, forward-looking solutions, and outsiders, along with the primary use cases. This document covers Scality, SwiftStack, EMC ECS, RedHat Ceph, HDS HCP, NetApp StorageGRID Webscale, Cloudian, Caringo, DDN, and HGST, and provides additional information on products from OpenIO, NooBaa, IBM Cleversafe, and Minio.
The final goal is to provide the reader with the tools to understand the benefits of object storage and how it can be implemented to solve specific business needs and technical challenges. This report will also aid a better understanding of how the market is evolving, and offer support for developing a long-term strategy for existing and future storage infrastructures.
Key findings in our analysis:
- Amazon S3 API is the de facto standard. The level of compatibility is not the same for all products, but it is becoming less of a differentiator, with some vendors showing a very high level of compatibility and others quickly catching up.
- In traditional enterprise organizations, scalability is not considered a major issue, but scale-out capability is important. In fact, most enterprises initially adopt object storage for a single use case and for less than 200TB. Over time it is deployed for more use cases, increasing the capacity. Multi-petabyte environments are becoming more common but are still a very small fraction of the overall installed base.
- Cloud tiering capability is considered a benefit but in practice it is used only to manage temporary capacity bursts.
- End-to-end integrated solutions are favored by enterprises and end users who prefer ease of use as opposed to best-of-breed.
- End users like the idea of pre-integrated appliances but are now much more confident than in the past with the storage-only approach.
- Lately, most of the basic features (data protection, availability, API compatibility, UI) are taken for granted, and the $/GB metric is considered a major “feature,” especially if the primary use case for the object store is to be a back-end repository for a third-party application (e.g., backup).
- Network File System (NFS) and Server Message Block (SMB) access protocols, via native connectors or gateways, are now a key feature for all those end users who are planning to deliver traditional file services and want to access data via different methods.
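To make the S3 compatibility finding above concrete, the sketch below (stdlib-only Python, with hypothetical host and bucket names) shows the two object-addressing styles defined by the Amazon S3 API. Most S3-compatible stores support path-style addressing out of the box; virtual-hosted-style addressing, which requires wildcard DNS and TLS on premises, is one of the places where vendors' compatibility levels differ.

```python
# Hypothetical endpoint and bucket names, for illustration only.

def path_style(endpoint_host: str, bucket: str, key: str) -> str:
    """Path-style S3 URL: https://host/bucket/key — simplest on-prem."""
    return f"https://{endpoint_host}/{bucket}/{key}"

def virtual_hosted_style(endpoint_host: str, bucket: str, key: str) -> str:
    """Virtual-hosted-style S3 URL: https://bucket.host/key —
    needs wildcard DNS/certificates when self-hosted."""
    return f"https://{bucket}.{endpoint_host}/{key}"

print(path_style("objects.example.internal", "backups", "db/dump.tar.gz"))
# → https://objects.example.internal/backups/db/dump.tar.gz
print(virtual_hosted_style("objects.example.internal", "backups", "db/dump.tar.gz"))
# → https://backups.objects.example.internal/db/dump.tar.gz
```

In practice, an application written against the S3 API can be pointed at an on-premises object store simply by changing its endpoint configuration, which is why a high level of S3 compatibility is table stakes for the vendors covered in this report.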
[Chart legend: the number indicates a company’s relative strength across all vectors; the size of the ball indicates its relative strength along each individual vector.]
- Introduction and Methodology
- Usage Scenarios
- Disruption Vectors
- Company Analysis
- Additional Vendors
- Outlook and Key Takeaways
- About Enrico Signoretti
- About Gigaom Research