Table of Contents
- Unstructured Data Management Primer
- Report Methodology
- Decision Criteria Analysis
- Evaluation Metrics
- Key Criteria: Impact Analysis
- Analysts’ Take
- About Max Mortillaro
- About Arjan Timmerman
The growth of unstructured data now outpaces that of structured data by several orders of magnitude. Many factors contribute to this massive increase, and human-generated data now accounts for only a fraction of the total data generated globally. Machine-generated data is growing fastest, driven by the increasing number and quality of sensors and by greater compute power and parallelism. In most cases, this data needs to be stored for long periods of time, or forever.
A host of next-generation workloads—genomics, artificial intelligence and machine learning (AI/ML), electronic design automation (EDA), and so on—combined with the overwhelming popularity of public cloud platforms, is driving massive demand for unstructured data solutions. Sustained growth in this market segment makes unstructured data the standard storage choice for many enterprises, and it could soon claim the moniker of “primary storage,” which has been used for decades to refer to block storage systems.
Managing storage capacity efficiently has become easier and somewhat less expensive, thanks to scale-out storage systems for files and objects. Moreover, the cloud enables vendors to expand the options available for enhancing performance, capacity, and cold data archiving.
However, that doesn’t mean data storage is any less complex. In the last 10 years, we have moved from storing data locally, mostly on-premises, to storing it in several repositories that present different characteristics and access methods. This is an accelerating trend—multicloud IT strategies are becoming quite common, and data is now created and consumed at the edge as well. The right mix all depends on business, application, and user requirements. Furthermore, as we discussed in a recent report, Key Criteria for Evaluating Hybrid Cloud Data Protection Solutions, finding a solution to protect data across several infrastructures and environments is still quite challenging.
The flexibility and scalability provided by public clouds also come at a price. In this period of financial turmoil and uncertainty, some organizations are actively seeking opportunities to reduce costs: cloud-first initiatives are being reevaluated, and data repatriation projects are becoming commonplace across multiple verticals. These projects require careful planning and execution and can become very costly without a prior analysis of the data footprint.
Complexity also impacts compliance and regulatory measures; demanding regulations such as GDPR and CCPA are being adopted around the world, making analysis and classification tougher than ever without the help of an unstructured data management solution. Data protection and management processes are crucial for complying with ever-changing business requirements, laws, and organizational policies. Automating the analysis and enforcement of regulatory and internal compliance policies is a key success factor in achieving regulatory compliance.
Those two business imperatives—repatriation projects and regulatory compliance—further increase the need for solutions that can handle data movement at scale automatically and with minimal oversight, ideally driven by a policy engine.
We are approaching a point where merely storing data safely for a long period of time does not bring any benefit to an organization; in fact, it can quickly become a liability. However, with the right processes and tools, it’s now possible to take control of data and exploit its hidden value, transforming it from a liability into an asset. Examples of this transformation are now common across all industries, with enterprises of all sizes reusing old data for new purposes with the help of technologies and computing power that weren’t available a few years ago.
With the right management solutions for unstructured data, it’s possible to understand what data is actually stored, no matter how complex or dispersed, and to build a strategy that constrains costs while increasing the return on investment (ROI) for data storage.
Depending on the approach you choose, building a data management strategy for unstructured data can deliver several benefits, including better security and compliance, improved services for end users, cost reduction, and data reusability.
The GigaOm Key Criteria and Radar reports provide an overview of the unstructured data management market, identify capabilities (table stakes, key criteria, and emerging technology) and evaluation metrics (non-functional purchase drivers) for selecting an unstructured data management solution, and detail vendors and products that excel. These reports give prospective buyers an overview of the top vendors in this sector and help decision-makers evaluate solutions and decide where to invest.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
- Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
- GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
- Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.