Exponential data growth is no longer news, with the growth of unstructured data now outpacing structured data by several orders of magnitude. Many factors are contributing to this growth, and human-generated data is now only a fraction of the total data generated globally. Machine-generated data is growing the fastest, thanks to the growing number and quality of sensors and to increases in compute power and parallelism. In most cases, this is data that needs to be stored for long periods of time, or forever.
Managing storage capacity efficiently has become easier and somewhat less expensive, thanks to scale-out storage systems for files and objects. Moreover, the cloud enables vendors to expand the options available to enhance performance, capacity, and cold data archiving.
However, that doesn’t mean things have grown less complex. In the last 10 years, we have moved from storing data locally, mostly on-premises, to storing it in several repositories that present different characteristics and access methods. This is an accelerating trend—multi-cloud IT strategies are becoming quite common, and data is now created and consumed at the edge as well. The right mix all depends on business, application, and user requirements. Furthermore, as we discussed in a recent report (Key Criteria for Evaluating Hybrid Cloud Data Protection), finding a solution to protect data across several infrastructures and environments is still quite challenging.
Moreover, in this multi-cloud scenario, demanding regulations like GDPR require a different approach. Data protection and management processes are crucial for complying with ever-changing business requirements, laws, and organization policies.
We are finally coming to a point where merely storing data safely and for a long period of time does not, by itself, bring any benefit to an organization; it can quickly become a liability. However, with the right processes and tools, it is now possible to take control of data and exploit its hidden value, transforming it from a liability to an asset. Examples of this transformation are now common across all industries, with enterprises of all sizes reusing old data for new purposes with the help of technologies and computing power that weren’t available a few years ago.
With the right management solutions for unstructured data, it is possible to understand what data is actually stored, no matter how complex or dispersed, and to build a strategy that constrains costs while increasing the return on investment (ROI) for data storage.
Depending on the approach you choose, there are several potential benefits in building and developing a data management strategy for unstructured data, including better security and compliance, improved services for end users, cost reduction, and data reusability.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.