Table of Contents
- Summary
- Market Framework
- Maturity of Categories
- Considerations for Selecting ML/AI Storage
- Vendors to Watch
- Near-Term Outlook
- Key Takeaways
- About GigaOm
- Copyright
1. Summary
Interest in machine learning (ML) and artificial intelligence (AI) is growing among enterprise organizations. The market is quickly moving from infrastructures designed for research and development to turn-key solutions that respond quickly to new business requests. ML/AI are strategic technologies across all industries, improving business processes while enhancing the competitiveness of the entire organization.
ML/AI software tools are improving and becoming more user-friendly, making it easier to build new applications or reuse existing models for more use cases. As the ML/AI market matures, high-performance computing (HPC) vendors are now joined by traditional storage manufacturers, which usually focus on enterprise workloads. Even though the requirements are similar to those of big data analytics workloads, the specific nature of ML/AI algorithms and GPU-based computing demands more attention to throughput and $/GB, primarily because of the sheer amount of data involved in most projects.
Depending on several factors, including the organization's strategy, size, security needs, compliance, cost control, and flexibility, the infrastructure could be entirely on-premises, in the public cloud, or a combination of the two (hybrid), as shown in Figure 1. The most flexible solutions are designed to run in all of these scenarios, giving organizations ample freedom of choice. In general, long-term, large-capacity projects run by skilled teams are more likely to be developed on-premises, while smaller teams with less demanding projects usually choose the public cloud for its flexibility.
ML/AI workloads require infrastructure efficiency to yield rapid results. With the exception of the initial data collection, many parts of the workflow are repeated over time, so managing latency and throughput is crucial for the entire process. The system must handle metadata quickly while maximizing throughput, ensuring that the GPUs in the system are always fed at full capacity.
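To make this concrete, the sketch below shows one common way to keep GPUs fed: a prefetching data loader that overlaps storage reads with computation. It uses PyTorch's DataLoader; the synthetic dataset and all parameters are illustrative assumptions, not details taken from any specific solution covered in this report.

```python
# Minimal sketch: hide storage latency behind parallel, prefetching readers
# so the GPUs never wait for data. Dataset and parameters are hypothetical.
import torch
from torch.utils.data import Dataset, DataLoader

class SyntheticImageDataset(Dataset):
    """Stand-in for a data set served from high-performance shared storage."""
    def __len__(self):
        return 10_000

    def __getitem__(self, idx):
        # A real pipeline would read and decode a file from storage here.
        return torch.rand(3, 224, 224), idx % 1000

loader = DataLoader(
    SyntheticImageDataset(),
    batch_size=256,
    num_workers=8,       # parallel reader processes mask per-file latency
    prefetch_factor=4,   # batches each worker keeps in flight ahead of compute
    pin_memory=True,     # speeds up host-to-GPU transfers
)

for images, labels in loader:
    pass  # the forward/backward pass would run here, overlapped with I/O
```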
A single modern GPU is a very expensive component, able to ingest data at 6 GB/s or more, and each compute node can have multiple GPUs installed. Proximity between compute and storage also matters, which is why NVMe-based flash devices are usually selected for their parallelism and performance. What is more, training a neural network requires huge data sets, and therefore a large amount of storage capacity. For this reason, scale-out object stores are usually preferred because of their scalability, rich metadata, and competitive cost.
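A quick back-of-the-envelope calculation shows why aggregate throughput drives the storage design. The per-GPU rate comes from the figure cited above; the GPU count per node is a hypothetical configuration.

```python
# Illustrative arithmetic: sustained read throughput one compute node demands.
GPUS_PER_NODE = 8         # hypothetical configuration
GB_PER_SEC_PER_GPU = 6    # per-GPU ingest rate cited above

node_throughput = GPUS_PER_NODE * GB_PER_SEC_PER_GPU
print(f"Required sustained read throughput: {node_throughput} GB/s per node")
# -> 48 GB/s per node, which is why NVMe flash and parallel file systems
#    are favored for the active tier.
```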
Figure 1: Possible combinations of storage and computing resources in ML/AI projects
In this report, we discuss the most recent storage architecture designs and innovative solutions, deployed on-premises, in the cloud, and in hybrid scenarios, aimed at supporting ML/AI workloads for enterprise organizations of all sizes.
Key findings:
- Enterprise organizations are aware of the strategic value of ML/AI for their business and are increasing investments in this area.
- End users are looking for turn-key solutions that are easy to implement and that deliver a quick return on investment (ROI).
- Many of the solutions available are based on a two-tier architecture, with a flash-based, parallel, scale-out file system for active data processing and object storage for capacity and long-term data retention; a sketch of this staging pattern follows this list. There are also some innovative solutions that take a different approach, integrating the two tiers in a single system.
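As an illustration of the two-tier pattern in the last finding, the hedged sketch below stages a training data set from an S3-compatible object store (the capacity tier) onto a local NVMe scratch file system (the performance tier) before processing begins. The bucket name, prefix, and mount point are hypothetical, and the S3 endpoint and credentials are assumed to come from the environment.

```python
# Hypothetical sketch: copy a data set from the capacity tier (object store)
# to the performance tier (local NVMe scratch) ahead of a training run.
import os
import boto3

s3 = boto3.client("s3")           # endpoint/credentials from the environment
BUCKET = "training-data"          # hypothetical bucket on the capacity tier
PREFIX = "datasets/train/"        # hypothetical data set prefix
SCRATCH = "/nvme/scratch/train"   # hypothetical NVMe mount point

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):   # skip directory marker objects
            continue
        dest = os.path.join(SCRATCH, os.path.relpath(obj["Key"], PREFIX))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(BUCKET, obj["Key"], dest)  # pull onto the fast tier
```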