Key Criteria for Evaluating Machine Learning Operations (MLOps) v1.1

An Evaluation Guide for Technology Decision Makers

Table of Contents

  1. Summary
  2. MLOps Primer
  3. Report Methodology
  4. Decision Criteria Analysis
  5. Evaluation Metrics
  6. Analyst’s Take
  7. About Andrew Brust

1. Summary

Artificial Intelligence (AI) and machine learning (ML) have been enormously important and impactful in the technology world, not only in terms of the hype cycle but also with regard to their significance to enterprise customers. Today, the market has moved past the point of being wowed by the technology: a demo or proof of concept is no longer enough to convince buyers of its value or efficacy.

Now that ML has the market’s attention, it’s time to make it work at scale, in production, on a day-in and day-out basis. ML needs to be operationally manageable, with automation replacing the manual experimentation, deployment, and monitoring that have run rampant in its early execution.

This is where Machine Learning Operations (MLOps) comes in. Much like the DevOps culture that has established itself over the last decade, and the DataOps culture that followed, MLOps seeks to make both the development and the operational sides of machine learning rigorous, consistent, automated, and efficient. MLOps advances certain approaches from mere best practice to formalized procedures that are managed as a matter of policy, rather than simply through organization and good habits.

The appearance of MLOps tooling and platforms is a great sign of maturity for ML. The fact that so many MLOps solutions, both open source and commercial, now exist, and that a set of common MLOps capabilities is emerging, is an even better sign. Meanwhile, the very existence of so many solutions, and the fact that capabilities vary widely from platform to platform, makes it high time to report on this category. This Key Criteria report will help enterprise buyers become familiar with MLOps, track the MLOps state of the art, and develop a taxonomy and framework for understanding the technologies and vendors within it.

Among the key findings of the report:

  • Experimentation management and deployment are critical capabilities for MLOps platforms
  • ML model explainability and responsible AI-related capabilities are quickly becoming high priorities
  • MLOps platform criteria and capabilities are still evolving and are subject to frequent change
  • Accuracy monitoring and ground truth facilitation are increasingly important capabilities
  • Container technology supports far more efficient ML development, deployment, and reuse
