Table of Contents
- Executive Summary
- Overview
- Considerations for Adoption
- GigaOm Sonar
- Solution Insights
- Near-Term Roadmap
- Analyst’s Outlook
- Report Methodology
- About Andrew Brust
- About GigaOm
- Copyright
1. Executive Summary
Generative AI (GenAI) was something most people hadn't even thought about as recently as early 2023, but now it seems to be everywhere. What's more, GenAI is a high priority for many organizations, both on the technology side of the house and, perhaps even more so, on the business side. The hype portrays GenAI as easy to implement and almost magical in its capabilities. But while it is undeniably powerful when well harnessed, GenAI technology is new, evolving, and not yet fully understood. It is being implemented by professionals who are still learning how to use it effectively, still observing what can go wrong with it, and still figuring out how to stabilize and control it. As a result, GenAI is largely being treated as an experimental technology, used primarily for proofs of concept, with cool demos as the deliverable.
For GenAI to become useful and production-ready for mainstream use, its development, testing, deployment, and ongoing management must be rigorous, precise, and masterful. Making that disciplined approach a reality is the focus of AI governance and the subject of this Sonar report. This is not a straightforward inquiry, though, because vendors in this space each define AI governance differently.
Sometimes those differences are slight; in other cases, two vendors’ definitions can be mutually exclusive. In aggregate, the only real point of consensus is that AI governance aims to bring order and repeatable, auditable processes to the often chaotic and improvised AI implementation approaches that are common today. The content and function of those processes, and the principles upon which they are based, however, are not yet standardized.
Standardizing the definition of AI governance is a battle, but one that must be fought. There are different fronts in this conflict, and vendors fight harder, or allocate more resources, on particular ones. Some vendors focus on fairness, trustworthiness, and responsibility in AI, while others emphasize regulatory compliance. Many concentrate on operational issues, and some are obsessed (in a positive way) with risk monitoring and mitigation. The last of these includes minimizing large language model (LLM) "hallucinations," maximizing contextual relevance, preventing harmful or offensive content, and protecting against exfiltration of sensitive or proprietary data, whether in prompts or in responses. Figure 1 illustrates the major areas of AI governance functionality emphasized by the various vendors whose platforms are profiled in this report.
Figure 1. AI Governance Functionality
Let's step back for a moment and recognize that AI governance isn't just about GenAI; it also applies to more established statistical AI in the form of machine learning (ML). Although GenAI seems to get all the attention, ML is arguably just as important. After all, ML is more established, uses far fewer compute resources (and is therefore less expensive), and works in a more algorithmically determinate way than GenAI, making it less risky.
However, the differences between ML and GenAI make unified governance less than straightforward. Some vendors in this report attempt this fusion, while others handle the two separately or focus primarily on one. That's another complex front in the battle for standardization.
Whether you are a developer, a business analyst, a data scientist, or a legal or compliance professional, AI governance should be of utmost importance to you. The once ad hoc approach most companies took with ML has become more disciplined, but that whole maturity cycle has now been reset with GenAI. The potential risks of using AI without comprehensive, auditable governance are considerable, and companies should act now to establish strong governance practices.
This report examines a range of vendors in the AI governance space, including established AI companies from the ML era and newer players from the GenAI timeframe. We also look at offerings from a couple of veteran enterprise software/cloud providers, as well as a cloud data warehouse provider that has made some strategic AI acquisitions. We can't cover every vendor claiming to be in this category, but those included in this report represent a good cross-section of the market.
This is GigaOm's first Sonar report on the AI governance space. It provides an overview of the market's vendors and their available offerings, outlines the key characteristics that prospective buyers should consider when evaluating solutions, and equips IT decision-makers with the information needed to select the best solution for their business and use case requirements.
ABOUT THE GIGAOM SONAR REPORT
This GigaOm report focuses on emerging technologies and market segments. It helps organizations of all sizes understand a new technology, its strengths and weaknesses, and how it can fit into an overall IT strategy. The report is organized into five sections:
- Overview: An overview of the technology, its major benefits, and possible use cases, as well as an exploration of product implementations already available in the market.
- Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology in an enterprise IT scenario. We look at table stakes and key differentiating features, as well as considerations for how to integrate the new product into the existing environment.
- GigaOm Sonar Chart: A graphical representation of the market and its most important players, focused on their value proposition and their roadmap for the future.
- Vendor Insights: A breakdown of each vendor’s offering in the sector, scored across key characteristics for enterprise adoption.
- Near-Term Roadmap: A 12- to 18-month forecast of the future development of the technology, its ecosystem, and the major players in this market segment.