
GigaOm Radar for Cloud Resource Optimization v3.0

1. Executive Summary

Cloud resources that are not optimized can prove costly. Cloud resource optimization solutions provide a holistic view of an organization’s public or private cloud infrastructure. They deliver resource configuration suggestions that balance cost, performance, and other objectives. The most valuable of these solutions provide effective and reliable resource configuration recommendations, integrate into deployment pipelines, and enhance management processes.

Furthermore, as cloud usage continues to outpace the rate at which IT operational analysts can be hired, automated optimization of these resources directly impacts both the bottom line of the cloud bill and the effectiveness of existing IT staff (who are freed up to work on higher-value business objectives). Spending an hour determining whether a machine would benefit from more or less vCPU may seem hardly worth the effort, but it can reveal an imbalance that, if neglected at scale, generates significant excess spending or risk. This is the kind of task the analytics engines within resource management solutions can handle effectively and expediently.

As you evaluate cloud resource optimization solutions, it’s important to keep the following in mind:

  • Cloud resource optimization is closely aligned with the financial operations (FinOps) and cloud management platform (CMP) tooling categories, and solutions may lean toward one of those categories as part of a strategy to offer a single, consolidated platform.
  • Private cloud, public cloud, and Kubernetes resources all require oversight and optimization, and solutions tend to be stronger in one area than another. You’ll need to determine where your resource challenges exist today and what improvements you want to see 12 to 18 months from now.
  • It can be beneficial to delegate resource and cost optimization to individual teams, with some limited central oversight. Individual teams are closely aligned with the performance needs of their applications and, if motivated properly and given the right tools, will ensure a balance is reached between cost and performance.

This is our third year evaluating the cloud resource optimization space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.

This GigaOm Radar report examines 11 of the top cloud resource optimization solutions in the market and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the category and its underlying technology, identify leading cloud resource optimization offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

2. Market Categories and Deployment Types

To help prospective customers find the best fit for their use case and business requirements, we assess how well cloud resource optimization solutions are designed to serve specific target markets and deployment models (Table 1).

For this report, we recognize the following market segments:

  • Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
  • Large enterprise: Here, offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category have a strong focus on flexibility, performance, data services, and features that improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.

In addition, we recognize the following deployment models:

  • Software as a service (SaaS): These solutions are available only in the cloud, where they are designed, deployed, and managed by the service provider. The advantage of this type of solution is the ease of integration with other services offered by the cloud service provider. These components may support the installation of remote agents into customer-owned environments.
  • Self-hosted: These solutions are deployed and managed by the customer, often within an on-premises data center, a dedicated virtual private cloud (VPC) of a cloud provider, or even a customer-specific virtual machine (VM) image on public infrastructure as a service (IaaS). This approach drives up operational costs but also allows greater flexibility with respect to the data collected by the platform.

Table 1. Vendor Positioning: Target Market and Deployment Model

Columns: Target Market (SMB, Large Enterprise) and Deployment Model (SaaS, Self-Hosted)

Vendors evaluated:
Akamas
Apptio, an IBM Company
BMC
Broadcom (VMware)
Densify
Flexera
Harness
IBM
NetApp
OpenText
StormForge

Table 1 components are evaluated in a binary yes/no manner and do not factor into a vendor’s designation as a Leader, Challenger, or Entrant on the Radar chart (Figure 1).

“Target market” reflects which use cases each solution is recommended for, not simply whether that group can use it. For example, if an SMB could use a solution but doing so would be cost-prohibitive, that solution would be rated “no” for SMBs.

3. Decision Criteria Comparison

All solutions included in this Radar report meet the following table stakes—capabilities widely adopted and well implemented in the sector:

  • Hybrid multicloud support
  • VM and container performance analysis
  • Basic cost analysis or FinOps integration
  • Role-based access control (RBAC) and self-service
  • Dashboards and reporting
  • Agentless or passive monitoring integrations
  • Rules-based dynamic discovery
  • Cost/performance balancing

Tables 2, 3, and 4 summarize how each vendor included in this research performs in the areas we consider differentiating and critical in this sector. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the relevant market space, and gauge the potential impact on the business.

  • Key features differentiate solutions, highlighting the primary criteria to be considered when evaluating a cloud resource optimization solution.
  • Emerging features show how well each vendor is implementing capabilities that are not yet mainstream but are expected to become more widespread and compelling within the next 12 to 18 months.
  • Business criteria provide insight into the nonfunctional requirements that factor into a purchase decision and determine a solution’s impact on an organization.

These decision criteria are summarized below. More detailed descriptions can be found in the corresponding report, “GigaOm Key Criteria for Evaluating Cloud Resource Optimization Solutions.”

Key Features

  • Integration with DevOps and ITSM tools: Providing a variety of easy and flexible integrations with common third-party DevOps and enterprise ITSM tools is important for this sector’s value proposition. Such integrations should require minimal effort to establish and have little impact on existing workflows and processes. The degree to which the available integrations are supported out of the box is a factor.
  • Automatic resource reconfiguration: The ability to automatically apply certain types of recommendations based on configurable rules or characteristics reduces the effort required to derive value from a cloud resource optimization solution and yields finer-grained results.
  • Reconfiguration through managed pipelines: Although many optimization opportunities can be provisioned directly through the respective cloud provider, organizations often have existing DevOps pipelines, infrastructure provisioning tools, change control processes, and auditing constraints that require some or all changes be routed through managed pipelines that are external to the cloud resource optimization solution.
  • AI/ML-driven resource predictions: The sophistication and depth of the engine used to generate resource recommendations is a key determinant of how effectively cloud resources can be optimized. AI-driven systems are able to surface insights from data that can’t be achieved through simpler statistical techniques, and they are able to improve with use in a way that is relevant to each user.
  • Intelligent resource grouping: Any cloud system (application, service, content, and so forth) is composed of many cloud resources, some distinct and some shared. Even though it is the individual resources that are being optimized, it is helpful if a solution can automatically identify which resources should be aligned into groups for system-level management.
  • Abandoned resource identification: Beyond the implied capability to reduce provisioned resources that for whatever reason aren’t being used, solutions should be able to identify orphaned resources that are remnants of decommissioned systems or extraneous provisioning processes, and remove them entirely.
  • Kubernetes resource management: In addition to incorporating containerized resources into the optimization regimen, a solution can achieve even better results through deep optimization of Kubernetes clusters, which should include cluster configurations, topologies, and advanced autoscaling techniques. A minimal rightsizing sketch follows this list.
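To make the rightsizing concept behind several of these features concrete, the following minimal Python sketch derives container CPU and memory requests from observed usage samples by taking a high percentile of demand and adding headroom. It is an illustrative simplification only, not any vendor’s algorithm; the percentile and headroom values are arbitrary assumptions.

```python
# Minimal rightsizing illustration (not any vendor's algorithm): recommend
# container requests as a high percentile of observed usage plus headroom.
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Recommendation:
    cpu_millicores: int
    memory_mib: int

def recommend(cpu_samples_millicores: list[float],
              mem_samples_mib: list[float],
              percentile: int = 95,
              headroom: float = 1.2) -> Recommendation:
    """Recommend requests as the chosen usage percentile times a headroom factor."""
    def pct(samples: list[float]) -> float:
        # quantiles(..., n=100) returns the 1st through 99th percentile cut points
        return quantiles(samples, n=100)[percentile - 1]
    return Recommendation(
        cpu_millicores=int(pct(cpu_samples_millicores) * headroom),
        memory_mib=int(pct(mem_samples_mib) * headroom),
    )

# Example: a container that mostly idles but bursts to ~400m CPU / ~900 MiB
cpu = [120, 150, 180, 400, 130, 160, 140, 390, 170, 155]
mem = [512, 520, 530, 900, 515, 525, 518, 880, 522, 519]
print(recommend(cpu, mem))  # requests sized to the 95th percentile plus 20% headroom
```

Production engines layer far more onto this, including seasonality, workload grouping, policy constraints, and confidence in the underlying data, which is where the differentiation among the vendors in Table 2 shows up.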

Table 2. Key Features Comparison

Ratings use a six-point scale: Exceptional, Superior, Capable, Limited, Poor, or Not Applicable.

Key features evaluated: Integration with DevOps & ITSM Tools, Automatic Resource Reconfiguration, Reconfiguration through Managed Pipelines, AI/ML-Driven Resource Predictions, Intelligent Resource Grouping, Abandoned Resource Identification, and Kubernetes Resource Management.

Average score by vendor:
Akamas 3.1
Apptio, an IBM Company 3.1
BMC 3.3
Broadcom (VMware) 3.1
Densify 4.1
Flexera 3.3
Harness 2.9
IBM 3.1
NetApp 4.3
OpenText 2.1
StormForge 2.4

Emerging Features

  • Objective-based optimization: While the primary objective of cloud resource optimization is cost reduction (and, implicitly, maintaining a certain level of performance), some organizations may have additional goals or metrics to evaluate. A solution’s capability to incorporate custom objectives or metrics as factors in its recommendation engine can add considerable value.
  • Workload simulation and capacity planning: A big challenge for engineering teams is predicting the capacity that new workloads will require before they are added to the system. Solutions that can simulate new workload scenarios and provide accurate resource requirements help engineering teams maintain adequate capacity across their cloud platforms. This forecasting function helps an organization keep pace with development without overprovisioning (wasting resources) or underprovisioning (affecting stability); a minimal forecasting sketch follows this list.
  • Serverless optimization: The actual meaning of serverless computing is nebulous and often driven by context, but here we consider a solution’s capability to manage cloud-native or platform as a service (PaaS) resources other than containers, such as AWS Lambda and Azure Functions, or platforms such as AWS Fargate. Although these resources are presumably fully optimized by the provider, they are still potentially part of an organization’s cloud footprint, and solutions may add value by providing visibility into their cost, utilization, and interdependencies with resources that can be directly optimized. Solutions may also identify potential cost savings by migrating loads to or from architectures that the solution can optimize.
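As a minimal illustration of the capacity planning side of this feature, the sketch below fits a linear trend to daily peak utilization and projects when provisioned capacity would be exhausted. It is a deliberately simple example, not how any evaluated product forecasts; real solutions account for seasonality, new workload scenarios, and much richer data. (Requires Python 3.10+ for statistics.linear_regression.)

```python
# Minimal capacity-forecast sketch (illustrative only, not any vendor's
# method): fit a linear trend to daily peak utilization and estimate how
# many days remain before provisioned capacity is exhausted.
from statistics import linear_regression

def days_until_exhaustion(daily_peaks: list[float], capacity: float) -> float | None:
    """Return projected days until the trend crosses capacity, or None if flat/declining."""
    days = list(range(len(daily_peaks)))
    slope, intercept = linear_regression(days, daily_peaks)
    if slope <= 0:
        return None
    crossing_day = (capacity - intercept) / slope
    return max(0.0, crossing_day - (len(daily_peaks) - 1))

# Example: peak utilization climbing roughly 1.5 points per day against an 80% ceiling
peaks = [52, 54, 55, 57, 58, 60, 61, 63, 65, 66]
print(days_until_exhaustion(peaks, capacity=80.0))  # roughly 9 days
```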

Table 3. Emerging Features Comparison

Ratings use a six-point scale: Exceptional, Superior, Capable, Limited, Poor, or Not Applicable.

Emerging features evaluated: Objective-Based Optimization, Workload Simulation & Capacity Planning, and Serverless Optimization.

Average score by vendor:
Akamas 4.7
Apptio, an IBM Company 1.7
BMC 1
Broadcom (VMware) 3.3
Densify 2.7
Flexera 2.3
Harness 0.3
IBM 2.3
NetApp 2
OpenText 3.3
StormForge 1.7

Business Criteria

  • Cost: This criterion evaluates the simplicity, transparency, and scalability of the solution’s cost model. This includes licensing of the product itself and whether professional services or a significant degree of custom development is likely to be needed.
  • Flexibility: This criterion assesses a solution’s potential to be applied to the breadth of an organization’s technology assets, processes, and workflows without requiring changes in the way people work or introducing awkward compromises or work-arounds.
  • Scalability: Here, we consider a solution’s capacity for managing very large footprints, the availability characteristics of its architecture, and the level of scale actually demonstrated with large cloud footprints.
  • Usability: This criterion assesses how easy the solution is to use in day-to-day operation, including training, setup, and any specialized expertise or effort required.
  • Ecosystem: This criterion evaluates overall community engagement with the solution, the depth of the available talent pool familiar with supporting the product, the quality of the knowledge base, the availability of a user community through forums or Slack, open source contributions and activity, and the number of third parties that supply integrations or aftermarket support for the solution.

Table 4. Business Criteria Comparison

Ratings use a six-point scale: Exceptional, Superior, Capable, Limited, Poor, or Not Applicable.

Business criteria evaluated: Cost, Flexibility, Scalability, Usability, and Ecosystem.

Average score by vendor:
Akamas 3.4
Apptio, an IBM Company 3.2
BMC 2.6
Broadcom (VMware) 3.4
Densify 4
Flexera 3.8
Harness 3.6
IBM 4.2
NetApp 4.6
OpenText 3.2
StormForge 3.2

4. GigaOm Radar

The GigaOm Radar plots vendor solutions across a series of concentric rings with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrowhead that projects each solution’s evolution over the coming 12 to 18 months.

Figure 1. GigaOm Radar for Cloud Resource Optimization

As you can see in the Radar chart in Figure 1, the cloud resource optimization market is fairly mature by this point. There are good options available, from feature play tools that address specific needs and enable deep performance optimization, to platforms that allow IT and finance to manage most aspects of their cloud workloads.

The only real cluster of solutions is found in the Maturity/Platform Play quadrant. These are tools that have long been trusted by large enterprises to optimize cloud resources, and they tend to be product suites (or highly integrated into complementary products) that cover other common cloud concerns such as FinOps, management, and observability.

Two products fall within the Maturity/Feature Play quadrant. These reliable and established tools focus on optimizing cloud resources and can be used standalone for that purpose or to augment other platforms where more comprehensive performance optimization is needed.

Two more solutions sit in the Innovation/Platform Play quadrant. In this Radar, their designation as Innovation vendors reflects a slightly different approach to optimization that will fit well for some organizations. It does not imply that the platforms themselves are new. In fact, one software component featured in the lower right of this Radar chart, used for workload simulation, traces its history back three decades.

Finally, we have a newcomer in this year’s report in the Innovation/Feature Play quadrant. StormForge is a relatively new tool with a strong focus on automatic optimization of containers, which seems to be where most new (or migrated) cloud software is likely to reside.

The most significant overall change from last year’s Radar is the growing maturity of the market, as most tools have settled into their niches and focus on refining ever more nuanced and effective methods for preventing waste and ensuring good availability of cloud infrastructure.

In reviewing solutions, it’s important to keep in mind that there are no universal “best” or “worst” offerings; there are aspects of every solution that might make it a better or worse fit for specific customer requirements. Prospective customers should consider their current and future needs when comparing solutions and vendor roadmaps.

INSIDE THE GIGAOM RADAR

To create the GigaOm Radar graphic, key features, emerging features, and business criteria are scored and weighted. Key features and business criteria receive the highest weighting and have the most impact on vendor positioning on the Radar graphic. Emerging features receive a lower weighting and have a lower impact on vendor positioning on the Radar graphic. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and roadmaps.

Note that the Radar is technology-focused, and business considerations such as vendor market share, customer share, spend, recency or longevity in the market, and so on are not considered in our evaluations. As such, these factors do not impact scoring and positioning on the Radar graphic.

For more information, please visit our Methodology.

5. Solution Insights

Akamas

Solution Overview
Akamas is a cloud resource optimization solution with unique capabilities stemming from a design approach that is fundamentally different from the other tools evaluated. At its core, Akamas is an ML-driven solution that uses reinforcement learning to identify the best configuration for a given workload. It stands out as one of the few solutions evaluated that optimize based on the full stack, including the application layer and various language runtimes (such as Java, Node.js, and Golang). The company also places a strong emphasis on safety throughout the product so that unintended effects are minimized.

Akamas has unmatched capabilities for optimizing AWS workloads at the platform level, and its model can be extended to other public clouds as well as private clouds or even specialized computing environments. Akamas extends its resource optimization to Kubernetes clusters running in any environment. For more insight into that topic, see the GigaOm Key Criteria and Radar reports on Kubernetes resource management solutions.

Perhaps as a consequence of its focus on the full stack, Akamas does not currently offer a SaaS model, though it does offer a cloud-hosted option that is fully managed by Akamas. That instance is dedicated to the customer, a distinction that may not matter in practice. Deployment is limited to a single VM instance, which can be either self-hosted or cloud-hosted. This presents difficulties for both the SMB and enterprise segments because setup requires a high degree of hands-on effort and likely professional consultation from Akamas, yet the deployment is constrained in terms of scalability and high availability. On the other hand, no agent is required, so once the initial setup is complete, optimizing additional applications should be a breeze.

Strengths
Akamas provides top-of-the-line analysis for performance optimization and workload simulation across a full stack, extending well into AWS, Azure, and GCP at the platform level and virtually anywhere else with some extra effort.

The tool scored high on AI/ML-driven resource predictions, generating performance optimization recommendations whose quality rivals that of top-tier platforms.

Akamas’s workload simulation capabilities are among the most extensive we’ve evaluated, with an AI-driven experimental approach used to validate application and system behavior under real-life workload conditions. Users have noted the tool’s effectiveness at periodic and seasonal analysis. It also scored well on objective-based optimization, with the explicit ability to define custom goals and constraints based on virtually any parameter, including data exposed from the application stack. This capability could easily be extended beyond resource optimization, leveraging the comprehensive AI engine to assist in achieving any kind of runtime goal an application might have.

Challenges
Constraints related to the deployment model limit Akamas’s scalability. Although there are no concerns about the application’s ability to perform reliably and maintain availability under load, extra effort would be required to manage a portfolio of hundreds of legacy applications for which deep optimization is not a goal. This limits Akamas’s appeal to large organizations with a large existing cloud footprint that just need a fast and easy way to optimize infrastructure.

Purchase Considerations
Akamas offers simple pricing based on the number of concurrent optimizations. This aligns neatly with how many organizations (large or small) will likely use the tool for deep optimization and analytics on high-priority projects. While the overall market seems to be placing a growing emphasis on cost savings and highly integrated suites of cloud management capabilities, there will always be a place for tools that focus strongly on performance optimization, and it is clear that Akamas will be a driving innovator in this space in the coming years.

Organizations of any size that have a number of important applications driving revenue or critical business functions will benefit from the comprehensive capability Akamas provides to optimize the infrastructure of those applications, as well as to surface insights into their runtime behavior that would be otherwise unobtainable. It is also a good fit for organizations that want to bring that level of capability to their entire portfolio—at the expense of requiring a bit more up-front effort and analysis.

Radar Chart Overview
Akamas is positioned as a Leader in the Maturity/Feature Play quadrant. Despite its relative newness to the market and somewhat awkward deployment model, Akamas has such a clear design emphasis on safe and iterative optimization that users can apply its capabilities without worrying about negatively impacting critical business functions. The company is designated as an Outperformer in recognition of its overall capability to deeply optimize key applications or services. A perfect fit for certain users, these capabilities vastly exceed those of any other tool evaluated.

Apptio, an IBM Company: Cloudability

Solution Overview
In August 2023, IBM acquired Apptio and immediately began integrating Cloudability with Turbonomic. Although they are still available separately, the two products augment each other well: Cloudability gives organizations the insights and recommendations needed to understand and eliminate waste from their cloud spend, while Turbonomic generates trustworthy optimization decisions that can be automated. Integration of the products has already begun with a recently announced ability for FinOps practitioners to surface key optimization metrics from Turbonomic within the Cloudability interface, which can help facilitate deeper cost analysis and partnerships between engineering, business, and finance teams. As time goes on, we can expect Turbonomic and Cloudability to be further integrated along these lines.

Cloudability is a full SaaS solution that focuses on cost management and optimization. While the overall emphasis of the tool is on financial accountability and maximizing the value of cloud investments, it does have a rightsizing engine that analyzes a comprehensive range of input data to make appropriate recommendations. One example is tracking of GPUs, yielding a data point that only a few of its competitors factor into their analyses.

Cloudability scored high on the reconfiguration through managed pipelines key feature, in keeping with its strong design philosophy of driving all rightsizing recommendations through a managed pipeline for implementation. Many solutions in this space support actioning through managed workflows, but they provide (and seem to prefer) the ability to apply recommendations automatically through integration with the respective provider software development kits (SDKs) and infrastructure-provisioning tools. Apptio eschews this approach, citing the industry best practice of managing all infrastructure changes through infrastructure as code (IaC) and source control. While some users may find this inconvenient or difficult to support in practice, the growing number of organizations that are dedicated to upholding this best practice as a matter of policy will appreciate Apptio’s intentional enforcement and robust support for managed workflows as a key benefit.

Following this theme of providing context and informing stakeholders of their options, the recommendations Cloudability produces are not narrow or rigid but rather display a range of options illustrating potential impacts and helping users contextualize which course of action may best suit their needs.

With its strong focus on cost management and a broader range of FinOps concerns, the solution provides good cost forecasting and benchmarking. However, Cloudability’s capabilities around intelligent resource grouping, big data resource optimization, and workload simulation and capacity planning are less developed than solutions that target explicit performance tuning.

Strengths
Cloudability provides a great solution for cloud cost management, with a notably capable rightsizing engine that ensures recommendations will not jeopardize performance. Organizations with strong policies on managing infrastructure changes and in need of additional context around cost impacts will appreciate Cloudability’s design philosophy.

Cloudability offers Kubernetes resource management through its container cost allocation module and capacity planning through its workload planning feature.

Challenges
Despite the top-notch capabilities of Cloudability’s rightsizing engine, it doesn’t provide many paths to tune for performance beyond the side-by-side presentation of individual recommendations. While this is consistent with the overall placement of the tool as a more FinOps-focused offering, it suggests the need for further improvement of performance-tuning and workload-simulation capabilities.

Purchase Considerations
Cloudability is priced according to a percentage of monitored cloud spend, with tiers offering lower rates at higher volumes. Because Cloudability is becoming tightly integrated with Turbonomic and both are offered by the same company, the two should be considered together in purchasing discussions with IBM.

Medium-sized businesses or enterprises that need strong insights into cloud spending and optimization, including identification of potential waste, will find Apptio Cloudability valuable. This is especially true for organizations that need or prefer more control over how and when changes are provisioned in cloud environments.

Radar Chart Overview
IBM’s Cloudability is positioned in the Maturity/Platform Play quadrant. It shared some proximity with IBM Turbonomic even prior to IBM’s recent acquisition of Apptio. A press release clearly communicates IBM’s vision for how the overall platform will continue to evolve as a trusted and reliable solution for a comprehensive range of cloud management needs, including resource optimization.

BMC: Helix Continuous Optimization

Solution Overview
BMC Helix is offered as both a SaaS and an on-premises solution. Each offering supports “continuous optimization,” a set of capabilities previously named “capacity optimization.” These capabilities use the familiar extract, transform, load (ETL) terminology to describe connectors that collect data from private and public cloud APIs and store it for later analysis. The solution provides out-of-the-box support for a wide range of public and private cloud ETLs, along with Moviri and Sentry ETLs that connect to other enterprise systems (such as Splunk, AppDynamics, Elasticsearch, Kubernetes, and storage arrays).

BMC Helix is a suite of products delivered through SaaS. While the continuous optimization capabilities are available on their own, they’re best paired with other solutions to gain true efficiency and visibility within the organization. BMC Helix Discovery will discover and group workload dependencies, while BMC Helix Intelligent Automation is required to automate optimization recommendations or integrate into enterprise workflows (such as infrastructure provisioning pipelines or ServiceNow approvals). These two integrations result in BMC’s high ranking on the intelligent resource grouping and automatic resource reconfiguration key features.

In addition to API-based ETLs, BMC Helix offers an agent-based metric-collection method for systems that may not share their metrics through an API. This approach extends BMC’s strength in managing all data-center and public cloud server instances within a single solution. This feature is even more useful to customers interested in migrating data-center resources to the cloud because the solution has migration and scenario planning capabilities built in.

BMC has gone to great lengths to provide integrations across its line of IT management systems and has taken on some of the management burden with the Helix SaaS offering.

Strengths
BMC Helix has an extensive set of ETLs that integrate data from many enterprise and public cloud systems. The SaaS offering is BMC’s preferred approach, and it reduces administrative overhead.

BMC Helix Continuous Optimization scored high on intelligent resource grouping because BMC Helix Discovery can be leveraged to use machine learning to discover associations among resources. It also scored well on scalability, owing to its demonstrated use at high scale by enterprises.

Challenges
Automation of optimization recommendations may require extensive professional services and may be difficult to maintain at scale. Real value is likely to be derived only from integration with other products.

Purchase Considerations
Multiple products are required, and most of the key features are supported through additional add-ons to Continuous Optimization. For organizations that already license the full BMC platform, competent cloud resource optimization is available through this set of tools.

Large enterprise customers that already have a deep investment in BMC technologies will likely benefit most from the solution. They may have the capabilities already licensed or may be able to negotiate for the capabilities with an enterprise license agreement. To get the best value from the continuous optimization capabilities, customers should also leverage other features, such as those provided by discovery and intelligent automation products.

Radar Chart Overview
BMC Helix Continuous Optimization is positioned in the Maturity/Platform Play quadrant. The breadth of capabilities present in BMC’s platform is certainly noteworthy, and the company’s designation as a Challenger on the Radar is a reflection of the fact that the platform’s focus is less squarely aligned with the context of the particular features and criteria we are evaluating in this report. In other contexts, the product is certainly a viable solution to cloud resource optimization for organizations whose goals more closely match BMC’s target.

Broadcom (VMware): VMware Aria and VMware Tanzu

Solution Overview
In November 2023, Broadcom acquired VMware. VMware by Broadcom (“VMware”) is no stranger to the private cloud, having led in the space with the introduction of the VM-driven data center that was orchestrated by vCenter, an API-accessible management solution. VMware has completed a rebranding of its cloud management portfolio, expanding its features to include deeper operational insights and automation capabilities across VMware Aria (formerly VMware vRealize), VMware Tanzu CloudHealth (formerly CloudHealth by VMware Suite), and VMware Tanzu Observability (formerly Tanzu Observability by Wavefront).

Existing VMware customers who want to use the same solutions in the public cloud can now take advantage of VMware’s investments to run on AWS (VMware Cloud on AWS), Azure (Azure VMware Solution), and Google (Google Cloud VMware Engine). However, many customers have been deploying workloads to the public cloud engines already (without the VMware tooling) and wanted a way to optimize their public and private-cloud use via a single tool. VMware recognized this need and offered Tanzu CloudHealth to meet it.

Tanzu CloudHealth is the primary product in this category that monitors public cloud infrastructure use to drive optimization recommendations. It’s a well-balanced solution that provides FinOps capabilities along with configuration efficiency. Tanzu CloudHealth customers can improve resource utilization with tailored rightsizing recommendations, manage commitment-based discounts throughout their lifecycle, and drive continuous optimization with governance policies and automated actions that execute changes in their public cloud environment. VMware can also provide security and compliance assurance via VMware Tanzu Guardrails (formerly VMware CloudHealth Secure State).

To optimize both private and public cloud resources, it’s recommended that customers use a combination of Aria Operations, Tanzu Observability, and Tanzu CloudHealth. When paired, Aria Operations and Tanzu Observability provide contextual observability and business insights, along with deep root-cause analysis for the entire stack, from the underlying infrastructure to the application. Aria Operations has extensive capabilities to manage performance, availability, capacity, cost, and compliance in a hybrid infrastructure, while the management pack for Tanzu CloudHealth acts as a bridge that connects these two worlds and brings the costs and resource usage of public cloud from Tanzu CloudHealth into Aria Operations. Tanzu CloudHealth and Aria Operations have a bidirectional integration in which vSphere-based data from Aria Operations can also be ingested into the Tanzu CloudHealth platform. Aria Operations is then able to automate the workflows required to dynamically modify cloud resource configurations based on the recommendations collected by Tanzu CloudHealth. Tanzu CloudHealth can also ingest usage and performance data from Tanzu Observability and use these metrics to calculate rightsizing recommendations.

VMware provides solutions trusted by the largest organizations and is also a fit for SMBs. With its experience in deploying massively scaled private-cloud infrastructure (more than 300,000 VMs in a single instance), and with Tanzu CloudHealth supporting deployments of more than 1,500 cloud accounts and a monthly spend of $30 million, VMware excels in its ability to meet any demand a customer might throw at it, giving it a high ranking on scalability.

Strengths
VMware’s Aria and Tanzu solutions are an easy fit for customers already using other VMware products, and the SaaS offering makes it easier for new customers to get up and running. Tanzu CloudHealth offers greater control over access and permissions across an organizational hierarchy with FlexOrgs, enables granular data analysis to answer critical business questions with FlexReports, allows cloud bills to be manipulated for accurate chargeback with Custom Line Items, and offers a complete billing and management solution for managed service providers. VMware delivers confidence when deploying at scale, and it scored high for scalability.

VMware also scored high on the emerging feature of workload simulation and capacity planning, with the ability to generate consolidation estimates for presizing data center environments and perform capacity assessments.

Challenges
For cloud resource optimization across public cloud, private cloud, and hybrid cloud environments, customers will need a combination of Aria and Tanzu products. VMware’s greatest benefits are realized by customers who have a deep existing investment in other VMware technologies. For this reason, VMware did not score well on the cost business criterion.

Purchase Considerations
The Aria solutions are available as components of VMware Cloud Foundation, a self-hosted private and hybrid cloud solution, and VMware vSphere Foundation, an enterprise workload engine for data center optimization in vSphere environments. Tanzu CloudHealth and Tanzu Observability are available as standalone offerings and can also be purchased as add-ons to Cloud Foundation. To achieve the goals set out in this category, users will need multiple VMware products.

Medium-sized businesses or enterprises with a large footprint of cloud resources, particularly VMs alongside containerized workloads, would benefit from VMware’s comprehensive resource management. This is especially the case for those with Red Hat OpenShift or Tanzu deployments.

Radar Chart Overview
Broadcom (VMware) is positioned in the Maturity/Platform Play quadrant, and its capabilities are expansive. Its designation as a Challenger on the Radar reflects the fact that the vendor’s focus is not as tightly aligned with the context of the particular decision criteria we evaluated in this report. In other contexts, the offering is certainly a viable cloud resource optimization solution for organizations whose goals more closely match Broadcom’s.

Densify

Solution Overview
Densify provides a full range of cloud resource optimization features without being tied to a larger cloud management or FinOps platform. As a result, it’s more attractive to enterprises that simply want to gain efficiencies without having to invest in a large cloud management platform or service-heavy rollout. This also makes the solution a good fit for certain SMBs with big cloud spends looking specifically to optimize for performance and cost savings.

Densify offers a SaaS solution that can optimize private (VMware) and public cloud (AWS, Azure, GCP) VM instances and container workloads with a low-touch initial deployment.

Though the solution’s capabilities remain highly competitive across private and public cloud deployments, the company is clearly putting a lot of emphasis on the optimization and analysis of container workloads and on developing a deep policy framework. This framework provides guardrails that meaningfully influence every aspect of the system. In addition to enabling granular tuning of its resource optimization analysis for different portions of the environment and applications, Densify’s policies allow customers to codify the desired operational characteristics of workloads. Options include CPU and memory utilization thresholds, FinOps multipliers for tuning how aggressively spend is optimized, risk tolerance (for example, optimize to typical versus busiest day), high availability requirements, disaster recovery and business continuity requirements, approval policies, automation policies, catalog restrictions, or any custom policies. These policies allow a high degree of tuning to ensure that the recommendations are correct and the infrastructure is properly optimized to the types of applications being hosted.

Densify is focused on generating highly accurate recommendations that can be acted upon confidently and automatically. Through a combination of workload pattern analysis, benchmarks, deep policies, API-driven integrations, app owner reports, effort rankings, and ITSM integration, the Densify solution provides deeper and more effective optimization than solutions that focus more on billing or on “advisors” that generate more basic suggestions requiring extensive review.

Finally, it is worth noting that Densify ranks high on the reconfiguration through managed pipelines key feature. While Densify can apply recommendations automatically in some cases, it can also provide recommendations via its API for integration into any infrastructure deployment pipeline, or through the built-in Terraform integration. This workflow retains full support for the policy framework and offers considerable potential for optimizing cloud environments according to customer needs.
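As an illustration of how such an API-to-pipeline handoff can work in practice, the sketch below pulls rightsizing recommendations from a hypothetical optimization API and writes them into a Terraform variable file for the pipeline to apply on its next run. The endpoint, authentication, and field names are invented placeholders for illustration, not Densify’s actual API.

```python
# Illustrative sketch only: feeding optimization recommendations into a
# managed Terraform pipeline. The endpoint, token handling, and field names
# below are hypothetical placeholders, not a specific vendor's API.
import json
import urllib.request

API_URL = "https://optimizer.example.com/api/recommendations"  # hypothetical
TOKEN = "REPLACE_ME"

def fetch_recommendations() -> list[dict]:
    req = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def write_tfvars(recommendations: list[dict],
                 path: str = "instance_types.auto.tfvars.json") -> None:
    # Map each workload to its recommended instance type; Terraform reads this
    # file on the next pipeline run, so the change flows through source
    # control, review, and the normal apply process rather than being made
    # directly against the cloud provider.
    sizes = {r["workload"]: r["recommended_instance_type"] for r in recommendations}
    with open(path, "w") as f:
        json.dump({"instance_types": sizes}, f, indent=2)

if __name__ == "__main__":
    write_tfvars(fetch_recommendations())
```

Routing changes through the pipeline this way keeps the policy framework and existing change controls in the loop.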

Strengths
Densify excels in every capability important to cloud resource and Kubernetes optimization. What really makes it stand out is its objective-based optimization, for which it received a high score. This is enabled by its policy framework and its software resource optimization (SRO) capability. One example that illustrates this capability well is using SRO to identify workloads from a specific vendor and establishing policies to ensure that the minimum specifications established by that vendor are incorporated into the system’s analytics and recommendations. Such an approach would make certain that cost and performance are optimized while simultaneously ensuring that service-level agreements (SLAs) remain enforceable.

Challenges
Its comprehensive analytics and approach to generating recommendations mean that Densify inherently requires a little more effort from the user when it comes to interpretation of data and deciding on subsequent action. The solution provides app owner reports and effort estimates to aid in this interpretation, but such robust capability necessarily comes at the cost of requiring a bit more mindshare from the user than solutions focused on purely automatic optimization. For this reason, Densify had a moderate score on usability.

Purchase Considerations
The Densify license model is an annual subscription based on the size of the customer environment as measured by the number of VMs and container pods/nodes. The cost decreases with volume, and flexibility is provided for environment volatility over short periods. The subscription includes customer management and product experts that assist in the ongoing use of the product.

Densify provides a fairly straightforward licensing structure based on the number of targets undergoing optimization, but customers may also acquire the technology through Densify’s partnership with Intel, by which Intel funds a year of Densify for qualifying organizations.

Enterprises with large footprints consisting of any combination of IaaS or containerized workloads can optimize their cloud resources with Densify. Organizations that need an optimization strategy that goes beyond simple cost and performance considerations, or those needing to dig deep into optimization opportunities without targeting changes to the application stack, will find Densify the most qualified solution for the task.

Radar Chart Overview
Densify is designated as a Leader in the Maturity/Feature Play quadrant, a reflection of its competent and thoroughly demonstrated capability for optimizing cloud resources. The company remains dedicated to bringing cutting edge technology and methods to the task, and its rapidly growing customer base is one result.

Flexera: Flexera One Cloud Cost Optimization

Solution Overview
Flexera One is a SaaS-based IT management platform that consists of multiple integrated (but individually packaged) modules that work in concert to optimize the value of an organization’s technology investments. The Cloud Cost Optimization module’s integration with Turbonomic and Kubecost brings the product squarely into scope for consideration of cloud resource optimization and provides another SaaS-based avenue for taking advantage of these industry-leading technologies.

In fact, Flexera exceeds many incumbents in this space in intelligent resource grouping. It identifies and groups related resources based on any cloud construct or any custom datapoint. Optimizations can then be targeted to any of these groupings.

Flexera’s automation engine provides a highly configurable low-code method to guide and constrain optimization recommendations. Although this requires additional integration effort on the part of the user, the solution comes with an open source library of available policies that are usable out of the box. A survey of this library demonstrates how well balanced Flexera’s capabilities are across the major cloud providers, including AWS, Azure, and GCP.

Flexera supports “what if” planning for migration of loads from on-premises to cloud, though it does not provide simulation capabilities.

Strengths
Flexera One offers a highly flexible and effective platform for common IT use cases. With the integration of Turbonomic and Kubecost, the platform is capably extended into the optimization of cloud resources, and it scored well on the ecosystem business criterion owing to its extensive partnerships, excellent documentation, and a community-involved GitHub repository of templates that can be used to get the most out of the tool.

The solution also scored well on intelligent resource grouping, consistent with its focus on providing rich analysis and visualization of an organization’s technology footprint.

Challenges
Integration with existing tools and effective configuration of the automation engine require more of a hands-on effort than other solutions that favor ease of use over sheer flexibility.

Purchase Considerations
Flexera One Cloud Cost Optimization provides tiered pricing based on total cloud spend. The company has a strong partnership with IBM, and Flexera One is often used with complementary IBM tools. Organizations using it on its own should consider licensing all or most of the Flexera One suite to take advantage of the features evaluated in this report.

Organizations with a technology footprint spanning many platforms and technologies that want to develop a long-term strategy for migrating to the cloud while continuing to manage their portfolio effectively should strongly consider Flexera One. Though the Cloud Cost Optimization module is competitive in the cloud resource optimization space on its own, Flexera One is most valuable as a suite. For organizations that already use Flexera One, the Cloud Cost Optimization module will certainly cover their cloud resource optimization needs. Those interested in starting with cloud resource optimization should consider whether Flexera One’s other capabilities are also a good fit for a broader range of needs.

Radar Chart Overview
Flexera One is positioned in the Maturity/Platform Play quadrant on the Radar chart. It is designated as a Fast Mover because, as customer discussions reveal, the platform’s flexibility and deep customizability continue to produce interesting new and effective ways of applying the tool to problems surrounding cloud resource optimization specifically and infrastructure management in general.

Harness: Cloud Cost Management

Solution Overview
Harness Cloud Cost Management is an automated SaaS solution for managing, optimizing, and governing cloud costs. It supports FinOps, DevOps, and engineering teams in their efforts to maximize cloud cost savings while simplifying the process of managing cloud spend.

Over the last year, in addition to making small, steady improvements to product features, Harness released an image that can be self-managed on the cloud. This will appeal to certain enterprises that have special needs with regard to security or confidentiality.

Organizations looking for a standalone solution that addresses a wide range of cloud FinOps and governance challenges while touching on performance optimization should give Harness close consideration. The solution scored well on usability, and the tool’s cohesive design and user experience make it likely to demonstrate value early.

Moreover, Harness introduces quality of service (QoS) tuning, an approach we think captures a strategic understanding of the problem space and gives a definite nod toward performance optimization. QoS tuning acts as a slider to easily bias its automated engine toward either performance or cost savings. Combined with benchmarking capabilities, this provides an effective (albeit limited) method of tuning for performance. Harness’s recommendation engine is supported by a broad range of integrations with ITSM tools and infrastructure provisioning tools.

Strengths
Harness offers an easy-to-use and highly automated solution for cloud cost management that also enables some performance optimization. It scored well on usability.

Harness provides an unrivaled advantage in abandoned resource identification via a feature called cloud AutoStopping, which is supported on AWS (EC2, ASGs, RDS, ECS, EKS), Azure (VMs, AKS), and GCP (VMs, Instance Groups, GKE). AutoStopping enables users to set policies to automatically identify and stop unused resources from running, and it provides an effective method for bringing them back up if and when they are needed. Though not totally capable of identifying abandoned data, Harness does a great job of solving the overall problem of unused resources with a fresh approach.
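The general idea behind automatically stopping idle resources can be sketched as follows. This is a simplified, generic example built on the AWS SDK for Python (boto3), not Harness’s AutoStopping implementation; the opt-in tag, lookback window, and CPU threshold are assumptions, and AutoStopping itself also handles bringing resources back up on demand, which this sketch does not.

```python
# Conceptual sketch of the general "stop idle resources" idea (not Harness's
# implementation): find running EC2 instances whose average CPU over the last
# day is below a threshold and stop them. Tag filter and threshold are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

IDLE_CPU_PERCENT = 3.0
LOOKBACK = timedelta(hours=24)

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str) -> float:
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - LOOKBACK,
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

def stop_idle_instances() -> None:
    # Only consider running instances explicitly opted in via a tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": "tag:auto-stop", "Values": ["enabled"]},
        ]
    )["Reservations"]
    idle = [
        i["InstanceId"]
        for r in reservations
        for i in r["Instances"]
        if average_cpu(i["InstanceId"]) < IDLE_CPU_PERCENT
    ]
    if idle:
        ec2.stop_instances(InstanceIds=idle)

if __name__ == "__main__":
    stop_idle_instances()
```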

Challenges
Harness is meant to address cost management and governance first and foremost, and as a result, many of its capabilities supporting resource and performance optimization are less well developed.

Owing to its stronger product focus on cost management and governance, Harness doesn’t rate as high on AI/ML-driven resource predictions or intelligent resource grouping. Additionally, workload simulation and objective-based optimization are not addressed as use cases for this tool, though both items could be tackled in the interim through the service reliability management and chaos engineering modules.

Purchase Considerations
Harness is priced by a percentage of annual cloud spend, with different tiers enabling enterprise-grade features related to security and governance. The company offers a limited free version for up to $250,000 of cloud spend, and although Harness is focused on the enterprise and mid-size markets, its ease of use and pricing model may appeal to smaller businesses as well.

Enterprises with a particular need for self-managed hosting, or any business looking for a simple and effective way to manage cloud budgets and eliminate waste, without deep needs for performance optimization, should consider Harness. The vendor offers a suite of related tools that are worth considering as well.

Radar Overview
Harness is positioned in the Innovation/Platform Play quadrant. This means it’s innovative in its approach to solving the overall problem. Harness is a comprehensive cost management and continuous software delivery platform that has thoughtfully and competently extended its cloud cost management functionality into a competitive resource optimization solution.

IBM: IBM Turbonomic

Solution Overview
IBM Turbonomic delivers well on its ability to use AI to fine-tune resources and guarantee optimized performance of applications. Within the last year, IBM released full SaaS support for Turbonomic, which should significantly increase its appeal to organizations that had previously sought alternatives.

Turbonomic applies the financial model of supply and demand to its resource optimization capabilities, and while that’s often seen in more FinOps-focused tools, it applies this approach to deeper technical recommendations to remind IT operators that every resource has a cost and that resources are finite. This approach aligns resource optimization more closely with capacity planning. While capacity planning is thought of less in public cloud environments, it’s an important capability in both public (AWS, Azure, and GCP) and private cloud (VMware) infrastructures.

The Turbonomic platform grew out of the need to optimize private infrastructure, and its capabilities have been easily extended into public clouds. That history of private cloud optimization resulted in a comprehensive range of infrastructure integrations that allow the solution to go deep and provide recommendations that may apply to individual workloads or shared platform components, such as hyperconverged infrastructure or storage arrays.

Building on the financial cost-benefit model, Turbonomic models environments as a market and uses market analysis to manage the supply and demand of resources. It visually displays the supply chain for all resources in an intuitive UI that reshapes the way users think about their resources. Additionally, Turbonomic’s data ingestion capabilities enable organizations to specify custom metrics that can be included in the market analysis, helping to tie infrastructure components together to build a more complete supply chain view. This is also the way the Turbonomic platform extends to other key application visibility solutions, such as IBM Instana or other third-party application observability tools. In this way, a transaction flow can be visualized and analyzed from ingestion through storage volume, leading to a high score on flexibility.
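As a toy illustration of the supply-and-demand idea (a conceptual sketch only, not Turbonomic’s actual market engine), the snippet below gives each provider a "price" that rises steeply as its utilization approaches capacity and places each workload with the cheapest provider.

```python
# Toy supply-and-demand placement heuristic (a simplified concept sketch, not
# Turbonomic's market engine): each provider's "price" rises sharply as its
# utilization nears capacity, and a workload goes wherever the price is lowest.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capacity_cpu: float   # total CPU available
    used_cpu: float = 0.0

    def price(self, demand: float) -> float:
        # Price grows toward infinity as utilization approaches 100%,
        # discouraging placements that would saturate the provider.
        utilization = (self.used_cpu + demand) / self.capacity_cpu
        if utilization >= 1.0:
            return float("inf")
        return 1.0 / (1.0 - utilization)

def place(demand: float, providers: list[Provider]) -> Provider:
    best = min(providers, key=lambda p: p.price(demand))
    best.used_cpu += demand
    return best

hosts = [Provider("host-a", 32), Provider("host-b", 16)]
for workload_cpu in [4, 8, 2, 6, 10]:
    chosen = place(workload_cpu, hosts)
    print(f"{workload_cpu} vCPU -> {chosen.name}")
```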

The Turbonomic UI readily displays recommendations that are easily interpreted and classified as either performance or savings. These recommendations can then connect to ServiceNow workflows or, in many cases, be automatically applied from within the UI. Organizations can use this solution to automatically purchase reserved instances, resize systems, move or reconfigure resources, or clean them up—striking a strong balance between FinOps and cloud resource optimization capabilities and resulting in a high ranking on automatic resource reconfiguration.

Turbonomic is delivered using a self-hosted deployment model, giving customers the flexibility to expand their use of the solution. As mentioned above, there is a fully supported SaaS version available as well.

Strengths
Self-hosted deployment options provide flexibility. This solution strikes a good balance between FinOps and resource optimization capabilities, with a strong, intuitive UI and deep analysis of infrastructure components.

Turbonomic scored well on the ecosystem business criterion, reflecting its wide and experienced user base, deep talent pool, extensive documentation, and availability of professional support.

Challenges
Greater integration with DevOps and ITSM tools would allow customers even more flexibility. The average scores on the rest of the key features reflect the fact that while Turbonomic is competent throughout the breadth of functionality evaluated, it doesn’t fundamentally exceed the capabilities of competing tools in any way.

Purchase Considerations
Turbonomic is licensed on a per-node basis. It is well suited to large organizations with varied footprints encompassing private data centers, public cloud, and a mix of IaaS VMs and containerized workloads. This is a well-established product with a proven track record, and its ability to be self-hosted and to manage workloads outside public clouds extends its appeal to organizations that might otherwise opt for a tool with even more advanced features.

Radar Chart Overview
IBM is positioned in the Maturity/Platform Play quadrant and is still regarded by many as the gold standard that frames this space. While its pace of development is measured and conservative, the introduction of a SaaS offering and the recent acquisition of Apptio Cloudability inspire confidence that IBM intends to remain competitive in cloud resource optimization going forward.

NetApp: Spot by NetApp

Solution Overview
Spot by NetApp offers a comprehensive suite of solutions for optimizing public cloud infrastructure. These solutions focus on visibility, analytic insights, automation, optimization, governance, and cost management. Spot is best known for continuously determining the most cost-effective resource type on which to deploy applications. Its technology uses machine learning and analytics algorithms that enable organizations to use spot capacity (lower-cost instances) for production and mission-critical workloads. This analysis also extends to reserved instances and savings plans.

Within any major cloud provider, Spot continuously monitors and scores different capacity pools across operating systems, instance types, availability zones, and regions to make the most intelligent decisions in real time regarding which instances to choose for provisioning and which ones to rebalance and replace proactively.
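As a purely hypothetical sketch of how a scoring step of this kind can work (the fields, weights, and figures below are illustrative assumptions, not NetApp’s actual model), each candidate capacity pool might be ranked by trading off its hourly price against its estimated interruption risk:

```python
from dataclasses import dataclass

@dataclass
class CapacityPool:
    instance_type: str
    availability_zone: str
    hourly_price: float        # USD per hour for this pool (illustrative)
    interruption_rate: float   # assumed probability of reclaim per hour (0-1)

def score(pool: CapacityPool, risk_weight: float = 2.0) -> float:
    # Lower is better: cheap pools with low interruption risk win out.
    return pool.hourly_price * (1.0 + risk_weight * pool.interruption_rate)

pools = [
    CapacityPool("m5.xlarge", "us-east-1a", 0.077, 0.05),
    CapacityPool("m5a.xlarge", "us-east-1b", 0.069, 0.12),
    CapacityPool("m6i.xlarge", "us-east-1c", 0.081, 0.02),
]

best = min(pools, key=score)
print(f"Provision from {best.instance_type} in {best.availability_zone}")
```

In a real deployment, the score might also incorporate factors such as capacity depth and historical reclaim behavior, and it would be recalculated continuously as conditions change.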

NetApp has prioritized the full integration of CloudCheckr into the Spot suite over the last year, expanding Spot’s ability to perform continuous resource optimization. Organizations seeking simplified automation will likely leverage Spot’s core offerings for automated resource optimization: Elastigroup for VMs, Ocean for containers/Kubernetes, and Eco for lifecycle management of reserved instances and savings plans, including continuous commitment portfolio design, purchasing, rebalancing, and optimization. Spot Connect streamlines orchestration, automation, and integration by providing a visual interface for connecting products and services, while the billing engine streamlines invoicing and delivers comprehensive billing reporting with intelligent cost allocation, including chargeback/showback. Cost Intelligence delivers granular, actionable analytics on costs and resources across multicloud environments.

Spot is offered solely via a SaaS model and scored high on scalability for being hosted across multiple providers, regions, and continents.

Strengths
Spot ranks high on the usability criterion. It analyzes resource usage continuously and manages autoscaling groups that meet resource demands with the lowest-cost compute options, ensuring availability without manual intervention.

This solution has a simple deployment model for continuous optimization and many integrations for ITSM workflows and infrastructure deployment pipelines. It scored high for integration with DevOps and ITSM tools and on reconfiguration through managed pipelines. NetApp is excellent in applying AI/ML and continues to focus heavily on these capabilities, including offering insights based on anonymized metrics from its large user base. It also offers robust network cost analysis, building out a third pillar to the more commonly available optimization support for compute and storage. In addition to thorough optimization for traditional IaaS VMs, Spot Ocean provides top-notch container support, which is reflected in its score for Kubernetes resource management.

Challenges
Lack of on-premises capabilities will limit Spot’s use for some organizations. In line with overall market demand, Spot is instead a good fit for organizations deployed predominantly on AWS, Azure, and GCP. Additionally, Spot does not offer workload simulation at this time, and it scored low on workload simulation and capacity planning.

Purchase Considerations
Spot’s pricing has always been radically different from others in this category. Its automated optimization products are priced based on a percentage of customer savings, while its CloudCheckr cost visibility and management offering is priced based on overall cloud spend. NetApp also offers volume and commitment purchase discounts.

Spot is a top contender for large and mid-size organizations with big footprints on public clouds, especially those running a mix of VMs and containers. Users who have comprehensive budgets or agreements in place with public cloud providers will benefit greatly from NetApp’s cost optimization capabilities.

Radar Chart Overview
NetApp is classified as a Leader in the Maturity/Platform Play quadrant. Its comprehensive suite is a tried-and-true solution for many facets of managing cloud infrastructure, and Spot has been trusted by users for years to easily and reliably optimize cloud resources and generate cost savings that can be counted on.

OpenText: HCMX

Solution Overview
OpenText (formerly Micro Focus) Hybrid Cloud Management X (HCMX) and Operations Bridge (OpsBridge) are both components of the OPTIC platform. HCMX can be used to support cloud cost optimization, self-service provisioning guardrails, the provisioning of applications and resources, lifecycle management orchestration, and blueprint designs that help build custom environments.

OpsBridge collates data from a wide swath of data sources across the enterprise and applies advanced analytics and AI in service of the broad range of goals outlined above. Its fulfillment of cloud resource optimization objectives is a bit disjointed as a result. HCMX did not score as well as some competitors on automatic resource reconfiguration because it focuses more on surfacing insights for follow-up execution by managed pipelines and workflows.

However, HCMX stands out for its capabilities in workload simulation and capacity planning. It supports what-if modeling to detect suboptimal allocations and reclaimable storage. OpenText LoadRunner Cloud has a deep legacy to draw on in support of actual workload simulation. Combined with the tool’s overall focus on workflow management, these capabilities serve certain use cases well.

The sheer breadth of data that OPTIC is able to factor in and analyze from disparate sources, along with its AI engine, enables the tool to surface insights that likely couldn’t be identified by competing solutions in this space. By the same token, configuring the system to generate targeted and relevant recommendations for the purpose of cloud resource optimization would require a hands-on effort.

HCMX would be a good fit for organizations already using OPTIC that want to expand capabilities into cloud resource optimization, or for organizations that also need either LoadRunner’s proven ability to simulate huge workloads or the sorts of deep insights that OPTIC can be made to surface. It scored high on scalability and can be relied on by enterprises of any size, whatever their particular needs.

Strengths
As part of the OPTIC platform, HCMX can be configured to produce certain insights that likely couldn’t be found elsewhere. Capacity planning for extraordinary loads and situations is easily accommodated through OpenText LoadRunner Cloud, and it scored exceptionally high on that emerging feature for this reason.

Challenges
HCMX is somewhat disjointed and requires comparatively more effort to use if the objective is limited to the typical gains expected from cloud resource optimization or to demonstrating value early.

Purchase Considerations
To benefit from HCMX for the purpose of cloud resource optimization, organizations should consider whether they also need OpsBridge for deeper utilization and performance data. Additionally, LoadRunner would be necessary for serious workload simulation or capacity planning. All the necessary tools are available through the OpenText suite of products, but careful consideration must be given to which components enable the needed capabilities in practice. HCMX can be self-hosted, self-managed as a public cloud image, or consumed as SaaS.

Organizations that are early in the process of moving their workloads to the cloud, or that need to determine a cloud budget based on significant future but still unknown investments in the cloud, may benefit from HCMX’s strong capability for capacity planning and workload simulation. Together with OpsBridge, HCMX can surface insights from a broader underlying pool of organizational data than is typical for cloud resource optimization platforms, which may appeal to certain enterprises.

Radar Chart Overview
OpenText is positioned in the Innovation/Platform Play quadrant because its intriguing mix of platform capabilities presents an interesting and novel approach to cloud resource optimization. Its designation as a Challenger is a natural result of the tool’s uncommon focus on bringing visibility and operational consistency to the entire IT portfolio. In that regard, the platform brings a slew of features that can be applied to cloud resource optimization in surprisingly effective ways, but it is not really intended to thoroughly address cloud resource optimization pain points in the context in which we are evaluating solutions in this report.

StormForge: Optimize Live

Solution Overview
StormForge Optimize Live is a SaaS platform that focuses on autonomous rightsizing of Kubernetes workloads running in production, using observability data and ML to derive resource configuration recommendations. StormForge is something of an outlier for this report because it focuses on container optimization and does not support optimization of workloads running outside containers on VM instances, which is a core competency of many of the tools that scored well in this evaluation.

As a relatively new entrant into this space, StormForge is focusing on the architectural trend of migrating workloads to containerized environments and bringing innovation to the broader problem of cloud resource optimization in that light. The company’s roadmap takes that holistic approach into account, but it prioritizes current and anticipated architectural trends, which seem to be moving away from explicitly managing VMs, even if they are immutable and IaaS hosted.

The company’s approach emphasizes the bidirectional scalability of Kubernetes autoscalers. Vertical autoscalers such as the Kubernetes Vertical Pod Autoscaler (VPA) offer extended opportunities for optimization but often conflict with Kubernetes’ built-in Horizontal Pod Autoscaler (HPA). Optimize Live harmonizes requests and limits with the standard HPA target utilization to achieve vertical rightsizing without impacting horizontal scaling behavior, as illustrated by the sketch below.
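A minimal sketch of the underlying arithmetic, assuming an HPA driven by CPU utilization (this illustrates the general technique, not StormForge’s actual algorithm): because the HPA scales replicas on the ratio of usage to request, the request can be rightsized so that observed usage lands just under the HPA target, leaving the replica count undisturbed.

```python
def recommend_cpu_request(observed_usage_mcores: float,
                          hpa_target_utilization: float,
                          headroom: float = 1.10) -> int:
    """Return a per-pod CPU request (in millicores) sized so that, at the
    observed usage, the pod sits just under the HPA utilization target and
    the replica count is not perturbed by the resize."""
    return int(round(observed_usage_mcores * headroom / hpa_target_utilization))

# Example: a pod averaging 240m of CPU, governed by an HPA targeting 70% utilization.
current_request = 1000  # millicores, the original guesstimate
new_request = recommend_cpu_request(240, 0.70)
print(f"Rightsize the CPU request from {current_request}m to {new_request}m")
```

In practice, a recommendation engine would factor in demand patterns over time rather than a single observation, but this relationship between requests and the HPA target is why vertical and horizontal scaling must be harmonized rather than tuned independently.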

StormForge samples resource utilization data every 15 seconds and continuously trains its ML model on demand patterns. Within hours of onboarding, it can begin optimizing resources for a given workload.

The recommendations it generates illustrate the kind of improvements to be expected and clearly communicate cost savings. While users can apply these recommendations manually on a more conservative basis, the tool is designed to apply them automatically and continuously in a safe manner. The goal is to onboard clusters easily and then provide a hands-off, autonomous way of managing workloads.

Where the need arises, users can fine-tune certain workloads by establishing optimization goals, specifying resizing frequency, and parameterizing requests and limits.

The UI is clean, clear, and easy to understand. It is designed to reliably optimize workloads and stay out of the way so users can focus on other things, while still surfacing the information users will find most valuable.

Strengths
In addition to its noted competency in Kubernetes resource management, Optimize Live scored high on usability. As organizations have increased their Kubernetes footprints, many have been surprised by the unanticipated effort required to manage these workloads in a cost-efficient manner. Every aspect of the design of Optimize Live, from the interface to the processes to the dashboards and reports, is clearly intended to remove this burden.

The solution also scored relatively well on the objective-based optimization and workload simulation and capacity planning emerging features. These capabilities are supported by Optimize Pro (described in more detail in last year’s report). StormForge is continuing to bring these specialized (and less frequently needed) capabilities into the Optimize Live platform, with a particular focus on responding quickly to peak loads that may occur on a seasonal or even daily basis.

Challenges
StormForge didn’t score well on the abandoned resource identification or intelligent resource grouping features. This is to be expected because these capabilities are less relevant to the optimization of containerized clusters. Regarding support for managed pipelines, the tool can export recommendations directly to Helm or generate YAML patches, but the default approach is to provision changes through its custom agent. Many SMBs will find this appealing because it renders additional integration unnecessary, but larger enterprises with policies on managed pipelines and change control will have some extra work to do compared with many of the other solutions we evaluated.

Purchase Considerations
Optimize Live is licensed by vCPU, and the cost decreases as it is scaled out. Professional services are not required but are often used. Starter packs and enterprise license agreements (ELAs) are available. The platform itself is delivered as a SaaS solution with a lightweight agent installed within customers’ Kubernetes clusters.

Organizations of any size with an urgent need to cut waste and expenses while ensuring good performance of their Kubernetes workloads can quickly meet these needs with Optimize Live. This will appeal especially to those who do not have the expertise or available throughput to tackle the burden of Kubernetes management.

Organizations whose cloud footprint consists entirely or primarily of Kubernetes-based workloads will find StormForge a compelling option for overall cloud resource optimization. This includes organizations with an architectural roadmap that prioritizes the migration of workloads currently running on IaaS or internally hosted VMs to cloud-managed Kubernetes services or self-managed clusters.

Radar Chart Overview
StormForge has spent much of the last year consolidating its product offerings into a cohesive SaaS solution and will continue to build out the basics of its feature set. It is positioned in the Innovation/Feature Play quadrant and is expected to continue evolving as a relevant cloud resource optimization and cost optimization tool while architectural trends toward containerization continue.

6. Analyst’s Outlook

Whether in the data center or in the cloud, every minute of reserved CPU, memory, or storage costs money. An efficient IT organization closely aligns the supply of its computing resources with the demands of its consumers, maintaining a high level of availability while minimizing waste. Most computing resource requests are guesstimated during initial deployment and rarely reconsidered later, wasting money and driving the need for FinOps tooling to help reduce this excess spending. But FinOps tooling alone is not enough to solve this challenge; it can identify where waste occurs, but cloud resource optimization tools are necessary to determine what to do about it.
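To make the scale of the problem concrete, consider a deliberately simple, hypothetical calculation (all figures below are assumptions chosen for illustration, not benchmark data):

```python
# Back-of-the-envelope view of how guesstimated requests compound at fleet scale.
# Substitute your own fleet size, pricing, and utilization figures.
vm_count = 200
provisioned_vcpus = 8        # requested at initial deployment (guesstimate)
actually_needed_vcpus = 2    # what the workload uses at peak
cost_per_vcpu_hour = 0.04    # assumed blended rate, USD
hours_per_month = 730

wasted_vcpus = vm_count * (provisioned_vcpus - actually_needed_vcpus)
monthly_waste = wasted_vcpus * cost_per_vcpu_hour * hours_per_month
print(f"Approximate monthly overspend: ${monthly_waste:,.0f}")
```

Even modest per-VM overallocation adds up quickly across a fleet, which is why automating this analysis matters more than any individual rightsizing decision.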

In some cases, organizations need to go beyond cost savings and deeper into performance tuning, or they may choose to approach the problem from that direction and implicitly obtain the benefit of cost savings.

IT resource consumers need:

  • Reliable data on exactly how a system should be configured to achieve the desired outcome.
  • Recommendations of predictable and specific actions that can be easily implemented.
  • Proactive management of capacity planning.
  • Automation of common tasks that reduces human intervention.

Organizations that invest in cloud resource optimization solutions experience these benefits directly on their cloud spend and indirectly from reduced outages or delays in cloud resource provisioning, as well as in reduced IT management overhead. With cloud resource optimization solutions in place, the capacity management and overall spend burden can be shifted from the central IT team to individual application teams, creating closer alignment between an application and its overall costs and performance management. This generates greater autonomy for the application team and reduces the load on the IT operations teams without sacrificing visibility or oversight.

As this space evolves, we expect to see further consolidation of cloud management platforms, FinOps, and cloud resource optimization tooling solutions. This is especially true for those vendors positioned in the upper right Maturity/Platform Play quadrant of the Radar chart.

To remain competitive, all vendors will continue to improve and refine their recommendation engines and ability to drive even greater optimization. This will especially be the case for vendors positioned in the upper left Maturity/Feature Play quadrant of the Radar.

Finally, containerization continues to be a preferred architectural approach for both new deployments and legacy migrations. While IaaS VM optimization will remain an important need for years to come for both existing deployments and specialized use cases of new deployments (including AI), the priority for innovation will focus on container optimization. This will be true especially for vendors in the lower left Innovation/Feature Play quadrant of the Radar.

Non-container cloud-native architectures, such as serverless functions and some aspects of AWS Fargate, haven’t seen much adoption among the enterprise customer base for cloud resource optimization tools. The use cases for these architectures have tended to include automation, ancillary DevOps functions, and unique one-time projects that don’t have as much need for comprehensive optimization. Nevertheless, this could change overnight as cloud providers continue to innovate and new factors such as AI capabilities drive development agendas. While unlikely to occur over the next year, such a change could shake up the competitive landscape of these tools, which have largely settled into their own comfortable niches.


7. Methodology

*Vendors marked with an asterisk did not participate in our research process for the Radar report, and their capsules and scoring were compiled via desk research.

For more information about our research process for Key Criteria and Radar reports, please visit our Methodology.

8. About Matt Jallo

Matt has over twenty years of professional experience in information technology as a computer programmer, software architect, and leader. An expert in Cloud, Infrastructure and Management, as well as DevOps, he has been an Enterprise Architect at American Airlines, established disaster recovery systems for critical national infrastructure, oversaw integration during the merger of US Airways with American Airlines, and developed web-based applications to help small businesses succeed in e-commerce while an engineer at GoDaddy.com. He can help improve enterprise software delivery, optimize systems for large scale, modernize or integrate software systems and infrastructure, and provide disaster recovery.

9. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

10. Copyright

© Knowingly, Inc. 2024. "GigaOm Radar for Cloud Resource Optimization" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.
