1. Executive Summary
Kubernetes, the increasingly popular container orchestration system, provides an abstraction layer that ensures containerized applications and services can be deployed quickly and run easily across any public or private infrastructure. However, it can also add significant complexity: for example, developers are free to define resource requests poorly or omit them entirely. Organizations often operate many Kubernetes clusters across public and private infrastructure resources, and capacity management and the proper allocation of resources may be put off to a later stage or neglected entirely. Kubernetes resource management solutions provide a way for developers to shift this important function left in a near-automated fashion, with better results than could be obtained by hand.
Kubernetes resource management platforms analyze resource usage and configuration continuously across all Kubernetes platforms within an organization. These solutions surface configuration issues, guide application and platform owners toward configurations that align with the needs of their business, and optimize the availability of platforms and applications while minimizing infrastructure waste.
Kubernetes resource management is a relatively new space, but one that continues to grow adjacent to the larger cloud management market. Continued advances in artificial intelligence (AI) and machine learning (ML) provide new opportunities for fine-tuning operational parameters and optimizing resource usage and costs, and are a key focus of Kubernetes resource management solutions. Established general-purpose Kubernetes or cloud management platforms focus on the efficient deployment, configuration, DevOps enablement, and runtime management of containerized applications.
While there is significant overlap between more generalized cloud management platforms and Kubernetes resource management solutions, this report focuses on tools that prioritize the optimization of resources (and provide a balance between application availability and cost). These tools tend to provide integrations with other solutions that offer deeper capabilities in DevOps, runtime management, monitoring, and FinOps. Some provide all of these capabilities as components of a single larger platform.
This is our second year evaluating the Kubernetes resource management space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.
This GigaOm Radar report examines 10 of the top Kubernetes resource management solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading Kubernetes resource management offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.
GIGAOM KEY CRITERIA AND RADAR REPORTS
The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.
2. Market Categories and Deployment Types
To help prospective customers find the best fit for their use case and business requirements, we assess how well Kubernetes resource management solutions are designed to serve specific target markets and deployment models (Table 1).
For this report, we recognize the following market segments:
- Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises, where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
- Large enterprise: Here, offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category will have a strong focus on flexibility, performance, data services, and features that improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.
In addition, we recognize the following deployment models:
- SaaS: These solutions are available in the cloud and are designed, deployed, and managed by the service provider; each is available only from that specific provider. The big advantages of this type of solution are simplicity, a predictable cost model, and integration with other services offered by the provider. These solutions may support the installation of remote agents into customer-owned environments.
- Self-hosted solutions: These solutions are deployed and managed by the customer, often within an on-premises data center or within a dedicated VPC of a cloud provider. This approach drives up operational costs but also allows greater flexibility over the data collected by the platform. In exceptional cases, data sensitivity concerns may necessitate self-hosting.
Table 1. Vendor Positioning: Target Market and Deployment Model
| Vendor | SMB | Large Enterprise | SaaS | Self-Hosted |
|---|---|---|---|---|
| Akamas | | | | |
| BMC | | | | |
| CAST AI | | | | |
| Densify | | | | |
| IBM | | | | |
| Intel Granulate | | | | |
| Komodor | | | | |
| Kubecost | | | | |
| Spot by NetApp | | | | |
| StormForge | | | | |
Table 1 components are evaluated on a binary yes/no basis and do not factor into a vendor’s designation as a Leader, Challenger, or Entrant on the Radar chart (Figure 1).
“Target market” reflects which use cases each solution is recommended for, not simply whether it can be used by that group. For example, if it’s possible for an SMB to use a solution but doing so would be cost-prohibitive, that solution would be rated “no” for that market segment.
3. Decision Criteria Comparison
All solutions included in this Radar report meet the following table stakes—capabilities widely adopted and well implemented in the sector:
- Multiple distribution types
- Multiple cluster management
- Quota management
- CPU and memory performance analysis
- Misconfiguration reporting
- Cost analysis
- Dashboards and reporting
- Role-based access control (RBAC) and self-service
Tables 2, 3, and 4 summarize how each vendor included in this research performs in the areas we consider differentiating and critical in this sector. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the relevant market space, and gauge the potential impact on the business.
- Key features differentiate solutions, outlining the primary criteria to be considered when evaluating a Kubernetes resource management solution.
- Emerging features show how well each vendor is implementing capabilities that are not yet mainstream but are expected to become more widespread and compelling within the next 12 to 18 months.
- Business criteria provide insight into the nonfunctional requirements that factor into a purchase decision and determine a solution’s impact on an organization.
These decision criteria are summarized below. More detailed descriptions can be found in the corresponding report, “GigaOm Key Criteria for Evaluating Kubernetes Resource Management Solutions.”
Key Features
- Infrastructure provisioning tool integration: The solution can easily integrate with a variety of infrastructure provisioning tools, so when resource configuration changes are applied, the tool can do so through commonly established deployment pipelines.
- DevOps planning and ITSM tool integration: The solution can easily integrate with a variety of DevOps and IT service management (ITSM) tools. This helps to support continuous delivery efforts, change-control processes, audits, and notification visibility from other commonly used tools.
- Monitoring tool integration: The solution can easily integrate with a variety of monitoring tools to gather metrics from Cloud Native Computing Foundation (CNCF) landscape tooling (such as Prometheus), cloud provider APIs, and commonly used third-party infrastructure monitoring and application performance monitoring (APM) solutions.
- Autoscaling support: The solution provides support across the range of functions of Kubernetes autoscaling technologies, including the horizontal and vertical pod autoscalers (HPA and VPA), the cluster autoscaler (CA), Karpenter, and other methods of autoscaling.
- AI/ML-driven resource recommendations: The solution uses AI/ML or advanced statistical methods to generate high-quality recommendations. These methods are based on ongoing analysis of data and generate insights that improve with time and would be difficult to obtain through simpler analytical methods.
- Automatic resource optimization: The solution is able to manage and deploy resource optimization recommendations automatically. This assessment also looks at the safety of the approach, support for warm-ups or rollbacks, and the ability to deploy automatic optimizations natively or through managed pipelines.
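To make the mechanics of the last criterion concrete, the sketch below shows what applying a rightsizing recommendation ultimately reduces to: patching a workload’s requests and limits through the Kubernetes API. This is a minimal illustration using the official Python client; the deployment and container names are hypothetical, and production tools wrap this step in safety checks, warm-up periods, and rollback logic.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (an in-cluster tool would use
# config.load_incluster_config() instead).
config.load_kube_config()

# A strategic-merge patch that rightsizes one container's CPU and memory.
# The values stand in for a recommendation produced by analysis.
patch = {"spec": {"template": {"spec": {"containers": [{
    "name": "web",  # hypothetical container name
    "resources": {
        "requests": {"cpu": "250m", "memory": "256Mi"},
        "limits": {"cpu": "500m", "memory": "512Mi"},
    },
}]}}}}

# Applying the patch triggers a rolling update of the deployment.
client.AppsV1Api().patch_namespaced_deployment(
    name="web", namespace="default", body=patch)
```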
Table 2. Key Features Comparison
Vendors are rated on each key feature using the following scale: Exceptional, Superior, Capable, Limited, Poor, or Not Applicable.
Emerging Features
- Objective-driven optimizations: This criterion looks at how well a solution supports balancing basic costs against performance, as well as its deeper support for defining and managing other objectives such as service-level objectives (SLOs).
- Event-driven scaling: Solutions should be able to inform recommendations by capturing and contextualizing situational log and event data relevant to particular applications, to proactively ensure the performance and availability of Kubernetes workloads.
- Custom metric support: Solutions should include a method to identify, collect, and manage data that is contextual to the managed workload and incorporate that data into optimization recommendations. An example would be a user interface (UI) that allows tagging of data specific to a running application or service as a custom metric for the autoscaler.
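As an illustration of custom metric support, the sketch below defines an autoscaling/v2 HPA that scales on a per-pod custom metric rather than CPU or memory. It assumes a metric named queue_depth is already exposed through the custom metrics API (for example, by an adapter such as prometheus-adapter); the metric and workload names are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()

# An HPA that targets an average of 30 "queue_depth" units per pod. A
# resource management tool with custom metric support effectively
# automates the creation and tuning of objects like this one.
hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="worker-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="worker"),
        min_replicas=2,
        max_replicas=20,
        metrics=[client.V2MetricSpec(
            type="Pods",
            pods=client.V2PodsMetricSource(
                metric=client.V2MetricIdentifier(name="queue_depth"),
                target=client.V2MetricTarget(
                    type="AverageValue", average_value="30"),
            ),
        )],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```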
Table 3. Emerging Features Comparison
Vendors are rated on each emerging feature using the following scale: Exceptional, Superior, Capable, Limited, Poor, or Not Applicable.
Business Criteria
- Cost: This metric assesses the simplicity and transparency of the licensing model, the scalability of the pricing structure, the availability of professional services and whether they are likely to be needed, and the ability of the solution to offset its own cost through operational savings derived from resource optimization.
- Flexibility: For this criterion, we evaluate how the solution fits in with and supports existing assets and the resulting time to value (for example, whether reconfiguration or deploying new agents is required) and ability to leverage preexisting enterprise investments (for example, through integrations).
- Scalability: We assess the system’s capacity to scale for large workloads, as well as the architecture that enables the scaling. We also look at the level of scale demonstrated by current users and the level of support the vendor can provide in terms of size and varying market geographies.
- Ease of use: Here, we evaluate whether the solution is easy to use and whether special expertise or additional training is required to operate the tool effectively.
- Ecosystem: We look at the overall community engagement with the solution, the accessibility of a talent pool able to support the product, the quality of the knowledge base, the availability of a user community through forums or Slack, open source contributions and activity, and the number of third parties that supply integrations or aftermarket support for the solution.
Table 4. Business Criteria Comparison
Vendors are rated on each business criterion using the following scale: Exceptional, Superior, Capable, Limited, Poor, or Not Applicable.
4. GigaOm Radar
The GigaOm Radar plots vendor solutions across a series of concentric rings with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrowhead that projects each solution’s evolution over the coming 12 to 18 months.
Figure 1. GigaOm Radar for Kubernetes Resource Management
If you compare Figure 1 with last year’s chart, you’ll note that the relative positioning of players has changed significantly. This is a reflection of how rapidly this market is evolving and the result of some new vendors that bring a different perspective to this space. As established solutions have continued to solidify the maturity of their offerings, new entrants are exploring ways to shift optimization earlier into the development cycle.
Cisco announced the end of life of its Intersight Kubernetes Service, so it was not evaluated in this report. Last year’s readers may recall that Cisco’s solution could be leveraged as a SaaS deployment for Turbonomic, but IBM is now offering Turbonomic SaaS.
New to this year’s report are CAST AI and Komodor, both of which bring promising new approaches to Kubernetes resource management. Komodor is a “dev-first” tool that helps organizations optimize their software out of the gate, and CAST AI offers easy automatic cost savings to those trying to tackle the problem of high cloud costs.
In the Innovation/Platform Play quadrant of the Radar sits a cluster of solutions that align well with the acute market desire to quickly and easily rein in the escalating costs of existing Kubernetes workloads.
In the Maturity/Platform Play quadrant are solutions that have reliably provided this capability for some time, continuing to meet the needs of their user base, especially those who have large workloads under management.
Kubernetes resource management has a fairly tight scope, so all of the solutions on this Radar tend to be feature focused. That said, the solutions on the left of the Radar demonstrate an interesting trend of shifting optimization left; that is, proactively and continuously delivering software that is optimized for Kubernetes environments. This development-centric approach focuses less on optimizing massive existing workloads and more on getting a handle on the problem from the start.
In reviewing solutions, it’s important to keep in mind that there are no universal “best” or “worst” offerings; there are aspects of every solution that might make it a better or worse fit for specific customer requirements. Prospective customers should consider their current and future needs when comparing solutions and vendor roadmaps.
INSIDE THE GIGAOM RADAR
To create the GigaOm Radar graphic, key features, emerging features, and business criteria are scored and weighted. Key features and business criteria receive the highest weighting and have the most impact on vendor positioning on the Radar graphic. Emerging features receive a lower weighting and have a lower impact on vendor positioning on the Radar graphic. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and roadmaps.
Note that the Radar is technology-focused, and business considerations such as vendor market share, customer share, spend, recency or longevity in the market, and so on are not considered in our evaluations. As such, these factors do not impact scoring and positioning on the Radar graphic.
For more information, please visit our Methodology.
5. Solution Insights
Akamas
Solution Overview
At its core, Akamas is an ML-driven solution that uses reinforcement learning to identify the best configuration for a given workload. It stands out as one of the few solutions evaluated that optimize based on the full stack, including the application layer and various language runtimes (such as Java, Node.js, and Golang). The company also places a strong emphasis on safety throughout the product so that unintended effects are minimized.
Akamas is a tool suited for analysis, and though it does require some extra thought and effort up front, it has the potential to surface insights that would be otherwise unachievable. The platform allows users to create optimization studies, each of which consists of a system (such as a Kubernetes Java application), a telemetry provider (such as Prometheus or Dynatrace), a workflow (such as applying a configuration and generating load), and the study parameters. What’s unique here is that the study parameters include:
- A goal—such as maximizing throughput.
- Constraints—such as ensuring that transactions always achieve a specific SLO.
- A series of metrics to observe from each of the related application services.
- The types of resource configurations to experiment with.
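For illustration, here is a hypothetical sketch of the elements such a study brings together. The field names and values are illustrative assumptions, not Akamas’ actual study schema:

```python
# Hypothetical composition of an Akamas-style optimization study.
# Every field name below is an illustrative assumption.
study = {
    "system": "checkout-service",              # a Kubernetes Java application
    "telemetry": "prometheus",                 # metrics provider
    "workflow": ["apply-config", "run-load-test", "collect-metrics"],
    "goal": {"maximize": "throughput"},
    "constraints": ["p99_latency_ms <= 200"],  # transactions must meet the SLO
    "observed_metrics": ["cpu_usage", "heap_used", "gc_pause_ms"],
    "parameters": {                            # configurations to experiment with
        "container.cpu_limit": {"range": ["500m", "2000m"]},
        "jvm.max_heap": {"range": ["512Mi", "2048Mi"]},
    },
}
```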
In a nonproduction environment, after running a study composed of several experiments, Akamas users are presented with a list of all the valid configuration options, one for each experiment, along with the overall efficiency gains.
This is a scenario in which, instead of deploying an application and then fine-tuning it post-deployment, application owners can set the target SLO and expected load up front and deploy the application with the most efficient configuration from Day 0.
Consistent with the emphasis on safety, when Akamas is used in production, the configuration options proposed by the AI are applied live, and the system is automatically monitored to ensure that the intended results are being achieved in practice. This provides assurance that applications do not experience any outage or degradation in performance and availability.
A SaaS deployment model is planned, but currently Akamas must be self-managed.
Strengths
Akamas’ AI/ML capabilities are outstanding. This product is fundamentally a robust AI engine that has been thoughtfully applied to the overall problem of optimizing containerized workload resources. It has the ability to analyze any composition of data and then to report the correlative relevancy of the various inputs. This is a useful feature, and it’s also an indicator of the team’s deep understanding and design focus in successfully applying ML technology.
Akamas scored high on the monitoring tool integration key feature, which is critical for enabling the tool’s core strengths as most data is sourced using a variety of monitoring tools. It also earned an exceptional score for the emerging feature of objective-driven optimization, which is enabled by the ability to set target SLOs, as described earlier.
It also scored relatively well on the custom metric support key feature. Applications can expose contextual data and this can be easily targeted through the UI as an input into the ML analysis. Although this doesn’t drive autoscaling directly but rather through an intermediate process, it provides an effective and safe way of achieving the overall goal.
Challenges
Akamas does not provide robust out-of-the-box support for the key features related to the integration of tools for infrastructure provisioning, DevOps planning, or ITSM. It does have a capable API and workflow UI that allow full support of these objectives, but this would require a bit more effort to set up than many of the other tools we’ve evaluated.
The solution also scored relatively low on scalability. This does not reflect any concern about the architecture’s ability to computationally support massive workloads, but rather the need to apply the solution horizontally on an application-by-application basis. It is inherent in its design that this solution requires more up-front and individualized effort on a per-workload basis. The flip side is that it offers fundamentally deeper opportunities for optimization. It is currently self-hosted with plans to offer a SaaS version, which will go a long way toward improving its scalability.
Purchase Considerations
Akamas is priced on the number of concurrent running optimizations. Users will need to establish the scope of what constitutes an application in situations where that is not clear. Akamas offers professional services that are often used as part of application setup but are not required.
Organizations of any size that need deep analysis and optimization of a particular subset of critical applications or a smaller application portfolio can obtain exceptional results from Akamas. Organizations that are looking to easily apply a more basic cost/performance optimization regimen to an existing Kubernetes footprint may find a faster time to value with other options.
Radar Chart Overview
Akamas’ positioning in the upper left of the Radar reflects the solution’s mature focus on deeply and proactively optimizing smaller sets of workloads, in a safe and conservative manner. The company is delivering against its roadmap at a reliable pace compared to the market and so is classified as a Fast Mover. Though relatively new (founded in 2019), Akamas is quickly becoming established in this space.
BMC, Helix Continuous Optimization
Solution Overview
BMC Helix is offered as both a SaaS and on-premises solution. Each offering supports Continuous Optimization (previously called Capacity Optimization). This solution uses familiar extract, transform, load (ETL) terminology to describe how it connects to private and public cloud APIs for data collection and storage intended for later analysis. The solution provides out-of-the-box support for a wide range of public and private cloud ETLs, along with Moviri and Sentry ETLs that connect to other enterprise systems (such as Splunk, AppDynamics, Elasticsearch, Kubernetes, and storage arrays). The Moviri Prometheus connector is the primary integration method for Kubernetes.
BMC Helix is a suite of products (delivered through SaaS or self-hosted options). While the Continuous Optimization capabilities are available on their own, they’re best paired with other solutions to provide true efficiency and visibility within the organization. BMC Helix Discovery will discover and group workload dependencies, while BMC Helix Intelligent Automation is required to automate Kubernetes optimization recommendations or integrate into enterprise workflows (such as infrastructure provisioning pipelines or ServiceNow approvals).
The BMC Helix Continuous Optimization solution can report on overall Kubernetes clusters across multiple providers and business units through its Prometheus integration. The depth the reporting engine plumbs, combined with its what-if analysis capabilities, makes this solution effective in delivering capacity management data that operations teams will value.
Large enterprise customers that already have a deep investment in BMC technologies likely will benefit most from the solution; they may have the capabilities already licensed or may be able to negotiate for the capabilities with an enterprise license agreement. To get the best value from the continuous optimization capabilities, customers should also leverage other capabilities, such as the discovery and intelligent automation products.
Strengths
BMC has gone to great lengths to provide integrations across its line of IT management systems and has taken on some of the management burden with the Helix SaaS offering. All forms of integration can be accomplished through the REST API, and additional direct integrations are available through BMC Helix Intelligent Automation. Through this offering, integration is extended to Amazon Web Services (AWS), Ansible, TrueSight Orchestration (with a workflow UI), and TrueSight Server Automation, as well as to direct support of ServiceNow and xMatters, earning the company high scores on the infrastructure provisioning tool integration and DevOps planning and ITSM tool integration key features.
It is particularly strong on monitoring tool support, again owing to the direct support provided by BMC Helix Intelligent Automation, which can link to AppDynamics, Datadog, Dynatrace, Nagios, New Relic, and Splunk.
Moreover, the platform is designed for large environments and can scale to accommodate the needs of large enterprises, giving it a high ranking on the scalability business criterion.
Along with its established track record and demonstrated maturity in the field, BMC Helix is also a flexible solution as a result of the many integrations and broad capabilities of the larger platform.
Challenges
Although BMC Helix Continuous Optimization integrates with Prometheus to surface metrics from Kubernetes workloads, it doesn’t support Kubernetes autoscaling technologies such as the HPA or VPA. Without these, the ability to fine-tune cluster topology is limited.
In addition, it offers little support for the emerging features we evaluated in this report.
Purchase Considerations
Organizations that are already using BMC’s full suite (or are considering doing so) will find capable support for Kubernetes resource management in the platform. These are usually large enterprises. Organizations seeking a standalone solution for Kubernetes resource management will probably find more value in other options.
Organizations operating at large scale with a mixed footprint of on-premises and public cloud containers that are already deriving value from BMC’s comprehensive suite will find capable management of Kubernetes resources through BMC Helix Continuous Optimization.
Radar Chart Overview
BMC Helix Continuous Optimization does not offer an advanced feature set for Kubernetes, but the overall platform is comprehensive and mature. Providing support for important Kubernetes management features helps to round out the company’s larger platform, making it a compelling option for large enterprises and all current users. Its rate of growth is consistent and conservative. For these reasons, BMC Helix Continuous Optimization is positioned in the Maturity/Platform Play quadrant of the Radar.
CAST AI Cloud Cost Optimization
Solution Overview
CAST AI offers a complete platform for maximizing cloud savings by automatically rightsizing Kubernetes workloads. The tool has a strong FinOps orientation, with an emphasis on reducing cost while preserving performance. It is highly automated and tailored for teams seeking to automate tedious manual processes.
The solution manages to balance this design goal with surprisingly advanced autoscaling capabilities, as well as other features found in only a few other market-leading tools.
CAST AI is available as SaaS and can work across a full range of managed Kubernetes services. Like Densify and Spot, it is adept at picking the optimal virtual machine (VM) instance types, so it is likely to produce significant savings in organizations with a large infrastructure as a service (IaaS) footprint.
Strengths
CAST AI has outstanding autoscaling support, including the usual support for HPA and VPA. Rather than the CA or Karpenter, the company uses a completely custom autoscaler that appears to be highly effective, and it includes a Karpenter migration wizard to facilitate an easy upgrade. The platform also supports pod binning, rebalancing of clusters, smart spot management with diversification of instance families, spot interruption prediction, GPU scaling, ARM scaling, workload auto-rightsizing, automatic worker node updates, container vulnerability scanning, and best practice checks.
Complementing this full set of features are key capabilities that earn the vendor high scores on our AI/ML-driven resource recommendations and automatic resource optimization decision criteria. The statistical methods used to generate resource recommendations are informed by data that has been anonymized and aggregated, which should lead to immediate and reliable results that only get better with time. In keeping with the tool’s intent, automatic optimization is the primary method by which recommendations are applied, perhaps even to the detriment of those who prefer a more conservative, hands-on approach.
CAST AI also scored high on the ecosystem business criterion. The company has several thousand users in its Slack community. It provides open source templates, and customers actively contribute to the community. A sampling of the overall sentiment toward CAST AI reveals that it is warmly regarded by its users.
Challenges
CAST AI scored relatively poorly on key features relating to integration. There is an API to build out support where needed, but out-of-the-box integration with many commonly used enterprise tools is lacking. For infrastructure provisioning, there’s support for Terraform, Upbound’s Crossplane, and Pulumi. Any DevOps or ITSM integration would need to be done through the API or webhooks. It supports OpenTelemetry for audit logs. It sources data from Prometheus, but the consumption of data from APMs must be handled manually. The relative lack of out-of-the-box integrations enabling managed pipelines and change-control processes is at least consistent with the tool’s strong focus on managing optimization automatically in a tight feedback loop; the API supports such integrations where they are needed. Organizations that want a tool that takes the reins will tend to be less concerned about prescribing how exactly that is done.
The ability to contextualize application-specific data and events is hampered by the lack of either APM integration or UI workflows to tag and identify such data. Since its custom autoscaler does incorporate application metrics and events (albeit automatically via the engine), CAST AI has a good foundation for building these emerging features if they turn out to be valuable to its particular market. In the meantime, these abilities are somewhat enabled by the API.
Purchase Considerations
CAST AI is available in several different pricing plans, each with a monthly fee and a different set of capabilities, with a fixed cost per CPU (post optimization). This is a good cost-scaling approach, and a free trial can help users anticipate what their actual savings may be.
Organizations hoping to reduce cloud expenditure (especially those with a large IaaS footprint or agreements with cloud providers for reserved instances or savings plans) and those that want fast and reliable results without having to tie up their development or platform engineering teams will find CAST AI a compelling option.
Radar Chart Overview
CAST AI is a well-rounded cost optimization platform that balances an abundance of underlying features with the simple goal of automatically saving money while preserving performance. The vendor is positioned in the Innovation/Platform Play quadrant of the Radar. It is clustered with other tools that aim to make overall cost optimization automatic and pain free.
Densify
Solution Overview
Densify offers a solution that isn’t deeply tied into other cloud management platform (CMP) components. As a result, it’s more attractive to enterprises that simply want to gain efficiencies without the need to invest in a large CMP or service-heavy rollout. Densify offers a SaaS solution that can optimize Kubernetes workloads along with its underlying private (VMware) and public-cloud instances—AWS, Microsoft Azure, and Google Cloud Platform (GCP).
Densify is focused on generating highly accurate recommendations that can be actioned confidently and automatically. Through a combination of workload pattern analysis, benchmarks, deep policies, API-driven integrations, app owner reports, effort rankings, ITSM integration, and more, the Densify solution provides deeper and more effective optimization than solutions that concentrate more on billing or “advisors” that generate more basic suggestions that require extensive review.
Densify also provides an extensive policy-based framework that allows granular tuning of its resource optimization analysis for different portions of the environment (such as production versus test/dev) as well as for different applications. Densify’s policies allow customers to codify the desired operational characteristics for workloads. Options include CPU and memory utilization thresholds; risk tolerance (for example, typical versus busiest day); requirements for high availability, disaster recovery, and business continuity; approval policies; automation policies; and catalog restrictions.
These policies enable a high degree of tuning to ensure that recommendations will be correct and the infrastructure properly optimized for the types of applications being hosted.
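As a rough illustration of how such a policy might codify operational characteristics, consider the sketch below. Densify’s actual policy schema is not reproduced here; every field name and value is an illustrative assumption based on the options described above:

```python
# Hypothetical policy for a production scope; all field names are assumptions.
production_policy = {
    "scope": "production",                          # vs. test/dev
    "cpu_utilization": {"min": 0.40, "max": 0.70},  # threshold band
    "memory_utilization": {"min": 0.50, "max": 0.80},
    "risk_tolerance": "busiest_day",                # vs. "typical_day"
    "resilience": {"high_availability": True,
                   "disaster_recovery": True,
                   "business_continuity": True},
    "approval": "itsm_change_request",              # route changes for approval
    "automation": "auto_apply_low_risk_only",       # automate only safe changes
    "catalog_restrictions": ["m6i", "c6i", "r6i"],  # allowed instance families
}
```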
Strengths
Densify has outstanding support for the infrastructure provisioning tool integration key feature. It directly integrates with Terraform, AWS CloudFormation, Ansible, and many other tools for provisioning workflows. Additionally, it implements algorithms that optimize and recommend what can be automated. While Densify now supports implementation through the VPA, it recommends an upgrade path when it determines a different approach can do better. This is a competitive differentiator that helps to position Densify as a solution that can appeal to organizations of different sizes and needs. In many circumstances, users find the VPA to be an insufficient method of autoscaling, but when it does fit the bill, this underlying platform capability is worth taking advantage of because it is simple to use and may improve in future versions of Kubernetes.
Densify uses patented ML to generate highly effective recommendations that deliver confidence and actionable results. It offers flexible automatic optimization and integration into deployment pipelines. As a single solution, it brings a fast time to value.
Objective-driven optimizations can be managed through the new governance framework, and the API directly supports custom metrics that inform the autoscaler. These can be combined to trigger “relearning” based on infrastructural changes or application events.
Challenges
Features tend to be stronger in AWS before becoming available in other cloud providers. The tool has a heavy focus on the infrastructure layer and requires some domain expertise to operate effectively, although the company makes efforts (for example, through the App Owner report) to guide users into effective usage and understanding of the solution.
Purchase Considerations
Densify provides a simple licensing structure based on the number of targets to be optimized, but customers can also acquire the technology through Densify’s partnership with Intel and the Intel Cloud Optimizer program, through which Intel funds a year of Densify for qualifying organizations. The simple licensing structure and SaaS deployment model make Densify a good fit for organizations of all sizes.
Densify really shines for organizations that have a large IaaS Kubernetes footprint and those migrating to any of the managed Kubernetes services.
Radar Chart Overview
Densify’s customer base is expanding rapidly as recognition of the product’s capabilities grows, and this, along with its IaaS focus and proven track record, positions Densify in the Maturity half of the Radar. The solution targets a specific (albeit broadly scoped) problem in a standalone, feature-rich way. While slightly more of a Feature Play than a Platform Play, it manages to straddle a middle ground that will appeal to anyone looking for a solution in this space.
IBM Turbonomic
Solution Overview
IBM Turbonomic is a well-established cost-optimization platform for public, private, and hybrid clouds that uses AI to fine-tune resources and guarantee optimized performance of applications.
Turbonomic applies the supply and demand financial model to its resource optimization capabilities. While that’s often seen in more FinOps-focused tools, it’s used in this case to provide deeper technical recommendations and remind IT operators that every resource has a cost and that resources are finite. This approach aligns resource management more closely with capacity planning; while capacity planning is thought of less in public cloud environments, it’s an important capability across both public and private cloud infrastructures that support Kubernetes clusters.
Building on the financial cost-benefit model, Turbonomic models environments as a market and uses market analysis to manage the supply and demand of resources. It visually displays the supply chain for all resources in an intuitive UI that reshapes the way users think about their resources.
Additionally, data ingestion capabilities enable organizations to specify custom metrics to be included in the market analysis, helping to tie infrastructure components together to build a more complete supply chain view. This approach is also how the Turbonomic platform extends to key application visibility solutions, such as IBM Instana, or third-party application observability tools. In this way, a transaction flow is visualized and analyzed from ingestion in the Kubernetes pod through storage volume, contributing to a high score on the flexibility business criterion.
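As a rough sketch of what feeding a custom metric into such an analysis might look like, consider the payload below. It is purely illustrative and does not reproduce Turbonomic’s actual ingestion schema; every field name is an assumption:

```python
# Hypothetical custom-metric payload tying an application-level entity to
# the infrastructure beneath it; field names are illustrative assumptions.
# A tool would POST a document like this to the platform's ingestion
# endpoint so the metric participates in the supply chain analysis.
custom_metric_payload = {
    "entity": {
        "type": "businessTransaction",
        "name": "checkout",
        "hostedOn": "pod/checkout-7d9f",  # links the metric into the chain
    },
    "metrics": [
        {"name": "transactions_per_second", "value": 420.0},
        {"name": "response_time_ms", "average": 38.5, "peak": 120.0},
    ],
}
```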
The IBM Turbonomic UI readily displays recommendations that are easily interpreted and classified as performance or savings. These recommendations can then connect to ServiceNow workflows or, in many cases, be applied automatically if the KubeTurbo engine is running in the Kubernetes cluster.
IBM Turbonomic is delivered through a self-hosted deployment model or through SaaS if managing AWS, Azure, or GCP public cloud resources.
Strengths
Organizations can use this solution to automatically purchase reserved instances (for the underlying VMs), resize workloads, move or reconfigure resources, or clean them up, striking a strong balance between FinOps and Kubernetes resource management capabilities. These factors contribute to a high score on the automatic resource optimization key feature.
It also scored relatively high on objective-driven optimization, due to its ability to define service policies that can influence recommendations.
The solution provides direct integration with Terraform and Ansible for infrastructure provisioning, but the previously available integration for Cisco Intersight Kubernetes Service will be of less benefit going forward because that software is now end of life.
Challenges
While Turbonomic provides good support for objective-driven optimization, its inability to directly reach into the lower levels of the application stack limits its ability to derive data related to the individual application context and to enable the event-driven scaling or custom metric support emerging features. It also scored poorly on the DevOps planning and ITSM tool integration key feature due to its relatively limited out-of-the-box options. Though any integration could be accomplished with some effort through the API or webhooks, only ServiceNow integration is supported directly.
Purchase Considerations
Turbonomic is licensed on a per-node basis, allowing customers flexibility to expand their use of the solution.
The platform is best for enterprises that need a full suite of cloud resource management capabilities for private clouds or Kubernetes clusters running in data centers. The platform is also a competitive and mature option for enterprises, medium-sized businesses, or government agencies that have a Kubernetes footprint in any of the major public cloud providers.
Radar Chart Overview
IBM Turbonomic is positioned in the Maturity/Platform Play quadrant. It has a demonstrated track record at high levels of scale. The product is updated regularly, but not advancing rapidly. It is a safe bet for those with conservative risk postures who need Turbonomic’s well-established cost optimization capabilities.
Intel Granulate
Solution Overview
Intel Granulate, an autonomous optimization solution, takes a different approach than the other solutions we’ve evaluated. Whereas other Kubernetes resource management platforms tend to start at the infrastructure layer and work more or less toward the application layer, Intel Granulate builds out from a running application’s codebase. Because it is designed to address a problem that’s slightly different than the one we’re focusing on, it has mediocre scores across the range of key features we’re evaluating. However, it is worth highlighting because it delivers on overall performance improvement goals and Kubernetes resource efficiency. Additionally, the solution’s scores have improved since last year’s report, which reflects the product’s evolution into a more general-purpose and convenient Kubernetes resource management platform.
Intel Granulate optimizes Kubernetes resources within a cluster by deploying an agent, identified as the sAgent, into the cluster. The sAgent monitors application patterns of the workloads and optimizes the overall OS scheduling decisions to improve performance. It also sends metrics back to the gCenter, the centralized management dashboard provided by Intel Granulate and delivered via SaaS. It should be noted that Intel Granulate doesn’t optimize all application workloads, but it does work for Java, Scala, Clojure, Kotlin, Golang, Python, Ruby, big data, and stream processing.
Most of the solutions in this report attempt to achieve the perfect balance between cost and performance while treating the workload as something of a black box. Intel Granulate attempts to do more with its existing resources. The end result is still better performance and reduced cost.
Intel Granulate itself is offered as SaaS, though it can be configured to manage on-premises workloads in some cases.
Strengths
Intel Granulate has an exceptional score for the ecosystem business criterion. It also has a number of unique components, such as the free and open source Continuous Profiler, that don’t have analogs in other offerings. In addition, Intel is making significant open source contributions to containerization technology (though this is not technically a component of the Intel Granulate Suite). One example is the NRI repository of containers on GitHub. As described on its GitHub page, NRI allows custom plug-ins into Open Container Initiative (OCI)-compatible runtimes. Such plug-ins can make controlled changes to containers or perform extra actions outside the scope of OCI at certain points in a container’s lifecycle. This can be used to improve allocation and management of devices and other container resources. Intel provides significant support to the communities developing these foundational technologies.
Intel Granulate does well on the autoscaling support key feature, with native support for HPA, VPA, CA, gMaestro, and Karpenter. It has a capable AI/ML engine that’s based primarily on data it receives from profiling the code of managed workloads. One glance at any call stack in Continuous Profiler drives home the realization that the vast majority of any running code consists of reusable common libraries, and most of these have been thoroughly analyzed to supply (and anticipate) good data about how a workload can be optimized.
Intel Granulate can apply recommendations automatically, and users can specify a buffer for headroom. It is also capable of automatic rollbacks of recommendations if those buffers are routinely exceeded.
Challenges
Intel Granulate has a capable API for integration, but its out-of-the-box support for integration with common enterprise tools is lacking. Given its application-optimization design focus, the platform would be a welcome addition to the toolset of a development or systems operations team, but it does require a certain level of technical expertise.
Purchase Considerations
Intel Granulate has a simple, transparent cost model based on the CPU hours that are managed and optimized. This billing model aligns directly with cloud utilization, and the cost can be offset by the savings the solution generates with its optimizations.
Intel Granulate supports on-premises Kubernetes clusters and can be contacted for relevant pricing. Users wishing to evaluate the platform can start with the free and open source gProfiler to determine whether they’d benefit from the solution.
Because of its focus on application-layer optimization, this tool is well suited to organizations that could benefit from targeting a subset of specific applications for performance improvements and resource demand reductions. Though not difficult to use for what it is, Intel Granulate requires some degree of development or sysops expertise to use effectively. Some users may wish to pair Intel Granulate with other, more general-purpose solutions: first to optimize the way key workloads operate and then to transfer management to a compatible solution that right-sizes the resources once they’re running efficiently.
Radar Chart Overview
Intel Granulate is more of a Feature Play than many of the other solutions evaluated in this report, and as mentioned previously, it would nicely complement a solution that occupies a middle ground between feature and platform, such as Densify or Spot by NetApp. With such a pairing, delivery of software can be optimized for Kubernetes and a diverse array of existing workloads can be managed effectively. Intel Granulate is well established but innovative and fast moving; it is thus positioned in the lower half of the Radar to reflect the shift-left approach that may influence the direction of this whole market.
Komodor
Solution Overview
Komodor is a newly available solution that is off to an impressive start. It is a developer-centric Kubernetes resource management platform with a unique philosophy that may end up influencing the broader market. All of the platforms we’re evaluating tend to sit somewhere on a spectrum in terms of how much technical system expertise they require. Some require more of a hands-on approach but are capable of deeper optimization or of achieving ancillary goals. Others are designed to achieve the basic objective of balancing cost and performance while being easy to apply. Komodor doesn’t sit at a point on that spectrum so much as it seeks to shift the overall approach to the problem left.
In almost any variation of the tale of how organizations wound up paying too much for poorly provisioned Kubernetes clusters, a common theme is that Kubernetes optimization was either an afterthought or a secondary priority. It stands to reason that if these tools can both solve the problem by shifting optimization earlier in development and become a staple of the development toolkit, Kubernetes management will become a visible, organic part of ongoing development efforts.
Komodor is available as a SaaS platform and has a stronger focus on the key features that lend themselves to continuous delivery and less on those that emphasize automatic management of existing workloads. It is explicitly designed to be used by developers and platform engineers while providing data visualization that is most relevant to site reliability engineering (SRE) teams, operations teams, and management.
Strengths
Komodor scored high on the key features related to integration, with particular strength around common CI/CD and DevOps tools. In addition to the commonly available API and webhooks, Komodor directly integrates with PagerDuty, OpsGenie, Slack, and Teams, as well as with GitLab, GitHub, Argo CD, Flux CD, Sentry, and LaunchDarkly (for feature flags). No other solution has such a robust focus on DevOps integration.
It also scored well on autoscaling support, which makes sense because that’s the enabling technology for optimizing current and future state workloads.
Challenges
Komodor is relatively less capable on AI/ML-driven resource recommendations. It does offer some statistical analyses of operational demand and performance but does not provide deep insight into the demand patterns of existing black-box workloads. It makes sense in the context of the overall design focus that such capabilities would be a lesser priority. This functionality would be handy, however, because AI models can bring insight to patterns that are inherently difficult to anticipate or intuit and thus difficult to shift left into software design.
Purchase Considerations
Komodor has a free tier for individuals and small teams. It also offers various pricing plans, all priced per node. Professional services are supplied with each plan. The plans scale up in many aspects of functionality, such as the data retention period. The pricing structure appears to be tailored to medium-sized businesses but will scale without surprises to larger enterprises.
Organizations of any size that are engaged in the continuous delivery of software that may be containerized can gain the double benefit of optimized Kubernetes management and a better software product from using this platform. In practice, this includes almost any company that does at least some software development, but especially applies to those that develop SaaS software or enterprises with significant in-house business logic or integrated systems.
Radar Chart Overview
Komodor has built a tremendous product in only a few years, and the company is expanding its features rapidly. Given the dev-first nature and the newness of the product, we can expect continued innovation and growth of features as Komodor continues to carve out a niche in this market. For these reasons, it is positioned in the Innovation/Feature Play quadrant on the Radar chart.
Kubecost
Solution Overview
As the name suggests, Kubecost is focused on identifying and optimizing the cost of workloads running within Kubernetes. While this is a primary function of FinOps tools, cost and optimized resource configuration are directly related such that the output of any cost-saving tool must be a more efficient configuration.
Kubecost follows an open-core model, which means that it’s available as an open source solution that’s free of charge for individual clusters, while businesses can purchase licensed and supported versions with an expanded feature set that supports multicluster visibility, single sign-on (SSO), and automatic recommendations.
The Kubecost solution leverages Prometheus, the existing metric collection solution in Kubernetes, which makes this an easy deployment for any Kubernetes cluster. While Prometheus is typically used for short-term storage, Kubecost provides integrations and guides on implementing long-term storage, a critical capability for teams that wish to generate container recommendations throughout a longer period of time.
Additionally, Kubecost integrates directly into public cloud providers to determine the types and costs of the underlying cluster nodes, enabling it to generate efficiency recommendations at the cluster level. Ultimately, it’s the node that you pay for, and reducing the overall node footprint directly results in cost savings.
Kubecost also supports integrating external cost discount structures for enterprises that may have negotiated different pricing structures with their cloud provider. As recommendations are generated for specific workloads, this data is available through the Kubecost API, which will be important for many organizations.
While users can apply the recommendations directly to the running workloads automatically (through a specific Kubecost controller), it’s more likely that organizations will want to leverage the API data within their delivery pipelines. Kubecost has demonstrated this use case with a variety of CI/CD tools, and the idea can be extended easily to any delivery pipeline by simply adding a step that requests the configuration parameters from the Kubecost API.
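A pipeline step of that kind can be a few lines of scripting. The sketch below is a minimal illustration, assuming the in-cluster service address and a request-sizing endpoint; the exact route and response shape vary by Kubecost version, so treat both as assumptions and consult the API documentation for your release:

```python
import requests

# Hypothetical in-cluster address of the Kubecost service.
KUBECOST = "http://kubecost-cost-analyzer.kubecost:9090"

# Ask Kubecost for container request-sizing recommendations over the last
# seven days (endpoint path and parameters are illustrative assumptions).
resp = requests.get(
    f"{KUBECOST}/model/savings/requestSizing",
    params={"window": "7d"},
    timeout=30,
)
resp.raise_for_status()

# A real pipeline would merge each recommendation into the workload
# manifest before applying it, rather than printing it.
for recommendation in resp.json():
    print(recommendation)
```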
Kubecost (in either the commercial or open-core version) can be deployed in a self-hosted model through a simple Helm installation, and a SaaS offering, Kubecost Cloud, is now available.
Strengths
Kubecost scored high on the cost business criterion. It has a simple and transparent pricing structure that allows for incremental demonstration of value.
Kubecost performs relatively well on the autoscaling support key feature, using both CA and VPA. It has the ability to manage cluster spin-downs as an interesting alternative to using HPA and autoscaling groups.
Challenges
Although the API supports integration with virtually any infrastructure provisioning tool, DevOps/ITSM tool, or monitoring tool (and the knowledge base includes some clear examples to serve as patterns), most of these integrations will need to be implemented manually. For many teams, this is not a significant hurdle, but it may limit the appeal to smaller organizations that either lack the expertise or have it tied up with other tasks.
Perhaps due to the stronger focus on FinOps, Kubecost does not yet support the emerging features we have identified and is generally limited in its ability to reach into the application layer.
Purchase Considerations
For organizations looking for support and advanced business features, commercial Kubecost is priced in bands based on the number of nodes. Kubecost will average the number of used nodes over a given month to account for variability with autoscaling clusters, thereby enabling the licensing to grow with the organization’s needs. Although the open-core version would be insufficient for many enterprise users, it does provide a relatively risk-free path for evaluating the tool in those environments.
Organizations of any size that feel they have a fairly good handle on the performance and configuration of their Kubernetes clusters but want better ongoing visibility into their Kubernetes-associated costs should give Kubecost careful consideration. The open-core version enables teams within medium or large companies to champion investment in this technology by demonstrating real opportunities for cost savings.
Radar Chart Overview
Kubecost is firmly rooted in the FinOps aspects of Kubernetes resource management. With its recent SaaS offering, the company appears focused on improving along the lines of its core competencies—which align precisely with what many in this market are seeking. Kubecost clusters with other tools focused on automatic cost optimization and is positioned in the Innovation/Platform Play quadrant on the Radar.
Spot by NetApp
Solution Overview
Spot by NetApp is a cloud management portfolio that includes Ocean, a managed infrastructure service for Kubernetes that automatically adjusts infrastructure capacity and size to meet the needs of all pods, containers, and applications. It is designed to automatically scale compute resources to maximize utilization and availability through the optimal blend of spot, reserved, and on-demand compute instances.
In addition to its capabilities in adjusting infrastructure capacity and size, Ocean also plays a crucial role in automating DevOps work in front of Kubernetes. By leveraging its intelligent automation features, Ocean streamlines various DevOps tasks such as application deployment, scaling, monitoring, and managing the lifecycle of Kubernetes clusters. This automation significantly reduces the manual effort required from DevOps teams, allowing them to focus on higher-value activities.
Spot is a comprehensive portfolio of solutions for managing cloud resources, but Ocean can work as a standalone solution if most of your workload is containerized.
Ocean provides the ability to create new, fully managed clusters from within the solution, and can deploy the Ocean controller into existing clusters to collect metrics and perform management tasks as instructed by the SaaS control plane. It can also automatically deploy and configure updated versions of platform Kubernetes components based on rules that you set. Users can leverage the Spot Insights solution to determine the types of savings they might achieve before migrating workloads to the managed service, though these details are primarily reports of overall savings without the degree of workload-level optimization that may be desired in the analysis phase. That said, Ocean provides right-size reporting for its managed workloads, with a primary focus on making these adjustments easy to understand and implement.
Ocean sends notifications through the API and Prometheus but is oriented toward an autonomous system that simply takes action.
Strengths
The AI/ML that Spot uses to generate recommendations and balance resources is advanced and effective. NetApp has deep expertise in AI/ML techniques, and Ocean allows users to easily benefit from these state-of-the-art methodologies. The engine genuinely learns over time and trains its models on aggregated, anonymized data, so all users benefit from the increasingly powerful models.
Spot offers good integration with infrastructure provisioning tools through its API and various modules, including Ansible, CloudFormation, Terraform, kOps, eksctl, and Crossplane. Integration with DevOps and ITSM tools is enabled through Spot Connect, which links quickly to GitHub, Jenkins, Jira, OpsGenie, PagerDuty, and Slack. Other integrations are enabled through the API or webhooks.
Ocean sources performance data through Prometheus and can be integrated with other APM tools, such as Instana and Datadog, through Spot Connect. Ocean is designed to broker such data through Prometheus, so direct APM integration is less of a priority than it is for some competing platforms.
Spot is one of only a few tools that focuses on tight management of the autoscaler, offering robust infrastructure-layer support for responding quickly to downscale events, reverting to spot instances, utilizing reserved instances and savings plans, autohealing, predictive rebalancing, handling spot interruptions, and rolling clusters. It can also perform network cost analysis that accounts for network input/output (I/O) within and between clusters, a cost that is often considerable but hidden.
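To illustrate the scale of that hidden cost, the following sketch estimates a monthly bill from a handful of traffic flows. The per-GB rates and volumes are placeholder assumptions, not actual cloud or Spot pricing.

```python
# Illustrative sketch of cross-zone network cost estimation; the rates
# below are placeholder assumptions, not real cloud pricing.

INTRA_ZONE_PER_GB = 0.00  # same-zone traffic is typically free
CROSS_ZONE_PER_GB = 0.02  # assumed combined rate for cross-zone I/O

def monthly_network_cost(flows: list[tuple[str, str, float]]) -> float:
    """Sum the cost of (src_zone, dst_zone, gb_per_month) flows."""
    total = 0.0
    for src, dst, gb in flows:
        rate = INTRA_ZONE_PER_GB if src == dst else CROSS_ZONE_PER_GB
        total += gb * rate
    return total

flows = [
    ("us-east-1a", "us-east-1a", 5000.0),  # pod-to-pod, same zone
    ("us-east-1a", "us-east-1b", 3000.0),  # replica chatter across zones
]
print(f"${monthly_network_cost(flows):,.2f}/month")  # $60.00/month
```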
Challenges
While Spot is excellent at right-sizing workloads to balance cost and performance, it is less focused on other optimization goals, such as meeting service-level objectives (SLOs) or other objective-driven optimizations, and it lacks more general forecasting and capacity planning capabilities. Its AI is certainly capable of managing changing conditions effectively and safely, but the tool is not intended for passive analysis tasks.
There is also very limited support for informing autoscalers with application context, so the emerging features of event-driven scaling and custom metric support can be only partially enabled through the API. This limited reach into the application layer is consistent with Spot’s overall approach: automatic management of the infrastructure layer, with a primary focus on balancing utilization and cost. That focus is, after all, what made Spot famous, and it is precisely what many users are seeking.
Purchase Considerations
The Spot portfolio of solutions can be used as a complete platform for managing cloud resources and will be most valuable to users who have a variety of workloads running primarily in public cloud environments.
Large enterprises and medium-sized businesses with significant containerized workloads in public cloud environments can use Spot to ensure they are fully and efficiently utilizing the resources available to them. The result is cost savings and ongoing assurance that cloud investments continue to deliver value and performance.
Radar Chart Overview
Spot is a widely recognized leader in cloud resource management, and Ocean is a capable and rapidly improving offering for Kubernetes resource management. The competition provided by CAST AI and Densify is sure to keep a spotlight on the continued relevance of container management in the broader cloud space. Due to NetApp’s proven track record and the broad capabilities of the suite, it is positioned in the Maturity/Platform Play quadrant on the Radar.
StormForge
Solution Overview
StormForge Optimize Live is a SaaS platform that focuses on autonomous rightsizing of Kubernetes workloads running in production, using observability data and ML to derive resource configuration recommendations.
The company’s approach emphasizes bidirectional scalability. Vertical autoscalers such as the Kubernetes Vertical Pod Autoscaler (VPA) offer additional optimization opportunities but often conflict with Kubernetes’ built-in horizontal pod autoscaler (HPA). Optimize Live harmonizes requests and limits with the standard HPA target utilization, achieving vertical rightsizing without disrupting horizontal scaling behavior.
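The conflict is easiest to see in the HPA’s replica calculation, in which utilization is measured against the pod’s CPU request. The sketch below applies Kubernetes’ documented scaling formula to show why lowering requests without adjusting the HPA target forces unwanted scale-out; it illustrates the general mechanics only, not StormForge’s proprietary method.

```python
import math

# Kubernetes' HPA formula: desired = ceil(current * utilization / target),
# where utilization is measured against the pod's CPU *request*.
def desired_replicas(current: int, usage_m: float, request_m: float,
                     target_pct: float) -> int:
    utilization_pct = 100 * usage_m / request_m
    return math.ceil(current * utilization_pct / target_pct)

# 10 pods, each using 300m against a 1000m request, HPA target 60%:
print(desired_replicas(10, 300, 1000, 60))  # 5 -> HPA scales in

# Rightsize the request to 750m without touching the target, and the
# higher measured utilization forces scale-out, canceling the savings:
print(desired_replicas(10, 300, 750, 60))   # 7

# Harmonized: raising the target to preserve the absolute threshold
# (600m in both cases) keeps horizontal behavior unchanged.
print(desired_replicas(10, 300, 750, 80))   # 5
```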
StormForge samples resource utilization data every 15 seconds and continuously trains its ML models on observed demand patterns. Within hours, it can optimize the resources of any given workload.
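StormForge’s model itself is proprietary, but a percentile-plus-headroom calculation is a common baseline for turning fine-grained samples into a request recommendation and conveys the general idea; the percentile and headroom values below are assumptions.

```python
import random

# A common baseline approach (not StormForge's actual model): pad a high
# percentile of observed usage to produce a CPU request recommendation.
def recommend_request_m(samples_m: list[float], pct: float = 0.95,
                        headroom: float = 1.15) -> float:
    """Pad the pct-th percentile of observed CPU usage (millicores)."""
    ranked = sorted(samples_m)
    idx = min(int(pct * len(ranked)), len(ranked) - 1)
    return ranked[idx] * headroom

# One hour of 15-second samples (240 points) from a bursty workload:
samples = [abs(random.gauss(250, 60)) for _ in range(240)]
print(f"recommended CPU request: {recommend_request_m(samples):.0f}m")
```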
The recommendations it generates show the expected improvements and clearly communicate the associated cost savings. Users can apply recommendations manually for a more conservative rollout, but the tool is designed to apply them automatically, continuously, and safely. The goal is easy cluster onboarding followed by hands-off, autonomous workload management.
Where the need arises, users can fine-tune certain workloads by establishing optimization goals, specifying resizing frequency, and parameterizing requests and limits.
The UI is clean, clear, and easy to understand. It is designed to optimize workloads reliably and stay out of the way so users can focus on other things, while still surfacing the information they will find most valuable.
Strengths
Optimize Live scored high on the usability business criterion. As organizations have grown their Kubernetes footprints, many have been surprised by the unanticipated effort required to manage these workloads cost-efficiently. Every aspect of Optimize Live’s design, from the interface to the processes to the dashboards and reports, is clearly intended to remove this burden.
The solution also scored high on the objective-driven optimization and custom metric support emerging features. These capabilities are supported by Optimize Pro, which was described in more detail in last year’s report. StormForge is continuing to bring these specialized (and less frequently needed) capabilities into the Optimize Live platform.
Challenges
StormForge didn’t score well on integration with infrastructure provisioning tools and DevOps planning/ITSM tools. The solution includes an API fully capable of achieving these integrations, and it now offers direct support for GitOps. The tool can also export recommendations directly to Helm or generate YAML patches, but the default approach is to provision changes through the custom agent. Many SMBs will find this appealing because it renders additional integration unnecessary, but larger enterprises with policies on managed pipelines and change control will have some extra work to do relative to many of the other solutions we evaluated.
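For teams that do route changes through a pipeline, an exported recommendation could take a form like the hypothetical sketch below. The patch structure follows standard Kubernetes conventions, but the container name and resource values are illustrative only.

```python
import json

# Hypothetical sketch: build a strategic-merge patch from a recommendation.
# The container name and resource values are illustrative, not real output.
recommendation = {"cpu": "350m", "memory": "512Mi"}

patch = {
    "spec": {"template": {"spec": {"containers": [{
        "name": "api",  # hypothetical container name
        "resources": {
            "requests": dict(recommendation),
            "limits": {"cpu": "700m", "memory": "1Gi"},
        },
    }]}}},
}

# A pipeline could commit this for GitOps review, or apply it with:
#   kubectl patch deployment api --patch-file patch.json
print(json.dumps(patch, indent=2))
```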
Purchase Considerations
Optimize Live is licensed by vCPU, with per-unit cost decreasing at scale. Professional services are not required but are often used. Starter packs and enterprise license agreements (ELAs) are available. The platform is delivered as a SaaS solution, with a lightweight agent installed in customers’ Kubernetes clusters.
Organizations of any size with an urgent need to cut waste and expense while ensuring good performance of their Kubernetes workloads can quickly meet that need with Optimize Live. It will especially appeal to those that lack the expertise or spare capacity to take on the burden of Kubernetes management themselves.
Radar Chart Overview
StormForge has spent much of the last year consolidating its product offering into a cohesive SaaS platform and will continue to build out the basics of its feature set. It is positioned in the Innovation/Platform Play quadrant and is expected to continue evolving as a cost-management solution.
6. Analyst’s Outlook
The effective management of Kubernetes resources is an ongoing struggle for many organizations. In some circumstances there are performance challenges, but in almost all cases there is a sense that the costs are higher than necessary. Ratcheting back provisioned resources is low-hanging fruit for budget savings, but “leaving well enough alone” also functions as a safety net against the unknown. These tools counter that unknown, ensure services are always ready to meet their performance objectives, and clearly communicate to stakeholders that costs are being tightly controlled.
Greenfield development may ebb or flow toward alternative software paradigms, such as cloud-native platforms or something else entirely, but for the foreseeable future, the complex problem of optimizing Kubernetes resources must be tackled. For large footprints, a software solution that uses advanced methodologies like AI/ML to continuously balance resources by factoring many thousands of data points in a tight loop becomes more of a necessity than a luxury.
At that point, the remaining question is one of approach. As discussed earlier, the most interesting differentiator in this market space is to what extent the solution attempts to shift Kubernetes resource management left into a proactive continuous delivery of manageable workloads, rather than implementing capable management of workloads as a black box. This does not imply that solutions that do the latter are reactive or will be less effective in the long run.
Organizations engaged in active development of software intended (or otherwise destined) to be containerized, and that have mature continuous delivery capabilities, should consider the long-term advantages of a solution that reaches deeper into the application layer of the stack and affords greater capabilities at the expense of more up-front effort.
Many organizations have an architectural roadmap favoring new deployments in PaaS, or a migration path that is otherwise inconsistent with an expanding focus on Kubernetes. However, they also have a large footprint of Kubernetes workloads supporting critical software. This situation describes many large enterprises and government agencies. These organizations will be better served by platforms designed to help operations teams optimize resources and achieve cost savings within their current environment, rather than attempting to account for both current and future needs.
Many organizations are generally satisfied with how their Kubernetes systems are designed and operating but know (or suspect) they could be saving money by reconfiguring over-provisioned resources. They need a tool that is easy to start with and easy to use, one that provides continual cost savings without breaking anything or getting in anyone’s way. The solutions in the lower right of the Radar are catering to this market segment.
Over the next year, we can expect all of these solutions to continue to build out the capabilities of their SaaS offerings and to deepen feature sets that enhance the effectiveness of their recommendation engines. All of the solutions rely on effectively identifying optimization opportunities and automatically applying them to achieve cost savings. Cost optimization will continue to be the biggest market driver for adoption of these solutions.
To learn about related topics in this space, check out the following GigaOm Radar reports:
- GigaOm Radar for Cloud Resource Optimization
- GigaOm Radar for Cloud Management Platforms
- GigaOm Radar for Cloud FinOps
7. Methodology
*Vendors marked with an asterisk did not participate in our research process for the Radar report, and their capsules and scoring were compiled via desk research.
For more information about our research process for Key Criteria and Radar reports, please visit our Methodology.
8. About Matt Jallo
Matt has over twenty years of professional experience in information technology as a computer programmer, software architect, and leader. An expert in cloud, infrastructure, and management, as well as DevOps, he has served as an Enterprise Architect at American Airlines, established disaster recovery systems for critical national infrastructure, overseen integration during the merger of US Airways with American Airlines, and, as an engineer at GoDaddy.com, developed web-based applications to help small businesses succeed in e-commerce. He can help improve enterprise software delivery, optimize systems for large scale, modernize or integrate software systems and infrastructure, and provide disaster recovery.
9. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
10. Copyright
© Knowingly, Inc. 2024 "GigaOm Radar for Kubernetes Resource Management" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.