Enterprise application developers have been using microservices approaches for several years now. What started with low-risk, non-critical exploratory applications at a small scale has now moved to large-scale adoption in business-critical applications. Similarly, Kubernetes has moved from smaller proof-of-concept implementations to critical infrastructure supporting these microservices-based applications.
Yet, Kubernetes remains a complex platform to operate and one that is continuously changing. Updates are relatively frequent, providing bug fixes and new features, though the pace has slowed somewhat as Kubernetes matures as a platform. For IT organizations accustomed to the relative stability and slower pace of change of more established platforms, this complexity and rate of change presents significant challenges. Keeping up-to-date with the latest version of Kubernetes, including security patches, is a significant operations burden on its own. When the substantial project ecosystem surrounding Kubernetes is included in updates, the operational challenge can overwhelm otherwise capable IT teams.
The reality is that most organizations are not prepared to work at this pace for a sustained period of time. Managing existing workload demands is already a challenge without the additional burden of learning new skills. Organizations are therefore faced with a difficult choice: reject Kubernetes completely and compromise developers’ desire to use microservices patterns (likely a career-limiting move), muddle through on their own as best they can with what they have, or look for outside assistance.
A managed Kubernetes service is an attractive way to shift the operational burden of maintaining Kubernetes and its ecosystem away from the internal IT team. Managed services are a known quantity commonly used in other areas of enterprises today.
There are multiple managed Kubernetes options to choose from, and the choice is made easier by the Cloud Native Computing Foundation’s early move to define a standard for Kubernetes interoperability. This standard helped to reduce the risk of Kubernetes splintering into multiple competing and incompatible variants, as in the early Unix market and later with Linux distributions. The core of Kubernetes is the same on all standard-compliant options, which differentiate themselves on value-added features and functions.
This GigaOm Radar report highlights key managed Kubernetes vendors and equips IT decision-makers with the information needed to select the best fit for their business and use case requirements. In the corresponding GigaOm report “Key Criteria for Evaluating Managed Kubernetes Solutions,” we describe in more detail the key features and metrics that are used to evaluate vendors in this market.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
2. Market Categories and Deployment Types
To better understand the market and vendor positioning (Table 1), we assess how well managed Kubernetes solutions are positioned to serve specific market segments and deployment models.
For this report, we recognize the following market segments:
- Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises, where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
- Large enterprise: Here offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category will have a strong focus on flexibility, performance, data services, and features to improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.
- Specialized: Optimal solutions will be designed for specific workloads and use cases, such as big data analytics and high-performance computing (HPC).
In addition, we recognize two deployment models for solutions in this report:
- Cloud only: Available only in the cloud. These services are often designed, deployed, and managed by the service provider and are available only from that specific provider. The big advantages of this type of solution are its simplicity and its integration with other services offered by the cloud provider (serverless functions, for example).
- Hybrid and multicloud: These solutions are meant to be installed both on-premises and in the cloud, allowing customers to build hybrid or multicloud infrastructures. Integration with any single cloud provider may be more limited than with cloud-only services, and these solutions can be more complex to deploy and manage. On the other hand, they are more flexible, and the user usually has more control over the entire stack, including resource allocation and tuning.
Table 1. Vendor Positioning
| | SMB | Large Enterprise | Specialized | Cloud Only | Hybrid & Multicloud |
|---|---|---|---|---|---|
| Mirantis Container Cloud | | | | | |
| Rafay Kubernetes Operations Platform | | | | | |
| Red Hat OpenShift | | | | | |

Key: **Exceptional**: Outstanding focus and execution. **Capable**: Good but with room for improvement. **Limited**: Lacking in execution and use cases. **Not applicable or absent.**
3. Key Criteria Comparison
Building on the findings from the GigaOm report, “Key Criteria for Evaluating Managed Kubernetes Solutions,” Table 2 summarizes how each vendor included in this research performs in the areas we consider differentiating and critical in this sector. Table 3 follows this summary with insight into each product’s evaluation metrics—the top-line characteristics that define the impact each will have on the organization.
The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the market landscape, and gauge the potential impact on the business.
Table 2. Key Criteria Comparison
| | Hybrid & Multicloud Support | Multizone Deployment | Application Lifecycle Management (ALM) | Security | Interoperability | Pricing Model |
|---|---|---|---|---|---|---|
| Mirantis Container Cloud | | | | | | |
| Rafay Kubernetes Operations Platform | | | | | | |
| Red Hat OpenShift | | | | | | |

Key: **Exceptional**: Outstanding focus and execution. **Capable**: Good but with room for improvement. **Limited**: Lacking in execution and use cases. **Not applicable or absent.**
Table 3. Evaluation Metrics Comparison
| | Architecture | Flexibility | Scalability | Manageability & Ease of Use | Ecosystem | Cost Optimization |
|---|---|---|---|---|---|---|
| Mirantis Container Cloud | | | | | | |
| Rafay Kubernetes Operations Platform | | | | | | |
| Red Hat OpenShift | | | | | | |

Key: **Exceptional**: Outstanding focus and execution. **Capable**: Good but with room for improvement. **Limited**: Lacking in execution and use cases. **Not applicable or absent.**
By combining the information provided in the tables above, the reader can develop a clear understanding of the technical solutions available in the market.
4. GigaOm Radar
This report synthesizes the analysis of key criteria and their impact on evaluation metrics to inform the GigaOm Radar graphic in Figure 1. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and feature sets.
The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation, and Feature Play versus Platform Play—while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.
Figure 1. GigaOm Radar for Managed Kubernetes
As you can see in the Radar chart in Figure 1, there are three clusters of solutions. The first grouping is in the Leaders circle on the Platform Play side; this cluster contains both more established players—including the three largest public cloud providers: Amazon Web Services (AWS), Google, and Microsoft—and more focused startup competitors. The more established players are moving from their existing markets to embrace Kubernetes as a market adjacency, bringing their existing platform breadth, but also complexity, to Kubernetes. The more focused offerings from Rafay and Platform9 provide similar outcomes for managed Kubernetes but without closely tying customers to a particular cloud provider’s approach. The relative size of this cluster reflects the point at which most customers start to need Kubernetes’ capabilities: Kubernetes is best suited to significant scale.
The next grouping is in the Challengers circle on the Feature Play side; these are vendors that have specialized in various ways. Mirantis is more focused on feature and function, particularly security, while Alibaba has a more geographical focus. Oracle is competing on price/performance with its cloud offering and its strength in database infrastructure, which it brings to managed Kubernetes. IBM’s managed Kubernetes—distinct from its Red Hat branded offerings—is also an extension of its specialized cloud approach.
We also have an outlier cluster comprising DigitalOcean and Linode. These options are designed to appeal to the SMB, hobbyist, and startup markets with a limited but more affordable approach. They perform an important on-ramp role in the overall market, particularly for customers just crossing into the scale where Kubernetes starts to make sense.
The market structure provides good options for customers at various stages of maturity and complexity, indicating the growing maturity of managed Kubernetes as a category. We expect to see these groupings become even more clearly defined over time, though hopefully with occasional outbursts of differentiation to shake things up.
Inside the GigaOm Radar
The GigaOm Radar weighs each vendor’s execution, roadmap, and ability to innovate to plot solutions along two axes, each set as opposing pairs. On the Y axis, Maturity recognizes solution stability, strength of ecosystem, and a conservative stance, while Innovation highlights technical innovation and a more aggressive approach. On the X axis, Feature Play connotes a narrow focus on niche or cutting-edge functionality, while Platform Play displays a broader platform focus and commitment to a comprehensive feature set.
The closer to center a solution sits, the better its execution and value, with top performers occupying the inner Leaders circle. The centermost circle is almost always empty, reserved for highly mature and consolidated markets that lack space for further innovation.
The GigaOm Radar offers a forward-looking assessment, plotting the current and projected position of each solution over a 12- to 18-month window. Arrows indicate travel based on strategy and pace of innovation, with vendors designated as Forward Movers, Fast Movers, or Outperformers based on their rate of progression.
Note that the Radar excludes vendor market share as a metric. The focus is on forward-looking analysis that emphasizes the value of innovation and differentiation over incumbent market position.
5. Vendor Insights
Alibaba Cloud Container Service for Kubernetes (ACK) is similar to offerings from other major cloud providers. Alibaba has a strong focus on the Asia Pacific region, and this is where it has the largest number of data centers available.
ACK offers both managed Kubernetes clusters and serverless clusters. The latter provides the ability to launch applications without creating or managing nodes at all.
Managed Kubernetes clusters come in two varieties: Standard, for which you are charged by the number of worker nodes and other infrastructure resources, and Professional, for which you are charged either by subscription or number of clusters. Serverless Kubernetes clusters are billed based on resource usage and duration of execution. Alibaba offers a wide range of node types and supports graphics processing unit (GPU) instances as well as both Windows and Linux node pools.
Both cloud and on-premises resources can be centrally managed in the Container Service console. Other deployment models are available, including edge cluster management, which unifies management across cloud, on-site data centers, and remote office deployments.
With ACK One, Alibaba provides a distributed cloud container platform, allowing customers to manage cloud-native applications in hybrid, multicluster, distributed, or disaster recovery scenarios.
Kubernetes clusters on Alibaba Cloud offer a comprehensive set of role-based access controls (RBACs) and integrations into its own cloud resource access management (RAM) systems and can be extended with OpenLDAP for enterprise single sign-on (SSO).
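Whatever the identity backend (RAM, OpenLDAP, or another SSO source), grants inside the cluster are ultimately expressed through standard Kubernetes RBAC objects. The sketch below, a minimal illustration with hypothetical user and namespace names, builds a RoleBinding manifest as a plain Python dictionary; the exact mapping from Alibaba RAM identities to Kubernetes subjects may differ and should be verified against Alibaba's documentation.

```python
import json

def role_binding(name, namespace, user, role):
    """Build a standard Kubernetes RoleBinding manifest as a dict.

    `user` would typically map to a cloud identity (for example, a
    RAM user); all names here are hypothetical placeholders.
    """
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": name, "namespace": namespace},
        "subjects": [{"kind": "User", "name": user,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": role,
                    "apiGroup": "rbac.authorization.k8s.io"},
    }

manifest = role_binding("dev-read", "staging", "ram-user-example", "view")
print(json.dumps(manifest, indent=2))
```

Because the manifest is ordinary Kubernetes RBAC, the same grant pattern works unchanged across any conformant managed service.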
Protection for internal Kubernetes application programming interface (API) endpoints can be achieved with network access control lists, and external access can be provided by assigning an Elastic IP address to the cluster. However, restrictions on the locations from which the cluster can be accessed are not available.
Strengths: Alibaba Cloud Container Service for Kubernetes is a strong solution with both hybrid and multicloud solutions, as well as integrations into other Alibaba Cloud services. For businesses operating within the covered locations for Alibaba Cloud, this is a competitive solution.
Challenges: Alibaba Cloud is still working to gain presence in the wider global markets and may not offer sufficient data centers or services in the regions or locations that larger global enterprises require. Geopolitical tensions are adding to this challenge.
Amazon Elastic Kubernetes Service (EKS) provides a wide range of options for deploying Kubernetes within AWS. Multiple options for hybrid cloud and on-premises deployments are also available with EKS Anywhere and Outposts. Fargate helps customers simplify infrastructure management further, taking advantage of Kubernetes without the infrastructure complexity. Amazon also provides and maintains its own Kubernetes distribution, EKS Distro.
EKS supports multiple operating systems (OSs), allowing customers to create worker pools for Windows and Linux. AWS also offers support for its own Graviton processors based on advanced RISC machine (ARM) CPU architecture. EKS supports GPU instances and provides optimized deep learning container environments for artificial intelligence and machine learning (AI/ML) use cases as well.
Standards-conformant Kubernetes clusters can be connected with Amazon EKS Connector to provide visibility across a fleet of clusters. Connecting a cluster allows customers to see status, configuration, and workloads for that cluster within the Amazon EKS console. However, management of these clusters isn’t included at this time.
With coverage in multiple regions worldwide and support for both Outposts and EKS Anywhere, the overall EKS deployment options are broad and far reaching, allowing customers to embrace both hybrid and multicloud architectures. EKS Anywhere deployments tend not to provide the same levels of integration as native services within any given cloud provider, making the ability to centrally manage clusters from the EKS console of greatest benefit to customers with an existing investment in AWS as their primary cloud provider.
EKS provides a good level of security across the infrastructure by allowing API server communications to be limited within a virtual private cloud (VPC) and to IP addresses permitted to access it. Additional network policies can be added with tools such as Project Calico. Patching and updates of the Kubernetes system, as well as underlying node OSs, are handled in a rolling manner ensuring no degradation of service during the process.
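As a rough sketch of how that endpoint restriction is expressed, the snippet below assembles the VPC endpoint-access settings that EKS accepts when updating a cluster. The field names follow the EKS UpdateClusterConfig API (`resourcesVpcConfig`, `publicAccessCidrs`), but this is an illustrative assumption to verify against current AWS documentation; the CIDR value is a placeholder.

```python
import json

def endpoint_access_config(private=True, public=True, allowed_cidrs=None):
    """Sketch of EKS cluster endpoint-access settings.

    Field names mirror the EKS UpdateClusterConfig API; confirm
    against current AWS documentation before use.
    """
    return {
        "resourcesVpcConfig": {
            "endpointPrivateAccess": private,
            "endpointPublicAccess": public,
            # Public API-server access is limited to these CIDR blocks.
            "publicAccessCidrs": allowed_cidrs or ["0.0.0.0/0"],
        }
    }

cfg = endpoint_access_config(public=True, allowed_cidrs=["203.0.113.0/24"])
print(json.dumps(cfg, indent=2))
```

Combining private endpoint access with a narrow public CIDR list keeps the API server reachable for administrators while closing it to the open internet.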
Strengths: AWS has demonstrated considerable commitment to expanding its Kubernetes offerings, especially in the hybrid space, with the introduction of EKS Anywhere and EKS Connector. EKS provides support for Graviton instances, offering an alternative architecture type and a competitive price/performance profile.
Challenges: AWS Outposts may prove challenging to justify for organizations with smaller budgets or less operational maturity. However, EKS Anywhere helps to address this gap, bringing VMware support for hybrid deployment. EKS Connector provides additional visibility into any Kubernetes cluster but currently lacks management features. EKS has historically lagged upstream Kubernetes, which frustrated customers looking for the latest features, but AWS has worked to address this and now supports Kubernetes v1.25.
DigitalOcean Kubernetes (DOKS) is a cloud-only Kubernetes service that allows customers to deploy clusters without the complexities of handling the control plane and infrastructure. DOKS provides a powerful yet easy-to-manage Kubernetes solution with native integration of DigitalOcean load balancers and block storage solutions.
Simple pricing starting at the very low end will appeal to smaller organizations that are just starting to look at containers and orchestration. DOKS is ideal for open source projects, individual developers, small businesses, and startups looking to move quickly.
Security within DOKS is similarly aimed at less complex environments and lacks features that more established businesses and enterprises may expect. For example, access to the Kubernetes API can’t be restricted by IP address. OS patching for worker nodes is applied during cluster upgrades, so enabling auto upgrades or regularly running these upgrades is important. Kubernetes RBAC is included and is secured via OAuth tokens or certificates, but additional identity management options are not available at this time.
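Given the importance of keeping auto-upgrades enabled, the sketch below shows roughly what a cluster-create request body for the DigitalOcean API (POST /v2/kubernetes/clusters) looks like with `auto_upgrade` turned on. Field names and the node-pool size slug are assumptions based on DigitalOcean's public API and should be checked against its current documentation; the cluster name is a placeholder.

```python
import json

# Hedged sketch of a DOKS cluster-create request body. Enabling
# auto_upgrade keeps worker-node OS patches flowing, since patching
# is applied during cluster upgrades.
cluster_request = {
    "name": "example-doks-cluster",   # hypothetical name
    "region": "nyc1",
    "version": "latest",
    "auto_upgrade": True,
    "node_pools": [
        {"size": "s-2vcpu-4gb", "name": "default-pool", "count": 3},
    ],
}
print(json.dumps(cluster_request, indent=2))
```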
High availability (HA) for DOKS clusters’ control plane is available as an additional paid option. HA clusters have replicated control plane components and can fail over to a redundant replica node, reducing downtime for management operations.
Continuous integration and continuous delivery/deployment (CI/CD) workflows are supported within the DOKS environment via integrations with GitHub Actions, allowing developers to use a push-to-deploy architecture for applications.
DOKS includes several basic and some more advanced metric visualizations to provide insight into the health of Kubernetes clusters and deployed applications.
Strengths: DigitalOcean is very well positioned for individual developers, startups, and small businesses that just want to deploy containerized applications without the complexities of managing Kubernetes infrastructure.
Challenges: This solution lacks a number of features that enterprises have come to expect from managed Kubernetes services, and there are no options for hybrid cloud or on-premises deployments.
Google Kubernetes Engine (GKE) was the first hosted Kubernetes service to reach the market and still holds its own among the other leading cloud providers. GKE provides a stable, scalable, easily automated solution for deployment of modern containerized applications. GKE tends to more closely follow upstream Kubernetes versions than other offerings.
Google continues to add to the solution with hybrid and multicloud options provided by Anthos. For serverless and application-focused deployments, Google offers Cloud Run for serverless containers and GKE Autopilot for serverless Kubernetes, abstracting the application from the infrastructure to simplify deployment and management.
Management of clusters is controlled by standard RBACs with Google offering additional integration with Active Directory and Keycloak for federation and SSO. Security updates and OS patching are handled manually or automatically in Autopilot-enabled clusters. Upgrades are performed on a rolling basis and, depending on the customer’s chosen architecture, either with no disruption for regional clusters or minimal management plane interruption for zone-based clusters.
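The automatic-upgrade behavior described above is typically driven by a release channel in the cluster specification. The sketch below shows the general shape of that setting in a GKE cluster spec (the GKE API exposes `releaseChannel` with RAPID/REGULAR/STABLE values); field names are assumptions to confirm against current Google Cloud documentation, and the cluster name and zones are placeholders.

```python
import json

# Hedged sketch of a GKE cluster spec fragment: the release channel
# governs automatic Kubernetes and node upgrades, and listing
# multiple zones yields the regional/multizone resilience noted above.
cluster_spec = {
    "name": "example-gke-cluster",             # hypothetical name
    "releaseChannel": {"channel": "REGULAR"},  # RAPID | REGULAR | STABLE
    "locations": ["us-central1-a", "us-central1-b"],
}
print(json.dumps(cluster_spec, indent=2))
```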
Google offers a wide range of deployment options. Within the cloud customers can deploy both Linux and Windows node pools, and GKE provides support for a number of Nvidia GPUs. On-premises deployments are supported on bare-metal and VMware. Anthos deployments to other major clouds are supported, along with the attachment of non-Anthos-managed clusters such as EKS/AKS.
GKE benefits from additional services provided by Google. Customers can create CI/CD pipelines on Google Cloud using several hosted products following the popular GitOps methodology. Additionally, popular offerings within the market such as Jenkins and VSTS are also supported.
Backup of application data and configuration can be included by enabling the Backup for GKE service, integrated into the GKE user interface (UI), APIs, and Cloud command-line interface (CLI).
Strengths: Google benefits from its deep knowledge of the Kubernetes project and ecosystem. GKE is a mature, managed Kubernetes offering and integrates well with a wide range of complementary services. Customers with leading-edge needs, particularly in data science and AI/ML, and robust experience with container-based development will be very much at home in GKE.
Challenges: Anthos is still relatively new and needs improvements to address the applicable use cases. GKE can prove challenging for more traditional enterprises that lack mature container-based development processes. However, GKE continues to invest heavily in its ecosystem and large enterprise support, particularly for specialized areas such as data science and AI/ML.
IBM Cloud Kubernetes Service (CKS) sits alongside Red Hat OpenShift Kubernetes options (discussed in the Red Hat section below), enabling existing IBM and Red Hat customers to take advantage of a larger service catalog and enjoy freedom of choice for their hybrid cloud infrastructures. In this context, IBM Cloud also offers a growing ecosystem of services and partners in an expanding number of regions worldwide.
IBM offers a wide range of pricing options to suit customers of all sizes, including a free tier designed to allow an exploration of its capabilities before committing to any spend. Multiple deployment options are available, including shared, dedicated, bare metal, and VPC, as well as private, on-premises installations. Multiple size tiers are available with competitive pricing that’s either hourly or monthly in the case of bare metal.
Access to IBM CKS clusters can be gained via public and/or private endpoints, each with its own options for securing access to the API endpoints. Using the private endpoint option provides greater security, enforcing access only from within subnets defined by the user and from networks connected to the private cloud network, including through an IBM Cloud VPC virtual private network (VPN) connection and WireGuard VPN.
Kubernetes cluster access and application deployment roles can be configured using RBAC. This role configuration is further expanded by the federation provided within the IBM Cloud ecosystem, allowing for SSO with corporate accounts.
IBM CKS offers both Windows and Linux containers as well as compute instances with NVIDIA GPUs. This flexibility helps enterprises migrate existing on-premises workloads to the service.
Strengths: IBM Cloud provides multiple options for customers transitioning from on-premises IT to hybrid cloud. The Kubernetes service is well integrated into the IBM Cloud experience and existing IBM customers will find adding a Kubernetes service straightforward.
Challenges: The IBM Cloud ecosystem is still limited compared to other major service providers. IBM’s rapid work to add services and capabilities adds complexity that some customers may find challenging to navigate, particularly when updating existing Kubernetes service deployments.
Linode Kubernetes Engine (LKE) is a cloud-only Kubernetes service available in a number of global regions. LKE provides an easy-to-manage Kubernetes solution that integrates with existing Linode storage solutions for persistent storage and load balancers for application availability.
Pricing is straightforward with CPU, RAM, storage, and network transfer bundled into a single subscription cost per instance type, with an HA control plane available as an add-on. LKE would appeal to customers just starting to look into containers and orchestration, and it’s ideal for open source projects, individual developers, small businesses, and startups looking to get to market quickly.
CI/CD workflows are supported within the LKE environment by using integrations with solutions from GitLab and others such as GitHub Actions, allowing developers to deploy applications easily by pushing code to a configured repository.
Security within LKE is suitable for the target market, though it lacks features that more established businesses and enterprises would expect, such as cloud firewall capabilities. For example, access to the Kubernetes API can’t be restricted to certain IP addresses or ranges. Kubernetes RBAC is included as is the ability to apply layers of security via certificates, and additional identity federation is provided via Google SSO.
LKE features highly available control planes: HA clusters have a replicated control plane, and components can fail over to a redundant replica node, resulting in reduced downtime for management operations. The HA feature comes at an additional cost and can be enabled either at the point of creation or by editing an existing cluster.
Strengths: Linode is well positioned for individual developers, open source projects, and small businesses or startups that want to get started with Kubernetes without managing physical infrastructure. It is focused on a small number of clear use cases.
Challenges: LKE lacks a number of the features that enterprises have come to expect from managed Kubernetes services. The number of available regions outside of the US is limited, and there are no options for hybrid or multicloud deployments.
Microsoft Azure Kubernetes Service (AKS) is aligned with services from other major cloud providers, but Azure also offers managed OpenShift (jointly operated with Red Hat) and Container Instances, a service to run containerized applications directly in Azure.
Additionally, with Azure Arc-enabled Kubernetes, customers can add and manage Kubernetes clusters running in other clouds or data center locations. Azure Arc supports a wide range of Kubernetes distributions, including Amazon EKS, Google GKE, Azure Stack HCI and Azure Stack Edge, Red Hat OpenShift Container Platform, SUSE Rancher, Nutanix, and Mirantis. Providing a wide range of deployment options and locations gives customers the choice of adopting hybrid and/or multicloud applications with relative ease.
Microsoft provides a broad range of integrations into its software development ecosystem, including VSCode and GitHub. Customers can consume AKS resources from Azure DevOps pipelines and GitHub Actions, and additional projects such as Azure Service Operator (for Kubernetes) provide integrations with other Azure services, such as database services.
AKS can directly manage the integration of Kubernetes into Azure Active Directory for RBAC. Microsoft provides a wide range of integrations for developers, making development of secure applications easier to achieve. For example, using pod-managed identities helps applications access connection strings and authentication details directly from secrets management systems like Azure Key Vault. Access to the managed environment can be limited in multiple ways, including authorized IP range controls or even deploying a fully private cluster with API access limited to a customer’s own virtual network.
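To illustrate the authorized-IP-range and private-cluster controls mentioned above, the fragment below sketches where those settings live in an AKS managed-cluster resource. The `apiServerAccessProfile` field follows the Azure ARM API for AKS, but treat the exact names as assumptions to verify against Azure documentation; the cluster name and CIDR are placeholders.

```python
import json

# Hedged sketch of an AKS managed-cluster resource fragment:
# authorizedIPRanges limits who can reach the API server, and
# enablePrivateCluster=True would remove public API access entirely.
managed_cluster = {
    "name": "example-aks-cluster",  # hypothetical name
    "properties": {
        "apiServerAccessProfile": {
            "authorizedIPRanges": ["198.51.100.0/24"],
            "enablePrivateCluster": False,
        }
    },
}
print(json.dumps(managed_cluster, indent=2))
```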
Patching of the Kubernetes infrastructure components is provided and runs in a rolling fashion to ensure uptime and availability during the process, with support for up to two previous GA minor versions. Automatic updates based on a number of upgrade channels can also be configured.
Strengths: AKS in combination with Arc and the wider Azure ecosystem provides a comprehensive enterprise Kubernetes experience. Microsoft’s offering makes cloud consumption easy for traditional enterprises and software developers using the Visual Studio suite.
Challenges: The enterprise appetite for Azure Stack hybrid deployments remains muted. Not all integrations are available in every location and customers will need to carefully evaluate if available functionality aligns with their preferred deployment architecture.
Mirantis Container Cloud
Mirantis Container Cloud provides customers with a platform to consume Kubernetes across a range of cloud and on-site deployment options, including AWS, Azure, Equinix Metal, bare-metal servers, and virtualized environments (VMware, OpenStack). The platform comes as a managed service with an on-site option for secure/dark site requirements. Based on Mirantis Kubernetes Engine (MKE, formerly Docker Enterprise), Container Cloud adds automation and orchestration to handle the deployment, management, upgrades, monitoring, and security of Kubernetes clusters across deployment locations.
Mirantis provides multiple operating models, and support can be provided via a co-pilot offering with the OpsCare package. For customers that want to go further, OpsCare+ provides Container Cloud as a fully managed service, including the full deployment and management of Kubernetes at scale. Pricing is based on consumption of assigned cores within virtualized environments or physical cores on bare-metal servers, allowing customers to start as small as they wish and grow as their requirements expand.
Container Cloud provides a full set of identity integration features allowing customers to plug in to an existing identity management system. MKE itself provides a robust set of RBAC features, including certificate-based authentication. The Mirantis container runtime is FIPS 140-2 compliant and NIST validated, providing options for environments that have the highest security requirements.
Upgrades within Container Cloud are comprehensive and cover the OS, MKE, Mirantis Container Runtime, and logging, monitoring, and alerting components where installed. Updates are provided in a rolling manner and include provisioning of new nodes where required and migration of existing applications, ensuring uptime is maintained throughout the process.
As with other managed Kubernetes providers, Mirantis integrates with a range of developer tools to facilitate application lifecycle management, such as Lens Autopilot for CI/CD lifecycle, Amazee Lagoon for Docker-based microservices, and Shipa for application management.
Strengths: Container Cloud provides a secure and well-supported Kubernetes option across multiple locations both on-premises and in the cloud. Mirantis provides strong offerings for high-assurance environments with its container runtime and secure registry, and it will be attractive to customers that place high value on robustness and security.
Challenges: Mirantis Kubernetes versions lag somewhat behind the mainstream Kubernetes releases. However, the added security and hardening provided may be more important for some customers than the latest features. Google Cloud is noticeably absent as a supported deployment location.
Oracle Container Engine for Kubernetes (OKE) is a light and clean Kubernetes service for customers using Oracle Cloud Infrastructure (OCI).
OKE has a straightforward UI and a range of integrations with the rest of Oracle’s ecosystem. Using the OCI Service Operator for Kubernetes, customers can integrate and manage additional OCI resources such as databases directly through the Kubernetes API. This integration allows customers to build a full application stack with a unified approach based on OCI services.
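As a rough illustration of the pattern the OCI Service Operator enables, an OCI resource can be declared as a Kubernetes custom resource and applied through the normal Kubernetes API. The group/version, kind, and every spec field below are illustrative assumptions, not a verified operator schema:

```python
import json

# Sketch of declaring an OCI database through the Kubernetes API, following
# the pattern the OCI Service Operator for Kubernetes enables.
# CAUTION: the apiVersion, kind, and all spec fields here are illustrative
# assumptions about the CRD shape, not a verified schema.
autonomous_db = {
    "apiVersion": "oci.oracle.com/v1beta1",   # assumed CRD group/version
    "kind": "AutonomousDatabases",            # assumed kind name
    "metadata": {"name": "orders-db", "namespace": "shop"},
    "spec": {
        "displayName": "orders-db",           # hypothetical field
        "dbWorkload": "OLTP",                 # hypothetical field
        "cpuCoreCount": 1,                    # hypothetical field
    },
}

# Rendered as JSON (which is valid YAML 1.2), this is the kind of manifest
# one would hand to `kubectl apply`:
manifest_text = json.dumps(autonomous_db, indent=2)
```

The point of the pattern is that the database's lifecycle is then reconciled by the operator alongside the application's other Kubernetes objects, rather than managed through a separate cloud console.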
Pricing is based on the consumption of the underlying OCI infrastructure. Only worker nodes are charged; control plane nodes are managed by Oracle, which automatically creates and maintains them across multiple availability domains and can upgrade them to new versions of Kubernetes without downtime. OKE is also adding support for virtual nodes, which will remove the need for customers to deal with node infrastructure at all and allow them to operate at a higher level of abstraction, though these are not yet available.
Oracle has introduced the Ampere A1 compute platform to its cloud and integrated its deployment within the Kubernetes service. The ability to deploy applications written for the ARM CPU architecture gives users greater flexibility and removes the need to deploy different application stacks in different locations based on the required architecture. GPU support is also available within OKE, providing a single Kubernetes cluster solution that can run traditional x86 workloads, GPU workloads such as AI/ML, and ARM workloads.
Oracle is also building a solid partner ecosystem around OCI. For example, Oracle Interconnect for Azure provides an option for customers seeking to use Oracle’s services in some areas (particularly databases) while also accessing the broader range of services offered by Azure.
OKE supports a mix of both identity and access management (IAM) and RBAC configurations to control access to the cluster and applications. OCI supports SSO with common providers such as Active Directory, giving customers the ability to rely on their existing authentication and authorization structures within the Oracle Cloud and Kubernetes environments. A secure container registry is available that supports image scanning, fingerprinting, and verification.
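The RBAC half of that mix uses standard Kubernetes objects. As a minimal sketch (the namespace, role name, and group name are illustrative; the group would be asserted by the SSO provider in the user's token), a read-only role bound to an identity-provider group looks like this:

```python
# Minimal Kubernetes RBAC example: a namespaced read-only Role bound to a
# group supplied by an external identity provider (e.g., Active Directory
# via SSO). All names here are illustrative.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "app-team"},
    "rules": [{
        "apiGroups": [""],                       # "" = the core API group
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],       # read-only access
    }],
}

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "app-team"},
    "subjects": [{
        "kind": "Group",
        "name": "app-team-readers",              # group name from the IdP
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "Role",
        "name": "pod-reader",                    # must match the Role above
        "apiGroup": "rbac.authorization.k8s.io",
    },
}
```

Because the subject is a group rather than individual users, access follows the existing directory structure: adding someone to the group in the identity provider grants them the role without any change to the cluster.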
Strengths: OKE is a solid option for Oracle customers looking to migrate applications to the cloud while remaining on Oracle platforms. OKE is attractively priced and may also appeal to customers looking to build Kubernetes and container-based application delivery skills. OKE provides options for both GPUs and ARM CPU architectures, allowing the deployment of specialist workloads in the cloud.
Challenges: OCI lacks features available from other major cloud providers. Hybrid and multicloud deployments require the use of third-party solutions. The limitations of OKE may prove too restrictive for larger enterprises with more comprehensive requirements.
Platform9 Managed Kubernetes
Platform9 Managed Kubernetes (PMK) provides a flexible and robust approach to managed Kubernetes, deftly balancing customer needs for features and functionality with stability and support. Using the latest open source technology, with the added benefit of expert support, Platform9 provides recommendations for tested and supported integrations and components throughout the stack.
Customers can import existing Kubernetes clusters, deploy management agents to existing bare-metal OS or VM servers, or deploy a prepackaged open virtual appliance (OVA) for common hypervisors. A wide range of hardware and acceleration options are supported, including GPUs for AI/ML or graphics-intensive workloads. Support for additional CPU architectures is not yet available, although the number of x86 options in the market is sufficient for most use cases.
Platform9 supports on-premises and cloud-based deployments, including all the major hyperscalers, with a full range of locations around the globe. Customers can take advantage of their preferred hybrid and/or multicloud options without risk of location or provider lock-in. Clusters can be removed from the platform with no impact on underlying applications.
Platform9 has consolidated pricing to a fixed platform fee per customer with an annual license cost per physical or virtual node and a 20% support fee. Additional expert services, such as the onboarding service, migration service, or cloud native transformation service, are charged separately.
Platform9 offers a robust set of security options across the board, including enterprise authentication integration and network policies using Project Calico and Wireguard for encryption on the wire. An on-premises, self-hosted option is available for dark sites where connectivity to the SaaS platform is limited or non-existent.
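Network policies of the kind Calico enforces are expressed through the standard Kubernetes API. A minimal default-deny-ingress policy for a namespace (the namespace name is illustrative) looks like this:

```python
# A standard networking.k8s.io/v1 NetworkPolicy that denies all ingress
# traffic to every pod in a namespace; a CNI plugin such as Calico is
# what actually enforces it. The namespace name is illustrative.
default_deny_ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "app-team"},
    "spec": {
        "podSelector": {},           # empty selector: applies to all pods
        "policyTypes": ["Ingress"],  # Ingress listed with no rules => deny all
    },
}
```

Teams typically apply a default-deny policy like this per namespace and then layer on narrower policies that allow only the specific traffic each application needs.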
Strengths: Platform9 is particularly strong on ease of use and lifecycle management, with broad support for customer choice of deployment location and feature set. Investment in support expertise removes the operational burden from customers who want to use Kubernetes without having to manage it. Platform9 is a good solution for customers who need more control of their infrastructure, data, and applications while still accessing the benefits of the cloud.
Challenges: Platform9’s approach is best suited to larger, more established enterprises and is unlikely to be attractive to SMBs, startups, or similar businesses. IAM options lag behind those of some other providers, and import support for some Kubernetes distributions, such as Rancher and OpenShift, is missing, though this is under active development and should be addressed soon.
Rafay Kubernetes Operations Platform
Rafay Kubernetes Operations Platform is a robust and secure option for large-scale management of Kubernetes clusters.
Rafay supports Kubernetes clusters deployed in major clouds, on-site data centers, and remote/edge locations, as well as Kubernetes services such as Amazon EKS and EKS-A, Google GKE, Azure AKS, Red Hat OpenShift, SUSE Rancher, and VMware Tanzu. It has broad support for a wide range of infrastructure options for customers that prefer to specify their own physical infrastructure.
Rafay Kubernetes Operations Platform has a centralized controller architecture, available either as a cloud-based SaaS or as a self-hosted deployment for disconnected or dark sites. Rafay also offers a fully managed Kubernetes service for customers that want to completely outsource Kubernetes management.
The company has strong multitenancy support, for both managed service provider (MSP) use cases and enterprises that take a federated approach to IT services. Lifecycle management can be performed with high assurance at scale without losing the flexibility to perform localized customizations. Global enterprises with specific geographic needs will find this particularly attractive.
Pricing is straightforward, licensed either per node or per cluster on an averaged monthly consumption basis, with a quarterly “true-up” process if steady-state consumption exceeds the licensed quantity. Customers can feel free to experiment without being charged for short-term spikes in demand. Enterprise support is available for an additional 20% fee. Rafay provides fine-grained visibility into costs, helping enterprises to more effectively manage the cost of their Kubernetes fleet.
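The averaging and true-up mechanics described above can be illustrated with invented numbers; the licensed quantity, node counts, per-node rate, and the exact billing formula below are all assumptions for illustration, not Rafay's actual terms:

```python
# Hypothetical illustration of averaged-consumption licensing with a
# quarterly true-up: short spikes above the licensed quantity incur no
# charge unless steady-state average consumption exceeds the license.
# The license size, rate, and billing formula are invented.
LICENSED_NODES = 50
RATE_PER_NODE = 100.0  # invented per-node monthly rate

def quarterly_true_up(monthly_node_counts: list[int]) -> float:
    """Charge only for average consumption above the licensed quantity."""
    avg = sum(monthly_node_counts) / len(monthly_node_counts)
    overage = max(0.0, avg - LICENSED_NODES)
    return overage * RATE_PER_NODE * len(monthly_node_counts)

# A one-month spike to 80 nodes still averages 50, within the license:
assert quarterly_true_up([40, 80, 30]) == 0.0
# Sustained overconsumption (average 60) triggers a true-up:
assert quarterly_true_up([60, 60, 60]) == 3000.0
```

This is why the model tolerates experimentation: a burst of test clusters in one month is absorbed by the average, while only a sustained increase in fleet size triggers additional charges.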
Strengths: Rafay Kubernetes Operations Platform provides excellent lifecycle management at scale and supports complex enterprise operational needs. Support for multiple technologies alongside Kubernetes assists customers looking to migrate applications over time without having to split their attention with a bi-modal approach.
Challenges: Rafay lacks the broad ecosystem of channel partners that many enterprises rely on to manage vendor relationships, though this is being actively addressed.
Red Hat OpenShift
Red Hat has updated its product line to align under the OpenShift brand for managed Kubernetes options. Partnerships with the major public cloud vendors produce offerings such as Azure Red Hat OpenShift, Red Hat OpenShift Service on AWS, and OpenShift Dedicated on Google Cloud. Red Hat also partners with HPE GreenLake, Dell APEX, and others for private managed OpenShift environments.
Kubernetes environments are built on Red Hat OpenShift Container Platform, which provides capabilities such as service mesh, CI/CD, and GitOps integrations, support for smaller 3-node deployments for edge locations, and a variety of automated operations capabilities. OpenShift Platform Plus adds multicluster security, compliance, and lifecycle management capabilities for large-scale operations. Red Hat also provides the entry-level OpenShift Kubernetes Engine, though this has been de-emphasized in recent years.
OpenShift can be deployed into AWS, Azure, IBM Cloud, and Google Cloud or purchased as a pre-integrated solution. Red Hat provides a fully automated installation experience deploying all the components required to run Kubernetes. Over-the-air smart updates are included, allowing customers to see what updates are available for their clusters. Updates can be deployed automatically along with any dependencies.
The Red Hat OpenShift Container Platform supplies a set of operations and developer services and tools that enable a more serverless-style approach to application deployment. This provides a cloud-like experience but still leaves IT operations teams responsible for managing and maintaining the environment.
For more traditional managed offerings, Red Hat has partnered with Azure, IBM Cloud, and AWS to provide fully managed Red Hat OpenShift. These services are purchased directly from the cloud providers and are jointly engineered and supported by Red Hat and the respective cloud engineering teams. This provides a consistent environment for customers already using OpenShift within existing self-managed environments.
Red Hat OpenShift supports multiple deployment models, allowing customers to choose between user-provisioned infrastructure and infrastructure provisioned by the automated installer. Support for Red Hat Enterprise Linux for Virtual Datacenters, Red Hat CoreOS, and Windows is included, providing some flexibility to customers.
Red Hat has a large portfolio of services, and overall integration across that portfolio is very good, with options for most enterprise needs when deploying cloud-native applications. For customers, however, this comes at the cost of understanding the portfolio and knowing how to integrate the parts effectively.
Strengths: Red Hat provides multiple offerings in this space, and there is a great deal of flexibility in deployment options because they are not tied to a particular location or cloud provider. Documentation and support are very good, and Red Hat has a huge amount of experience with the OpenShift platform.
Challenges: As a self-managed solution, OpenShift can present more components for customers to understand and maintain. OpenShift is an opinionated approach that best suits customers willing to adapt to the OpenShift way of doing things.
VMware Tanzu
VMware Tanzu is a collection of Kubernetes-focused products, each a modular component that customers can combine to create a tailored Kubernetes architecture.
VMware Tanzu Kubernetes Grid is the entry-level product that provides a standardized Kubernetes runtime across deployment locations—on-site, in the cloud, or at the edge. It is available as an add-on for vSphere or via several public cloud marketplaces and can be managed with regular VMware tools like vCenter.
Tanzu Mission Control adds a unified control plane for managing multiple Kubernetes clusters. It provides container networking, security and quota policies, cluster data protection, and platform monitoring. Tanzu for Kubernetes Operations extends the solution further to add Kubernetes ingress services, load-balancing, and service mesh capabilities.
Tanzu Mission Control is a cloud-based SaaS that can manage Kubernetes deployments in multiple public clouds as well as on-site deployments. The advanced version can be purchased separately for customers that want the cluster management capabilities but not the VMware Tanzu Kubernetes runtime.
Pricing is somewhat complex, with costs dependent on the specific set of Tanzu products chosen, any add-ons, and the underlying infrastructure when deploying into cloud locations. Customers must also assess their cluster configurations carefully: for example, Tanzu Mission Control supports only Tanzu Kubernetes Grid workload clusters with a minimum of four CPUs and 8 GB of memory, and clusters must have a minimum of two worker nodes to be added to Tanzu Observability.
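The minimums stated above lend themselves to a simple pre-flight check. The function and data shape below are invented for illustration, not a VMware tool:

```python
# Illustrative pre-flight check for the stated minimums: Tanzu Mission
# Control requires workload clusters with at least 4 CPUs and 8 GB of
# memory, and Tanzu Observability requires at least 2 worker nodes.
# The function name and return shape are invented for illustration.
def meets_tanzu_minimums(cpus: int, memory_gb: int, worker_nodes: int) -> dict:
    return {
        "mission_control": cpus >= 4 and memory_gb >= 8,
        "observability": worker_nodes >= 2,
    }

# A cluster exactly at the minimums qualifies for both:
assert meets_tanzu_minimums(4, 8, 2) == {
    "mission_control": True, "observability": True,
}
# An undersized cluster fails both checks:
assert meets_tanzu_minimums(2, 8, 1) == {
    "mission_control": False, "observability": False,
}
```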
Strengths: VMware enjoys a significant presence in the enterprise, and Tanzu may be an attractive option for customers looking to start adding Kubernetes to their existing virtual machine-based approach. VMware is working hard to provide a bridge to container-based application deployment for its more traditional VM-centric customers.
Challenges: Tanzu is a portfolio of products that can be difficult to navigate when mapping products to use cases. VMware is still digesting several acquisitions and working to unify the portfolio as a fully integrated whole. Its attempts to translate its traditional services into container-based options are sometimes less successful than customers expect.
6. Analyst’s Take
Kubernetes is a complex ecosystem, and enterprise customers need managed options to simplify architecture and operations, particularly at scale.
There is a constant tension between the enterprise desire for flexibility and options on the one hand and the need for consistency and ease of use on the other. Managed Kubernetes options are constantly reevaluating how to best meet these competing needs of their customers.
The leading options have focused on consistency of operations with maximum compatibility via open standards. Standard APIs and interfaces have enabled the Kubernetes ecosystem to grow as it has. Isolating oneself from the broader market on the strength of overconfident assertions about unique needs would be unwise for vendors and customers alike.
When evaluating managed Kubernetes options, customers should first clearly articulate their existing capabilities and the operational model they are looking to achieve. Some solutions are better suited to customers with advanced experience in container-based application development and deployment. Customers will also need to assess their own change velocity and compare it to that of their shortlisted vendors. Kubernetes (and managed Kubernetes options) are a moving target, so customers should select a vendor that will support their own desired pace of change. Moving more slowly but with greater success can be preferable to rapid changes that deliver little value in the end.
For customers that have a clear preference for a specific cloud vendor’s operational approach or set of services, aligning with that vendor’s managed Kubernetes option makes sense. Cloud vendors are extending their approach to support hybrid and multicloud deployment because customers require it, and open standards help ensure interoperability with the wider ecosystem. A truly wrong choice is unlikely, but patience may be needed if there is insufficiently broad customer demand for the features or capabilities you desire; moving with the broader market will be required.
For customers that have more complex or specialized needs and need to support multiple clouds as well as a variety of other deployment locations, vendors that provide a holistic approach to Kubernetes management regardless of deployment location are likely a better choice. There are several options that provide excellent ease of use and manageability at scale without compromising customers’ ability to customize individual clusters for specific needs.
Customers with active mergers and acquisitions teams should pay particular attention to options that can absorb and divest Kubernetes clusters with ease. A highly opinionated approach that works well for the existing teams can prove incompatible with the way a new acquisition chose to do things. Unifying conflicting models can be more trouble than it’s worth.
Customers that are very new to containers should delay choosing until they have acquired more experience. Managed Kubernetes options are opinionated by necessity, and their choices will shape yours. A premature decision could lead to incompatible operational models that make Kubernetes a frustrating experience for all concerned. Kubernetes is complex enough without adding still more risk to an initiative.
7. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.