
GigaOm Radar for Kubernetes for Edge Computing v2.0

1. Executive Summary

The escalating growth of data generation at the edge has fueled demand for strategic compute capabilities positioned closer to the source. In response, Kubernetes has emerged as an enticing, standards-based platform for constructing applications geared toward processing data at the edge.

As Kubernetes is the predominant standard for orchestrating containers at scale, its adoption at the edge seamlessly extends the orchestration and management capabilities that have made Kubernetes a prevailing choice in cloud and data center environments. The diverse array of use cases and challenges encountered in edge environments necessitates a robust solution like Kubernetes to help customers navigate these complexities.

Today, connected and smart devices span a range of industries, from retail and transportation to healthcare and manufacturing. Whether in remote wind farms or autonomous vehicles, embedded computing devices are generating copious amounts of data, data that emanates from locations far beyond the traditional confines of data centers and cloud environments. Processing such data in centralized locations poses considerable challenges due to intermittent and unreliable connectivity, constrained bandwidth, and high latency. The logical solution is to process data in geographic proximity to its source.

Kubernetes stands out as an ideal platform for building the applications that handle this data. Container-based applications, which require fewer resources than full virtual machines or dedicated operating system stacks, are well suited to the constrained resources of rugged, embedded-device form factors. Compared to static approaches like programmable logic controllers, a general-purpose computing platform such as Kubernetes provides greater flexibility. Furthermore, Kubernetes adheres to an operating model familiar to data center operators, offering economies of scope and scale. The same application development methods can be seamlessly applied to cloud-based, traditional data center, and edge-based Kubernetes clusters.
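To make that shared operating model concrete, here is a minimal sketch using the official Kubernetes Python client: the same Deployment object is applied unchanged to a cloud cluster and an edge cluster simply by switching kubeconfig contexts. The image, context names, and workload are hypothetical; any CNCF-conformant cluster at either location should accept the object as-is.

```python
# Sketch: one Deployment definition, applied to both cloud and edge clusters.
from kubernetes import client, config

def build_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="telemetry-processor",  # hypothetical edge workload
        image="registry.example.com/telemetry:1.4",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "100m", "memory": "128Mi"},  # modest footprint for edge nodes
        ),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="telemetry-processor"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "telemetry"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "telemetry"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

# The same object is created on each cluster; only the kubeconfig context differs.
for ctx in ["cloud-east", "factory-floor-edge"]:  # hypothetical context names
    apps = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    apps.create_namespaced_deployment(namespace="default", body=build_deployment())
```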

The deployment of Kubernetes for edge computing generally follows one of two common approaches:

  • Platform: This approach supports a broad spectrum of hardware devices and existing infrastructure components, giving customers significant flexibility and choice in both the hardware and the deployment location of their Kubernetes-enabled solution.
  • Appliance: Resembling the approach taken in hyperconverged infrastructure, this method combines a highly opinionated selection of compute, storage, networking, and orchestration with a vendor-selected, Kubernetes-enabled software stack.

Some offerings may combine aspects of both approaches. Platform deployments prove beneficial when existing infrastructure is in place or when customers prefer to maintain a high level of control over the approach and placement of the solution. In contrast, appliance deployments are advantageous when customers prefer vendors to assume responsibility for the qualification and support of the hardware and software combination for hyperspecific, well-defined use cases.

As data processing capabilities, analytics, and real-time decision-making continue migrating closer to where data originates, the concept of edge computing has evolved to enable new market opportunities. Within edge computing, there are gradations of “edge” that provide different capabilities and cater to specialized use cases.

  • The near edge sits between cloud data centers and the extreme edge, consisting of mini data centers like cell towers, central offices, and campus facilities. The near edge provides compute power, storage, and networking closer to users and devices than the cloud, enabling use cases like content delivery networks, data aggregation from internet of things (IoT) devices, and real-time data analytics requiring very low latency. Kubernetes container orchestration is well-suited for near-edge applications since it allows centralized deployment and management of containerized workloads at this intermediate edge layer.
  • Moving closer to the data source, the far edge resides on-premises, very close to endpoint devices and sensors. This includes small ruggedized servers or integrated compute sitting inside devices within retail stores, factory floors, and vehicles. The far edge focuses on extremely low latency use cases like AR/VR, industrial automation and control, and autonomous vehicles. Lightweight Kubernetes distributions are optimized to provide the same Kubernetes container benefits but at the resource-constrained far edge.
  • Finally, at the furthest reach is the device edge, consisting of the endpoints themselves—various sensors, gateways, controllers, and microcontrollers. These devices collect and preprocess data and communicate with far-edge servers. Being highly optimized for specific functions, device edge elements contain only necessary compute, memory, storage, and power suited for embedded environments. Software like Podman can deploy containerized logic directly on devices without full Kubernetes orchestration.
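As a minimal illustration of that last point, the sketch below uses Python's standard library to drive Podman directly on a device, running containerized preprocessing logic without any Kubernetes control plane. The image name, container settings, and memory cap are hypothetical.

```python
# Sketch: device-edge container deployment via Podman, no Kubernetes involved.
import subprocess

def run_sensor_filter() -> None:
    subprocess.run(
        [
            "podman", "run",
            "--detach",                 # run in the background
            "--name", "sensor-filter",  # hypothetical preprocessing workload
            "--restart", "always",      # come back up after crashes or reboots
            "--memory", "64m",          # cap memory on a constrained device
            "quay.io/example/sensor-filter:0.3",
        ],
        check=True,
    )

if __name__ == "__main__":
    run_sensor_filter()
```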

This is our second year evaluating the Kubernetes for edge computing space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.

This GigaOm Radar report examines nine of the top Kubernetes for edge computing solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading Kubernetes for edge computing offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

2. Market Categories and Deployment Types

To help prospective customers find the best fit for their use case and business requirements, we assess how well Kubernetes for edge computing solutions are designed to serve specific target markets and deployment models (Table 1). We also compare the vendors’ software licensing models (Table 2).

For this report, we recognize the following market segments:

  • Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises where intuitive interfaces, ease of use, and low barriers to entry are more important than extensive management functionality, data mobility, and feature set.
  • Large enterprise: Here, offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category have a strong focus on flexibility, performance, data services, and features to improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.
  • Specialized: Optimal solutions in this segment are designed for specific workloads and use cases, such as AI inferencing and environmental sensing.
  • Network service provider (NSP): In this segment, solutions are targeted to the modernization and transformation of well-defined communication network design patterns, elements, and industry standards. They are chiefly focused on mobile, fixed wireless, WiFi, wireline, and OSS/BSS use cases.

In addition, we recognize the following deployment models:

  • SaaS: Edge devices must connect and remain connected to the vendor’s SaaS cloud systems for provisioning, management, or other core services. Customers are usually responsible only for a portion of the edge environment, with the vendor taking responsibility for the service they provide. This deployment model is less flexible than alternatives, but it’s often simpler for customers to deploy and use.
  • Self-managed: Self-managed solutions can be deployed into customer-controlled environments. These solutions can be more complex to deploy and manage but tend to be more flexible. Customers usually have more control over the system, so they are generally favored for high-security or tightly regulated workloads.

Table 1. Vendor Positioning: Target Market and Deployment Model

[Table rates each vendor (AWS, Canonical, Kubermatic, Mirantis, Rakuten, Red Hat, Spectro Cloud, SUSE Rancher, Wind River) yes/no against the target markets (SMB, Large Enterprise, Specialized, NSP) and deployment models (SaaS, Self-Managed); the individual ratings appear in the original graphic.]

For this report, we also consider the following software licensing models:

  • Proprietary: Defined as software for which only the original authors may legally copy, inspect, and alter the code. To use proprietary software, customers must agree not to do anything with it that the software’s authors have not expressly permitted.
  • Open source: Defined as software or code that is designed to be publicly accessible—anyone can see, modify, and distribute the software as they see fit.

Table 2. Vendor Positioning: Software Licensing Model

[Table rates each vendor (AWS, Canonical, Kubermatic, Mirantis, Rakuten, Red Hat, Spectro Cloud, SUSE Rancher, Wind River) yes/no against the licensing models (Proprietary, Open Source); the individual ratings appear in the original graphic.]

The components of Tables 1 and 2 are evaluated in a binary yes/no manner and do not factor into a vendor’s designation as a Leader, Challenger, or Entrant on the Radar chart (Figure 1).

“Target market” reflects which use cases each solution is recommended for, not simply whether that group can use it. For example, if an SMB could use a solution but doing so would be cost-prohibitive, that solution would be rated “no” for SMBs.

3. Decision Criteria Comparison

All solutions included in this Radar report meet the following table stakes—capabilities widely adopted and well implemented in the sector:

  • Certified Cloud Native Computing Foundation (CNCF) Kubernetes distribution conformance
  • Persistent storage capabilities
  • Internal service load balancing
  • External service access
  • Virtual networking
  • Security controls
  • Automated upgrades

Tables 3, 4, and 5 summarize how each vendor included in this research performs in the areas we consider differentiating and critical in this sector. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the relevant market space, and gauge the potential impact on the business.

  • Key features differentiate solutions, outlining the primary criteria to be considered when evaluating a Kubernetes for edge computing solution.
  • Emerging features show how well each vendor is implementing capabilities that are not yet mainstream but are expected to become more widespread and compelling within the next 12 to 18 months.
  • Business criteria provide insight into the nonfunctional requirements that factor into a purchase decision and determine a solution’s impact on an organization.

These decision criteria are summarized below. More detailed descriptions can be found in the corresponding report, “GigaOm Key Criteria for Evaluating Kubernetes for Edge Solutions.”

Key Features

  • Integrations: Certification and validation of third-party plug-ins is critical for Kubernetes vendors to guarantee functionality, security, and interoperability across their distributions, letting users leverage the extensibility of plug-ins safely while avoiding lock-in. Standards-based Kubernetes interfaces such as the container network interface (CNI), container runtime interface (CRI), and container storage interface (CSI) offer a modular, plug-in-based approach to adopting differentiated capabilities as needed. By supporting and validating third-party plug-ins, vendors enable users to extend Kubernetes functionality to meet their specific needs. We look at both the breadth and quality of a vendor’s support for these standard integration patterns and interfaces, along with operators, across disparate hardware platforms (see the sketch following this list).
  • Customization: This feature considers whether the vendor permits and supports a customer’s choice to modify a pluggable or non-pluggable element of the solution. Given the different approaches to enabling Kubernetes across the spectrum of edge use cases, each vendor draws its own line on what it is willing and able to support for customers’ unique requirements. Even where the technology is functionally extensible, every vendor in this space makes a conscious decision about the functional limits and supportability of its solution.
  • Connectivity: Edge environments often have challenging network connectivity due to distributed locations, intermittent connectivity, and bandwidth constraints. Advanced networking features allow edge Kubernetes clusters to handle these complex requirements through the use of advanced vendor-provided network plug-ins. We evaluate the various ways a solution facilitates its access to both the control plane and data plane on an IP-based communications network.
  • Automated platform deployment: This key feature refers to how well the solution facilitates its initial installation with the least amount of human intervention. Setting up Kubernetes clusters at the edge can be complex due to hardware heterogeneity, networking challenges, and a potential lack of hands-on technical expertise at remote locations. Automated deployment enables consistent Kubernetes platform rollout across thousands of edge sites. This avoids configuration drift and complex troubleshooting that comes with manual deployments at scale.
  • App deployment: This feature assesses the ways in which a solution natively supports the onboarding of sophisticated customer workloads to the cluster. Generic upstream Kubernetes supports the use of basic pods, jobs, and deployments. These capabilities may be good enough for basic use cases but may not be suitable for advanced users who have invested heavily into the development of more application-centric deployment management capabilities.
  • Remote management: Edge locations often lack dedicated on-site IT staff, so remote management from a central location is critical for deploying, monitoring, troubleshooting, and managing edge clusters. There can be thousands of edge locations that need to be managed, making automation, standardized configurations, and fleet-based management essential. This feature addresses the distribution’s embedded management capability. It refers to how well the edge Kubernetes cluster facilitates its remote management at scale across a wide variety of connectivity scenarios.
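To ground the integrations criterion above, here is a minimal sketch using the official Kubernetes Python client to register a StorageClass backed by a CSI driver. Because workloads reference the class by name, the storage back end can be swapped without touching application manifests. The provisioner string and parameters are hypothetical; a real deployment would use the driver name published by the storage vendor.

```python
# Sketch: binding workloads to a pluggable CSI storage back end via a StorageClass.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

storage_class = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="edge-local-ssd"),
    provisioner="csi.vendor.example.com",             # hypothetical CSI driver name
    parameters={"media": "ssd", "replication": "1"},  # driver-specific settings
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",       # bind once a pod is scheduled
)
client.StorageV1Api().create_storage_class(body=storage_class)
```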

Table 3. Key Features Comparison

Rating scale: Exceptional, Superior, Capable, Limited, Poor, Not Applicable

Key features scored: Integrations, Customization, Connectivity, Automated Platform Deployment, App Deployment, Remote Management. Per-feature ratings appear in the original graphic; vendor averages follow.

Vendor          Average Score
AWS             2.3
Canonical       1.8
Kubermatic      3.5
Mirantis        2.7
Rakuten         4.3
Red Hat         4.5
Spectro Cloud   4.7
SUSE Rancher    4.0
Wind River      2.8

Emerging Features

  • Workload acceleration: Accelerators in edge Kubernetes are specialized hardware devices like GPUs and TPUs. They offload tasks, improving the performance and efficiency of containerized applications and enabling faster processing, reduced latency, and lower costs. The technology is advancing with new Kubernetes APIs for better integration and management of these hardware devices, enhancing the capabilities of containerized applications, especially in AI inferencing and computational sensing (see the sketch following this list).
  • Infrastructure as code (IaC): IaC for edge Kubernetes involves using code to automatically define and manage infrastructure, making it versionable, testable, and reproducible. The Cluster API framework, a Kubernetes sub-project that provides declarative APIs and tooling, simplifies provisioning, upgrading, and operating multiple Kubernetes clusters. This integration is important as it enables the application of software engineering practices to infrastructure. IaC as a native capability aligns with the industry trend of maximizing the power of DevOps and site reliability engineering (SRE) operational principles for more efficient and reliable infrastructure management, especially across diverse environments.
  • OS support: In the context of Kubernetes, the underlying platforms can be either standard enterprise operating systems or immutable operating systems. Standard enterprise operating systems require regular maintenance, while immutable operating systems are designed to remain unchanged after deployment. With immutable operating systems, the system image is read-only, and all changes are written to a separate storage location. This approach can enhance security and reliability, as it prevents unauthorized changes to the system. This is important as it impacts the management and security of the Kubernetes clusters. The technology is trending toward tighter integration of immutable systems with Kubernetes to reduce overhead, enhance security, and focus on workloads.
  • Secure supply chain: Secure supply chain in Kubernetes at the edge ensures software integrity and security during build, delivery, and deployment to edge environments. Confidential computing paradigms enhance this by isolating Kubernetes pods inside secure enclaves, providing hardware-level encryption and integrity verification. This is crucial for preventing supply chain attacks and ensuring software trustworthiness at the edge where physical security controls are prone to failure. The technology is trending toward solutions like Sigstore for artifact signing and verification, and standards like SLSA compliance, while also integrating confidential containers for enhanced security.
  • Self-healing: The self-healing capabilities of Kubernetes are particularly important at the edge, where deployments might be distributed over a wide geographical area with limited access for maintenance. The technology is evolving to enhance these self-healing features, with improvements in automation, monitoring, and the ability to manage and orchestrate workloads and underlying compute resources more effectively across diverse and distributed environments.
  • Instruction set: Solutions should support multiple CPU instruction sets in edge computing as edge devices tend to feature a wide range of CPU architectures, including x86, ARM, and RISC-V. This diversity enables edge devices to optimize system operation, enhance performance, and support a variety of workloads. Edge computing is trending toward leveraging heterogeneous computing, which involves incorporating various processing resources such as GPUs, TPUs, and FPGAs to optimize system operation and meet the demands of diverse edge workloads.
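The sketch below illustrates two of the emerging features above, workload acceleration and instruction set diversity, using the official Kubernetes Python client: a pod requests one GPU through an extended resource and is pinned to ARM nodes via the well-known kubernetes.io/arch label. The image is hypothetical, and the nvidia.com/gpu resource assumes the NVIDIA device plugin is installed on the target nodes.

```python
# Sketch: GPU-accelerated inference pod scheduled onto arm64 edge nodes.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="vision-inference"),
    spec=client.V1PodSpec(
        node_selector={"kubernetes.io/arch": "arm64"},  # e.g., Jetson-class devices
        containers=[
            client.V1Container(
                name="inference",
                image="registry.example.com/vision-inference:2.1",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},  # extended resource from the device plugin
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```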

Table 4. Emerging Features Comparison

Rating scale: Exceptional, Superior, Capable, Limited, Poor, Not Applicable

Emerging features scored: Workload Acceleration, IaC, OS Support, Secure Supply Chain, Self-Healing, Instruction Set. Per-feature ratings appear in the original graphic; vendor averages follow.

Vendor          Average Score
AWS             1.7
Canonical       3.0
Kubermatic      2.8
Mirantis        2.5
Rakuten         2.8
Red Hat         4.2
Spectro Cloud   4.3
SUSE Rancher    2.5
Wind River      2.2

Business Criteria

  • Architecture: Architecture refers to the design and structure of the computing environment, including the arrangement of resources, such as nodes, clusters, and control planes. It is important in relation to Kubernetes for edge as it determines how workloads are distributed across available resources, optimizing the cost of infrastructure and ensuring high availability, scalability, portability, and security.
  • Flexibility: Edge is best defined as a spectrum ranging from central data center-adjacent compute at one extreme to the device edge at the other. Flexibility in edge Kubernetes derives from an intentionally opinionated design, the ability to depend on nothing more than a thin underlying operating system serving as the interface between the hardware and Kubernetes, and the ability to extend a common abstraction plane across the entire spectrum of compute modalities.
  • Scalability: Scalability is a meta criterion, meaning it blends intent and technology. It addresses a solution’s ability to execute the business’s objectives as efficiently as possible across the entire spectrum of a customer’s managed fleet.
  • Ease of use: Ease of use focuses on a solution’s ability to serve the platform’s capabilities to the broadest user community available, regardless of whether humans or machines are driving its consumption.
  • Ecosystem: This criterion looks at the breadth of the sales channels for the solution and whether there are systems integrator (SI) partners ready to drive adoption and provide support. It also considers the third-party development community, which can expand use cases and directly extend the product’s functionality.
  • Cost: Solutions should have a large partner ecosystem of SIs, value-added resellers (VARs), and distributors where pricing can be sourced from almost any existing purchasing channel relationship and cost is generally equivalent regardless of channel.

Table 5. Business Criteria Comparison

Rating scale: Exceptional, Superior, Capable, Limited, Poor, Not Applicable

Business criteria scored: Architecture, Flexibility, Scalability, Ease of Use, Ecosystem, Cost. Per-criterion ratings appear in the original graphic; vendor averages follow.

Vendor          Average Score
AWS             2.7
Canonical       2.0
Kubermatic      3.0
Mirantis        2.2
Rakuten         4.2
Red Hat         4.7
Spectro Cloud   4.5
SUSE Rancher    3.8
Wind River      3.2

4. GigaOm Radar

The GigaOm Radar plots vendor solutions across a series of concentric rings with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrowhead that projects each solution’s evolution over the coming 12 to 18 months.

Figure 1. GigaOm Radar for Kubernetes for Edge Computing

As you can see in the Radar chart in Figure 1, the Kubernetes for edge market is characterized by a mix of mature and innovative solutions, with a competitive landscape that includes both specialized Feature Play solutions and comprehensive Platform Plays. This diversity reflects the varied needs of customers in the edge computing space and the potential for continued evolution and growth in the market.

There are relatively many edge-optimized Kubernetes offerings, as edge computing is still a developing technology sector. The variety and breadth of these offerings reflect an early market consensus that a Kubernetes-based edge is not only theoretically viable but also the best evolutionary approach: extending rapidly maturing core Kubernetes capabilities outward to solve the widest variety of use cases and customer requirements with a common, well-understood operating model.

While the vendors all take different approaches, some trends are starting to emerge. Platform players such as Rakuten, Red Hat, and Spectro Cloud are establishing an array of options for customers with complex environments and varied needs beyond mere scale. Such customers require a higher degree of elasticity in the solution’s architecture to address a wide range of use case needs in multiple dimensions, and they are more willing to trade some ease of use related to operational complexity or vendor lock-in in order to realize a single vendor approach across the entire spectrum of edge.

We also see two clusters of vendors in the Innovation/Feature Play quadrant, each trying to find a balance among specialization, breadth of offering, and ease of use. The first cluster, at the outer edge of the Feature Play side, is hyper-focused on far-edge and device-edge use cases, catering to resource-constrained platforms with the widest breadth of support for niche low-power processors and disconnected environments. The second cluster in this quadrant is grouped closer to the Innovation axis. This cluster is characterized by stretching the functional management and architectural capabilities of a single-product solution to bridge the gap from point solution to hybrid solution, targeting a blend of either near-edge and far-edge or far-edge and device-edge use cases. These clusters offer greater customer choice in customization and the ability to tune fundamental elements of the solution, like the back-end key-value store, to optimize for lightweight single-node performance versus multinode scale. For those looking to understand and predict the development of this market, these two clusters are the ones to watch most closely. Their positions on the Radar reflect the different point-optimized versus hybrid-enabling choices vendors are making as they co-develop their offerings with their customers. These clusters, more than any other, are influencing the evolution of the market as a whole.

Two vendors are positioned in the Feature Play/Maturity quadrant. These providers offer specialized capabilities that are well-established and have been proven over time. These vendors typically focus on delivering specific tools or functionalities that are mature in terms of development, reliability, and market acceptance. While they don’t offer a broad, integrated platform, they provide solutions that excel in particular industry verticals or use cases within the edge Kubernetes ecosystem, like 5G RAN or near-edge public cloud extensions.

In reviewing solutions, it’s important to keep in mind that there are no universal “best” or “worst” offerings; there are aspects of every solution that might make it a better or worse fit for specific customer requirements. Prospective customers should consider their current and future needs when comparing solutions and vendor roadmaps.

INSIDE THE GIGAOM RADAR

To create the GigaOm Radar graphic, key features, emerging features, and business criteria are scored and weighted. Key features and business criteria receive the highest weighting and have the most impact on vendor positioning on the Radar graphic. Emerging features receive a lower weighting and have a lower impact on vendor positioning on the Radar graphic. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and roadmaps.

Note that the Radar is technology-focused, and business considerations such as vendor market share, customer share, spend, recency or longevity in the market, and so on are not considered in our evaluations. As such, these factors do not impact scoring and positioning on the Radar graphic.

For more information, please visit our Methodology.

5. Solution Insights

AWS, EKS Anywhere

Solution Overview
Amazon Web Services (AWS) EKS Anywhere is an open source deployment option for Amazon EKS that allows customers to create and operate Kubernetes clusters on-premises and at the edge. It simplifies on-premises Kubernetes cluster management, supporting various infrastructures such as VMware vSphere, bare metal, AWS Snowball Edge, Apache CloudStack, and Nutanix. EKS Anywhere is built on Amazon EKS Distro, which provides default component configurations and automated cluster management.

EKS Anywhere is a standalone product, but it is part of the larger Amazon EKS product suite, which includes Amazon EKS Distro and Amazon EKS on AWS Outposts.

EKS Anywhere works by providing automation tooling that simplifies cluster creation, administration, and operations on infrastructure such as bare metal, VMware vSphere, and cloud virtual machines (VMs). This includes default logging, monitoring, networking, and storage configurations.
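As a rough sketch of that tooling flow, the script below wraps EKS Anywhere's documented eksctl plugin: it scaffolds a provider-specific cluster config, which the operator edits to match local infrastructure, and then creates the cluster. The cluster name is hypothetical, and eksctl with the eksctl-anywhere plugin must already be installed.

```python
# Sketch: scripting an EKS Anywhere cluster bootstrap on vSphere.
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# Generate a starter cluster spec with provider defaults (hypothetical name).
sh("eksctl anywhere generate clusterconfig edge-site-01 "
   "--provider vsphere > edge-site-01.yaml")
# ... edit edge-site-01.yaml to reflect local credentials and networks ...
sh("eksctl anywhere create cluster -f edge-site-01.yaml")  # bootstrap the cluster
```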

AWS’s approach to EKS Anywhere is unique among public cloud providers in that it allows customers to create and operate CNCF-certified Kubernetes clusters on their own infrastructure, with on-premises operational tooling consistent with Amazon EKS. This approach is designed to meet the needs of existing AWS customers who require on-premises or edge deployments due to factors such as data sovereignty, compliance, or network latency, without the overhead or dependency of additional Amazon-provided hardware, software, or infrastructure abstraction.

Strengths
Amazon EKS Anywhere’s strength lies in its opinionated alignment with the larger, broader, and more mature Amazon EKS public cloud offering. EKS Anywhere is a standalone deployment option for Amazon EKS that enables anyone to create and operate Kubernetes clusters on their own infrastructure. It offers several notable capabilities centered around its alignment with the AWS cloud ecosystem management experience, including integration with AWS services, scalability, and partner ecosystem support. Its unified approach to managing both on-premises and cloud-based Kubernetes workloads simplifies operations and reduces the learning curve.

EKS Anywhere supports hybrid cloud setups, cloud migrations, and on-premises cluster deployment and maintenance. It can be connected to AWS EKS using the EKS Connector, which allows a comprehensive view of Kubernetes workloads across different environments. Additionally, EKS Anywhere clusters can integrate with AWS services and IaC tools like Terraform and GitOps for seamless management.

The platform offers scalability, allowing for thousands of clusters under a common management toolchain. EKS Anywhere simplifies cluster creation and administration via automation tooling, supports various infrastructures, such as bare metal and VMware vSphere, and allows users to adjust the size of their clusters’ worker and control plane nodes to meet changing demands.

AWS partners provide support for EKS Anywhere, offering expertise in areas like cluster configuration, operations, and compliance. This ensures users have access to professional assistance when needed.

Challenges
While AWS EKS Anywhere offers several advantages for managing Kubernetes clusters on customer-owned infrastructure, it may not be the best fit for far-edge and device-edge use cases. EKS Anywhere is designed to run on infrastructure such as bare-metal servers, VMware vSphere, and cloud VMs, and it may not be suitable for the low-resource devices often found at the far edge, which may lack the computational power or storage to support a full Kubernetes cluster.

Purchase Considerations
Considerations for this solution are relatively straightforward. Given the low barrier to entry on cost ($0) and the optional support model, the predominant factor in selecting this solution is its consistent Kubernetes experience across on-premises and cloud environments. Customers mostly focus on using EKS Anywhere for on-premises Kubernetes, hybrid scenarios spanning cloud and on-premises, and disconnected environments. The solution is best suited for near-edge, general-purpose compute use cases where the core of the network lies within the public cloud.

EKS Anywhere allows running near-edge Kubernetes workloads on-premises to meet data residency, latency, or compliance requirements not met by Amazon’s other managed edge solution offerings.

Radar Chart Overview
AWS EKS Anywhere is positioned in the Maturity/Feature Play quadrant of the Radar chart. It’s positioned in that quadrant because the solution provides specialized capabilities that are well-established and have a proven track record within the public cloud and private edge Kubernetes market. The solution is focused on delivering specific functionalities that are mature in terms of development, reliability, and market acceptance. It integrates well with existing public AWS infrastructure, providing a consistent and trusted experience for users with particular cloud-first cultures and cloud-connected edge computing needs.

Canonical, MicroK8s

Solution Overview
In recent years, Canonical has experienced significant growth, with a focus on industry verticals such as telecommunications, automotive, and finance. One of Canonical’s offerings is MicroK8s, a minimal, CNCF-certified Kubernetes distribution. It is a standalone product but also part of Canonical’s larger portfolio of open source solutions. MicroK8s is lightweight and focused, providing a simple, robust, and secure platform for developers, edge computing, and IoT applications. It delivers the full Kubernetes experience with a single command, offering features like transactional over-the-air updates and secure sandboxed environments.

MicroK8s is delivered as a single snap package, which can be installed on any Linux distribution that supports snaps, as well as macOS and Windows. It includes all upstream services and their dependencies in an efficient package, offering not only stability but also enhanced security.

Canonical’s approach to Kubernetes distribution is unique in its simplicity and focus on zero-ops, making it an attractive solution for developers and organizations seeking to leverage the power of Kubernetes without the complexity often associated with it.

Strengths
MicroK8s is strong in a number of areas, including flexibility, support for multiple operating systems, and breadth of CPU instruction sets.

MicroK8s is designed to be lightweight and flexible. It includes only the components needed to deploy Kubernetes, but it also offers a wide range of additional features through add-ons that range from simple DNS management to machine learning with Kubeflow, allowing users to customize their MicroK8s installation to suit their specific needs.
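A minimal sketch of that add-on model follows, scripting MicroK8s's documented CLI from Python: a bare snap install is extended one add-on at a time, so the footprint grows only with the use case. It assumes a Linux host with snap support; the add-on list is illustrative.

```python
# Sketch: installing MicroK8s and enabling only the add-ons a use case needs.
import subprocess

def sh(*args: str) -> None:
    subprocess.run(list(args), check=True)

sh("sudo", "snap", "install", "microk8s", "--classic")  # single-package install
sh("microk8s", "status", "--wait-ready")                # block until the node is ready
for addon in ["dns", "ingress", "metrics-server"]:      # illustrative add-on selection
    sh("microk8s", "enable", addon)
```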

MicroK8s is available for most popular Linux distributions in both full standard Linux and immutable appliance Linux form, as well as for Windows and Mac workstations through a native installer. It uses the snap packaging mechanism, which brings automatic updates, ensuring that users always have the latest stable version of Kubernetes and all available security patches.

MicroK8s is designed to run on a variety of hardware configurations, from laptops to Raspberry Pi, Intel NUC, and in any public cloud, making it a versatile choice for a wide range of use cases.

Challenges
It’s important to note that while MicroK8s is a powerful tool, it may not be the best choice for all situations. For example, some users have reported issues with CPU usage in certain configurations. Regarding customizability, the distribution’s intent is to focus on ease of developer consumption, with less emphasis on edge-specific needs. Its lack of remote management limits its applicability for large-scale production use across a range of edge deployment modalities. Still, it’s a feature-focused release that conveniently fits in low-resource environments. As with any technology, it’s important to thoroughly evaluate MicroK8s in the context of your specific needs and environment.

Purchase Considerations
Canonical offers full enterprise support for MicroK8s, including a 10-year commitment for critical deployments. However, this does not include a support entitlement for the underlying operating system, which may be required for the long-term sustainability of the solution in production. MicroK8s has a strong community of users and developers in the upstream who can provide advice and answer questions on a best-effort basis for non-enterprise customers.

MicroK8s is well suited for near-edge use cases like running workloads on remote branch office racks, retail points of sale, and cell towers. MicroK8s is ideal for these distributed environments due to its resilience, small footprint, and ability to self-heal and auto-update.

MicroK8s is also well suited for device-edge use cases such as IoT, robotics, and embedded environments. MicroK8s can run on ARM devices like Raspberry Pis to orchestrate containers across an IoT fleet.

Radar Chart Overview
Canonical MicroK8s is recognized for its specialized capabilities in the Kubernetes for edge market, with its mix of mature and innovative features that cater to specific edge computing requirements. Canonical is positioned in the Feature Play quadrant because it offers a highly specialized Kubernetes solution focusing on delivering specific, targeted features for edge computing rather than providing a broad, all-encompassing platform.

Kubermatic, Kubermatic Kubernetes Platform

Solution Overview
Kubermatic is a company that specializes in Kubernetes automation, providing solutions to manage Kubernetes clusters across various environments, including hybrid, multicloud, on-premises, and edge. The Kubermatic Kubernetes Platform (KKP) is its flagship product, which is designed to automate the deployment and operations of Kubernetes clusters, aiming to reduce the time to market, eliminate unplanned downtime, and decrease maintenance overhead. The company, formerly known as Loodse, rebranded to Kubermatic in 2020 and open sourced its Kubernetes automation platform.

KKP is a standalone product but can be considered part of a larger suite, as Kubermatic also offers KubeOne, a tool for managing Kubernetes clusters on any provider. KKP works by providing a centralized dashboard for managing the lifecycle of Kubernetes clusters, including provisioning, scaling, updating, and cleaning up with a single API call. It supports integration with major cloud providers like AWS, Google Cloud, and Azure, and enables the management of clusters on-premises or at the edge.

Kubermatic’s approach to Kubernetes management is centered around automation and ease of use, with the goal of enabling DevOps teams with self-service capabilities from core to edge. Both KKP and the KubeOne manager are open source, a strategic move that differentiates Kubermatic from some peers and aligns with the company’s commitment to community-driven development.

Strengths
KKP is strong in the areas of connectivity, ease of use, remote management, and architecture. It is designed to manage Kubernetes clusters across hybrid cloud, multicloud, and edge environments, which requires secure and reliable communications among the various elements of the environment. It can run control planes in the cloud and worker nodes in a data center, allowing a distributed setup. This is particularly useful for edge computing, where stable connections may not always be guaranteed.

KKP provides a management layer to manage multiple Kubernetes clusters in different environments simultaneously. It offers a self-service approach to developers, enabling them to easily roll out and manage applications. This ease of use extends to the lifecycle management of clusters, which can be completely managed with IaC.

KKP’s remote management capabilities are particularly beneficial for edge computing. It can manage and scale hundreds or even thousands of clusters, making it suitable for large-scale, distributed environments. It also provides a monitoring component that can warn of problems from the underlying hardware, helping to prevent system failures.

KKP leverages Kubernetes to manage Kubernetes, creating a Kubernetes operator for Kubernetes itself. This approach enables self-healing systems that keep the platform up and running with minimal maintenance overhead. It also allows the platform to run completely containerized, which reduces the footprint and allows for efficient updates and problem resolution.

Challenges
KKP does not include a vendor-built operating system and offers only limited, best-effort support for the underlying OS. This may make the system less secure, given the vendor’s limited ability to directly influence the prioritization and acceptance of vendor-provided patches by the upstream communities that maintain the underlying operating system.

End-to-end solutions that include the underlying operating system components that enable the Kubernetes platform provide a significant benefit to users, directly impacting ease of use, efficiency, and reliability. An end-to-end solution means that all components of the system are designed to work together seamlessly, reducing the risk of compatibility issues and simplifying setup and management. This is particularly important for Kubernetes, which involves a complex ecosystem of components that must work together to orchestrate containers. Where a vendor forgoes the operating system element, as Kubermatic does, and instead gives the customer greater responsibility for furnishing a base operating system, we tend to see a greater need for customer-, partner-, or vendor-led integration efforts to enable the solution. Although this integration allows for greater customization and alignment with customer standards, it affects the solution’s ease of deployment and the vendor’s ability to scale, since every new customer becomes a unique snowflake, slowing initial solution adoption.

It’s worth noting that while an end-to-end solution can offer many benefits, it may not always be the best fit for every organization, depending on their specific needs and existing infrastructure investments or preferences.

Purchase Considerations
Kubermatic lacks some of the market awareness of competitors with more fully developed partner networks, which makes it challenging for Kubermatic to deliver the solution’s paid enterprise support experience outside its current European market. The company’s ecosystem integrations appear to be self-validated, which, given the limited size of the company, makes deployed add-ons vulnerable to drift as its catalog scales and the underlying Kubernetes platform deprecates and modernizes its APIs in conformance with the upstream Kubernetes project. The company’s commitment to open source and automation remains a key focus worth highlighting, as Kubermatic is one of the few solution providers in the space with a completely free open source solution.

Kubermatic excels in near-edge use cases like deploying Kubernetes clusters across multiple customer sites or retail stores while maintaining centralized control and management. Kubermatic’s solution also enables interesting far-edge use cases, like deploying lightweight “minimal” Kubernetes clusters with disaggregated data planes and externally hosted control planes to support very resource-constrained edge devices. The solution uniquely supports the management of fully air-gapped clusters with no internet connectivity for lossy or disconnected clusters running critical applications.

Radar Chart Overview
Kubermatic is positioned in the Innovation/Feature Play quadrant because the vendor is considered highly innovative within the context of Kubernetes for edge computing solutions. The company is pushing the boundaries of technology in this space, introducing new and advanced features or approaches that are not yet widespread in the market. Rather than offering a broad set of features, Kubermatic is concentrating on delivering a more targeted set of capabilities that are particularly relevant to certain edge computing architectures.

Mirantis, k0s

Solution Overview
Mirantis is an open source cloud computing software and services company focused on the development and support of container and cloud infrastructure management platforms based on Kubernetes and OpenStack. In the last two years, Mirantis has expanded its capabilities through the acquisitions of amazee.io, a web application delivery company for Kubernetes, and Shipa, a developer-friendly Kubernetes tool. Mirantis offers an edge-oriented Kubernetes distribution called k0s. Over the past year, the company has introduced and enhanced several offerings around the distribution, including k0s Autopilot and k0smotron, as well as enhancements to its Lens line of developer tooling, in an effort to round out the solution with edge-specific operational workflows and features.

k0s is an open source, single-binary, hardened, and secure Kubernetes distribution that runs on any popular Linux version and, experimentally, on Windows Server. It is designed to work on virtually any hardware, from large data center servers and cloud VMs to small single-board computing devices, and is part of a larger portfolio that includes other solutions and services for accelerating platform engineering, DevOps, and software development in distributed environments.

The k0s solution works by leveraging k0smotron for infrastructure control and management. It is easily configurable and can be deployed in minutes with a few commands. k0s includes several Kubernetes operators, such as Autopilot for automated cluster updates, CAPI for provisioning of underlying cloud and bare metal infrastructures, and k0smotron for multicluster fleet management.
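To illustrate the "few commands" claim, here is a sketch that scripts k0s's documented single-node bootstrap. It assumes a Linux host with internet access and root privileges; in production, the same steps are typically driven by tooling such as k0sctl or Autopilot rather than ad hoc scripting.

```python
# Sketch: single-node k0s bootstrap, controller and worker on one machine.
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

sh("curl -sSLf https://get.k0s.sh | sh")  # fetch the single k0s binary
sh("k0s install controller --single")     # register a combined controller+worker
sh("k0s start")                           # start the k0s service
sh("k0s status")                          # confirm the node is up
sh("k0s kubectl get nodes")               # built-in kubectl, no separate install
```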

Mirantis’ approach is unique in that it provides a flexible, performant, and secure self-contained solution with zero lock-in. It is entirely open source, with a simple, dependency-free binary, and can be deployed in minutes with less than a handful of commands. All the elements that make up the solution, from the underlying edge Kubernetes distribution itself to the remote management tooling, are freely available open source tools with optional paid contractual support agreements for enterprise customers needing technical support.

Strengths
k0s’ strength lies in its zero-dependency, single-binary architecture, which is designed to be flexible, secure, and easy to deploy across a wide range of hardware and environments, especially useful in resource-constrained edge computing scenarios. Compared to other lightweight Kubernetes distributions, k0s offers several unique features and strengths.

k0s is a CNCF-certified conformant Kubernetes distribution that requires only a functional Linux kernel to operate. This makes it highly adaptable and suitable for diverse edge topologies, from data center servers to small system-on-a-chip (SoC) computing devices like a Raspberry Pi.

k0s’ use of Konnectivity for its control plane/worker bidirectional communication and proxying architecture greatly enhances the solution’s connectivity resilience to function across unreliable links. Its use also allows tunneled and encrypted communication, ensuring secure and robust connectivity between the control plane and worker nodes. This is particularly important in edge computing scenarios where network conditions can be challenging and security is paramount.

k0s pairs natively with Mirantis’s open source management project k0smotron; this Kubernetes-native declarative multicluster fleet management system allows a centrally located k0s “mothership” cluster to host virtualized k0s control planes, which can manage worker nodes at edge locations through a robust, secure, and encrypted connection. This feature is particularly useful for edge use cases where fleet management of unreliably connected edge clusters can be challenging.

k0s has a minimal local storage footprint and low compute overhead requirements, making it suitable for environments where resources are limited.

Challenges
The k0s solution does not provide Cluster API (CAPI) as a built-in component of the default installation. While CAPI support is available for k0s, it is provided through a separately installed, customer-integrated operator rather than being included in the k0s binary. Moreover, the solution’s focus on deep, machine-addressable automation has left it without a consumable GUI, adversely affecting ease of use for non-expert consumers and operators of the platform. k0s is part of a larger portfolio, and while it offers a range of operators and tools, its integrations with other tools and services may not be as extensive or seamless as those of longer-established Kubernetes distributions with more mature ecosystems.

Lastly, though k0s is adaptable to edge environments, other Kubernetes distributions may offer more specialized features for use case-specific edge workflows and a broader spectrum of applicable deployment models. Users should evaluate k0s in the context of their specific requirements and compare it with other Kubernetes distributions to determine the best fit for their edge computing needs.

Purchase Considerations
Mirantis has limited system integrator and OEM partners, which could impact the breadth of integrated solutions and support available. k0s is CNCF-certified and relies on the CNCF program for ecosystem validation. Its integrations with various third-party tools and partner solutions are self-validated, and limited cross-certification exists for the product.

Mirantis k0s excels in device-edge use cases where managing clusters of single-board computers for lightweight edge processing is a leading requirement. k0s can run as a simple single-node cluster on a developer workstation for building cloud native apps, but it also supports multiple processor architectures, including ARM, and can run on low-power, resource-constrained devices like Raspberry Pis. The solution’s remote management capabilities are tuned for managing clusters of single-board computers intended for lightweight edge processing. k0s also has relevant applicability in far-edge use cases, managing large fleets of devices like POS systems, ATMs, vehicles, or digital signage from a central Kubernetes control plane. k0s supports separating control and data planes, so worker nodes can be deployed at the far edge while control planes run in a central location.

Radar Chart Overview
Mirantis is positioned in the Innovation/Feature Play quadrant on the Radar chart, reflecting a strong emphasis on providing a feature-rich solution bundle made up of k0smotron and Cluster API while also maintaining a focus on innovation in its fundamental k0s Kubernetes layer. This combination of extensive features and innovation is likely to appeal to customers looking for a robust and forward-looking Kubernetes solution for specific far-edge and device-edge computing needs.

Mirantis k0s provides a broad and rich set of capabilities that are important for Kubernetes at the edge. It is an innovative company that introduces new and advanced features or approaches that differentiate it from more established or conventional offerings in the market.

Rakuten, Rakuten Cloud

Solution Overview
Rakuten Symphony is a pioneering telecom player that caters to partners across the telecom industry. It is known for its strategic entrance into the Kubernetes cloud platform space. In the past year, Rakuten acquired the US-based cloud technology company Robin.io, enhancing its portfolio with multicloud mobility, automation, and orchestration capabilities.

Rakuten offers a notable approach with an easy-to-use distribution and bundles for various network functions and enterprise applications. The company’s recent acquisitions and partnerships, such as with Robin.io and Google Cloud, further differentiate it from its peers.

Rakuten Cloud-Native Platform is a Kubernetes distribution that runs both containers and VMs. It’s designed to supply carrier-grade reliability, providing industry-leading features and flexibility, with a focus on ease of use and automation. The solution is part of Rakuten’s Cloud suite that also includes Rakuten Cloud-Native Storage and Rakuten Cloud-Native Orchestrator.

Cloud-Native Storage integrates with Kubernetes-native administrative tooling and provides application-aware storage and data management. Cloud-Native Orchestrator manages the lifecycles of various network functions and service chains, offering click-ops automation through a single pane of glass.

Strengths
Cloud-Native Platform is a robust solution that offers several strengths in terms of certified integrations, automated platform deployment, hardware orchestration, and scalability.

Cloud-Native Platform is built on upstream Kubernetes and is CNCF-certified. This ensures that any application that can run on Kubernetes will run as-is on Cloud-Native Platform. Rakuten’s collaboration with industry partners to cross-certify its solution represents a significant step toward empowering its customers with robust and reliable solution outcomes.

Cloud-Native Platform offers excellent built-in deployment automation via Cloud-Native Orchestrator. It is designed to be very extensible with many interchangeable components. Features get instantiated and maintained only when they are turned on, which makes it very lean and efficient. The solution can be deployed on any cloud (AWS, Azure, Google), on bare metal, or as a VM, making it highly versatile.

Cloud-Native Orchestrator provides bare metal-to-services orchestration. It can manage the lifecycle of various network functions and service chains, offering pre-canned, highly sophisticated automation workflows via a single pane of glass. This orchestration extends beyond just bare metal as a service and can be expanded to include multidistributions of Kubernetes clusters, service management, transport devices, appliances, and edge devices.

Cloud-Native Storage provides a notable native storage capability that can manage stateful applications with minimal compute and storage, even in a single-node environment. This is particularly relevant for edge applications. It includes application-aware storage, which has been highly praised and is used as the default storage option for Google Distributed Cloud in Google’s Anthos deployments for the edge.

Cloud-Native Storage is part of the Rakuten Cloud suite, but it can also be purchased separately and used with other Kubernetes distributions. This flexibility, combined with its proven performance in large carrier networks, makes Rakuten’s storage offering a strong component of the platform.

Challenges
Rakuten Cloud-Native Platform is designed to be highly extensible and efficient, particularly for traditional Kubernetes-based core and near-edge cloud solutions. Rakuten’s orchestration layer can provision, monitor, and perform lifecycle operations for non-x86 platforms, including IoT devices, drones, and other addressable elements in the edge infrastructure. However, for non-x86 architectures and resource-constrained compute form factors, inherent limitations may arise when attempting to deploy the complete Cloud suite on those devices.

The Rakuten Cloud suite leverages several proprietary elements in the packaging of the solution while firmly rooting the cornerstone of the fundamental Kubernetes layer in a CNCF-certified open source base. While proprietary systems offer advantages like advanced features and dedicated support, they can also present challenges. These include less community support compared to open source options, potential vendor lock-in, and higher costs. However, the decision to choose proprietary over open source depends on the specific needs and capabilities of the organization, as the benefits of proprietary solutions, such as streamlined experiences and robust features, may outweigh these drawbacks for some users.

Purchase Considerations
The platform is designed to be very extensible, with many interchangeable components and a simple licensing model. Features are instantiated and maintained only when turned on, allowing for greater packaging control prior to deployment.

However, it’s important to note that while the Rakuten Cloud suite comes with a wide range of features, it may not include all the capabilities a user might need. For example, users requiring observability beyond the bare-metal-to-services visibility and correlation capabilities included in the base Cloud-Native Platform entitlement may need to purchase the multidomain observability add-on to enable correlation into transport devices, routers, switches, or ticketing systems.

Rakuten Cloud provides the foundational infrastructure to enable a variety of near-edge and far-edge use cases, with its cloud-native, automated, and unified management plane across edge locations. A key focus area is supporting stateful applications at the edge and seamless integration from far edge to core. Near-edge use cases include video analytics for retail stores to provide personalized promotions and optimize operations; private 5G networks for manufacturing plants, AI/ML edge, healthcare facilities, and smart cities to support domain-specific workloads with low latency; and RAN network functions like central unit/distributed unit (CU/DU) deployment across edge and central data centers. Far-edge use cases include RAN DU/RU (radio unit) deployments at cell sites/base stations, supporting stateful applications like databases or enabling content caching at far edge locations.

Radar Chart Overview
Rakuten is positioned in the Innovation/Platform Play quadrant but close to the Maturity hemisphere. It provides its customers with the latest technological advancements while maintaining a reliable, integrated platform for their edge computing deployments. Rakuten is introducing cutting-edge features that set it apart from competitors, but it prioritizes making these features work together seamlessly within a unified platform. This balance is crucial for customers who require not just individual tools or capabilities but a comprehensive solution that addresses a broad range of needs in a harmonious manner.

Rakuten’s position close to the center, in the Leader circle, is based on its high scores across the decision criteria we evaluated. Its closeness to the Maturity line indicates that it is investing less in innovation and focusing more on the maturation and stability of its platform components than other vendors in the Innovation half.

Red Hat, OpenShift

Solution Overview
Red Hat, a software company acquired by IBM in 2019, is a pioneer in open source technologies, contributing significantly to the IT industry. The company’s primary focus is on developing and providing open source software products for enterprise, government, and telco communities.

In the past year, Red Hat has continued to enhance its offerings, with no significant acquisitions or changes. The company’s flagship product is Red Hat OpenShift, a leading hybrid cloud application platform powered by Kubernetes. OpenShift is part of a broad portfolio that includes various editions like OpenShift Container Platform, OpenShift Platform Plus, and MicroShift. These offerings provide a unified platform to build, modernize, and deploy applications at scale, with multicluster security, compliance, application, and data management working across infrastructures.

OpenShift works by providing a consistent application development and management platform across on-premises, virtual, physical infrastructure, private cloud, public cloud, and edge environments. It unifies and accelerates the development, delivery, and lifecycle management of a hybrid mix of applications.

Red Hat’s approach is notable in its commitment to open source technologies and its focus on providing a unified hybrid cloud platform. This approach has made Red Hat an industry leader across various sectors.

Strengths
Red Hat OpenShift’s strengths are multifaceted, encompassing certified integrations, remote management, and globally consistent application and platform deployment with advanced cluster management for Kubernetes.

The Red Hat ecosystem offers a network of expert partners and products certified to work with Red Hat technologies, ensuring compatibility and reliability. This certification process is part of Red Hat’s commitment to providing hardened solutions that facilitate enterprise operations across various platforms and environments.

For remote management, Red Hat OpenShift supports smaller-footprint topologies in edge scenarios, including 3-node clusters, single-node OpenShift, remote worker nodes, and resource-constrained edge/IoT devices. This capability is crucial for managing deployments that are geographically dispersed or have limited connectivity. Additionally, the Insights Operator gathers OpenShift Container Platform configuration data, which helps Red Hat improve the quality of releases and rapidly address issues. To secure the software supply chain, Red Hat offers the Trusted Software Supply Chain, a cloud service powered by OpenShift that enhances software supply chain resiliency. It enables real-time security scanning and remediation, helping to eliminate potential security issues early in the development lifecycle.

Application deployment is streamlined with Red Hat Advanced Cluster Management for Kubernetes, which enables the deployment of apps, management of multiple clusters, and enforcement of policies across clusters at scale. This is complemented by OpenShift’s integrated platform monitoring and automated maintenance operations, which provide IT operations teams with the control, visibility, and management needed to deploy and manage code pipelines effectively.

Lastly, OpenShift’s platform deployment is designed to be consistent across core, near-edge, far-edge, and device-edge environments, which simplifies and accelerates the development, delivery, and lifecycle management of applications across a heterogeneous infrastructure footprint. This unified approach reflects Red Hat’s strategy: pursuing a hybrid cloud platform concept that is common among its peers, but differentiating through its open source focus and through execution and integration within the broader Red Hat ecosystem and portfolio products.

Challenges
Red Hat OpenShift, while a robust and comprehensive container orchestration platform, does face certain challenges.

One primary challenge is the complexity of its initial setup. The Red Hat Enterprise Linux (RHEL)-based platform’s underlying Kubernetes architecture differs between its standard and device-edge use cases and mixes different immutable Linux variants, which can make updating, managing, and operationalizing two use-case-specific Kubernetes variants cumbersome. The installation process is also non-uniform across variants, involving different tools, approaches, and components. It can be daunting, requiring a solid understanding of OpenShift, its enabling ecosystem, and the underlying infrastructure.

Moreover, though OpenShift allows for a high degree of customization, including the ability to add third-party software and kernel drivers, Red Hat’s support for third-party customizations can be limited. Users who customize their OpenShift installations with non-standard configurations or third-party modules may find themselves on their own when issues arise from those customizations unless they secure additional enterprise support contracts with the third-party vendors.

MicroShift is a relatively new, lighter-weight version of OpenShift designed for far-edge and device-edge computing scenarios. It is a CNCF-certified Kubernetes distribution derived from OpenShift, so it shares a similar codebase maturity. However, because it targets the smallest edge devices, enabling operation on infrastructure with fewer resources, it has fewer features. Due to its recent introduction, MicroShift may not yet match the productization maturity or breadth of features of the full OpenShift platform. Users considering MicroShift for their edge computing needs should be prepared for a solution that, while promising, is still evolving and may lack the robustness of the more established OpenShift offerings.

Purchase Considerations
Red Hat has a broad and diverse sales channel, which includes a large partner ecosystem of SIs, VARs, and distributors. This extensive network allows customers to source pricing from almost any existing purchasing channel relationship, with costs generally equivalent regardless of the channel. Red Hat offers support for certified third-party components—if the component is supported as indicated in the Red Hat Ecosystem Catalog, Red Hat will collaborate with the ecosystem partner to troubleshoot the issue.

OpenShift provides a consistent Kubernetes platform while supporting multiple edge deployment configurations like compact 3-node clusters, remote workers, and single node clusters. This flexibility maps to varying size, connectivity, and availability requirements across near-edge, far-edge, and device-edge use cases.

Near-edge use cases include smart factories, industrial automation, smart cities, and AR/VR enablement, leveraging remote multinode support for GPUs and ML inference to enable innovations in these areas. Far-edge use cases include connected vehicles, smart oil rigs, and smart mining sites leveraging OpenShift’s remote worker node topology to reduce reliance on large duplicate resource requirements and compute footprints at the edge for Kubernetes control plane functionality. Device-edge use cases include wearables, robotics, and satellites, where the end device needs to maintain fully autonomous access to local resource-constrained devices and offer self-healing capabilities from initial boot to app-level resilience. OpenShift delivers a uniform experience across on-premises, hybrid cloud, and edge. The same teams, tools, and processes can be leveraged across the entire edge spectrum.

Radar Chart Overview
Red Hat is positioned in the Maturity/Platform Play quadrant on the GigaOm Radar, as it provides a mature, feature-rich Kubernetes platform with a commitment to incremental innovation, positioning it well for future growth and executing at a high level in the edge computing market. Its positioning reflects Red Hat’s established presence in the market and its ongoing commitment to improving and expanding its Kubernetes platform. While OpenShift provides a rich platform experience, its use-case-specific variants like MicroShift include a focused set of features that address specific needs within the Kubernetes ecosystem. This balance demonstrates Red Hat’s strategy to offer a robust platform that can serve a wide range of use cases while strategically managing the complexity of offering a universally applicable solution for its customers.

Spectro Cloud, Palette

Solution Overview
Spectro Cloud is a technology company primarily focused on edge computing, cloud computing, and IoT. The company’s main offering is the Palette platform, which is designed to manage Kubernetes clusters across various environments, including public cloud, private cloud, bare metal, and edge locations.

In the past year, Spectro Cloud has continued to innovate on its product portfolio with the focused development of its Palette Edge solution, which enables operations teams to build secure configurations for edge devices. This product is part of a larger suite that includes Palette Enterprise and Palette VerteX, a version tailored for government agencies and organizations where security and compliance are paramount.

Palette Edge works by allowing users to manage their clusters across thousands of edge locations. It provides high availability and zero-downtime rolling upgrades, even in single-server deployments. The platform’s approach is distinctive: a built-in immutable OS based on Kairos.io, a new open source meta-Linux distribution, lets users spin up an immutable Palette-provided or customer-provided third-party Kubernetes cluster with the Linux distribution of their choice.

Spectro Cloud’s approach to Kubernetes management is notable in its focus on providing a scalable, flexible, and customizable solution. The company’s offerings are designed to meet the specific needs of different development teams, allowing them to choose from curated libraries of validated components to build the right infrastructure for their needs. This approach sets Spectro Cloud apart from its peers, who often offer one-size-fits-all solutions.

Strengths
Spectro Cloud’s Palette Edge solution is particularly strong in edge computing use cases due to its patented scalable distributed architecture, which can manage tens of thousands of endpoints, crucial for organizations with numerous edge locations. This architecture avoids control plane bottlenecks, enabling seamless management across vast networks, such as those needed by retail chains.

Palette Edge leverages a patented full-stack, declarative model that extends the Cluster API blueprint to encompass the entire stack, from the operating system to applications. Palette uses Cluster API to deploy across the full spectrum of edge deployments. However, for far-edge and device-edge use cases, it takes an “edge native” approach, leveraging Kairos to bootstrap the edge servers and deploy the customer’s preferred stacks. This simplifies the creation and deployment of container images for preferred OS and Kubernetes distributions across different locations. The model also supports integrated, sequential lifecycle management from Day 0 through Day 2, ensuring effective management of each stack layer.

Palette Edge by Spectro Cloud incorporates several noteworthy security features tailored for edge computing environments. It applies trusted computing and software supply chain security principles, including Cosign image signing and secure boot, to secure operations. Device security is emphasized from the onboarding phase, with authenticity checks to prevent malicious devices from connecting. Image verification through Cosign ensures the integrity of downloaded images, and Palette Edge performs root-of-trust verification during device startup to confirm that the system remains tamper-proof.
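To make the image verification step concrete, the sketch below shows the general Cosign technique of gating deployment on a signature check. It is a minimal illustration, not Spectro Cloud’s implementation: the cosign CLI is assumed to be installed, and the public key file (cosign.pub) and image reference are hypothetical.

```python
# Minimal sketch of Cosign-based image verification (illustrative only,
# not Palette Edge's internal mechanism). Assumes the cosign CLI is on
# PATH and that images were signed with the key pair whose public half
# is stored in cosign.pub.
import subprocess
import sys


def image_signature_valid(image: str, pubkey: str = "cosign.pub") -> bool:
    """Return True if `cosign verify` accepts the image's signature."""
    result = subprocess.run(
        ["cosign", "verify", "--key", pubkey, image],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    image = "registry.example.com/edge/app:1.4.2"  # hypothetical reference
    if image_signature_valid(image):
        print(f"{image}: signature verified, proceeding with deployment")
    else:
        print(f"{image}: signature verification failed, aborting")
        sys.exit(1)
```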

Spectro Cloud provides robust support to customers, offering a single point of support for both open source and commercial solutions. This includes assistance with integration and troubleshooting, as well as prevalidated and pretested integrations across every supported version of Kubernetes, simplifying compatibility and reducing complexity for customers.

Challenges
Palette Edge faces challenges common to proprietary systems when compared to other edge Kubernetes distributions. One of the main issues is the potential for vendor lock-in. The Palette Edge Kubernetes distribution complies with the upstream Kubernetes conformance standard, ensuring workload portability. However, the patented elements that enable its distributed architecture limit the platform’s interchangeability, making it difficult to switch to other solutions or integrate with different systems without significant effort or cost.

Purchase Considerations
There are several factors to keep in mind regarding the use of Palette Edge. Chief among them is the decision customers must make as to whether to consume it as a SaaS or on-premises deployment. When considering the cost tradeoff between SaaS and on-premises consumption for Palette Edge, it’s important to note that Spectro Cloud’s pricing models differ based on the deployment model. The management plane (Palette) is offered as a multitenant SaaS (no charge), a dedicated SaaS (an extra recurring fee applies), and a self-hosted deployment in a customer’s environment (customers can deploy on their own). Palette is a subscription service licensed by cluster worker nodes under management. Bare-metal and edge devices/servers are licensed per worker node, based on the core count and functionality of the device/server. Additionally, Palette Edge offers a low-cost small-form-factor (SFF) license for device-edge deployments such as Raspberry Pi, NVIDIA Jetson Orin, Intel NUC, or similar SOC/SBC devices.

For non-edge platforms, pricing is based on the consumption of worker node kilocore hours. This could mean lower upfront costs for SaaS, as it eliminates the need for on-premises infrastructure investment. However, on-premises deployment might offer better long-term cost efficiency for some organizations, as it can reduce recurring subscription fees associated with dedicated SaaS offerings in exchange for operational overhead on-premises.

Spectro Cloud Palette Edge offers a highly optimized edge experience tuned for a variety of near-edge, far-edge, and device-edge use cases. Its cloud-based composer allows users to define every element of the solution and tune each piece for their unique deployment scenario. Palette Edge excels in highly regulated government and military use cases given its innovative embedded secure supply chain attestation technology applied uniformly across the entire edge spectrum of use cases. Near-edge use cases include deploying content delivery networks (CDNs) closer to end users to reduce latency and bandwidth costs, or running AI inferencing workloads like computer vision analysis on edge servers near hospitals, stores, and so on to enable real-time experiences. Far-edge use cases include managing Kubernetes clusters on remote oil rigs, farms, or tactical edges with intermittent connectivity. Device-edge use cases include onboarding and managing Kubernetes clusters across thousands of small form-factor edge devices, such as those in retail stores and drones, or deploying Kubernetes and edge applications on moving vehicles.

Radar Chart Overview
Spectro Cloud is positioned in the Innovation/Platform Play quadrant on the Radar chart, as it provides a feature-rich Kubernetes platform and a commitment to innovation that positions it well for future growth and evolution in the edge computing market. This positioning appeals to a broad spectrum of customers seeking advanced security capabilities for their edge computing deployments and a composable, integrated platform for their operations. While Palette Edge provides a cohesive platform experience, it also maintains a focus on delivering advanced security-hardening capabilities that are highly relevant to tactical edge computing use cases. Spectro Cloud is not just offering a generic Kubernetes platform; it is tailoring its solution to meet the unique challenges and requirements of government and military use. Its composable packaging meets customers’ needs for customization without sacrificing the security, reliability, and predictability of its edge computing solutions.

SUSE Rancher, K3s

Solution Overview
Rancher Labs, a company focused on edge computing and containerization, was acquired by SUSE, a global leader in open source innovation, in late 2020. This acquisition has allowed the combined company to offer a best-in-class Linux OS, a market-leading Kubernetes management platform, and pioneering edge capabilities.

SUSE offers several prepackaged edge solutions, including the SUSE Edge 3.0 platform and the SUSE Adaptive Telco Infrastructure Platform (ATIP), both based on its core K3s Kubernetes distribution, a lightweight, easy-to-use Kubernetes distribution designed for deploying containers in any environment, including edge and IoT applications. K3s is part of a larger portfolio that includes Rancher Prime, an enterprise platform for multicluster Kubernetes management, and RKE2, SUSE’s security- and compliance-focused Kubernetes distribution.

K3s works by bundling all Kubernetes components into a single binary with a simple, converged control plane and worker model. It’s optimized for unattended, remote, resource-constrained environments and can be easily managed within the Rancher platform. This approach is unique to SUSE, simplifying deployment at the edge and enabling users to quickly launch and manage high volumes of clusters.
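As a rough illustration of how little is involved in standing up that converged model, the sketch below wraps K3s’s documented quick-start commands: install the server, read the generated node token, and join agents. The <server-ip> placeholder and the use of Python as a wrapper are illustrative assumptions; production installs would pin versions and harden the configuration.

```python
# Minimal sketch of a K3s server install and agent join, wrapping the
# documented quick-start commands (assumes Linux hosts with curl and
# root access; <server-ip> is a placeholder).
import subprocess

# On the server node: install and start K3s. A single binary runs the
# converged control plane + worker.
subprocess.run("curl -sfL https://get.k3s.io | sh -", shell=True, check=True)

# K3s writes a join token for agents at a well-known path.
with open("/var/lib/rancher/k3s/server/node-token") as f:
    token = f.read().strip()

# On each additional node, the same installer joins it as an agent when
# K3S_URL and K3S_TOKEN are set; print the command to run there.
print(
    "Run on agent nodes: "
    "curl -sfL https://get.k3s.io | "
    f"K3S_URL=https://<server-ip>:6443 K3S_TOKEN={token} sh -"
)
```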

SUSE’s approach to Kubernetes management is notable for its simplicity and focus on edge computing. The company’s offerings, particularly K3s, are designed to be lightweight and easy to use, making them ideal for edge and IoT applications. This focus on simplicity and ease of management sets SUSE apart from its peers in the Kubernetes management space.

Strengths
K3s is a lightweight Kubernetes distribution that excels in scalability, CPU architecture support, flexibility, and ecosystem adoption.

K3s’s scalability is evident in its ability to support up to 2,000 clusters and 100,000 nodes, with a roadmap to scale to one million clusters. This scalability is particularly beneficial for edge computing scenarios where numerous clusters may be deployed across various locations with limited connectivity. K3s facilitates remote management by allowing upgrades and patches to be managed locally on K3s clusters and then synchronized with the management platform, ensuring zero-downtime maintenance.
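One common way this local upgrade management is implemented is with Rancher’s system-upgrade-controller, which watches Plan custom resources and cordons and upgrades nodes accordingly. The sketch below submits such a Plan via the kubernetes Python client; it assumes the controller and its CRDs are already installed, and it mirrors the documented K3s automated-upgrade example rather than any Rancher-proprietary sync mechanism.

```python
# Minimal sketch of a K3s automated-upgrade Plan, submitted as a custom
# resource for Rancher's system-upgrade-controller (assumes the controller
# and its CRDs are already installed in the cluster).
from kubernetes import client, config

config.load_kube_config()

plan = {
    "apiVersion": "upgrade.cattle.io/v1",
    "kind": "Plan",
    "metadata": {"name": "server-plan", "namespace": "system-upgrade"},
    "spec": {
        "concurrency": 1,  # upgrade one node at a time (zero-downtime rollout)
        "cordon": True,    # cordon each node before upgrading it
        "serviceAccountName": "system-upgrade",
        "nodeSelector": {
            "matchExpressions": [
                {
                    "key": "node-role.kubernetes.io/control-plane",
                    "operator": "In",
                    "values": ["true"],
                }
            ]
        },
        # Track the stable K3s release channel rather than a pinned version.
        "channel": "https://update.k3s.io/v1-release/channels/stable",
        "upgrade": {"image": "rancher/k3s-upgrade"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="upgrade.cattle.io",
    version="v1",
    namespace="system-upgrade",
    plural="plans",
    body=plan,
)
```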

K3s supports a broad range of CPU instruction sets, including x86, ARM64, ARMv7, and s390x, making it versatile for deployment on everything from small IoT devices to large cloud instances. This extensive support enables K3s to run on diverse hardware platforms, from Raspberry Pi Zero to enterprise-grade servers, meeting the needs of a wide array of computing environments. K3s is especially suitable for environments with limited resources and connectivity, such as industrial IoT devices, edge computing, remote locations, and unattended appliances. Its lightweight nature, with a binary size of under 50 MB, allows it to function efficiently in resource-constrained settings.

K3s is a CNCF-certified Kubernetes distribution, ensuring workload portability across certified distributions. Its adoption has been significant in sectors such as government, where it is used in air-gapped environments, and among organizations seeking a simplified Kubernetes solution that is easy to deploy and manage. K3s is also quickly becoming the preferred open source edge Kubernetes variant for other notable solution providers, serving as the go-to embedded distribution for enhancing their own offerings.

Challenges
K3s is designed for edge computing and environments with limited resources, and its defaults reflect that simplicity. By default, K3s places few restrictions on containers and allows privileged containers, though it provides mechanisms such as pod security admission to restrict permissions. As a best practice, containers should run as a non-root user without privileged access, protecting the real root user on the host from potential container breakouts. K3s supports a rootless mode that allows running the K3s server and worker nodes as an unprivileged user; however, rootless mode is still considered experimental and has known issues around networking and upgrades.
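To illustrate the non-root best practice described above, here is a minimal sketch using the official kubernetes Python client against any conformant cluster, K3s included; the image, pod name, and UID are hypothetical assumptions.

```python
# Minimal sketch of the non-root, non-privileged container best practice
# (assumes a reachable cluster via kubeconfig; the image, pod name, and
# UID below are hypothetical).
from kubernetes import client, config

# On a K3s node the kubeconfig lives at /etc/rancher/k3s/k3s.yaml;
# load_kube_config() defaults to ~/.kube/config.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-app"),
    spec=client.V1PodSpec(
        security_context=client.V1PodSecurityContext(
            run_as_non_root=True,  # kubelet refuses to start the pod as UID 0
            run_as_user=1000,
        ),
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/edge/app:1.4.2",
                security_context=client.V1SecurityContext(
                    privileged=False,
                    allow_privilege_escalation=False,
                    capabilities=client.V1Capabilities(drop=["ALL"]),
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```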

Edge computing changes the economic and performance calculus compared to cloud data centers. It favors performance per watt and space efficiency local to the user, which can make accelerators more economical and practical for edge deployments. There is little evidence of K3s supporting workload accelerators other than GPUs. In scenarios like autonomous vehicles, where real-time data processing is critical for decision-making, hardware accelerators enhance processing capabilities to meet the demands of such computationally intensive tasks.

It’s important to note that K3s’s focus on simplicity and minimalism comes with a trade-off in terms of its ecosystem. While K3s is fully compatible with the broader Kubernetes ecosystem, it does not have as extensive a list of validated and certified vendors as some other Kubernetes distributions. This means that while K3s can be integrated with a variety of tools and platforms, these integrations may require additional configuration and may not have been as thoroughly tested or certified as those for more traditional Kubernetes distributions.

Purchase Considerations
K3s is a lightweight, CNCF-certified Kubernetes distribution that is part of the broader Rancher portfolio. K3s itself does not include an underlying operating system. It is designed to work on most modern Linux systems, and it has some minimum requirements for the nodes on which it runs. Customers must consider the additional cost of integration and management of their own OS when selecting K3s. In the broader Rancher portfolio, K3s is often used in conjunction with other Rancher products. For example, Rancher itself is a complete software stack for teams adopting containers, and it can manage K3s clusters, among others. Another product, RKE2, is a CNCF-certified Kubernetes distribution like K3s, but it is focused on security accreditations and meets the FIPS 140-2 standard.

K3s is lightweight enough to run in constrained device-edge environments while providing production-grade Kubernetes functionality for orchestration, lifecycle management, and more. Its modular architecture makes it customizable for a variety of edge and IoT use cases. K3s is ideal for near-edge use cases such as running local services like inventory and stock management on edge nodes within retail locations and warehouses while syncing data with the cloud. K3s excels in device-edge use cases, such as vehicle telematics on in-vehicle gateways to enable over-the-air (OTA) updates, run analytics at the edge, and maintain functionality with intermittent cloud connectivity, or smart cameras running computer vision applications like face recognition via containers deployed on low-power ARM-based devices.

Radar Chart Overview
SUSE Rancher is positioned as a leader in the Innovation/Feature Play quadrant, as its K3s product provides a balance between introducing new capabilities and maintaining a stable, production-ready solution.

SUSE Rancher K3s offers a focused set of features rather than an extensive feature set that tries to cover all possible use cases. This means it is concentrating on delivering high-quality, specialized features that are particularly relevant to a subset of edge computing use cases, rather than attempting to be a one-size-fits-all solution.

While SUSE Rancher is forward-thinking and introducing new capabilities, it is also ensuring that these innovations are reliable and ready for production use. This balance is crucial for customers who need both the latest features and the assurance of a stable platform for their edge computing deployments.

Wind River, Studio Cloud Platform

Solution Overview
Wind River, a global leader in edge computing, was acquired in December 2022 by Aptiv, a technology company focused on making mobility safer, greener, and more connected. The company’s primary offering is the Wind River Studio Cloud Platform, an open source, production-grade distributed Kubernetes solution for managing edge cloud infrastructure, which sits within Wind River’s larger portfolio of edge and embedded software.

Wind River offers Studio Cloud Platform as commercially supported StarlingX, a compilation of best-in-class open source technologies on top of its base Kubernetes distribution. The product is designed to control operational costs and support new use cases for telco-grade cloud environments optimized for distributed edge applications.

Wind River’s approach to edge computing is notable in its focus on production-grade Kubernetes for the 5G distributed edge. This approach has made the company an essential partner to Tier 1 network operators and communications service providers. The recent acquisition by Aptiv is expected to accelerate the transformation of Wind River’s solutions portfolio and focus efforts on greater alignment with its key industry verticals, catering to well-defined use cases.

Strengths
The StarlingX project is an open source, cloud-native platform that provides a highly reliable, scalable, and deployment-ready infrastructure for edge computing. It integrates OpenStack and Kubernetes, leveraging the strengths of both to manage, scale, and update services. Wind River builds on the strengths of the StarlingX project with Studio Cloud Platform, hardening it into a commercially supported, production-grade distributed Kubernetes solution for managing edge cloud infrastructure.

Studio Cloud Platform excels in optimized performance and in industry technology integration. The solution operates efficiently on low-capacity footprints, with optimizations for the latest Intel processor technologies and the use of accelerators, which contribute greatly to the overall performance of the solution. Its other key strength lies in its large catalog of certified blueprints and integrations with industry technology partners. These cross-validated integrations are made available through the Wind River Studio Marketplace, further enhancing the solution’s ability to leverage embedded OpenStack and Kubernetes technology components. These strengths make the platform a compelling edge solution for service providers looking for a low-risk way to align with the prevailing trend of network equipment providers modernizing their solutions toward a cloud-first, container-native design approach.

Challenges
Wind River Studio Cloud Platform is highly specialized, focusing on the specific requirements of the far-edge space with an ultra-low-latency, highly distributed, and 99.999% reliable solution. This focus on niche use cases, such as 5G and other intelligent edge systems, may limit its applicability in broader contexts. That said, Wind River has made extensions to the platform for key customers, many of them telco-related (telco accelerators, PTP notifications) and some not (GPU support), a testament to its extensible and customizable architecture.

While Wind River Studio Cloud Platform is built on open source technology, which typically allows for a high degree of customization, the platform’s focus on specific use cases and its integration with specific technologies may limit the extent to which it can be customized to meet unique user needs. This could potentially be a challenge for users who require extensive customization for their specific applications or workloads.

While Wind River Studio Cloud Platform is designed to be efficient, the platform’s focus on high-performance, mission-critical systems and recommended minimum infrastructure requirements mean that it has a relatively heavy footprint in terms of memory, storage, and network bandwidth compared to some of its competitors. This could potentially be a challenge for users who are operating in resource-constrained environments or who need to deploy the platform at scale.

Purchase Considerations
The purchase of Wind River Studio Cloud Platform may necessitate additional components such as Wind River Conductor and Wind River Analytics to ensure robust remote management and monitoring capabilities, as well as to support the operationalization of the platform in complex, distributed environments. Wind River Conductor is a key element, serving as an orchestration and automation system for the underlying platform, which simplifies the deployment and management of distributed workloads. Wind River Analytics provides the necessary tools for monitoring cloud server performance. It offers insights into the behavior of cloud systems, helping to optimize operations and reduce costs. This component is essential for maintaining the high reliability and performance expected from mission-critical intelligent edge systems.

Wind River Studio Cloud Platform excels in near-edge use cases requiring low latency, high bandwidth, flexibility, and distributed intelligence across many edge sites. Studio Cloud Platform is optimized to support both 5G core and vRAN deployments by providing a distributed Kubernetes solution to manage edge cloud infrastructure across potentially thousands of edge sites. It is being used by several Tier 1 communications operators for their 5G rollouts.

Radar Chart Overview
Wind River is positioned in the Maturity/Feature Play quadrant on the Radar chart, as it is a mature Kubernetes solution tailored to communications service provider requirements focused on well-defined, industry-specific edge computing use cases. The company’s balanced approach to features and maturity positions it as a solid choice for organizations seeking a dependable solution for managing near-edge cloud native infrastructure.

It’s well suited for customers who have no need for cutting-edge features and simply want a low-risk, edge-optimized, Kubernetes-compliant capability to support a third-party commodity off-the-shelf solution.

Wind River Studio Cloud Platform provides a comprehensive set of features that are likely to meet the needs of most low-complexity edge computing use cases. It can be beneficial for customers looking for a solution that delivers essential functionality without unnecessary complexity.

Similarly, its placement in the middle of the Maturity axis indicates that the solution is neither a nascent nor a legacy offering. It has been through several iterations, incorporating customer feedback and adapting to the evolving landscape of edge computing. This level of maturity often translates to a solution that has been tested in various real-world scenarios and is backed by a track record of performance and support.

6. Analysts’ Outlook

The market for Kubernetes in edge computing is still in its early stages, with a variety of solutions emerging to cater to diverse use cases and customer requirements. The market is characterized by two main clusters of offerings: platform players and feature-focused solutions. Platform players target customers with complex needs that require a high degree of architectural flexibility across the entire spectrum of edge computing use cases. In contrast, feature-focused solutions strive to strike a balance between offering a broad range of features and ease of use, typically aiming to lead the market in more focused, use-case-specific requirements.

Our research showcases a diverse range of vendors, each with unique strengths and challenges. For instance, AWS, Canonical, and Mirantis are positioned as feature players, offering solutions that cater to customers with complex, well-defined footprints and needs. These vendors are less concerned with ease of use and more focused on providing a wide range of niche features to address near-edge or device-edge needs. On the other hand, vendors like Red Hat, Rakuten, and Spectro Cloud are looking to balance breadth of offerings with ease of use, delivering the widest, most feature-complete solution suites available to address the broadest spectrum of use cases, from edge-adjacent hybrid cloud all the way out to the device edge.

For IT decision-makers considering adopting Kubernetes for edge computing, it’s crucial to weigh both current and future needs when comparing solutions and vendor roadmaps. Consider the architectural flexibility, ease of use, licensing constraints, and breadth of features offered by each solution. Also consider the vendor’s position on the GigaOm Radar chart, which provides a forward-looking perspective on the vendor’s product based on its technical capabilities and roadmap.

As the market matures, we can expect to see more feature-focused solutions that look to balance offering niche features and use case applicability. IT decision-makers should keep a close eye on this cluster as it is likely to influence the evolution of the market as a whole.


7. Methodology

*Vendors marked with an asterisk did not participate in our research process for the Radar report, and their capsules and scoring were compiled via desk research.

For more information about our research process for Key Criteria and Radar reports, please visit our Methodology.

8. About Matt Jallo

Matt has over twenty years of professional experience in information technology as a computer programmer, software architect, and leader. An expert in cloud, infrastructure, and management, as well as DevOps, he has been an Enterprise Architect at American Airlines, established disaster recovery systems for critical national infrastructure, oversaw integration during the merger of US Airways with American Airlines, and, as an engineer at GoDaddy.com, developed web-based applications to help small businesses succeed in e-commerce. He can help improve enterprise software delivery, optimize systems for large scale, modernize or integrate software systems and infrastructure, and provide disaster recovery.

9. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

10. Copyright

© Knowingly, Inc. 2024 "GigaOm Radar for Kubernetes for Edge Computing" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.
