Edge Kubernetes addresses a number of growing requirements in modern businesses. The increase in connected and smart devices across all areas of business is changing where and how we generate data. Processing that data to enable business outcomes can be challenging with traditional centralized computing models. From autonomous vehicles to fast food, to haulage and transport, to healthcare, and more, the endpoints we now connect are more varied than ever.
Connectivity back to centralized data centers may be unreliable or intermittent with mobile devices. Running the applications closer to the device and transferring only processed data and results back to a central location allows greater flexibility and faster response times. Cluster footprints can be as small or as large as necessary at the edge and can take advantage of the ruggedized form factors available to suit less hospitable environments. Most common use cases for edge Kubernetes can be found in data analytics, AI/ML workflows, image and video processing, robotic process automation, IoT, and other applications that benefit from speed in data processing and manipulation (see Figure 1).
Figure 1. Edge Kubernetes Overview
The two most common approaches to edge Kubernetes are:
- Software defined: The software-defined approach enables the use of a wide range of hardware devices as well as of existing infrastructure components. It allows a greater degree of flexibility, so you can mix and match the hardware requirements to the location.
- Appliance/specialized hardware: This approach is similar to using hyperconverged infrastructure appliances. It combines compute, storage, and networking resources with a software platform that manages and deploys the edge clusters. It may also include specialized hardware designed for AI/ML workloads that include GPUs or other dedicated hardware for a specific use case.
Solutions may support a combination of these two models. Software-defined deployments are usually beneficial when infrastructure, possibly including existing hypervisor platforms, is already in place, as well as in scenarios where bare metal deployment is not possible. Appliance-based deployments can be beneficial for greenfield sites, and they can also provide a single point of contact for purchasing and supporting hardware.
How We Got Here
Over the last two decades, computing technology has improved dramatically. In fact, a modern smartphone carries more processing power than the average data center server used at the turn of the century. Along with this, the footprint of computing devices has shrunk. The advent of the Raspberry Pi and Next Unit of Computing platforms introduced low-power devices that can run enterprise-class applications in remote locations without sacrificing performance and stability.
Other key factors in the development of the edge and IoT space were improvements in connectivity, such as 4/5G mobile connectivity, and SD-WAN solutions that allowed for enterprise control and connectivity over consumer-grade internet connections.
Initial efforts in edge computing started within the virtualization space; however, the overhead of virtual machines and hypervisor software can make deployment at scale difficult. Deployments were limited to sites that could host enough resources to run the necessary applications, which often required additional power and cooling at the edge location. Then, along came container technology, which allowed the deployment of a minimal number of services and associated system files to support the application. The footprint required to deploy multiple applications was reduced from traditional pizza box-style servers to devices you can fit in your hand.
In recent years, Kubernetes emerged as the de facto standard for container orchestration, with wide adoption across multiple hyperscale cloud and on-premises solution providers. Bringing the scalability and orchestration capabilities of Kubernetes to the edge made running hundreds or even thousands of globally distributed data centers a reality.
About the GigaOm Sonar Report
This GigaOm report is focused on emerging technologies and market segments. It helps organizations of all sizes understand the technology, its strengths and weaknesses, and how it can fit into their overall IT strategy. The report is organized into four sections:
- Overview: An overview of the technology, its major benefits, possible use cases, and relevant characteristics of different product implementations already available in the market.
- Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology in an enterprise IT scenario, including table stakes and key differentiating features, as well as considerations on how to integrate the new product with the existing environment.
- GigaOm Sonar: A graphical representation of the market and its most important players, focused on their value proposition and their roadmaps for the future. This section also includes a breakdown of each vendor’s offering in the sector.
- Near-Term Roadmap: A 12- to 18-month forecast of the future development of the technology, its ecosystem, and the major players in this market segment.
2. Report Methodology
A GigaOm Sonar report analyzes emerging technology trends and sectors, providing decision makers with the information they need to build forward-looking—and rewarding—IT strategies. Sonar reports provide an analysis of the risks posed by the adoption of products that are not yet fully validated by the market or available from established players.
In exploring bleeding-edge technology and addressing market segments still lacking clear categorization, Sonar reports aim to eliminate hype, educate on technology, and equip the reader with insight that allows them to navigate different product implementations. The analysis focuses on highlighting core technologies, use cases, and differentiating features rather than drawing feature comparisons. This is done mostly because the overlap among solutions in nascent technology sectors can be minimal. In fact, even when a core technology is based on the same principles, the product implementations themselves tend to be unique and focused on narrow use cases.
The Sonar report defines the basic features that users should expect from products that correctly implement an emerging technology, while taking note of characteristics that will have a role in building differentiating value over time.
In this regard, readers will find similarities with the GigaOm Key Criteria and Radar reports. Sonar reports, however, are specifically designed to provide an early assessment of recently introduced technologies and market segments. The evaluation of the emerging technology is based on:
- Core technology: Table stakes
- Differentiating features: Potential value and key criteria
Over the years, depending on technology maturation and user adoption, a particular emerging technology may either remain niche or evolve to become mainstream (see Figure 2). This GigaOm Sonar report intercepts new technology trends before they become mainstream, providing readers with the insight they need to gauge the technology’s value for early adoption and the highest return on investment (ROI).
Figure 2. Evolution of Technology
3. Overview
The overall goal of edge Kubernetes deployment is to improve the scaling and efficiency of applications in a distributed environment. By harnessing the benefits of containerized deployment technologies and the improved computing capabilities of small form-factor devices, applications can be deployed closer to the end user and the originating data streams. This locality of data and applications results in better overall performance of the systems on site and improves the responsiveness of the business.
Edge Kubernetes clusters are usually implemented in remote locations that would be difficult to connect reliably to centralized data centers, or in locations with hostile environments that are not suitable for traditional computing infrastructure. A prominent example is the deployment of edge Kubernetes clusters to fast food stores across the country, which provides valuable insight into customer buying patterns, ensures efficient preparation of food, reduces waste, and increases throughput.
From a technical point of view, depending on the type of deployment, the components of these solutions are:
- Compute: The device needs to provide adequate compute capability and enough RAM to run the desired applications and the overall cluster management components. Traditional x86 and newer AArch64 architectures, as well as dedicated GPUs or ASICs for AI/ML workloads, are all options for compute deployment.
- Storage: Edge deployments are generally stateless in nature, processing data locally and storing long-term data back at central locations for later analysis and archival. However, adequate storage needs to be available for temporary data processing and deployment of the applications. This is usually in the form of local NVMe SSD devices within the compute unit, which saves on power consumption and space requirements.
- Networking: Connectivity for both the data plane and the control plane is essential both to run the applications and manage the platform. Small form-factor switches with lower numbers of ports are commonplace, and Power over Ethernet options are advantageous. Devices that can provide multiple functions, such as routing or security, are also beneficial.
- Software: The key components of edge Kubernetes deployments are all software, from day-zero deployment to ongoing maintenance of the environment. It’s the software stack that provides the ability to run the applications and the infrastructure.
Most solutions allow you to deploy both worker nodes and management services to the same physical hosts, reducing the required footprint without compromising the availability of services. This can be achieved by running multiple container components on bare metal hosts or by running on top of a virtualization environment.
Management of these solutions focuses on the ability to deploy, monitor, and maintain large-scale fleets of geographically dispersed clusters. These solutions often integrate existing deployments across on-premises data centers and within cloud providers, along with new edge deployments, and bring a consistent approach to managing your infrastructure and applications regardless of location.
Due to the nature of edge Kubernetes and the technology involved, there are plenty of opportunities to deploy this technology for enterprise applications. The most common use cases we see are:
- Data analytics: Data from devices at the edge locations can be processed, sorted, tagged, and managed locally, with only results and aggregated data being transmitted back to central data center locations. This saves on bandwidth and processing time.
- AI/ML workflows: Edge processing of data gathered from devices through AI or ML workflows locally enables faster decision making and instantaneous feedback. Applications such as autonomous driving or real-time tracking are popular in this space.
- Image/video processing: As with the above use cases, processing of images and video feeds at the edge allows the data to be processed, sorted, analyzed, and tagged locally. This data can then be acted on or aggregated and sent to a central data center location for additional processing. This can be useful for applications such as real-time parking information, road traffic analysis, weather forecasting, and search and rescue.
- Robotic process automation: Improvements in manufacturing allow a greater use of robotics for repeatable tasks on production lines, and ensuring that the operation of these processes is co-located within the facility fosters efficiency and continued operation, even when connectivity to central data centers is lost or where connectivity may be limited.
- IoT: Connected devices power many modern workplaces. From sensors and monitoring systems to connected appliances in fast-food restaurants, there are more data points to be collected and analyzed than ever before. Using IoT devices for the above-referenced use cases allows for robust edge applications.
- 5G/telecommunications: Improvements in mobile and remote connectivity solutions are creating new opportunities for services to be deployed at the edge of the radio access networks and for the distribution of applications to locations that are closer to the end user. At the same time, the software and hardware used to run these networks are now being deployed in a distributed manner, allowing service providers to turn existing mast infrastructure into an edge computing cloud.
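The data analytics and IoT use cases above share a common pattern: process raw data at the edge and ship only a compact summary upstream. The sketch below illustrates that pattern in plain Python; the site name, reading window, and summary fields are all hypothetical, not drawn from any vendor's product.

```python
"""Illustrative local-aggregation sketch: summarize raw sensor readings at
the edge and transmit only the small summary record to the central site."""
import json
import statistics

def summarize(readings: list[float], site: str) -> dict:
    """Reduce a window of raw readings to a compact summary record."""
    return {
        "site": site,
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "min": min(readings),
    }

# One minute of per-second temperature readings from a hypothetical sensor.
raw = [20.0 + 0.1 * (i % 7) for i in range(60)]
summary = summarize(raw, site="store-042")

raw_bytes = len(json.dumps(raw).encode())
summary_bytes = len(json.dumps(summary).encode())
print(f"raw payload: {raw_bytes} B, summary payload: {summary_bytes} B")
```

The bandwidth saving grows with the size of the raw window; only the summary record crosses the WAN link back to the central data center.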
4. Considerations for Adoption
Before adopting edge Kubernetes solutions, users should clearly understand the benefits and risks associated with this still-emerging technology. There are a growing number of solutions available in the market, but there are variations in deployment and management methodologies. The use case is the primary criterion for evaluating the right deployment model, followed by any constraints imposed by the target location.
Given the potential risks, the user should always analyze whether an edge deployment is really needed or whether it would be better to provide fast and reliable connectivity to a centralized facility or cloud provider. Ask how a loss of connectivity would impact the business applications and the devices at the edge location. If operations can continue for periods of time in a disconnected state, are there benefits that might outweigh the additional operational overhead? Where systems and applications require data processing to continue even when connectivity is lost, or where processed data or metrics are dependencies for later stages of the workflow, edge Kubernetes solutions can be ideal.
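The disconnected-operation question above can be made concrete with a store-and-forward sketch: the edge site keeps processing locally, queues results while the uplink is down, and flushes the backlog once connectivity returns. The class and field names below are illustrative assumptions, not part of any specific product.

```python
"""Toy store-and-forward buffer for disconnected edge operation."""
from collections import deque

class StoreAndForward:
    def __init__(self) -> None:
        self.backlog: deque = deque()   # results queued while offline
        self.shipped: list = []         # results delivered upstream
        self.connected = True

    def emit(self, record: dict) -> None:
        """Ship immediately when online; otherwise buffer locally."""
        if self.connected:
            self.shipped.append(record)
        else:
            self.backlog.append(record)

    def reconnect(self) -> int:
        """Flush the backlog in order once the uplink is restored."""
        self.connected = True
        flushed = len(self.backlog)
        while self.backlog:
            self.shipped.append(self.backlog.popleft())
        return flushed

edge = StoreAndForward()
edge.emit({"order": 1})
edge.connected = False          # uplink to the central site drops
edge.emit({"order": 2})         # local processing continues; result queued
edge.emit({"order": 3})
flushed = edge.reconnect()      # backlog drains when the link returns
print(f"flushed {flushed} queued records; total shipped: {len(edge.shipped)}")
```

If the business can tolerate this kind of deferred delivery, the operational overhead of an edge deployment is more likely to pay off.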
Return on investment for these solutions should be measured by direct impact on business outcomes, be that faster time to market or a competitive edge. Deployment of edge Kubernetes services should directly enable line-of-business applications to be deployed quickly and efficiently to the locations where they have the most impact. Management overhead should be minimal, and a small-to-midsized team should be able to handle the overall operation of the applications across multiple sites.
Challenges inherent in the deployment of such solutions at scale should not be underestimated. Support for automation, discovery, and configuration tools can simplify the initial deployment, as well as upgrades and day-to-day activities. Central management consoles, dashboards, and metrics are key to ensuring that operations teams are equipped to handle ongoing maintenance and respond to issues effectively.
Key Characteristics for Enterprise Adoption
Table 1 shows the most important characteristics of edge Kubernetes to evaluate and how well each is implemented in the solutions assessed in this report. The characteristics to consider before adoption include:
- Deployment: The deployment options available should be evaluated against the requirements of the locations in use and the infrastructure available. Appliance-based deployments will have different considerations than those of software-defined options. Appliance deployments are typically easier and require less up-front configuration. Software-defined deployments should support both bare metal and hypervisor platforms.
- Automation: The availability of APIs and basic automation frameworks is essential to deployment at scale. Automation should be taken into account across the entire deployment of the edge infrastructure and the continued day-two operations. Infrastructure lifecycle management should be included in the automation options; automatic updating and patching of edge locations are essential when operating at scale.
- Management: Control plane operations should be easy to manage across multiple edge locations. Visibility, metrics, deployment status, and other key information should be available within the management solution. Integration into the hardware platform is desirable for bare-metal deployments and management-of-failure scenarios.
- Scalability: Edge Kubernetes solutions should support scale-out both locally at the edge location and geographically dispersed at multiple locations simultaneously. The solution should support various processor architectures and operating systems to allow flexibility in the deployment of applications based on use case.
- Security: Security should be built into the solution from the very start, ensuring secure communications locally within the edge deployment and for the applications deployed, and secure management operations from centralized consoles or APIs.
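The Automation characteristic above implies staged, policy-driven updates across many sites. The sketch below shows one naive way to plan such a rollout: a single canary site first, then the remaining fleet in fixed-size batches. This is purely illustrative; real platforms drive rollouts through their own APIs and policies, and the site names are invented.

```python
"""Illustrative staged-rollout planner for a fleet of edge sites."""

def plan_rollout(sites: list[str], batch_size: int = 3) -> list[list[str]]:
    """Return update waves: one canary site, then batches of batch_size."""
    if not sites:
        return []
    canary, rest = sites[:1], sites[1:]
    waves = [canary]
    for i in range(0, len(rest), batch_size):
        waves.append(rest[i:i + batch_size])
    return waves

fleet = [f"edge-{n:03d}" for n in range(1, 11)]   # ten hypothetical sites
waves = plan_rollout(fleet)
for n, wave in enumerate(waves):
    print(f"wave {n}: {wave}")
```

Pausing between waves to check health metrics from the canary and each batch is what turns a bulk update into a safe, scalable one.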
Table 1. Key Characteristics for Enterprise Adoption
- Exceptional: Outstanding focus and execution
- Capable: Good but with room for improvement
- Limited: Lacking in execution and use cases
- Not applicable or absent
5. GigaOm Sonar
The GigaOm Sonar provides a forward-looking analysis of vendor solutions in a nascent or emerging technology sector, based on each vendor’s strategy, execution, and roadmap. The GigaOm Sonar chart plots the current position of each solution against these three criteria across a series of concentric triangles, with those set closer to the center judged to be of higher overall value. The forward-looking progress of vendors is further depicted by arrows that show the expected direction of travel over a period of 12 to 18 months.
The GigaOm Sonar chart (Figure 3) is defined by three axes. They are:
- Roadmap: When assessing emerging technologies, it is important to take a forward-looking approach, describing the requirements for initial adoption while also understanding the expected future development of the technology. This is essential for organizations that seek to expand beyond the initial targeted use case and maximize use of the solution in a way that increases return on investment (ROI).
- Execution: This metric is critical to understand if the vendor has developed a solution with the necessary differentiation to stand out in the crowd. Is the product architecture solid and ready to support the growing number of features and capabilities that users will require over time?
- Strategy: This metric takes into account the vendor’s go-to-market strategy and its ability to create a solution ecosystem around its product. Strategy also reflects the company’s ability to articulate vision and accomplish the goals on its roadmap.
Figure 3. GigaOm Sonar for Edge Kubernetes Solutions
As you can see in the Sonar chart in Figure 3, the landscape is somewhat spread out as edge Kubernetes solutions are still in their infancy. Companies in this space are all working to meet the challenge of providing compute at scale across multiple locations. However, there are different approaches. Companies such as ZEDEDA and NodeWeaver focus on solving edge deployment and management problems and leave Kubernetes management to established players in the market. Meanwhile, solutions from Rancher, Red Hat, Spectro Cloud, and Platform9 feature a more complete approach that addresses the entire solution end to end.
6. Market Landscape
NodeWeaver’s approach to edge Kubernetes is quite different from that of the other vendors. Integrating storage, networking, and virtualization into a single system is more akin to the HCI systems we have become accustomed to in the data center. This approach enables the company to offer an optimized solution for use cases that must operate in harsh conditions or that require specific hardware optimizations and low latency.
One of the most interesting characteristics of NodeWeaver’s platform is its approach to deploying applications at the edge. NodeWeaver started with a lightweight hypervisor and built out from there. Building high-performance virtual machines that provide near bare-metal performance enables enhanced IO, especially in high-throughput network applications. The solution also provides persistent storage for containers based on a transactional model, providing durability and scalability for applications at the edge.
Support for the ARM CPU architecture and for GPUs for computer vision and AI/ML is available within the platform. With the ability to run as just a single node at the edge, the platform supports growth in single-node increments as well as heterogeneous devices. This helps with the ever-changing requirements at the edge and offers flexibility when replacing hardware.
Use cases for NodeWeaver are predominantly in industrial and automotive applications. Energy and infrastructure, such as solar or wind turbines, require devices that operate in challenging environments with limited connectivity. Autonomous vehicles are another compelling use case where offline devices can run within the car and provide high-speed offload of data when back at base.
Strengths: NodeWeaver works with server and device providers to preload configurations before shipping to customer sites, and it also works with several ruggedized specialist device providers.
Challenges: There’s no Kubernetes federation management layer. Partnering with existing solutions for management at this layer (such as Gardener, Rancher, and Tanzu Community Edition) would help close this gap.
Platform9 is a leading provider of managed Kubernetes services. The Platform9 SaaS control plane allows deployment and management of Kubernetes across public clouds and private data centers using a policy-driven approach. Extending to the growing number of edge use cases is a natural progression for the solution.
The primary market for Platform9 at the edge is in the Telco/5G space, but the same model lends itself to the emerging number of use cases for SASE PoPs (Points of Presence) and modern retail stores. Referring to the edge as “cloudlets,” Platform9 focuses on providing deployment and management from the ground up. Bare-metal-as-a-service features the ability to remotely provision and inventory devices or nodes, ensuring that they meet the standards required for security and connectivity. It also detects devices such as TPM chips, and the offload features of network cards, FPGAs, and, in the near future, GPUs. The containers-as-a-service facility ensures deployment of the operating system and container runtimes. Pairing this with a rich catalog of applications within the platform-as-a-service element allows deployment of monitoring, logging, load-balancing, and other features to the edge with ease, all from a defined set of policies.
Platform9 also offers a non-SaaS version of the solution for edge customers with additional security considerations or for those operating on dark sites. The solution doesn’t currently offer support for ARM-based processor architectures, limiting its potential for small footprint low-power workloads.
Strengths: Platform9 has a wealth of experience managing Kubernetes environments and has focused its use cases for the edge very well, providing specialist expertise in the 5G/Telco/SASE space utilizing features for network acceleration, IPV6, and multi-networking.
Challenges: Platform9 is currently limited to x86-based architectures and is therefore better suited to the larger locations and use cases within the various edge deployment models.
Recently acquired by IBM, Red Hat is a well-known entity in the open-source and Kubernetes ecosystems. The combination with IBM and its resources makes Red Hat a prominent player in a number of sectors, with the ability to leverage existing relationships across both traditional IT buyers and new cloud-native developers.
Red Hat’s edge offering extends the company’s impressive Kubernetes and container portfolio. Its existing integrations now reach across the entire stack, from the Red Hat Enterprise Linux (RHEL) OS layer all the way to the Red Hat Application Services management layer. Most impressively, Red Hat boasts flexible deployment options, including a single-node deployment of its popular Red Hat OpenShift platform, which helps reduce hardware requirements at the edge. Other options include providing remote worker nodes at the edge with centralized control nodes deployed at locations with a larger hardware footprint, and three-node high-availability clusters.
Overlaying all this with the Red Hat Advanced Cluster Management for Kubernetes and the wealth of automation already available from its Ansible product line, you can deploy, automate, and manage clusters at scale across all locations within the organization.
Use cases exist within but are not limited to telecommunications, manufacturing, and energy generation. Red Hat is working in areas of edge computing where distributed compute is a core part of the business to help customers overcome problems they’ve been dealing with for many years.
Strengths: Red Hat is taking a novel approach in offering community wisdom and learned knowledge in this space and providing these operational blueprints as code. These are called “validated patterns” and are backed by experience from real-world customer deployments. They are similar to the validated designs we have become accustomed to in the data center.
Challenges: The solution is composed of several products, which can complicate adoption in this space. However, the existing investment in validated patterns, along with good documentation, can help overcome this.
Spectro Cloud was founded in 2019 and is, therefore, one of the newer companies in this space. However, the team behind this company comprises industry veterans with a proven track record in transformational technology solutions.
Palette is the core of the solution from Spectro Cloud, offered as a SaaS platform or self-hosted for organizations with specific regulatory or compliance needs. Palette provides multi-cluster management for Kubernetes at the edge and full-stack OS and application management. It uses a declarative profile-driven model to build stacks for day-zero, day-one, and day-two operations. The focus of Spectro Cloud is to make Kubernetes easy to adopt regardless of the role of the consumer, whether developer or IT operations personnel.
The solution supports both GPUs and ARM-based CPUs, allowing for a wide range of hardware options across the data center and edge. Customers can use their own OS and Kubernetes distribution, or choose the purpose-built, security-hardened OS (based on Ubuntu) and Kubernetes distribution provided and supported by Spectro Cloud. This gives end users multiple choices, while still providing support options that are critical to the enterprise.
System updates across the edge are handled similarly to the “over the air” updates we have become accustomed to in the mobile device market. Spectro Cloud aims to provide notifications to edge devices and across the various layers of the stack, allowing for updates all the way from the bootloader to the applications running.
The primary market for Spectro Cloud is medium-to-large enterprises looking to make Kubernetes accessible and scalable across a large number of locations and cloud providers. These organizations want usability, flexibility, and control at scale to meet demanding workloads.
Strengths: Spectro Cloud has an elegant and intuitive UI that simplifies the adoption of edge clusters and application management, using community wisdom and industry expertise to ensure that customers from novice to expert are catered to.
Challenges: The edge cluster management solution is in the early stages of deployment compared to the rest of the offerings from Spectro Cloud. However, Palette already has a strong foundation of declarative, profile-driven, multi-cluster management on which to build.
Founded in 2014, Rancher Labs boasts a strong suite of products within the CNCF portfolio and is best known for its multi-cluster management and Rancher Kubernetes Engine (RKE). Acquired by SUSE in late 2020, the combined products provide a full-stack offering that enables the deployment of Kubernetes everywhere.
Rancher has worked very hard to address the issue of scale at the edge, supporting projects such as Kine, an etcd API shim that allows alternative datastores and removes etcd-related limitations when cluster counts reach the tens or even hundreds of thousands. Along with this, Rancher is working with projects such as Akri to help reach the tiny edge, where sensors and IoT devices meet the far-edge compute layer.
Due to the breadth of the product base, Rancher can service use cases across a broad range of edge deployments. It utilizes SUSE Linux Enterprise Server and RKE at the near edge, where equipment can be a more traditional form factor to provide solutions for multi-service and telco operators. Rancher also takes advantage of smaller footprint SUSE Linux Enterprise Micro and K3s for the far edge, where form factors are condensed for agriculture, manufacturing, transportation, and the like.
The entire stack can be deployed and managed using a GitOps model, combining onboarding, registration, and attestation services.
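At the heart of the GitOps model mentioned above is a reconciliation loop: desired state is declared in version control, and an agent on each cluster diffs it against the running state and converges. The toy sketch below uses plain dictionaries to stand in for rendered manifests; the workload names and fields are invented for illustration, and real agents apply actual Kubernetes resources watched from a Git repository.

```python
"""Toy GitOps-style reconciliation: diff desired state against actual state."""

def reconcile(desired: dict, actual: dict) -> dict:
    """Compute the actions needed to converge actual onto desired state."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Desired state, as it would be rendered from manifests in Git.
desired = {"web": {"image": "web:1.4", "replicas": 2},
           "metrics": {"image": "metrics:0.9", "replicas": 1}}
# Actual state reported by a hypothetical edge cluster.
actual = {"web": {"image": "web:1.3", "replicas": 2},
          "legacy": {"image": "legacy:2.0", "replicas": 1}}

actions = reconcile(desired, actual)
print(actions)
```

Running this loop continuously on every cluster is what lets a Git commit become the single mechanism for onboarding, updating, and retiring workloads fleet-wide.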
Strengths: Rancher benefits from having K3s, a certified Kubernetes distribution that is lightweight and built for IoT/edge computing. Solutions are highly scalable and suited to large-scale, far-edge use cases.
Challenges: Rancher Labs was acquired by SUSE in late 2020, and we are still in the early days of seeing what impact the combined solutions will have on the market. However, early indications show a strong synergy in the products across the stack, especially at the edge.
ZEDEDA was founded in 2016 specifically to solve challenges posed by edge computing. This focus makes the company somewhat unique in the market, and it shows in the focus of its product, ZEDCloud.
ZEDCloud is a SaaS-based control plane that powers the core of ZEDEDA’s edge management solution. The solution can be white-labeled to suit customer needs, bringing a familiar feel to users while enabling service providers to build managed edge compute services on the platform. The user experience is clean and provides a good overview of deployed locations globally. ZEDCloud enables the discovery, inventory, orchestration, security, and management of many edge devices, and it supports a choice of edge hardware (with memory and storage capacity as low as 1GB). ZEDEDA’s integration with third-party solutions such as Rancher provides centralized management of edge Kubernetes deployments.
A marketplace of applications is included, allowing users to deploy prepackaged and validated versions of common edge software solutions such as K3s, Azure IoT Edge, AWS Greengrass, and more. Users can add their own images and software to the marketplace for use with the orchestration engine. Device security at the edge can be enforced using remote attestation of the PCR values within TPM hardware, ensuring that deployed devices have not been tampered with in the field.
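The PCR-based attestation described above boils down to comparing a device's reported measurement digests against known good ("golden") values and flagging any drift. The sketch below shows only that comparison; real TPM attestation also involves signed quotes and a nonce, and the component names here are invented for illustration.

```python
"""Toy remote-attestation check: compare reported PCR digests to golden values."""
import hashlib

def measure(component: bytes) -> str:
    """Stand-in for a PCR measurement: hash the measured component."""
    return hashlib.sha256(component).hexdigest()

# Golden digests recorded when the device image was built (hypothetical).
golden = {0: measure(b"bootloader-v2"), 1: measure(b"os-kernel-v9")}

def attest(reported: dict) -> list[int]:
    """Return the PCR indexes whose reported digests differ from golden."""
    return [i for i, digest in golden.items() if reported.get(i) != digest]

# A healthy device reports matching digests; a tampered one does not.
healthy = {0: measure(b"bootloader-v2"), 1: measure(b"os-kernel-v9")}
tampered = {0: measure(b"bootloader-v2"), 1: measure(b"modified-kernel")}

print("healthy drift:", attest(healthy))
print("tampered drift:", attest(tampered))
```

A control plane running this check can quarantine any device whose boot chain no longer matches the expected measurements before it rejoins the fleet.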
Along with the SaaS platform, ZEDEDA has put considerable work into its EVE-OS, which allows for rich integration from the ground up. EVE-OS supports several CPU architectures and GPUs, including x86 and lower-power ARM offerings.
Use cases for ZEDEDA are primarily within industrial applications, such as manufacturing, renewable energy, and oil and gas. These are the operational technology areas that require a solution to meet the demands of non-traditional IT environments.
Strengths: ZEDEDA’s edge platform has been built around EVE-OS, which is the foundation for running VMs, containers, and applications on a wide range of supported devices. EVE-OS was developed within the Linux Foundation’s LF Edge organization, which provides open, vendor-neutral governance.
Challenges: Managing the deployment of the runtime, platform, hardware, and more requires a Kubernetes workload management platform to be layered on top to create a full end-to-end solution. However, ZEDEDA works closely with the big solutions in this space to ease the complexity of this solution.
7. Near-Term Roadmap
Edge Kubernetes expands existing investments and developments in the Kubernetes ecosystem. General Kubernetes adoption is still in the early stages, and Kubernetes at the edge is even earlier. As such, use cases are limited to those with specific needs and scale. However, vendors are working toward future mainstream adoption of Kubernetes and looking forward to complementary technologies such as 5G becoming widely available.
That said, distributed applications and localized data processing are growth areas for almost all businesses. Distributed workforces and global customer reach require being able to react quickly and operate from nearly any location. Edge Kubernetes solutions help to realize this goal. Continued development of support on small-footprint low-power devices, as well as additional compute architectures such as ARM, are essential to the long-term success of edge deployment solutions.
8. Analyst’s Take
Neither edge computing nor Kubernetes is a new technology; however, using them together has long been confined to businesses innovating in this space. Until recently, most solutions comprised open source or community-driven applications born out of necessity for a particular deployment. Early leaders in this area have paved the way for the general adoption of solutions across a wide range of business requirements, allowing the productization and simplification of complex solutions.
As the overall working landscape has changed during the last 12 to 18 months, so have the requirements of most businesses. Workforces are more dispersed than ever before, and the customer base is more global. Businesses are constantly finding new ways to generate value from data and analytics. The ability to deploy business applications directly where they have the most impact is a growing requirement, and edge Kubernetes solutions enable this.
As with all emerging technologies, it is important to carefully evaluate use cases and options, ensuring that the total cost of ownership and return on investment align with business expectations. Edge Kubernetes solutions should be considered alongside other new or existing business strategies for application modernization.
9. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.