Table of Contents
1. Executive Summary
Container networking solutions provide lower-level (Layer 3 and 4) networking constructs that support communications within and between pods and clusters in a distributed application environment (container- or Kubernetes-based, for example). Enterprise-grade solutions in this category enable organizations to define network policies, routing, network security, observability, and analytics in complex applications, and they cater to the ephemerality of containerized environments.
In distributed environments, the network is part of the application and directly impacts service performance. Native container networking constructs available in Kubernetes and open source plugins enable organizations to start their containerization journey with relative ease and low upfront investment. However, organizations often miss the value-add of an enterprise-grade container networking solution and fully rely on open source networking components.
Without enterprise-grade mechanisms for scaling up, the network will eventually become a bottleneck, hindering service performance. The good news is that developers and network engineers are not locked into using only Kubernetes-native networking constructs and open source plugins.
Note: We are defining the key characteristics in the container networking Sonar report to be agnostic with respect to container runtimes and orchestration technologies. However, we will be using Docker and Kubernetes terminology to illustrate the concepts.
Managing networking at the container level gives organizations low-level control over security, performance, and observability. These solutions form a foundation for container security by handling segmentation, filtering, access controls, intrusion detection, and more. For distributed applications, container networking solutions support application performance by offering load balancing, observability, diagnostics, and troubleshooting. They also support application development by enabling multicluster, multicloud, and edge connectivity.
This is the first year that GigaOm has reported on the container networking space in the context of our Sonar reports.
This GigaOm Sonar provides an overview of the category’s vendors and their available offerings, outlines the key characteristics that prospective buyers should consider when evaluating solutions, and equips IT decision-makers with the information they need to select the best solution for their business and use case requirements.
ABOUT THE GIGAOM SONAR REPORT
This GigaOm report focuses on emerging technologies and market segments. It helps organizations of all sizes to understand a new technology, its strengths and its weaknesses, and how it can fit into the overall IT strategy. The report is organized into five sections:
- Overview: An overview of the technology, its major benefits, and possible use cases, as well as an exploration of product implementations already available in the market.
- Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology in an enterprise IT scenario. We look at table stakes and key differentiating features, as well as considerations for how to integrate the new product into the existing environment.
- GigaOm Sonar Chart: A graphical representation of the market and its most important players, focused on their value proposition and their roadmap for the future.
- Vendor Insights: A breakdown of each vendor’s offering in the sector, scored across key characteristics for enterprise adoption.
- Near-Term Roadmap: 12- to 18-month forecast of the future development of the technology, its ecosystem, and major players in this market segment.
2. Overview
Networking across containers is challenging in large-scale container deployments, which are increasingly common with the adoption of orchestration systems such as Kubernetes. It is particularly difficult to define and enforce networking and security policies when working with multiple clusters distributed across different cloud providers and on-premises environments.
One development that has opened up innovation in the container networking space is the Container Network Interface (CNI) specification, a Cloud Native Computing Foundation (CNCF) project that consists of guidelines and libraries for writing plugins to configure network interfaces in Linux and Windows containers.
CNIs are third-party, typically open source plugins that provide essential Layer 3 and 4 constructs, plus additional low-level features such as network policy enforcement, load balancing, network encryption, and integration with network infrastructure for multihost and multicluster networking. Orchestration systems that support CNIs include Kubernetes, Mesos, OpenShift, Nomad, and CloudFoundry.
CNIs are a good point of reference for understanding the core capabilities of a container networking solution. Most CNIs are open source, and most enterprise-grade solutions leverage open source CNIs to build more advanced capabilities. Some networking vendors choose to develop CNIs in-house, while the creators of some open source CNIs offer an enterprise-grade version of their open source project.
Another component of a container networking solution is the ingress controller, which fulfills incoming requests by acting as a reverse proxy and load balancer. The ingress controller accepts traffic from outside the Kubernetes platform and converts configurations from ingress resources into routing rules between pods running inside the containerized environment.
While container networking solutions work at Layers 3 and 4, enterprises may also look for Layer 7 service-to-service connectivity. This is commonly handled by a service mesh solution. Some vendors featured in the container networking Sonar will also offer service mesh solutions, which is captured in the deployment model scoring but otherwise not weighted in any key characteristics. For more details on service mesh, see our report “Key Criteria for Evaluating Service Mesh Solutions.”
By getting all these components working together and adding an enterprise-grade service wrap for policy definition, visualization, and scalability, container networking solutions deliver on use cases such as:
- Connectivity within Kubernetes using network topologies such as overlays, direct routing, or native cloud provider integrations
- Implementation of zero-trust security principles
- Multicluster connectivity distributed across different environments
- Load-balancing across clusters
Typically, large organizations with complex containerized workloads will see the benefits of container networking solutions. These help alleviate the issues related to scaling and managing large environments by providing end-to-end visibility and management of how containers communicate with each other and the rest of the infrastructure stack. Smaller organizations, including startups, can often leverage open source solutions to handle simpler environments.
3. Considerations for Adoption
Organizations can start container networking in one of the following ways:
- Using an open source CNI with in-house development
- Using the enterprise-grade plan of an open source CNI
- Using a commercial solution
Organizations can use open source CNIs such as Antrea, Calico, Cilium, Flannel, or kube-router to enable containers to connect with each other (and the broader network) and to define IP addressing, routing, load balancing, network policy enforcement, and service discovery. As with all open source software, these are free to use and, in terms of upfront investment, the cheapest option available. However, the additional costs of development and upskilling employees can rapidly counterbalance the zero upfront charges. Note that we are not evaluating open source CNIs in this report, but they are an option worth considering for trials and for those seeking low upfront investment.
Enterprise versions of open source CNIs are hardened, enterprise-ready solutions offered by the creators of the open source software. The most notable vendors in this category are Isovalent for Cilium and Tigera for Project Calico. Opting for an enterprise version means getting support directly from the people who know the software best; they're more likely to understand the nuances and edge cases that arise, leading to quicker and more effective problem-solving. Updates to the enterprise features and the open source version are typically synchronized, so advancements in the open source project quickly find their way into the enterprise version as well.
Other container networking solutions are delivered by household names in the networking space. For this category, it's worth noting two things. First, these solutions can either use open source CNIs or develop proprietary ones. Second, some of these vendors offer container networking capabilities as part of a wider networking solution. Compared to the enterprise versions offered by the creators of open source software, commercial solutions present a number of benefits, such as vendor incumbency, standardized management, and broader product portfolios. If an organization already has an existing deployment from one of these vendors, deploying its container networking solution is likely to require less setup and configuration work.
Deployment Model
To help prospective customers find the best fit for their use case and business requirements, we assess the available deployment models for each solution (Table 1).
For this report, we recognize the following deployment models:
- Proprietary CNI: The solution can use a container network interface developed by the vendor. This deployment model will be scored as absent/not applicable if the vendor only uses third-party CNIs.
- Proprietary ingress controller: The solution can use an ingress controller developed by the vendor. This deployment model will be scored as absent/not applicable if the vendor only uses a third-party ingress controller.
- Built-in service mesh: The vendor packages service mesh capabilities within the solution. The service mesh can be developed by a third-party, but it must be deployed and managed through the same management console.
Table 1. Vendor Positioning: Deployment Model

| Vendor | Proprietary CNI | Proprietary Ingress Controller | Built-In Service Mesh |
|---|---|---|---|
| Arista | | | |
| Cisco | | | |
| F5 | | | |
| Isovalent | | | |
| Juniper | | | |
| NetScaler | | | |
| Red Hat | | | |
| Tigera | | | |
| VMware | | | |
Key Characteristics
Here, we explore the key characteristics that may influence enterprise adoption of the technology, based on attributes or capabilities that may be offered by some vendors but not others. These criteria will be the basis on which organizations decide which solutions to adopt for their particular needs. The key characteristics for container networking solutions are:
- Networking policy definition
- Routing
- Container network security
- Cross-environment networking
- Monitoring and troubleshooting
- Managing ephemeral resources
- DevOps suitability
- Scalability
Networking Policy Definition
Container networking can enable administrators to define and deploy networking policies for both east-west traffic (within the environment) and north-south traffic (ingress and egress). Policies can be defined at Layer 3 (using IPs), Layer 4 (using protocols and ports), and Layer 7 (using HTTP or DNS).
To do this, a solution can provide a variety of policy definition mechanisms, including GUIs and CLIs. The definition and implementation of network policies can also be done with configuration files such as YAML or JSON and via standard Kubernetes configuration mechanisms. Other options include using APIs or integrating with infrastructure-as-code tooling.
Some examples of network policies include forcing internal cluster traffic to go through a common gateway and tying policies to workloads, namespaces, nodes, service accounts, and container and pod labels. Policies can be described using service names, workload metadata, traditional IP/CIDR blocks, DNS names, and other forms of identities including cloud provider metadata.
Some solutions provide out-of-the-box policy examples that encode best practices, such as policies that isolate namespaces by default, enforce default-deny across entire clusters, or block known geographic IP ranges. Solutions with mature policy definition engines also offer policy visualizers that display how specific policies impact traffic flows, highlighting whether policies are blocking traffic that should be allowed or are overly permissive.
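The namespace-isolation and default-deny patterns described above can be expressed with standard Kubernetes-native configuration. As a minimal sketch (the `payments` namespace name is illustrative), a default-deny policy looks like this:

```yaml
# Default-deny: blocks all ingress and egress for every pod in the
# "payments" namespace (hypothetical name) unless another, more
# specific policy explicitly allows the traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Enterprise solutions typically layer richer policy constructs (FQDNs, cloud metadata, cluster-wide scope) on top of this baseline Kubernetes model.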
Routing
This key characteristic evaluates a solution’s routing functionality for enabling data transmission between pods, across clusters, and for communication with non-container workloads such as virtual machines (VMs), microservices, or monolithic applications hosted on physical servers. Routing capabilities include:
- Border Gateway Protocol (BGP), such as iBGP/eBGP, external traffic policy, graceful restart, and eBGP multihop. These are used to announce container network IP routes to external routers for connectivity between the Kubernetes cluster network and external systems.
- Multicast and/or anycast to optimize traffic delivery to groups of containers or pods by avoiding unnecessary unicast traffic replication.
- Layer 4 load balancing to distribute incoming requests across containers based on IP and port.
- Subnetting to divide the container network IP space into smaller subnets assigned to each node or zone to scale addressing and containment.
- Overlay network routing, including VLANs, VXLANs, IPsec, or other overlays to allow logical container network topologies to be built over the physical network fabric for connecting containers across hosts.
- IPv4 and IPv6 routing stacks with dual-stack support, so interfaces can originate and understand both IPv4 and IPv6 packets.
- Network address translation (NAT) supporting IPv4 and IPv6 translations.
- Egress and ingress gateway functionality for routing access to and from the container cluster network to external resources.
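As a concrete example of the BGP capability listed above, a CNI such as Calico can peer cluster nodes with an external router so container network routes are announced outside the cluster. This is a hedged sketch; the peer IP and AS number below are placeholder values:

```yaml
# Calico BGPPeer resource: peers every node with an external
# top-of-rack router so pod and service routes are announced to the
# rest of the network. peerIP and asNumber are illustrative only.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: tor-router
spec:
  peerIP: 192.0.2.1   # external router (documentation address range)
  asNumber: 64512     # private AS number
```

With a peering like this in place, external systems can reach pod IPs directly without an overlay, which is one of the routing topologies referenced in the connectivity use cases earlier in this report.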
Container Network Security
Solutions can support network security by implementing network segmentation and microsegmentation, traffic filtering, access controls, intrusion detection, and traffic encryption.
Traffic can be filtered based on source and destination IPs; fully qualified domain names (FQDNs); labels on pods, containers, namespaces, and clusters; service accounts; cloud provider metadata, such as AWS EC2 tags and VPC subnet IDs; Layer 4 protocols and ports; and Layer 7 constructs such as HTTP path values. For traffic encryption, administrators can enable cluster-wide and cross-cluster encryption using IPsec or WireGuard.
For access controls, the solution can apply a least-privilege model in which only approved traffic is accepted. This can be achieved simply by observing real-time traffic and deriving network policy rules from it. In addition, network policies can enforce mutual authentication between endpoints using frameworks such as SPIFFE.
Role-based access control (RBAC) can also be imposed on the management console to limit a user's view to their own environment. Third-party intrusion detection systems (IDSs) can be integrated, and tap/mirroring functionality can record traffic or reroute it into honeypots and other security systems.
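The FQDN-based filtering described above can be expressed declaratively in some CNIs. As a hedged illustration using Cilium's CRD (the `app: billing` label and domain are hypothetical), the following policy limits a workload's egress to one external API over HTTPS:

```yaml
# CiliumNetworkPolicy (Cilium-specific CRD): once any egress rule
# applies to the selected pods, all other egress from them is dropped.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-api-egress
spec:
  endpointSelector:
    matchLabels:
      app: billing            # hypothetical workload label
  egress:
    # Allow DNS lookups via kube-dns so the FQDN rule can resolve names.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS only to the named external FQDN.
    - toFQDNs:
        - matchName: "api.example.com"   # illustrative domain
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```

Identity- and DNS-aware rules like this are a key differentiator over plain IP/CIDR filtering, since the allowed destination survives IP churn behind the domain name.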
Cross-Environment Networking
Container networking solutions can support communication among clusters hosted across different environments and connect non-container workloads. Some examples include:
- Intra-cloud, which includes different data centers or availability zones in a single public cloud.
- Multicloud, for connecting workloads across different public clouds. The solution can connect clusters together and provide service discovery, service load-balancing, and consistent network policies across clusters, across multiple cloud providers.
- Hybrid cloud, encompassing environments that incorporate workloads running across on-premises, private cloud, and public cloud.
- Edge connectivity, with containers running on edge compute providers. These are typically smaller environments with many geographically distributed instances.
Other types of connectivity a container networking solution can support include:
- Monolithic or legacy applications, which includes support for VMs to provide consistent network security policies across both Kubernetes clusters and traditional VM-based applications.
- Web services and APIs, which are typically third-party services that communicate via HTTP.
Monitoring and Troubleshooting
A container networking solution can provide monitoring and observability features to identify and map network entities, and it can monitor data flows between them. These solutions are aware of constructs such as namespaces, pods, and clusters, and they support service names, DNS names, SSL SNI hostnames, and other forms of identity, such as AWS EC2 security groups.
The solution can discover services and entities automatically, either at regular polling intervals or as new instances are spun up. Upon discovery, these solutions can display topological maps that show connections and data flows between network entities. Tracing capabilities track requests as they travel through the network and provide details such as hop-by-hop latency. Other metrics can include packet drops, TLS ciphers in use, and connection saturation. At Layer 7, container networking solutions can provide content-aware visibility of API calls and HTTP(S) requests and return codes.
The solution can log network security events, such as blocked or allowed connections, and export them to third-party tools, such as security information and event management (SIEM) or security data lakes. It can also enable users to run a connectivity test that validates that the network configuration is behaving and performing as intended.
Managing Ephemeral Resources
The solution offers features to support the ephemeral nature of containers and associated networking such as:
- Dynamic IP allocation: IPs are automatically allocated for containers from configured pools as they are scheduled. IPs are released when the container stops.
- Service discovery integration: Service discovery automatically registers and deregisters container IPs as they come online or offline.
- Dynamic service routing: Routing entries are automatically updated as containers are rescheduled across hosts.
- Stateless networking: Network configurations can be declared via stateless configurations.
- Graceful draining: When hosts are drained, containers are rescheduled before being removed from networking.
- Predictable IP space: CNI and IPAM ensure containers get predictable IPs when restarting or rescheduling.
DevOps Suitability
Managing container networks is closely tied to the DevOps team building and running applications. As new containers are provisioned to host new workloads or scale existing ones, the DevOps team requires a tight integration between the container networking solution and their development tools to ensure the network is not a bottleneck in the development process.
The solution can cater to DevOps audiences by:
- Integrating with infrastructure-as-code (IaC) tools such as Terraform, Ansible, Salt, Chef, and Puppet.
- Supporting declarative APIs, and GitOps frameworks such as ArgoCD, Flux, and Weave.
- Integrating with CI/CD tools such as Jenkins and GitLab.
- Integrating with version control systems such as Git.
- Automating and scripting via languages like Python, Perl, and PowerShell.
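The GitOps integration listed above typically means network configuration lives in a Git repository and a controller reconciles the cluster against it. As a hedged sketch using Argo CD's Application resource (the repository URL and path are hypothetical):

```yaml
# Argo CD Application: continuously syncs network policy manifests
# from a Git repository into the cluster, so policy changes are
# reviewed as pull requests and applied automatically on merge.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: network-policies
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/network-policies.git  # placeholder
    targetRevision: main
    path: policies
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove policies that were deleted from Git
      selfHeal: true   # revert out-of-band changes in the cluster
```

A container networking solution that exposes its policies as declarative Kubernetes resources fits naturally into this workflow; one that requires imperative console configuration does not.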
Scalability
While the size of containerized environments, and changes to that size, are dictated by the container orchestration system, the container networking solution must support the scale of the environment with respect to the absolute quantity of services and traffic, as well as the rate of change as new instances are deployed.
A solution’s scalability can be evaluated by assessing capabilities such as:
- Number of nodes supported.
- Number of pods/containers per node.
- Number of services/load balancers.
- Throughput per node.
- Concurrent connections.
- IP address allocation.
- Federation.
- Number of applications being deployed per minute.
- Number of policy changes.
- Auto scaling to support scheduled workloads.
Table 2 shows how well the solutions discussed in this report score in each of these areas.
Table 2. Key Characteristics Comparison
Legend: Exceptional | Capable | Limited | Not Applicable
4. GigaOm Sonar
The GigaOm Sonar provides a forward-looking analysis of vendor solutions in a nascent or emerging technology sector. It assesses each vendor on its architecture approach (Innovation) while determining where each solution sits in terms of enabling rapid time to value (Feature Play) versus delivering a complex and robust solution (Platform Play).
The GigaOm Sonar chart (Figure 1) plots the current position of each solution against these three criteria across a field of concentric semicircles, with solutions set closer to the center judged to be of higher overall value. The forward-looking progress of vendors is further depicted by arrows that show the expected direction of movement over a period of 12 to 18 months.
Figure 1. GigaOm Sonar for Container Networking
As you can see in the Sonar chart in Figure 1, the vendors are equally distributed across the Feature Play and Platform Play sides.
Vendors on the Platform Play side offer point solutions wholly dedicated to container networking, while those on the Feature Play side offer container networking as a feature of a wider product.
Offering container networking as a feature of a wider product means customers must either buy into a broad solution or already have that solution deployed, which narrows the instances where the offering is applicable. Point solutions can be deployed specifically to support container networking use cases, without prerequisites or the need to deploy products with a wider feature set.
5. Solution Insights
Arista, CloudEOS
Solution Overview
Arista is a household name in the networking space, offering a variety of hardware and software products. Arista’s container networking solution is part of CloudEOS, the company’s multicloud and cloud-native networking offering. CloudEOS integrates with Arista CloudVision to simplify the operator’s experience of interconnecting and managing multicloud, cloud-native, and on-premises enterprise networks.
Arista CloudEOS Router for Kubernetes uses a containerized version of Arista EOS software (CloudEOS-CR), providing every Kubernetes node access to the full features of Arista EOS and CloudVision.
Arista CloudEOS and CloudVision software provide a consistent operational model for containers, private on-premises clouds, public cloud infrastructures, and bare metal environments. The solution uses the open source Calico and Cilium as the underlying CNIs and supports Kubernetes, OpenShift, and Docker.
After initial deployment, CloudEOS can be configured and monitored using standard EOS CLI commands, Arista’s JSON eAPI, OpenConfig, and real-time state streaming telemetry APIs used by Arista CloudVision.
Strengths
The solution offers good capabilities to support DevOps practices, which include integrations with IaC tools such as Ansible, Chef, Puppet, and Salt; programmatic access to network-wide state using JSON-RPC based eAPI or OpenConfig over gRPC; and scripting in Python, C++, and Go.
For routing, CloudEOS supports BGPv4, equal-cost multipath (ECMP) routing, route reflection, Link Layer Discovery Protocol (LLDP), VXLAN, and encryption using IPsec.
The solution offers provisioning and monitoring via CloudVision by supporting template-based deployments, event monitoring and management, packet capture analysis using the tcpdump and libpcap interface framework, and real-time streaming telemetry using gRPC, gRPC Network Management Interface (gNMI), and OpenConfig.
Challenges
CloudEOS has not been purpose-built for container networking, so the solution scores lower on several key characteristics described in this report, including network policy definition, container network security, monitoring and troubleshooting, scalability, and managing ephemeral resources. While these features are supported for multicloud use cases, the extent to which they support container networking and the associated security is limited. For example, the solution does not support policies such as forcing internal cluster traffic to go through a common gateway, defining policies using cloud provider metadata, or applying a least-privilege model by observing real-time traffic.
Purchase Considerations
Arista CloudEOS is a strong option for customers with an existing Arista networking footprint. If organizations have the in-house skills and have operationalized the use of CloudEOS and CloudVision, managing container networks with the solution will offer a smoother learning curve and shorter time to value.
By combining Arista CloudEOS with Kubernetes, customers are able to take advantage of the Arista EOS and CloudVision platforms in their Kubernetes on-premises clusters. The solution provides a consistent operational model across on-premises, hosted, and public cloud workloads.
Sonar Chart Overview
Arista is positioned in the Feature Play side of the Sonar because the vendor’s container networking solution is tightly coupled with Arista’s wider product portfolio, which makes CloudEOS less suitable for customers with a non-Arista technology stack.
Cisco, CNO
Solution Overview
Cisco is a key player in the networking space that offers a wide range of solutions, including hardware and software products. Cisco added container networking capabilities to its Application Centric Infrastructure (ACI) solution. These features are also available in the Application Policy Infrastructure Controller (APIC), the unified point of automation and management for the Cisco ACI fabric. All of these container networking capabilities are released under an umbrella product named Cisco Network Operator (CNO).
The Cisco ACI CNI plug-in provides network services to Kubernetes, Red Hat OpenShift, OpenStack, Rancher RKE, and others. Besides the proprietary ACI CNI, Cisco Network Operator extends support and automation to additional third-party CNIs such as OVN and Calico.
CNO aims to provide a ready-to-use, secure networking environment for Kubernetes. The integration maintains the simplicity of the user experience in deploying, scaling, and managing containerized applications while still offering the controls, visibility, security, and isolation required by an enterprise.
The Cisco Network Operator and Kubernetes allow the cluster pods to be treated as fabric end points in the fabric integrated overlay, and they provide IP address management (IPAM), security, load balancing, policy definition via APIC, multitenancy, visibility, and telemetry information.
Strengths
The solution supports Kubernetes network principles, such as namespace isolation and network policies; IP address management; policy-based distributed network address translation per application, namespace, or cluster; third-party service-mesh deployment; and automatic load-balancing configuration. CNO supports communications and application flows between on-premises containers, public cloud workloads, and public cloud containers.
The Cisco ACI plugin supports routing and switching with integrated VXLAN overlays implemented fabric-wide and on Open vSwitch, distributed firewalls for implementing network policies, and endpoint group-level segmentation. Newly released ACI CNI features include IPv4/IPv6 dual-stack support and firewall insertion.
Cisco ACI is already deployed across many data centers, so organizations can take advantage of its container networking capabilities to support connectivity of on-premises and cloud-hosted containers with little additional development effort.
Challenges
Cisco’s ACI is a complex solution that caters to many networking use cases, with container networking a small subset of the wider capabilities. Customers wanting to use ACI for container networking need to buy into the wider solution, which may not be suitable for those who require a point solution.
Purchase Considerations
Cisco Network Operator is a strong option for customers with an existing Cisco networking footprint. If organizations have the in-house skills and have operationalized the use of ACI and APIC, managing container networks with the solution will offer a smoother learning curve and shorter time to value.
The ACI CNI is currently suitable for defining a single overlay data plane for containers, VMs, and bare-metal servers; fabric automation for container networking functions (CNFs) in response to workload scheduling; a whitelist security model extended to container workloads; and end-to-end network visibility and telemetry, including container network traffic and a distributed Layer 4 load balancer at the top-of-rack layer.
Sonar Chart Overview
Cisco is positioned in the Feature Play side of the Sonar because the vendor’s container networking solution is tightly coupled with Cisco’s wider product portfolio, which makes CNO less suitable for customers with a non-Cisco technology stack.
F5, Distributed Cloud
Solution Overview
F5 offers a wide range of networking and security solutions and entered the container networking arena with F5 Distributed Cloud Services (F5 XC) following its 2021 acquisition of Volterra. F5 XC provides full-stack Layer 3-7 networking via a SaaS-based platform to connect, deliver, secure, and operate both networks and applications.
F5 Distributed Cloud Services comprises two components: Distributed Cloud Network Connect and Distributed Cloud App Connect.
F5 Distributed Cloud Network Connect helps customers establish a networking fabric across cloud providers and sites, Kubernetes clusters, on-premises, and edge locations, with automatic provisioning, observability, and end-to-end security and orchestration.
F5 Distributed Cloud App Connect is an agnostic Kubernetes ingress/egress controller and API gateway that can proxy service discovery between remote clusters, with the same automatic provisioning, observability, and end-to-end security and orchestration as Network Connect.
F5 XC has a native CNI implementation for hosting containers or VMs as pods. It also supports multiple pod interfaces for use cases that require direct VLAN access to the pod in addition to the cluster's internal CNI. F5 offers an XC ingress controller, which allows application owners to advertise their Kubernetes microservices to any other F5 XC site or to the internet; the XC distributed load balancer can be programmed automatically using labels on a Kubernetes ingress manifest. App Connect provides a service mesh that extends across clusters, so services on one cluster can access apps on other clusters as if they were local resources.
Strengths
All Layer 7 traffic on XC is controlled using service policies that provide granular control over access to the APIs of all apps advertised within and across sites (east-west and north-south). Layer 3 traffic can be controlled using an enhanced firewall, which can be configured using the XC console, CLI, API, or IaC tools like Terraform and Ansible. Access to configure firewall policies can be tied to RBAC and controlled by the tenant admin.
Layer 7 security policies can be created at a global level and shared as a recommended policy for applications and load balancers in all XC namespaces. F5 XC supports segmentation by assigning on-premises VLANs or cloud VPCs/VNets to network segments. Cross-site traffic is always encrypted using IPsec, and the XC load balancer also provides TLS encryption and SNI-based routing for application traffic.
F5 XC is also suitable for DevOps audiences, with support for IaC tools like Terraform and Ansible. Everything on the platform can be configured and monitored via APIs, which can be driven from any scripting language, such as Python or Perl, and managed using Git. Config changes can be made by running these scripts individually or through a CI/CD tool like GitLab.
Challenges
Container networking is a feature within the wider F5 Distributed Cloud Services platform, meaning that customers who wish to use it must buy into a more comprehensive offering. Customers who want to leverage F5’s container networking must buy the Multi-Cloud Networking package to get access to the F5 XC App Connect product. This can entail a migration from already deployed CNIs or ingress controllers.
Purchase Considerations
F5 Distributed Cloud Services is a platform with some distinguishing features, notably operating a network backbone and offering customer edge software that can bring the F5 XC services to an organization’s on-premises workloads. The solution offers a robust set of infrastructure services, so organizations that require connectivity and security across the whole environment, along with container networking features, should evaluate the solution’s wider capabilities.
F5 XC can provide consistent connectivity across sites in multicloud and hybrid cloud environments. The solution can provide Layer 3 connectivity to legacy monolithic applications using Network Connect or advertise them to other sites using App Connect. XC can front-end a third-party application and deliver it only to where it is required. It can also provide Layer 7 security and WAAP for these apps.
Sonar Chart Overview
F5 is positioned on the Feature Play side of the Sonar because the vendor’s container networking solution is tightly coupled with F5’s wider product portfolio, which makes the solution less suitable for customers that are not using F5 Distributed Cloud Services.
Isovalent, Cilium Enterprise
Solution Overview
Isovalent is an eBPF-based networking and security solution provider and the creator of the widely adopted Cilium CNI. Cilium was donated to the CNCF, from which it graduated in October 2023.
Isovalent Enterprise for Cilium is the hardened, enterprise-grade and 24/7-supported version of the eBPF-based Cilium CNI. It is a cloud-native connectivity solution that can be deployed and managed by customers themselves or in cloud-managed Kubernetes services like GKE, EKS, and AKS. It can also be installed and consumed via the AWS and Azure container marketplaces and is certified to run on Red Hat OpenShift.
In addition to the Cilium CNI, Isovalent Enterprise also includes an ingress controller and a gateway API. The Cilium Service Mesh’s functionalities include support for flow encryption and mTLS-based mutual authentication, network observability, and Layer 4 and Layer 7 load balancing, without requiring sidecar proxies.
Isovalent Enterprise for Cilium also includes the related projects Tetragon, for container runtime security enforcement and observability, and Hubble, for networking and security observability.
Strengths
The solution offers extensive network policy definition capabilities: policies can be created and implemented using YAML configuration files; in a GitOps approach alongside tools like Argo CD and Flux CD; via an API or CLI; or through the NetworkPolicy management GUI.
The solution’s GUI enables users to create a network policy based on actual observed traffic. A distinguishing development is the NetworkPolicy Editor for Kubernetes, which is a free online visual policy editor that outputs a Kubernetes and Cilium policy as a YAML file. The tool is integrated in the enterprise version of the solution and enhanced with real-world traffic from the environment.
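A policy produced by such an editor is a standard Kubernetes NetworkPolicy resource. The following is an illustrative sketch (the namespace and label names are hypothetical) allowing only frontend pods to reach backend pods on one port:

```yaml
# Illustrative editor output: a standard Kubernetes NetworkPolicy that
# permits ingress to "backend" pods only from "frontend" pods on TCP 8080.
# Namespace and labels are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```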
Isovalent’s solution offers extensive routing capabilities, which include BGP, Layer 2 announcement, a full IPv4/IPv6 routing stack with dual-stack support, a full NAT46 capability to support IPv6-only container clusters with IPv4 connectivity, multicluster routing, and segment routing version 6.
The solution can define security policies to filter traffic based on FQDN, IPs, service accounts, namespaces, cloud provider metadata such as AWS EC2 tags, and pod, container, namespace, and cluster labels. Network policies offer the option of enforcing mutual authentication between endpoints using the SPIFFE framework. The solution also supports WireGuard and IPsec for encrypted traffic overlay.
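FQDN-based filtering can be sketched with the open source CiliumNetworkPolicy CRD; the selector labels and domain below are hypothetical, and the DNS visibility rule is included because FQDN policies rely on Cilium’s DNS proxy:

```yaml
# Sketch: pods labeled app=crawler may resolve DNS via kube-dns and may
# reach only api.example.com on TCP 443. Labels and FQDN are illustrative.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-example-api
spec:
  endpointSelector:
    matchLabels:
      app: crawler
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"   # enables DNS-aware (FQDN) enforcement
  - toFQDNs:
    - matchName: api.example.com
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
```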
Challenges
The vendor could further develop integration between non-containerized applications and the containers and Kubernetes clusters managed by Isovalent Enterprise for Cilium. The solution does not yet offer specific networking capabilities for web services and APIs or integrations with third-party applications.
Purchase Considerations
Isovalent has consistently developed and released new features for its networking, security, and observability solution. The vendor has a comprehensive development pipeline to support extended use cases, such as multicloud networking across legacy and containerized workloads. Isovalent offers a full stack for container and multicloud networking, which includes CNI, an ingress controller, and service mesh, as well as Hubble and Tetragon for runtime security and observability.
Isovalent Enterprise for Cilium helps platform and security teams to solve use cases such as connectivity within Kubernetes using different network topologies, implementation of zero trust security principles with network policies and transparent encryption, distributed and secure multicluster connectivity, and integrated sidecar-free service mesh capabilities.
Sonar Chart Overview
Isovalent is positioned on the Platform Play side of the Sonar, as it is the creator of the widely adopted Cilium open source CNI, and it offers a CNI, ingress controller, and a service mesh solution, as well as an enterprise-grade service wrap on top of the CNI.
Juniper, CN2
Solution Overview
Juniper is one of the largest providers of hardware and software networking products, catering to container networking use cases with the Cloud-Native Contrail Networking (CN2) solution.
Juniper Cloud-Native Contrail Networking is a software-defined networking solution that automates the creation and management of virtual networks. CN2 is suited to multicluster environments and scales across virtual networks, policies, and compute instances to manage virtual networks of thousands of nodes. The solution is supported on Kubernetes, OpenShift, and OpenStack.
Juniper’s CN2 can be used as a Kubernetes CNI to provide connectivity to pod workloads using overlay tunnels across the IP fabric. These tunnels can be VXLAN, MPLS over UDP, or MPLS over GRE. In late 2023, Juniper added support for the eBPF data plane.
The solution provides dynamic end-to-end virtual networking and security for cloud-native containerized workloads, as well as VM workloads, across multicluster compute and storage environments, from a single point of operations.
Strengths
CN2 is DevOps-friendly and integrates seamlessly with existing workflows and processes. Advanced security features such as microsegmentation, multitenant and namespace network isolation, and label-based security policies provide granular controls over traffic flows.
CN2 can manage multiple clusters with one CN2 instance and supports multicluster policy federation for network and security and BGP cluster-to-cluster peering. This enables easy management and scaling of the network across multiple clusters.
CN2 operates with centralized control over a distributed set of vRouter forwarding planes on all worker nodes in the cluster. CN2 offers advanced networking but with simplified configurations and management for features like overlay and underlay forwarding; service chaining; federation of gateways, controllers, and VNF workloads; remote-edge compute clusters; and dynamic network learning. CN2 also uses technologies such as kernel vRouter, DPDK vRouter, and SmartNIC to provide high performance and speed.
Challenges
The vendor does not offer other modules that support container networking, such as ingress controllers or service meshes, meaning customers need to rely on third-party products for these functionalities. While CN2 has been specifically designed to support communications across containers, it is a commercial solution, so it is most suitable for customers who have bought into the Juniper ecosystem; other organizations usually prefer open source CNIs before committing to an enterprise-grade solution.
Purchase Considerations
Juniper’s solutions draw on the vendor’s extensive networking experience, with CN2 catering to most use cases and offering a comprehensive feature set for container networking. The solution is particularly useful for customers who also manage on-premises environments with an existing Juniper footprint.
Juniper is the only networking hardware vendor that offers a container networking point-solution, and it’s the only vendor featured in the report whose container networking solution supports telco cloud use cases. Its carrier-grade feature is used by a number of tier-1 service providers. CN2 supports use cases such as traffic security, service chaining, multicluster management and federation, and GitOps and CI/CD integrations.
Sonar Chart Overview
Juniper is the only networking hardware vendor positioned on the Platform Play side of the Sonar because CN2 is a purpose-built container and multicloud networking solution that can be deployed independently of other Juniper products.
NetScaler, NetScaler Cloud Native
Solution Overview
NetScaler provides application delivery services via a platform that offers performance improvement, security, observability, automation, and orchestration. For container networking, the vendor offers the NetScaler cloud native solution, which is composed of multiple modules.
NetScaler cloud native solution leverages the traffic management, observability, and comprehensive security features of NetScaler to ensure enterprise-grade reliability and security. It can provide complete visibility of application traffic in a Kubernetes environment and extract insights about application performance.
NetScaler cloud native is composed of NetScaler xDS adapter, NetScaler CPX, NetScaler Ingress Controller, and NetScaler Observability Exporter.
NetScaler CPX is a container-based application delivery controller that provides load balancing and traffic management for containerized applications, and can be provisioned on a Docker host. The NetScaler CPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms, such as the Citrix Hypervisor, VMware ESX, Microsoft Hyper-V, Linux KVM, AWS, Azure, and GCP.
NetScaler provides a node controller that can be used to create a VXLAN-based overlay network between the Kubernetes nodes and the NetScaler Ingress Controller, and it supports the Flannel, Cilium, and Calico open source CNIs. NetScaler CPX is supported on Docker, Kubernetes, OpenShift, EKS, AKS, GKE, and Rancher.
NetScaler CPX integrates well into the Kubernetes environment and forms an integral part of the NetScaler cloud native solution, which helps organizations create and deliver software applications with speed, agility, and efficiency in a Kubernetes environment while ensuring enterprise-grade reliability and security.
Strengths
NetScaler cloud native provides an advanced Kubernetes ingress solution that caters to the needs of DevOps audiences and network or cluster administrators. The solution automates CI/CD pipelines for canary deployments and provides out-of-the-box integrations with CNCF open source tools. The solution removes the need to rewrite legacy applications based on TCP or UDP traffic while moving them into a Kubernetes environment.
NetScaler offers a service mesh lite solution with less complexity; this is a differentiating feature for enterprises who need Layer 7 service-to-service connectivity. In this solution, NetScaler CPX runs as a centralized load balancer in the Kubernetes cluster and load balances east-west traffic among microservices. NetScaler CPX enforces policies for inbound and inter-container traffic.
NetScaler CPX has good routing features that include BGP Routing, Layer 4 load balancing and Layer 7 content switching, IPv6 protocol translation, and TCP optimization. Considering the vendor’s focus on application delivery, the solution offers features such as subscriber-aware traffic steering, application acceleration, and caching and cache redirection, among others.
Challenges
The solution does not include either a proprietary or an open source CNI, so organizations using the solution will need to configure a CNI themselves. While a CNI is not mandatory for a vendor to offer container networking capabilities, a solution without one cannot control the Layer 3 routing and observability features it offers.
Purchase Considerations
Customers who already use NetScaler application delivery controllers (ADCs) in on-premises environments can use the same ADCs for Kubernetes environments to apply the same load balancing and Layer 7 policies to microservices in Kubernetes clusters.
The solution can cater to a number of use cases, including hybrid cloud application delivery, load balancing across on-premises and public clouds, hardware-based application delivery replacement, and ensuring operational consistency of NetScaler across development, test, and production environments.
Sonar Chart Overview
NetScaler is positioned on the Feature Play side of the Sonar because the vendor’s container networking solution is part of its application delivery solution. While the company offers an ingress controller, it does not integrate a proprietary or open source CNI in the solution.
Red Hat, OpenShift and OVN-Kubernetes
Solution Overview
Red Hat OpenShift is a hybrid cloud application management platform built on Kubernetes that provides an enterprise-grade application development environment. OpenShift provides developers with an IDE for building and deploying Docker-formatted containers and then managing them with the open source Kubernetes container orchestration platform.
OpenShift provides two container networking solutions: OpenShift SDN, which configures an overlay network using Open vSwitch (OVS), and OpenShift OVN-Kubernetes (Open Virtual Network), which provides an overlay-based networking implementation. In this report, we are evaluating OpenShift OVN-Kubernetes, which has a wider range of features than SDN, including IPsec, IPv6, network policies, and network policy logs.
The OVN-Kubernetes solution offers a proprietary CNI plugin and is the network provider for the default cluster network. The OVN-Kubernetes CNI cluster network provider offers a network virtualization solution to manage network traffic flows; implements Kubernetes network policy support, including ingress and egress rules; and uses the Geneve protocol rather than VXLAN to create an overlay network between nodes.
Strengths
The solution’s biggest differentiator is its suitability for DevOps audiences. OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices by integrating with CI/CD solutions such as OpenShift Builds, OpenShift Pipelines, OpenShift GitOps, and Jenkins.
For container network security, the solution supports IPsec to encrypt traffic between pods on different nodes on the cluster network and traffic from pods on the host network to pods on the cluster network. The solution also includes an egress firewall to limit the external hosts that some or all pods can access from within the cluster.
The egress firewall supports scenarios such as stopping connection initiations to the public internet or to internal hosts outside the OpenShift Container Platform cluster. For example, a policy can allow one project access to a specified IP range while denying that access to a different project. Application developers can also be restricted from pulling packages from public Python pip mirrors, forcing updates to come only from approved sources.
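A minimal sketch of such a policy follows, using the OVN-Kubernetes EgressFirewall resource; the namespace, mirror hostname, and CIDR values are illustrative:

```yaml
# Sketch: allow egress only to an approved internal mirror and the
# private network, deny everything else. Namespace, dnsName, and CIDRs
# are hypothetical; rules are evaluated in order.
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default        # OVN-Kubernetes expects one EgressFirewall named "default" per namespace
  namespace: demo
spec:
  egress:
  - type: Allow
    to:
      dnsName: pypi.mirror.example.com   # hypothetical approved package mirror
  - type: Allow
    to:
      cidrSelector: 10.0.0.0/8           # internal network
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0            # block all other egress
```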
Challenges
As networking is just a component of OpenShift’s solution, the vendor scores lower on a number of metrics described in the report, including network policy definition, routing, observability, and troubleshooting.
The OVN-Kubernetes CNI cluster network provider does not support setting the external traffic policy or internal traffic policy of a Kubernetes service to Local. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway.
The solution also has limited policy definition mechanisms and basic observability capabilities. Moreover, it should further develop capabilities for cross-environment networking to include non-containerized environments and external resources such as web services.
Purchase Considerations
While OpenShift’s scope is wider than just its networking component, it can be considered to build upon the basic networking components of Kubernetes. OpenShift is particularly suitable for enterprises that deal with networking challenges and require the additional benefits of DevOps suitability and enterprise-grade support.
With respect to container networking, OpenShift can improve administrators’ and DevOps teams’ experience of managing networks using the OVN CNI, with capabilities for network policies and traffic security via encryption and firewall rules.
Sonar Chart Overview
OpenShift is positioned on the Feature Play side of the Sonar because the vendor’s container networking solution is part of the wider OpenShift container management product, which makes OpenShift OVN unsuitable for use with non-OpenShift management solutions.
Tigera, Calico Enterprise
Solution Overview
Tigera is a container networking and security solution provider and the creator of the widespread open source Calico CNI. Tigera offers the following container networking solutions: Calico Open Source, Calico Enterprise, and Calico Cloud. In this report, we are evaluating the Calico Enterprise and Cloud products.
Calico’s base networking capabilities include high-performance scalable pod networking, advanced IP address management, direct infrastructure peering without the overlay, dual top-of-rack peering, security policy enforcement, data-in-transit encryption, egress gateway, DNS policies, workload-based IDS/IPS, workload-centric web application firewall (WAF), network-based anomaly detection for zero-day attacks, and deep packet inspection.
Calico is currently the only solution that supports multiple data planes: Windows, standard Linux, and eBPF. While eBPF has better overall performance in benchmarks than the Windows and standard Linux data planes, a non-eBPF data plane can be suitable for compatibility with kube-proxy and existing iptables rules.
Strengths
The solution has comprehensive security capabilities, which include network segmentation and microsegmentation, traffic filtering, access controls, IDS/IPS, encryption using WireGuard, deep packet inspection, and zero-day threat detection based on network and container activity. Calico Enterprise can generate policy recommendations based on flow logs in customers’ clusters for traffic to and from namespaces, network sets, private network IPs, and public domains. Policies are tiered: they are grouped so that higher-precedence policies cannot be circumvented by other teams. Tiers have built-in features that support workload microsegmentation.
Calico Enterprise has extensive routing capabilities, including support for multicast, BGP, Layer 4 load balancing, pod CIDR allocation, service IPAM, subnetting, overlay network routing, and a choice of three data planes—eBPF data plane, Windows HNS data plane, and VPP data plane.
Given a source and destination address, the solution can automatically provision networking constructs. It can define policies such as forcing internal cluster traffic through a common gateway, DNS-specific policies, policies that target services by name, and default policies applied to all namespaces or pods. Calico supports dynamic IP allocation, service discovery integration, stateless networking, dynamic service routing, graceful draining, and predictable IP space.
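A tiered Calico policy that targets a service by name might look like the following sketch; the tier, namespace, and names are illustrative, and the services match field depends on the Calico version in use:

```yaml
# Sketch: a policy in a hypothetical "security" tier allowing all pods in
# the "demo" namespace DNS egress to the kube-dns service by name.
# Tier, namespace, and order values are illustrative.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: security.allow-dns     # Calico prefixes the policy name with its tier
  namespace: demo
spec:
  tier: security
  order: 100                   # lower order = higher precedence within the tier
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: UDP
    destination:
      services:
        name: kube-dns
        namespace: kube-system
```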
For monitoring and visualization, Calico’s Service Graph displays a topological view of pod-to-pod communications within and across clusters. The solution also has packet capture features to troubleshoot service connectivity, which can be integrated with the Service Graph to capture traffic for a specific namespace, service, replica set, daemonset, statefulset, or pod.
Challenges
Tigera is the creator of the Calico CNI, which offers comprehensive Layer 3 and 4 networking security and observability, but the vendor does not currently natively offer ingress controllers, which means that customers who require the capabilities supported by this component need to rely on third-party products.
Purchase Considerations
Tigera’s Enterprise and Cloud plans for Calico are particularly suitable for enterprises who have already deployed the Calico open source CNI and require a hardened version of the solution with additional features for security and observability.
The solution supports intra-cloud, multicloud, hybrid cloud, and edge connections. With host endpoints, DNS policies, and network sets, it supports connectivity to monolithic third-party applications, web services, and APIs.
Sonar Chart Overview
Tigera is positioned on the Platform Play side of the Sonar because it is the creator of the widely adopted Calico open source CNI, and it offers an enterprise-grade service wrap on top of the CNI.
VMware, NSX-T
Solution Overview
VMware is a key provider of virtualization technologies, as well as multicloud and security services. It has a comprehensive portfolio of products, including particularly extensive cloud and edge infrastructure management, networking, and security solutions.
VMware’s container networking solution is delivered via the NSX-T product, a network virtualization and security solution that enables VMware’s cloud networking with a software-defined approach to networking that extends across data centers, clouds, and application frameworks.
NSX-T supports two CNIs to enable container networking: the NSX Container Plugin (NCP), which is VMware’s proprietary CNI, and Antrea, an open source CNCF project.
NSX Container Plugin provides integration between NSX and container orchestrators such as Kubernetes, as well as integration between NSX and container-based PaaS products such as OpenShift and Tanzu Application Service. NCP enables the automatic creation of an NSX logical topology for a Kubernetes cluster and separate logical networks for each Kubernetes namespace, connecting Kubernetes pods to the logical network. It also implements Kubernetes network policies with NSX distributed firewall, and implements Kubernetes Ingress with NSX Layer 7 load balancer.
VMware Container Networking with Antrea provides in-cluster networking and Kubernetes network policy with commercial support and signed binaries. Integration with NSX provides multicluster network policy management and centralized connectivity troubleshooting via traceflow through the NSX management plane.
Strengths
The solution has well-developed security capabilities, which include traffic encryption, firewall-based filtering, context-aware microsegmentation, FQDN filtering, automated security policy recommendations, IDPS, and network traffic analysis.
The solution simplifies Kubernetes networking by using a unified network stack across Kubernetes and across multiple cloud providers and on-premises environments. The solution ensures secure pod connectivity with enforcement of Kubernetes network policies and advanced native policies.
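The "advanced native policies" mentioned above can be illustrated with an Antrea-native ClusterNetworkPolicy, which adds explicit actions, tiers, and priorities beyond standard Kubernetes NetworkPolicy; the labels and tier below are illustrative, and the API version varies by Antrea release:

```yaml
# Sketch: cluster-scoped policy in Antrea's built-in "securityops" tier
# allowing only "api" pods to reach "db" pods on TCP 5432 and dropping
# all other pod ingress. Labels and port are hypothetical.
apiVersion: crd.antrea.io/v1alpha1   # may be v1beta1 on newer Antrea releases
kind: ClusterNetworkPolicy
metadata:
  name: strict-db-access
spec:
  priority: 5
  tier: securityops
  appliedTo:
  - podSelector:
      matchLabels:
        app: db
  ingress:
  - action: Allow
    from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 5432
  - action: Drop
    from:
    - podSelector: {}   # any other pod
```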
Challenges
VMware’s solution is complex, with the full extent of its container networking feature set realized only by deploying multiple products, including VMware Tanzu, VMware Aria Operations, VMware Distributed Firewall, NSX Gateway Firewall, and VMware NSX Advanced Load Balancer. This makes NSX-T mainly suitable for organizations that already have a VMware footprint or in-house NSX skills.
Purchase Considerations
Companies with an existing VMware footprint can leverage the container networking solution with a short time to value. Organizations with a small but expanding containerized environment can start with the open source Antrea CNI before opting for VMware’s Container Networking with Antrea solution. Customers with valid licenses for VMware NSX Advanced receive VMware Container Networking at no extra charge.
Container Networking with Antrea can be used for network policy enforcement on managed Kubernetes services to define rules for ingress and egress traffic between pods in the cluster. The solution also provides security policy enforcement on managed Kubernetes clusters from public cloud providers, and it encrypts traffic between pods.
Sonar Chart Overview
VMware is positioned on the Platform Play side of the Sonar as container networking is a core part of NSX-T’s feature set. The solution provides networking and security services across VMs, containers, and physical servers across multicloud networks.
6. Near-Term Roadmap
Container networking requires both a different skill set and a different tool set compared to networking on-premises or in cloud environments. As containers and the orchestration platforms become the preferred method of building and running applications, we expect to see more use cases in which new containerized workloads need to communicate with the rest of the existing infrastructure, which includes bare metal workloads, on-premises and cloud-hosted VMs, and microservices, as well as third-party services such as APIs or web services.
The distributed nature of containers and hierarchies, consisting of pods, nodes, namespaces, services, and clusters, makes it difficult to define policies across environments. As such, container networking vendors are developing Layer 3 through Layer 7 cross-environment networking capabilities. Container networking solutions are best positioned to define end-to-end connections and policies as they have awareness of all container-related constructs.
Today, organizations typically require a multitude of tools, including CNIs, service meshes, observability platforms, multicluster orchestrators, ingress gateways, bare-metal load balancers, and multi-NIC meta-plugins, to address Kubernetes networking, observability, and security requirements. Container networking solutions that can address many of these use cases natively will be able to deliver a better user experience and simplify the technology stack required to manage containerized environments.
7. Analyst’s Outlook
Organizations immediately see the benefit of a cloud networking solution that can enable connectivity across hybrid and multicloud environments because they have experienced the difficulties of managing networks across different cloud providers. And just as enterprises took a do-it-yourself approach to managing cloud networks before buying into cloud networking solutions, so most organizations today take a DIY approach to managing container networks.
Where cloud providers natively offer virtual networking appliances that can be set up using GUIs and are documented by the cloud providers themselves, networking in containers has so far been mainly a community effort with little to no prescriptive advice for how the networking components need to behave.
A DIY approach to container networking is much more difficult than one to cloud networking. Container networking requires knowledge of both container runtimes and orchestration platforms, relies on multiple third-party plugins such as CNIs and ingress controllers, and demands skills typically associated with DevOps teams, which have not traditionally been (and should not be) responsible for network deployment and management.
Only recently have enterprise-grade container networking solutions entered the market, offering an experience comparable to cloud providers’ native networking features. But while cloud networking is difficult to manage across different providers, managing clusters of containers in different cloud environments is significantly more difficult.
As the container networking space has been driven mainly by open source projects, it is challenging to define what an enterprise-grade container networking solution looks like and which vendors are able to deliver these advanced capabilities. The main choice for organizations has been to look at open source CNIs to make a start on Kubernetes networking. Cilium and Calico have been among the most widely deployed CNIs, so their enterprise-grade versions have been the most obvious choice. However, several CNIs, such as Flannel, Canal, and kube-router, do not have an enterprise-grade plan. Others, such as Tungsten Fabric and Weave Net, the latter once a widely deployed CNI, have been discontinued and are no longer supported.
Interestingly, a considerable number of networking vendors such as Cisco, Juniper, and Arista have developed proprietary CNIs to offer container networking solutions as part of their product. The challenge with this approach is that the go-to DIY strategy has already caused organizations to opt for open source CNIs. Migrating from an already deployed open source CNI to a commercial solution like the ones provided by Cisco, Juniper, and Arista may entail more effort and therefore require more reasons to buy into these commercial solutions. Networking vendors are starting too late to generate demand via the open source CNI method. They can capitalize only on the existing deployments of Calico and Cilium if they offer superior features or if they can leverage any existing hardware deployments for a tightly integrated solution across all form factors.
In the future, we expect the container networking market of enterprise-grade solutions to become better defined and to see an increase in demand. At the moment, the open source to enterprise solution track has only two horses: Isovalent and Tigera, with VMware’s support for Antrea being a notable challenger.
To learn about related topics in this space, check out the following GigaOm Radar reports:
8. Report Methodology
A GigaOm Sonar report analyzes emerging technology trends and sectors, providing decision-makers with the information they need to build forward-looking—and rewarding—IT strategies. Sonar reports provide analysis of the risks posed by the adoption of products that are not yet fully validated by the market or available from established players.
In exploring bleeding edge technology and addressing market segments still lacking clear categorization, Sonar reports aim to eliminate hype, educate on technology, and equip readers with insight that allows them to navigate different product implementations. The analysis highlights core technologies, use cases, and differentiating features, rather than drawing feature comparisons. This approach is taken mostly because the overlap among solutions in nascent technology sectors can be minimal. In fact, product implementations based on the same core technology tend to take unique approaches and focus on narrow use cases.
The Sonar report defines the basic features that users should expect from products that satisfactorily implement an emerging technology, while taking note of characteristics that will have a role in building differentiating value over time.
In this regard, readers will find similarities with the GigaOm Key Criteria and Radar reports. Sonar reports, however, are specifically designed to provide an early assessment of recently introduced technologies and market segments. The evaluation of the emerging technology is based on:
- Core technology: Table stakes
- Differentiating features: Potential value and key criteria
Over the years, depending on technology maturation and user adoption, a particular emerging technology may either remain niche or evolve to become mainstream (see Figure 2). GigaOm Sonar reports intercept new technology trends before they become mainstream, providing insight that helps readers gauge their value for potential early adoption and the highest ROI.
Figure 2. Evolution of Technology
9. About Andrew Green
Andrew Green is an enterprise IT writer and practitioner with an engineering and product management background at a tier 1 telco. He is the co-founder of Precism.co, where he produces technical content for enterprise IT and has worked with numerous reputable brands in the technology space. Andrew enjoys analyzing and synthesizing information to make sense of today’s technology landscape, and his research covers networking and security.
10. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
11. Copyright
© Knowingly, Inc. 2024 "GigaOm Sonar for Container Networking" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.