- API Management in the Cloud
- GigaOm API Workload Test Setup
- Test Results
- About NGINX
- About William McKnight
Application programming interfaces, or APIs, are now a ubiquitous method and de facto standard of communication among modern information technology applications. Large companies and complex organizations have turned to APIs for exchanging data to knit these heterogeneous systems together and turn data into a service.
APIs have begun to replace older, more cumbersome methods of information sharing with reusable, loosely coupled microservices. This gives organizations the ability to share data across disparate systems and applications without creating the technical debt of proprietary, unwieldy vendor tools. APIs and microservices also give companies an opportunity to create standards for the interoperability of applications, both new and old, creating modularity and governance. Additionally, they broaden the scope of data exchange with the outside world, from mobile technology and smart devices to the Internet of Things (IoT), because organizations can securely share data with consumers and producers of information in non-fixed locations.
Due to the proliferation of APIs, the need has arisen to manage the multitude of services a company relies on, both internal and external. API endpoints themselves vary greatly in their protocols, allowed methods, authorization/authentication schemes, and usage patterns. Additionally, IT departments need granular control over their hosted APIs to enforce rate limiting, quotas, policies, and user identification, as well as to ensure high availability and prevent abuse and security breaches. Exposing APIs opens the door to many partners who can co-create and expand the core platform without needing to know anything about the underlying technology.
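Rate limiting is one of the controls a gateway typically enforces, and a common mechanism behind it is the token bucket. The sketch below is purely illustrative (it is not taken from any of the tested products) and shows the idea in Python:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second (steady rate)
        self.capacity = capacity  # maximum tokens, i.e. the allowed burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request admitted
        return False      # request rejected (a gateway would return 429)

bucket = TokenBucket(rate=5, capacity=10)  # 5 requests/second, burst of 10
allowed = sum(bucket.allow() for _ in range(20))
print(allowed)  # only the burst of 10 passes immediately
```

A production gateway applies the same logic per client or per API key; the 429 status codes counted later in this report are what a client sees when such a limit rejects a request.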
This report focuses on API management platforms deployed in the cloud. The cloud enables enterprises to differentiate and innovate with microservices at a rapid pace. It allows API endpoints to be cloned and scaled in a matter of minutes. The cloud offers elastic scalability compared to on-premises deployments, enabling faster server deployment and application development, and allowing less costly compute.
More importantly, many organizations depend on their APIs and microservices for high performance and availability. For the purposes of this paper, we define “high-performance” organizations as those that experience workloads of more than 1,000 transactions per second and require a maximum latency of less than 30 milliseconds across their API landscape. For these organizations, the need for performance is as pressing as their need for management, because they rely on these API transaction rates to keep up with the speed of their business.
An API management solution cannot be a performance bottleneck. On the contrary, many of these companies are looking for a solution to load balance across redundant API endpoints and enable high transaction volumes. A business sustaining 1,000 transactions per second handles roughly 2.6 billion API calls in a month. Large companies with high-end API traffic levels can see monthly API calls eclipse 10 billion. Thus, performance can be a critical factor when choosing an API management solution.
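The volume figures above follow from simple arithmetic; the sketch below (an illustration, not part of the benchmark) converts a sustained request rate into a monthly call volume:

```python
# Back-of-envelope: sustained requests/second -> monthly API call volume.
def monthly_calls(tps: float, days: int = 30) -> int:
    """Total calls in a month at a constant transactions-per-second rate."""
    return int(tps * 60 * 60 * 24 * days)

print(f"{monthly_calls(1_000):,}")  # 2,592,000,000 -- roughly 2.6 billion
print(f"{monthly_calls(4_000):,}")  # over 10 billion, high-end API traffic
```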
In this paper, we reveal the results of performance testing we completed with four full-lifecycle API management platforms: NGINX Controller, Kong Enterprise, Kong Cloud, and AWS API Gateway.
Note: We wanted to include Apigee in this report, due to its standing among popular solutions. However, its end user license agreement (EULA) expressly prohibits benchmarking the platform and publishing the results without Google’s written consent.
Performance Testing Highlights
In our benchmark testing, NGINX outperformed Kong EE at all attack rates for a single-node setup, with 30% lower latency than Kong EE at the 99.99th percentile. The latencies for NGINX and Kong EE diverged at higher percentiles: the differences are minimal below the 99.9th percentile, but become pronounced at the 99.9th and 99.99th percentiles and at the maximum latency seen during the test run.
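High-percentile (tail) latencies like these are computed by sorting the observed samples and reading off the value at the corresponding rank. The sketch below uses synthetic, hypothetical latencies purely to illustrate the calculation; it does not reproduce the benchmark data:

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Synthetic latencies in milliseconds (hypothetical, for illustration only).
random.seed(42)
latencies = [random.lognormvariate(0.5, 0.4) for _ in range(100_000)]

for p in (50, 99, 99.9, 99.99):
    print(f"p{p}: {percentile(latencies, p):.2f} ms")
print(f"max: {max(latencies):.2f} ms")
```

Note how few samples define the tail: in 100,000 requests, only 10 fall above the 99.99th percentile, which is why two products can look identical at the median yet diverge sharply at the tail and at the maximum.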
With three-worker node cluster configurations, NGINX produced less than half of the latency of Kong EE at the 99.99th percentile.
Compared with the fully managed offerings, Kong Cloud and AWS API Gateway, NGINX had over a thousand-fold lower latency at the 99.99th percentile.
The maximum transaction throughput achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency was over 30,000 requests per second for NGINX and over 20,000 for Kong EE. That is a 50% throughput advantage for NGINX compared to Kong EE. Neither of the fully-managed offerings we tested (AWS API Gateway or Kong Cloud) could achieve this level of throughput out of the box.
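The success criteria described here can be expressed as a simple check over per-request results. The helper below is a hypothetical sketch of that check (the status codes and latencies are invented for illustration), not part of the actual test harness:

```python
def meets_sla(results, max_latency_ms=30.0):
    """Apply the report's success criteria to (status, latency_ms) pairs:
    100% success (no 5xx or 429 errors) and maximum latency under 30 ms."""
    for status, latency_ms in results:
        if status == 429 or 500 <= status < 600:
            return False  # throttled or server error: not 100% success
        if latency_ms >= max_latency_ms:
            return False  # exceeds the 30 ms maximum-latency ceiling
    return True

# Hypothetical samples: (HTTP status code, latency in milliseconds)
passing = [(200, 4.1), (200, 12.7), (201, 8.9)]
failing = [(200, 4.1), (429, 0.8), (200, 9.3)]
print(meets_sla(passing))  # True
print(meets_sla(failing))  # False
```

Under this definition, the maximum sustainable throughput is simply the highest attack rate at which every response in the run still satisfies the check.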
Testing hardware and software in the cloud is very challenging. Configurations may favor one vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the workload itself. Even more challenging is testing fully managed, as-a-service offerings where the underlying configurations (processing power, memory, networking, etc.) are unknown to us. Our testing demonstrates a narrow slice of potential configurations and workloads.
As the sponsor of the report, NGINX opted for the default API gateway configuration as provisioned by the NGINX Controller solution; it was not tuned or altered for performance. GigaOm selected the Kong Enterprise configuration that was closest in terms of CPU and memory. The fully managed offerings (Kong Cloud and AWS API Gateway) were used “as-is,” since, by virtue of being fully managed, we have no access to, visibility into, or control over their respective infrastructures.
We leave the issue of fairness for the reader to determine. We strongly encourage you to look past marketing messages and discern for yourself what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection.
We have provided enough information in the report for anyone to reproduce this test. You are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.