- Full Cycle API Management
- Benchmark Setup
- Benchmark Results
- About William McKnight
- About Jake Dolezal
- About Kong
Application programming interfaces, or APIs, are now a ubiquitous method and de facto standard of communication among modern information technologies. The information ecosystems within large companies and complex organizations comprise a vast array of applications and systems, many of which have turned to APIs as the glue that holds these heterogeneous artifacts together. APIs have begun to replace older, more cumbersome methods of information sharing with lightweight endpoints, giving organizations the ability to knit together disparate systems and applications. APIs and microservices also give companies an opportunity to create standards and govern the interoperability of applications, both new and old, creating modularity. Additionally, they broaden the scope of data exchange with the outside world, particularly mobile technology, smart devices, and the Internet of Things, because organizations can securely share data with consumers and producers of information that are not tied to fixed locations.
Due to the popularity and proliferation of APIs and microservices, the need has arisen to manage the multitude of services a company relies on, both internal and external. APIs themselves vary greatly in protocols, methods, authorization/authentication schemes, and usage patterns. Additionally, IT departments need greater control over their hosted APIs, with capabilities such as rate limiting, quotas, policy enforcement, and user identification, to ensure high availability and to prevent abuse and security breaches. Exposing APIs opens the door to partners who can co-create and expand the core platform without knowing anything about the underlying technology.
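To make the rate-limiting idea concrete, here is a minimal, illustrative token-bucket sketch in Python. This is not Kong's or Apigee's implementation; the class name and parameters are invented for illustration only.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: sustains `rate` requests per
    second while permitting short bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1        # spend one token for this request
            return True
        return False                # over the limit: reject (e.g., HTTP 429)

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(results.count(True))  # → 10: the burst capacity is consumed, then requests are rejected
```

A gateway applies this kind of check per consumer or per API key before proxying the request upstream, which is how quotas and rate limits protect availability.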
Many organizations have adopted APIs and microservices lock, stock, and barrel, and they depend on these services being properly managed, with high performance and availability. For the purposes of this discussion, we define “high performance” herein as workloads of more than 1,000 transactions per second on an organization's API endpoints. For these organizations, the need for performance is as great as the need for management, because they rely on these API transaction rates to keep up with the speed of their business. An API management solution must not become a performance bottleneck; on the contrary, many companies look to the solution to load balance across redundant API endpoints and enable high transaction volumes. Imagine a financial institution processing 1,000 transactions per second: that translates to roughly 86 million API calls in a single 24-hour day! Performance is thus a critical factor when choosing an API management solution.
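The 86-million figure follows directly from the sustained rate, as this small calculation shows:

```python
# Daily API call volume implied by a sustained transaction rate.
tps = 1_000                      # transactions per second
seconds_per_day = 24 * 60 * 60   # 86,400 seconds in a day
calls_per_day = tps * seconds_per_day
print(f"{calls_per_day:,}")      # → 86,400,000 (the "86 million" cited above)
```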
This report focuses on API management platforms deployed in the cloud. The cloud enables enterprises to differentiate and innovate with microservices at a rapid pace. API endpoints can be cloned and scaled in a matter of minutes. Compared to on-premises deployments, the cloud offers elastic scalability, faster server deployment and application development, and less costly compute. For these reasons and others, many companies are leveraging the cloud to maintain or gain business momentum.
This report examines the results of a performance benchmark completed with two popular API management solutions, Kong and Apigee: both are full life-cycle API management platforms built with scale-out potential and architectures for large-scale, high-performance deployments. Despite these similarities, there are some distinct differences between the two platforms.
Kong was created by Mashape, which subsequently divested its other business to focus on the API platform. Kong kept a keen eye on delivering performance with a lightweight infrastructure, basing its platform on a lightweight proxy. Kong is known for its ability to handle more than 10,000 simultaneous connections to multiple API endpoints with minimal memory usage while delivering ultra-low latency. Kong became an open-source project in 2015. Today, it is used by well over 5,000 organizations on 400,000 running instances and has had 54 million GitHub downloads.
Kong is available as an open-source software (OSS) component with an impressive range of functionality, including open-source plugin support, load balancing, and service discovery. Kong Enterprise (KE), the edition tested in this benchmark, adds expanded functionality, such as a management dashboard, a customizable developer portal, security plugins, metrics, and 24×7 support. In this report, any mention of Kong should be taken to apply to Kong Enterprise as well.
Kong and Kong Enterprise can be deployed either in the cloud or on-premises. For us, the installation took less than 10 minutes from scratch on an Amazon Web Services (AWS) EC2 instance. Debian- and RedHat-based package managers (APT and YUM, respectively) have Kong in their repositories; Docker and CloudFormation options are also available.
Kong can operate as a single node or join others to form a cluster. In a single-node configuration, the PostgreSQL or Cassandra database can live on the same instance as Kong. In a cluster configuration (as pictured below), the database resides on a separate instance. Scaling Kong horizontally is simple: Kong is stateless, so adding a node to the cluster is as easy as pointing it at the external database, from which it fetches all the configuration, security, service, route, and consumer information it needs to begin processing API requests and responses. In a cluster environment, a load balancer (such as Nginx or HAProxy) additionally sits at the edge to provide a single address for clients and to distribute requests among the Kong nodes using a chosen strategy (e.g., round robin or weighted).
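The round-robin strategy mentioned above can be sketched in a few lines of Python. The node addresses here are hypothetical stand-ins for Kong instances behind the edge load balancer:

```python
from itertools import cycle

# Hypothetical Kong node addresses registered behind the edge load balancer.
kong_nodes = ["10.0.1.10:8000", "10.0.1.11:8000", "10.0.1.12:8000"]

def round_robin(nodes):
    """Yield nodes in strict rotation, as a round-robin balancer would."""
    return cycle(nodes)

balancer = round_robin(kong_nodes)
assigned = [next(balancer) for _ in range(6)]
print(assigned)  # each node receives every third request in turn
```

A weighted strategy differs only in that nodes with higher weights appear more often in the rotation, steering proportionally more traffic to larger instances.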
The following diagram represents the architecture of Kong.
Kong has a thriving ecosystem of plugins (referred to as the Kong Hub), available as part of the open source and enterprise versions, such as LDAP authentication, CORS, Dynamic SSL, AWS Lambda, Syslog, and others. Based on Nginx, Kong allows users to create their own plugins using LuaJIT.
Apigee has been around a long time, even before the advent of containers. Google acquired Apigee in September 2016, giving the large vendor an API management solution to compete with other large cloud vendors' products, such as Amazon API Gateway and Microsoft's Azure API Management. Apigee's primary microservices product is called Edge, but it has a broader product offering, including Sense, which detects suspicious API usage activity. While Apigee is available on-premises (a deployment it calls Private Cloud), Apigee discourages its use in favor of its software-as-a-service (SaaS) Edge offering in the public cloud. In fact, Apigee even exhorts on its own website1 to “think twice” about an on-premises deployment, calling it “an iceberg of maintenance and cost.” This might be Google's influence, preferring that Apigee be deployed in Google Cloud.
One reason a company might shy away from an on-premises deployment with Apigee is that its architecture is more involved than that of a pure cloud deployment. The diagram below shows the resources needed in an on-premises deployment. Since Apigee clearly recommends its public cloud offering to customers, we tested it out of the box with a business-level license, which permits 300 million API calls per day with no rate limiting or bandwidth reduction.
| | Kong | Apigee |
|---|---|---|
| First Released | 2010 (as Mashape) | 2009 |
| Current Version | EE 0.34 | 18.12.04 |
| Based on/Written in | Nginx (C and LuaJIT) | Proprietary |
| Database(s) | PostgreSQL or Cassandra | PostgreSQL + Cassandra |
| Other Dependencies | None | Apache Qpid + Zookeeper |