Table of Contents
- API Functional Automated Testing Primer
- Report Methodology
- Decision Criteria Analysis
- Evaluation Metrics
- Key Criteria: Impact Analysis
- Analyst’s Take
- About Don MacVittie
Functional testing has long been the backbone of development-oriented testing, verifying the behavior of monolithic application code and its business logic to determine whether an application performs as designed. The emergence of application programming interface (API)-driven development is changing that. Business logic today is expressed via thousands of discrete building blocks that interact through rich APIs to enable functionality. This radical change in how applications are composed has likewise transformed the tools used to test the API-driven logic within them.
Traditional functional testing, particularly automated functional testing, is designed around the application user interface (UI). The UI provides the user's view of the application, and automation scripts are written against that view. While automated UI testing remains important for the UI and its underlying code, modern applications, built on layers of APIs, require a different approach: API functional testing, which evaluates the API interface and its underlying code. In both cases, automation is required to ensure testing occurs consistently. If testing demands significant staff or calendar time, it will be shortened or eliminated the first time a project falls behind schedule.
If API functional automated testing can exercise each API in a project, validate that it performs as expected, identify side effects of running the code, and prove that the API is ready for production, staff can spend time on other issues.
Automated testing can reduce time on task and improve testing consistency. By emphasizing reuse, testers can go from manually running tests and reviewing all the results to reviewing only those results that require human attention. In a fully automated environment, test staff do not need to look at results because test failures are placed into defect management and assigned to a developer.
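A minimal sketch of what such an automated API functional test looks like in practice: it exercises an endpoint, validates that the call performs as expected, and checks for side effects in application state. The `/items` endpoint, its payloads, and the in-process stub server are all hypothetical, included only so the example is self-contained and runnable; a real suite would target the project's own APIs and push failures into defect management.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

ITEMS = []  # server-side state, inspected later to detect side effects


class StubAPI(BaseHTTPRequestHandler):
    """Tiny in-process stand-in for the API under test."""

    def do_GET(self):  # GET /items -> list current items
        body = json.dumps(ITEMS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):  # POST /items -> create an item
        length = int(self.headers["Content-Length"])
        ITEMS.append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass


# Stand up the stub on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# 1. Exercise the API and validate it performs as expected.
req = Request(
    f"{base}/items",
    data=json.dumps({"sku": "A1"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
assert urlopen(req).status == 201

# 2. Identify side effects: the created item must be visible via GET.
with urlopen(f"{base}/items") as resp:
    items = json.loads(resp.read())
assert items == [{"sku": "A1"}]

server.shutdown()
```

Because each check is an assertion, a runner such as pytest can execute the whole suite on every build and surface only the failures, which is what lets test staff review exceptions rather than every result.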
Increasingly, API functional testing is a standard practice for developers and development operations (DevOps) teams. It does not cover the composition of APIs in the UI, however; modern application UIs are usually covered by web or mobile UI testing. Together, UI testing and API functional testing form a complete test environment that can improve code quality.
This GigaOm Key Criteria report and its sibling GigaOm Radar report combine to provide an overview of the API functional automated testing sector, identify capabilities (table stakes, key criteria, and emerging technology) and evaluation metrics for selecting an API functional automated test platform, and detail vendors and products that excel. These reports give prospective buyers an overview of the top vendors in this sector and help decision-makers evaluate solutions and decide where to invest.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.