In the course of this year’s evaluation of the API functional automated testing space, we identified evolving trends that bear on anyone considering a formal API testing program.
The Evolution of API Testing
Based on its naming, you may be surprised to learn that API testing has its roots in development, not testing. Unlike application testing, tests for APIs can’t be written without coding, and API testing tools were developed with low-code methods to call APIs. However, it took years for this to become truly useful.
At the same time, vendors began to recognize that the increase in API usage meant that application testing tools needed API testing to be complete, and they started to implement API testing as part of larger test suites.
Thus, two classes of API testing products were created that still loosely exist today:
- Tools that are easier for developers to use
- Tools that are easier for testers to use
Of course, the two tool types are complementary, and over time, vendors have moved toward supporting both types of testing to meet the overall needs of the enterprise.
The problem is that creating API tests using low-code methods is still difficult. In the end, the API call is still occurring, and not all test teams have staff that understand API mechanisms well enough to build tests even with low-code methods.
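To make that difficulty concrete, here is a minimal hand-written API functional test in Python. The endpoint (`/users/1`), payload, and in-process stand-in service are illustrative assumptions, not any particular tool’s output; the point is how much API knowledge (HTTP methods, status codes, media types, response shape) the assertions encode, which a low-code tool cannot supply on the tester’s behalf.

```python
# A sketch of a hand-written API functional test. The fake service
# exists only to make the example self-contained; a real test would
# target the API under test instead.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeUsersAPI(BaseHTTPRequestHandler):
    """Stand-in for the API under test (hypothetical /users endpoint)."""
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Ada"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeUsersAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

def test_get_user():
    # Each assertion encodes API knowledge: expected status code,
    # media type, and response-body structure.
    with urlopen(f"http://127.0.0.1:{server.server_port}/users/1") as resp:
        assert resp.status == 200
        assert resp.headers.get_content_type() == "application/json"
        return json.loads(resp.read())

user = test_get_user()
assert user == {"id": 1, "name": "Ada"}
server.shutdown()
```

Even in a low-code tool, choosing those expected values still requires someone who understands the API’s mechanics.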
This often results in tests still being created by developers as they write their code, which are then used by testers later. This seems like it would be an acceptable solution, except that we wouldn’t need testers at all if developers were good at testing.
In short, if developers knew about an issue, they wouldn’t have written code that created the issue to begin with. This is pretty much the last stumbling block that separates API testing during development from API testing that’s done afterward.
API Testing Today
That brings us to the most recent development: testing is one of the first spaces where the promise of AI will be fulfilled.
Test generation is already happening, enabling the creation of tests covering many more permutations. In fact, there will be so many that we’ll need a way to filter them down. Traditional limiters that test tools implement, such as “only exercise what’s changed,” have been extended and may be enough, but more specialized training of models will likely be needed to produce enough tests to exercise the code without burying users in test results. No one wants to end up with excessive tests that aren’t being used.
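The “only exercise what’s changed” limiter can be sketched in a few lines. The endpoint names and test-to-endpoint mapping below are illustrative assumptions, not any vendor’s implementation: each test is tagged with the endpoints it exercises, and only tests overlapping the changed set are selected.

```python
# Hypothetical changed endpoints, e.g. derived from a diff or an
# updated API specification.
changed_endpoints = {"/users", "/orders"}

# Hypothetical mapping of generated tests to the endpoints they touch.
test_coverage = {
    "test_get_user":    {"/users"},
    "test_list_orders": {"/orders"},
    "test_checkout":    {"/orders", "/payments"},
    "test_health":      {"/health"},
}

# Select a test if any endpoint it exercises was changed.
selected = sorted(
    name for name, endpoints in test_coverage.items()
    if endpoints & changed_endpoints
)
print(selected)  # → ['test_checkout', 'test_get_user', 'test_list_orders']
```

Here `test_health` is filtered out because nothing it exercises changed, which is exactly the kind of pruning that keeps a large generated suite from burying users in results.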
With test generation, we can further lighten the burden on less-technical test teams, allowing them to point the generator at an API and say “create the tests,” without having to write scripts or know about things like transport protocols to exercise the important parts. Assuming tests created during development form the core and the generated tests enhance them, the most important parts will be exercised first by the developer’s normal work and then by the AI-generated tests during standard pre-deployment testing.
This signals the ultimate merging of development-oriented and testing-oriented tools: if both groups can use the same UI and AI can generate the bulk of tests, these tools can achieve the goal of being useful across the SDLC.
API Testing Tomorrow
Vendors are at different points in this journey, but we do expect that by the time of our next evaluation into this space, most tools will be usable by both testing and development teams, thanks in large part to the substantial use of AI.
Importantly, not only are the testing tools merging, but the two types of testing (application testing and API testing) are as well. More and more, APIs are the plumbing of all applications, so tools that test applications are increasingly being asked to test the APIs that make them up. Once all these changes take place, we expect the trend of merged testing will accelerate.
Among the greatest issues impacting testing has been achieving sufficient coverage without creating a bottleneck that consumes massive amounts of staff time. AI promises to increase coverage on the test generation side and reduce results review on the other end. That leaves one remaining major issue: performance. More tests take more time, and more complex tests take even more time. Parallel execution helps a lot, and virtualizing test engines makes spinning up more of them far easier, but this is the next big issue that must be tackled.
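A toy illustration of why parallel execution matters here: the same set of slow tests run serially and then across worker threads. The 0.2-second sleep stands in for a real API round-trip; nothing below reflects any vendor’s engine.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def api_test(case_id):
    """Stand-in API test: the sleep simulates a network round-trip."""
    time.sleep(0.2)
    return f"case-{case_id}: pass"

cases = list(range(8))

# Serial execution: eight 0.2s tests take roughly 1.6s in total.
start = time.perf_counter()
serial = [api_test(c) for c in cases]
serial_time = time.perf_counter() - start

# Parallel execution: with eight workers, wall time drops to
# roughly the duration of a single test.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(api_test, cases))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

The speedup scales with worker count only while tests are independent and I/O-bound, which is why virtualizing test engines (making more workers cheap to spin up) is the natural complement.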
Some development teams will continue to prefer test tools that cater to the development stage of API testing, and test teams will not want to compromise their daily work to support development teams. This means organizations with established testing programs will be able to continue sourcing and using development-specific tools for some time to come.
Organizations with broader testing needs should consider making API testing a purchase requirement for any testing tool they are evaluating. This will future-proof implementations by folding in APIs from the beginning.
Several of the testing-centric API test tools are part of broader test suites that support developers at varying levels of depth. More and more tools are following this path, and the level of support for developers is growing as well.
There has never been a better time to begin a rigorous API functional automated testing program. It’s worth evaluating your organization’s needs and how current tools and processes can meet them. Still, we think it likely that current API testing leaves a gap in the overall testing process, and that’s something you’ll want to keep in mind.
To learn more, take a look at GigaOm’s API testing Key Criteria and Radar reports. These reports provide a comprehensive view of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
- GigaOm Key Criteria for Evaluating API Functional Automated Testing Solutions
- GigaOm Radar for API Functional Automated Testing
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.