
Key Criteria for Evaluating Performance Testing Tools v1.0

An Evaluation Guide for Technology Decision Makers

Summary

Performance testing is a process that determines how responsive a system is under varying loads and provides actionable feedback as part of the lifecycle of IT business solutions. It has long been an essential element of software testing, and while several well-established tools and practices exist to deliver on its goals, it is now also a critical part of site reliability engineering (SRE), the quality-first approach seen as crucial in cloud-native environments.

The mission of performance testing is to inform management and development staff about how well an IT business solution performs. To achieve this, the test tool needs to enable test design and configuration, and to reflect real-world scenarios when run. The results of these tests enable teams to gauge how an application responds under the stress of large quantities of data input, transaction volumes, or processing. Test results can also help direct stakeholders toward root causes of problems, trigger resolution events via integration with other tools, and identify potential optimization opportunities.

Performance testing enables key stakeholders on software teams – including developers, testers, performance engineers, and business analysts – to ensure applications can scale to meet demand in terms of users, transactions, or data and processing volumes. As application types and architectures change, and as the end-user experience increases in importance, performance testing is evolving to meet a diverse set of needs.

Two factors in particular are influencing how we think about performance engineering today:

Shift Left: The move to “shift left,” or to test as early as possible in the software life cycle, as evidenced by DevOps adoption, demands frictionless tools. Today’s performance testing tools must do more than just run load tests; they should fit into the software development toolchain to automate the regular execution of performance tests. In our interviews, practitioners reported that heavyweight testing can stifle innovation and add complexity to solution delivery. The need to update old tests to reflect evolving conditions leads many teams to skip or minimize performance testing. And when a test tool is difficult to use, bureaucratic processes funnel all work through a limited number of skilled testers, creating a bottleneck.
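As a sketch of what toolchain integration can look like, the hypothetical CI job below (GitHub Actions syntax) runs a JMeter test plan headlessly on every merge; the job name and file paths are illustrative assumptions, not taken from any vendor's documentation:

```yaml
# Hypothetical CI job that runs a JMeter test plan in non-GUI mode
# and archives the raw results for later analysis.
performance-test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run load test
      # -n: non-GUI mode, -t: test plan file, -l: results log file
      run: jmeter -n -t tests/checkout.jmx -l results.jtl
    - name: Archive results
      uses: actions/upload-artifact@v4
      with:
        name: load-test-results
        path: results.jtl
```

Running the test on every merge, rather than as a late-stage gate, is what makes the “shift left” practical: regressions surface while the offending change is still small.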

Data-Driven Development: Data-driven development is an organizational approach that ensures best practices are followed and minimizes friction between developers and testers. Data-driven automation keeps the software development process flowing: it incorporates test-driven code development, allows tests to be aggregated, and produces performance tests that are easier to maintain as the business process evolves. “Right Help” is the practice of building hooks and outputs into the development process so that operations can optimize monitoring and response (or train an AIOps tool to do so), by giving them easy access to known-good load tests that include performance metrics supplied by the business product owner. The result gives the product owner actionable information, with an acceptable level of confidence, about current operations or a proposed release.
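A minimal sketch of the data-driven idea: the scenario data, the `p95` helper, and the latency budgets below are all hypothetical, but they illustrate how product-owner-supplied metrics can drive a maintainable pass/fail check that evolves with the business process rather than with the test code.

```python
import csv
import io

# Hypothetical scenario data: each row pairs a business transaction
# with a p95 latency budget (ms) supplied by the product owner.
SCENARIOS_CSV = """transaction,p95_budget_ms
login,250
search,400
checkout,800
"""

def p95(samples):
    """95th-percentile latency of a list of measurements (ms)."""
    ordered = sorted(samples)
    index = max(0, round(0.95 * len(ordered)) - 1)
    return ordered[index]

def evaluate(scenarios_csv, measurements):
    """Compare measured latencies against each scenario's budget.

    Returns a dict mapping transaction name to True (within budget)
    or False (budget exceeded).
    """
    results = {}
    for row in csv.DictReader(io.StringIO(scenarios_csv)):
        name = row["transaction"]
        budget = float(row["p95_budget_ms"])
        results[name] = p95(measurements[name]) <= budget
    return results

# Example measurements a load-test run might produce (ms).
measured = {
    "login": [120, 140, 180, 200, 230],
    "search": [300, 350, 390, 420, 610],   # exceeds the 400 ms budget
    "checkout": [500, 520, 600, 700, 750],
}

print(evaluate(SCENARIOS_CSV, measured))
```

Because the budgets live in data rather than in code, the product owner can tighten or relax them without touching the test logic.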

Performance testing tools range from open source solutions such as Apache JMeter to enterprise offerings such as Micro Focus’ LoadRunner Enterprise. Testing can also be outsourced to reduce internal staffing and training costs. Choosing to outsource adds another dimension to the selection process—third parties may use either generally available or proprietary tools. This report will help evaluate an outsource candidate’s ability to meet both short-term and long-term testing needs.

You can avoid lock-in by working with a third-party tester that uses readily available tooling and by specifying in the contract that your company owns the test cases and test data. That way you can switch to another vendor or bring the effort in-house without extensive rework or having to reconcile old tests with the results of new ones. Additionally, the operational management features of some performance testing tools can provide the ongoing post-deployment performance information needed to ensure continued compliance with SLAs or SLOs.

At GigaOm, we feed all of these options and drivers into our Key Criteria and Radar reports on performance testing, as laid out below.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

Full report available to GigaOm Subscribers.
