
Key Criteria for Evaluating Cloud Performance Testing Solutions v3.0

An Evaluation Guide for Technology Decision-Makers

1. Summary

Cloud computing technologies are relatively mature and have achieved high levels of adoption in many organizations. This adoption requires stakeholders to ensure that applications can scale to meet ever-growing demand. Today, these stakeholders include developers, testers, quality assurance (QA) teams, performance engineers, and business analysts. These teams leverage performance testing to ensure the application is performing as expected and users have an optimal experience. Confirming the ability to scale—in terms of users, transactions, and data and processing volumes—is accomplished using performance testing tools.

These performance testing solutions help to isolate issues and identify ways to resolve them. Cloud performance tools are evolving and becoming more robust in their ability to identify problems and bottlenecks earlier in the process. Many have the ability to pinpoint issues and possible solutions. From a database perspective, these solutions help to test and optimize configurations for cache size, bucket size, and input/output (I/O).

This review focuses on cloud-based load-testing solutions whose ability to scale rests on hyperscale cloud economics and usage-based billing. This approach makes load testing easier to consume and removes the need to schedule projects around limited on-premises testing capacity. Cloud testing enables application programming interface (API)-level interaction between developers and testing capacity. It also provides automated, low-cost ways for operational teams to test applications before and after changes to corporate infrastructure. Such testing reduces the need to keep large pools of people online during a change control event just to confirm that their applications still work after the change or that the change fixed the issue that prompted it.

This GigaOm Key Criteria report identifies the common features needed to make a solution viable (table stakes), details the capabilities that differentiate products (key criteria), and highlights important evaluation metrics for selecting an effective cloud performance testing platform. The companion GigaOm Radar report identifies vendors and products that excel in those criteria and factors. Together, these reports provide an overview of the category and its underlying technology, identify leading cloud performance testing offerings, and help decision-makers evaluate these platforms so they can make a more informed investment decision.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

2. Cloud Performance Testing Primer

Performance testing remains an essential element of software development. Several well-established tools and practices enable test teams and management to design, configure, run, and oversee application performance benchmarks based on real-world scenarios. This, in turn, enables teams to gauge how an application responds under the stress of large quantities of data input, huge transaction volumes, or the enormous amount of processing that takes place in cloud computing environments. These tools can also help stakeholders identify root causes of problems and potential optimization opportunities, as well as trigger resolution events by integrating testing outputs with other tools.

However, performance testing needs to go beyond simple benchmarking, troubleshooting, and event-based integration. These tools need to respond to the two challenges that kill innovation in general and software delivery in particular: complexity and friction.

Often, what starts as a simple and effective solution to a problem balloons out of proportion, becoming unnecessarily complex. The same is true of tests, which can become onerous, unsuited to the task at hand, and, in the worst case, ignored or avoided. Friction can be caused by manual overheads or overly bureaucratic approaches. Performance testing tools that provide data-driven automation can help users respond to both challenges: keeping on top of complexity and alleviating the issues of friction to deliver faster results with greater efficiency. Development teams can then spend more time innovating and less dealing with software delivery overhead.

A tool should deliver significantly greater returns and ownership benefits than its cost. One cost consideration for testing tools is the ability to reuse work done by developers rather than having to recreate it for the load-testing tool. The ability to automatically convert developer scripts or use-case tests written for products like JMeter into something a load-testing tool can run at scale reduces the cost of testing. Vendors whose products integrate with or run alongside existing open-source tools ease the cultural transition and reduce the pressure to deliver a knife-edge cutover quickly, which also provides long-term value to buyers.

The overall testing lifecycle (see Figure 1) also affects returns and costs: automating many of the manual tasks needed to run load tests enables developers or DevOps teams to run quality testing as part of their daily development activities. This approach allows performance issues to be identified within a release sprint, rather than discovered at deployment time and added to the backlog to be prioritized in a future sprint. It makes testing a natural part of daily code development rather than a future consideration in which problems surface weeks or months after code is written. As Figure 1 shows, testing is not one step but a series of steps that must be accounted for in the tool selection process to ensure the tool meets the current and future needs of the business.

Figure 1. Performance Testing Enables Powerful, Scalable, Iterative Testing during the SDLC

Performance testing tools range from open source to enterprise solutions. Testing can also be outsourced to third parties, who may use generally available or in-house tools. Operational management tools can also offer post-deployment performance information.

Technical organizations are the typical buyers of these solutions. However, these tools can be leveraged by many different skill levels and many kinds of teams, such as performance engineering groups, QA, development teams, and even business analysts.

Overall, cloud performance testing tools can leverage vast amounts of cloud resources in a cost-effective manner. These solutions enable organizations to scale applications to meet the needs of the business and end users quickly, accurately, and cost effectively.

To facilitate the cloud performance testing solution selection process, this Key Criteria report acts as a buyer’s guide for technology decision-makers, exploring the table stakes, key criteria, and evaluation metrics of these solutions and highlighting the technological capabilities we expect to see in the near future.

3. Report Methodology

A GigaOm Key Criteria report analyzes the most important features of a technology category to help IT professionals understand how solutions may impact an enterprise and its IT organization. These features are grouped into three categories:

  • Table Stakes: Assumed Value
  • Key Criteria: Differentiating Value
  • Emerging Technologies: Future Value

Table stakes represent features and capabilities that are widely adopted and well implemented in a technology sector. As these implementations are mature, they are not expected to significantly impact the value of solutions relative to each other and will generally have minimal impact on total cost of ownership (TCO) and return on investment (ROI).

Key criteria are the core differentiating features in a technology sector and play an important role in determining potential value to the organization. Implementation details of key criteria are essential to understanding the impact that a product or service may have on an organization’s infrastructure, processes, and business. Over time, the differentiation provided by a feature becomes less relevant and it falls into the table stakes group.

Emerging technologies describe the most compelling and potentially impactful technologies emerging in a product or service sector over the next 12 to 18 months. These emergent features may already be present in niche products or designed to address very specific use cases; however, at the time of the report, they are not mature enough to be regarded as key criteria. Emerging technologies should be considered mostly for their potential downfield impact.

Over time, advances in technology and tooling enable emerging technologies to evolve into key criteria and key criteria to become table stakes, as shown in Figure 2. This Key Criteria report reflects the dynamic embedded in this evolution, helping IT decision-makers track and assess emerging technologies that may significantly impact the organization.

Figure 2. Evolution of Features

Understanding Evaluation Metrics

Table stakes, key criteria, and emerging technologies represent specific features and capabilities of solutions in a sector. Evaluation metrics, by contrast, describe broad, top-line characteristics—things like scalability, interoperability, or cost effectiveness. They are, in essence, strategic considerations, whereas key criteria are tactical ones.

By evaluating how key criteria and other features impact these strategic metrics, we gain insight into the value a solution can have to an organization. For example, a robust application programming interface (API) and extensibility features can directly impact technical parameters like flexibility and scalability while also improving a business parameter like TCO.

The goal of the GigaOm Key Criteria report is to structure and simplify the decision-making process around key criteria and evaluation metrics, allowing the first to inform the second and enabling IT professionals to make better decisions.

4. Decision Criteria Analysis

In this section, we describe the specific table stakes, key criteria, and emerging technologies that organizations should evaluate when considering solutions in this market sector.

Table Stakes

This report considers the following table stakes—features we expect all performance testing solutions to support:

  • Test definition and management
  • Application-based customization
  • Integration with other testing types and tools
  • Integration with development environments and CI/CD
  • Management reporting and dashboards
  • Flexible load generation

Test Definition and Management
Tools must enable and simplify the creation, recording, debugging, ongoing management, and maintenance of tests, scripts, scenarios, and reports—by application or project, for example. This can range from simply recording a user navigating a website with a vendor-provided plug-in to more complex workflows, such as specific programming and documentation of the test. While vendors differ on how tests are defined and managed, test definition and management is itself a requirement that separates this market from simple load tools and makes solutions viable for assuring long-term code quality and meeting business performance expectations.

Application-Based Customization
Solutions must carry out performance tests on a per-application basis, supporting common application protocols, languages, and architectures. While most vendors can support more than API testing over HTTP, the ability to customize the tool to meet performance test requirements is necessary to ensure the long-term viability of a solution. The ability to use one test script to test multiple users or multiple features and functions was a requirement for a tool to be included in this report. While vendors differ in the extent to which they support customization of load tests, support of the basic need is what makes a tool an enterprise-grade solution.

Integration with Other Testing Types and Tools
Performance testing must work with other kinds of testing; for example, functional testing, which is potentially based on open source tools. It also must work with models across the lifecycle—A/B and canary testing, for example—as well as with feature flags. Tools should exist as part of an IT ecosystem and not as a unique custom tool that’s isolated to just quality assurance testers. To be viable for this report, the tool must integrate with other tools to inform them or be informed by them in the execution of performance tests. This is where many of the open source development tools fail to move from a niche tool used by highly skilled developers to a general tool that can support DevOps teams or infrastructure teams that need to manage operations post deployment.

Integration with Development Environments and CI/CD
A performance testing tool should interoperate easily with an integrated development environment (IDE), enabling developers and testers to run tests using their tools of choice. Tools must offer APIs or plug-ins to enable integration with IDEs and other continuous integration and continuous delivery/deployment (CI/CD) tools. The market is shifting, and quality control is now moving to DevOps teams or even becoming a daily developer activity. To support this shift, the tool must be usable from the developer's IDE, as a triggered event, and from within a CI/CD toolchain.
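
As an illustration of the kind of integration this table stake implies, the following sketch shows a CI step that triggers a load test through a vendor's REST API and fails the build if the run fails or latency exceeds a threshold. The endpoint paths, payload fields, and response fields are hypothetical stand-ins; an actual product's API will differ.

    # Minimal sketch of a CI/CD gate that triggers a cloud load test via a
    # hypothetical vendor REST API and fails the pipeline on poor results.
    # The URL, payload fields, and response fields are illustrative only.
    import os
    import sys
    import time

    import requests  # pip install requests

    API = os.environ.get("PERF_API_URL", "https://perf.example.com/api/v1")
    TOKEN = os.environ["PERF_API_TOKEN"]  # injected by the CI system's secret store
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def run_load_test() -> dict:
        # Start a test run tied to the current build.
        resp = requests.post(
            f"{API}/test-runs",
            json={"scenario": "checkout-flow", "virtual_users": 200,
                  "build": os.environ.get("CI_COMMIT_SHA", "local")},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        run_id = resp.json()["id"]

        # Poll until the run finishes, then return its summary.
        while True:
            status = requests.get(f"{API}/test-runs/{run_id}",
                                  headers=HEADERS, timeout=30).json()
            if status["state"] in ("passed", "failed", "error"):
                return status
            time.sleep(15)

    if __name__ == "__main__":
        result = run_load_test()
        p95 = result["metrics"]["p95_latency_ms"]
        print(f"p95 latency: {p95} ms")
        # Gate the pipeline: a non-zero exit code fails the CI job.
        sys.exit(0 if result["state"] == "passed" and p95 < 500 else 1)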

Management Reporting and Dashboards
Test results must be presentable as reports and/or as dashboards, which can show success over time (for example, compared against previous regression runs). Solutions should provide simple reports within the tool as well as export data for visualization in the customer's dashboard of choice. While some vendors offer more than basic reporting, the table stake is basic reporting that covers the minimum viable set of reporting needs.

Flexible Load Generation
The tool must generate and deliver loads from multiple sources in the cloud; for example, simulating user interactions and behaviors at scale with virtual users from multiple locations around the world to reflect real user experiences. At a minimum, the tool needs to let users select the sources from which traffic is generated and set the size of the test to their needs (from a few to hundreds of users). While some vendors can support millions of users, our table stake is hundreds of users from two or more buyer-selected locations for each load test.
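
To make this table stake concrete, the sketch below declares a simple load profile: a few hundred virtual users spread across two buyer-selected regions with a ramp-up and hold period. The schema is hypothetical; each product has its own declaration format (YAML, JSON, UI forms, or code).

    # Illustrative load profile: hundreds of virtual users generated from two
    # buyer-selected cloud regions. The field names are invented for this sketch.
    import json

    load_profile = {
        "scenario": "browse-and-checkout",
        "target": "https://app.example.com",
        "regions": [
            {"name": "us-east", "virtual_users": 300},
            {"name": "eu-west", "virtual_users": 200},
        ],
        "ramp_up_seconds": 120,   # time to reach full load
        "hold_seconds": 600,      # steady-state duration at full load
    }

    if __name__ == "__main__":
        # In practice this document would be submitted to the testing service's
        # API or CLI; here we simply summarize and print it.
        total = sum(r["virtual_users"] for r in load_profile["regions"])
        print(f"Total virtual users: {total} across {len(load_profile['regions'])} regions")
        print(json.dumps(load_profile, indent=2))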

Key Criteria

Here, we explore the primary criteria for evaluating solutions based on attributes or capabilities that some vendors may offer but others don’t. These criteria will be the basis on which organizations decide which solutions to adopt for their particular needs. The key criteria for performance testing are:

  • Automated test definitions
  • Advanced load types
  • Testing as code
  • Root cause analysis
  • Performance insight
  • Collaboration
  • Deployment environment support

Automated Test Definitions
Solutions should generate performance tests that effectively assess application performance based on configuration or production data, or by parsing code. One way to do this would be to use a browser plug-in to test customer experience and capture user interaction with the front end of a business application. Plug-ins or agents could also be added to an operating system (OS), or sidecars could be used in Kubernetes clusters to record network activity before it's encrypted for transport.

A more advanced process would be to look at the client and service source code and identify interactions based on user stories in the development process. The ability to automatically convert a single transaction to a test definition is what matters, not how the product records or observes interactions. The goal is to have a reusable test definition that can be converted into a scalable test script so it can be used for performance testing without human interaction. In some cases, this feature can be extended to automate the creation of the actual test instead of just defining the elements and size of a test.
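
One common recording source is a browser-exported HAR (HTTP Archive) capture of a single user transaction. The sketch below distills such a capture into a reusable test definition that a load tool could later scale; the output schema and the fixed think time are illustrative assumptions, not any product's format.

    # Minimal sketch: convert a browser-recorded HAR capture of one user
    # transaction into a reusable, scalable test definition.
    import json
    import sys

    def har_to_test_definition(har_path: str) -> dict:
        with open(har_path, encoding="utf-8") as fh:
            har = json.load(fh)

        steps = []
        for entry in har["log"]["entries"]:
            request = entry["request"]
            # Keep only what is needed to replay the call at scale.
            steps.append({
                "method": request["method"],
                "url": request["url"],
                "think_time_ms": 1000,  # placeholder pacing between steps
            })

        return {"name": "recorded-user-journey", "steps": steps}

    if __name__ == "__main__":
        definition = har_to_test_definition(sys.argv[1])
        print(json.dumps(definition, indent=2))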

Advanced Load Types
A performance testing tool should generate more advanced load patterns beyond simply increasing or decreasing load. This could include burst patterns from single or distributed sources, for example. It should also support smoke testing, long-running tests, and testing that impacts wide area network (WAN) performance. (See Table 1 for more detail.)

Table 1. Advanced Load Types: Types of Tests and Tools Used

TYPE OF TEST | TOOL USED | WHO USES IT | WHY
Functional Test | Functional Test Tools | Developers, DevOps, QA | Validates that a feature achieves the expected business outcome and looks for failures in negative testing.
Negative Testing | Functional Test Tools | Developers, DevOps, QA, Security | Looks for error handling and the ability to prevent or detect hacking.
Unit Test | Functional Test Tools | Developers, DevOps | Determines whether a small bit of code performs as expected. This is the smallest functional test and should be performed as code is checked in by the developer.
Combination Test | Performance Test Tools | DevOps, QA, Test Team, Ops | Simulates real user behavior by multiple users using different features.
Load Test | Performance Test Tools | DevOps, Test Teams, Ops | Validates SLA or SLO compliance.
Ramp-Up Test | Performance Test Tools | DevOps, Test Teams | Gradually increases load to simulate real user behavior.
Soak or Break Test | Performance Test Tools | DevOps, Test Teams | Checks for errors or performance degradation that can occur over long run times, typically multiple hours or days. These tests can identify issues such as memory leaks.
Spike Test | Performance Test Tools | DevOps, Test Teams | Rapidly increases usage for a short time and looks to see how scaling and error handling respond to the sudden surge.
Source: GigaOm 2023
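
As a rough illustration of several load shapes in Table 1, the sketch below models ramp-up, spike, and soak patterns as schedules of target virtual-user counts over time. A real tool translates such a schedule into distributed load generation; the numbers here are arbitrary examples.

    # Illustrative load shapes for several test types in Table 1. Each function
    # yields (elapsed_seconds, target_virtual_users) pairs a load generator
    # could follow.
    from typing import Iterator, Tuple

    def ramp_up(peak_users: int, duration_s: int, step_s: int = 10) -> Iterator[Tuple[int, int]]:
        # Gradually increase load to simulate organic growth in traffic.
        for t in range(0, duration_s + 1, step_s):
            yield t, int(peak_users * t / duration_s)

    def spike(base_users: int, spike_users: int, spike_at_s: int,
              spike_len_s: int, duration_s: int, step_s: int = 10) -> Iterator[Tuple[int, int]]:
        # Hold a baseline, then surge sharply for a short window.
        for t in range(0, duration_s + 1, step_s):
            in_spike = spike_at_s <= t < spike_at_s + spike_len_s
            yield t, spike_users if in_spike else base_users

    def soak(users: int, duration_s: int, step_s: int = 60) -> Iterator[Tuple[int, int]]:
        # Hold constant moderate load for hours to surface leaks and drift.
        for t in range(0, duration_s + 1, step_s):
            yield t, users

    if __name__ == "__main__":
        for t, users in spike(base_users=100, spike_users=1000, spike_at_s=300,
                              spike_len_s=60, duration_s=600, step_s=60):
            print(f"t={t:4d}s target_users={users}")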

Testing as Code
The testing solution should manage test configuration information, test scripts, and input data in a textual format in such a way that it can be stored under configuration management. The code, which should be human-readable, is typically stored in the customer’s Git repository of choice and tied to a specific release or build of code. It should be managed within a GitOps or infrastructure as code (IaC) methodology. Lastly, the code should use variables and take advantage of secret stores so identity and passwords or secret tokens are not stored as clear text.
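
The fragment below sketches what testing as code can look like in practice: a human-readable definition kept in Git alongside the application, with credentials resolved from the environment (populated by the CI system or a secret store) rather than embedded in the file. The structure and field names are illustrative, not a specific vendor's format.

    # Sketch of a version-controlled test definition: human-readable, stored in
    # the application's Git repository, and free of embedded secrets.
    import os

    # Secrets are resolved at run time; nothing sensitive is committed to Git.
    API_TOKEN = os.environ["TARGET_API_TOKEN"]

    TEST_DEFINITION = {
        "name": "orders-api-baseline",
        "target": os.environ.get("TARGET_BASE_URL", "https://staging.example.com"),
        "virtual_users": 250,
        "duration_seconds": 300,
        "default_headers": {"Authorization": f"Bearer {API_TOKEN}"},
        "thresholds": {
            "p95_latency_ms": 400,   # fail the run if exceeded
            "error_rate": 0.01,
        },
    }

    if __name__ == "__main__":
        print(f"Loaded test '{TEST_DEFINITION['name']}' against {TEST_DEFINITION['target']}")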

Root Cause Analysis
The tool should offer general indicators of where performance issues lie as well as a detailed stack analysis of their causes, potentially using a normalized data model. For most products, this is a post-run analysis in which a rules engine or an artificial intelligence/machine learning (AI/ML) engine identifies possible causes of negative performance results. The greater the insight into possible root causes, the greater the value to the buyer.
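
A very small rules-engine example, with invented metric names and thresholds, suggests how post-run analysis can map observed symptoms to likely causes; commercial tools apply far richer rules or ML models over normalized telemetry from many components.

    # Toy rules engine for post-test root cause hints. Metric names and
    # thresholds are invented for illustration.
    RULES = [
        (lambda m: m["db_cpu_pct"] > 90,
         "Database CPU saturated; review query plans or instance size."),
        (lambda m: m["app_gc_pause_ms"] > 200,
         "Long garbage-collection pauses; check heap sizing."),
        (lambda m: m["error_rate"] > 0.05 and m["p95_latency_ms"] < 200,
         "Fast failures with a high error rate; suspect misconfiguration rather than capacity."),
        (lambda m: m["queue_depth"] > 1000,
         "Message queue backing up; consumers cannot keep pace."),
    ]

    def probable_causes(metrics: dict) -> list:
        return [finding for predicate, finding in RULES if predicate(metrics)]

    if __name__ == "__main__":
        sample = {"db_cpu_pct": 96, "app_gc_pause_ms": 120, "error_rate": 0.02,
                  "p95_latency_ms": 850, "queue_depth": 40}
        for cause in probable_causes(sample):
            print("-", cause)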

Performance Insight
The tool should make recommendations as to how and where performance can be improved and help encourage good performance engineering practice. While root cause is typically about a failure, performance insight is about ways to exceed the current service level agreement or objective (SLA/SLO), or the accepted levels of performance. The key point here is the ability to look at the code and determine how performance can be enhanced. Unlike root cause analysis, which is more about system sizing and configuration of components, performance insight is related to improving the code’s efficiency.

Collaboration
The tool should enable stakeholders to work together to create, run, and manage tests. This collaboration is most often achieved by integrating testing features into the developer IDE but can also be achieved through integration with third-party tools, such as Slack, Microsoft Teams, Asana, or Jira. Performance tools that can provide bidirectional communication or integration are more valuable than those that use only one-way methods. This is the most common place for performance tools to update a value stream mapping or management tool through a status update. The results from load tests of code that’s been promoted into production also should be fed to cloud management platforms (CMPs), cloud resource optimization, and financial operations (FinOps) tools so they can be more proactive about sizing and budgetary optimizations.
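
The snippet below sketches the simplest, one-way form of this collaboration: posting a test summary to a chat channel through a Slack incoming webhook. The webhook URL and result fields are placeholders; bidirectional integrations rely on vendor-provided connectors.

    # Minimal sketch: push a load test summary into a chat channel via a Slack
    # incoming webhook supplied through an environment variable.
    import os

    import requests  # pip install requests

    WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

    def notify(result: dict) -> None:
        text = (f"Load test '{result['name']}': {result['state']} "
                f"(p95 {result['p95_latency_ms']} ms, "
                f"error rate {result['error_rate']:.2%})")
        requests.post(WEBHOOK_URL, json={"text": text}, timeout=10).raise_for_status()

    if __name__ == "__main__":
        notify({"name": "checkout-flow", "state": "passed",
                "p95_latency_ms": 320, "error_rate": 0.004})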

Deployment Environment Support
The tool should offer specific features that simplify testing of specific target environments; for example, whether software is on-premises or cloud-hosted, running as a service, or based on a microservices architecture. When testing cloud-based applications, a tool’s ability to address cloud-specific settings, such as load balancing or knowledge of instance types used by the target system, increases its value and reduces the risk of inaccurate load tests.

Emerging Technologies

Finally, we consider key emerging capabilities in this sector. We expect these technologies to become widely relevant over the next year or two:

  • Integration of AI/ML
  • Automated test creation
  • Kubernetes- or Cloud Foundry-deployed applications support
  • Microservices and service meshes support
  • Cloud-based as-a-service testing or integration support

Integration of AI/ML
AI/ML can have a significant impact on performance testing. It can be leveraged to develop performance test scripts, create workload models, correlate test results across many tests—both historical and current—aid in identifying the root cause of issues, and provide suggestions for optimization. An AI/ML-based approach can suggest changes that users, developers, or operations staff were not previously aware of to enhance operations and deliver more stable performance, including during so-called black swan events that are unpredictable and have potentially severe consequences.
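
A simple statistical baseline check hints at the kind of cross-run correlation that AI/ML automates at much larger scale: flagging a release whose latency deviates sharply from recent history. The historical values below are invented for illustration; production AI/ML goes much further, into workload modeling and root cause suggestions.

    # Flag a run whose p95 latency deviates sharply from the recent baseline.
    from statistics import mean, stdev

    def is_regression(history_ms: list, current_ms: float, sigmas: float = 3.0) -> bool:
        baseline, spread = mean(history_ms), stdev(history_ms)
        return current_ms > baseline + sigmas * spread

    if __name__ == "__main__":
        previous_p95 = [410, 395, 430, 405, 418, 402, 399, 421]  # ms, prior runs
        latest_p95 = 540.0
        if is_regression(previous_p95, latest_p95):
            print("Latency regression detected relative to the recent baseline.")
        else:
            print("Latest run is within the historical range.")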

Automated Test Creation
Some tools can create test scripts and code automatically, so no scripting or tool-specific testing skills are required. Developers check in code to validate it and ensure the system stays within acceptable SLA/SLO ranges. The emergence of ML in performance tools can help with the automation of tests and with failure point prediction. The ability to create tests from watching real users already exists, but some tools can also take functional or unit test cases and convert them into high-volume test runs. This feature has the greatest potential to reduce the staff time and training required to ensure performance targets are met and maintained.
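
One simple path from functional tests to volume tests is to reuse an existing functional check as the body of a concurrent run, as the sketch below does with a thread pool. The target URL and health endpoint are placeholders; real products generate distributed runs at far larger scale from the same functional or unit test cases.

    # Sketch: reuse an existing functional check as the body of a high-volume run.
    from concurrent.futures import ThreadPoolExecutor

    import requests  # pip install requests

    def functional_check(base_url: str) -> bool:
        # An existing functional/unit-style assertion, reused as-is.
        try:
            return requests.get(f"{base_url}/healthz", timeout=5).status_code == 200
        except requests.RequestException:
            return False

    def volume_run(base_url: str, iterations: int = 500, workers: int = 50) -> float:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(lambda _: functional_check(base_url), range(iterations)))
        return sum(results) / len(results)  # success ratio

    if __name__ == "__main__":
        ratio = volume_run("https://staging.example.com")
        print(f"Success ratio under load: {ratio:.1%}")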

Kubernetes- or Cloud Foundry-Deployed Applications Support
Some tools can talk to the control planes of Kubernetes or Cloud Foundry systems and consume feeds from the likes of Prometheus or from public-cloud monitoring APIs. Some vendors also support running their tools in Kubernetes containers, making the systems easier to patch and upgrade.
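
Consuming a Prometheus feed during or after a test run is one concrete form of this control plane awareness. The sketch below queries the standard Prometheus HTTP API for container CPU usage; the Prometheus address, namespace, and PromQL expression are placeholders for a specific cluster.

    # Sketch: pull cluster-side metrics from Prometheus while a load test runs,
    # using the standard /api/v1/query endpoint.
    import requests  # pip install requests

    PROMETHEUS = "http://prometheus.example.internal:9090"
    QUERY = 'sum(rate(container_cpu_usage_seconds_total{namespace="shop"}[5m]))'

    def cluster_cpu_usage() -> float:
        resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        # An instant query returns one [timestamp, value] pair per series.
        return float(result[0]["value"][1]) if result else 0.0

    if __name__ == "__main__":
        print(f"Namespace CPU usage (cores): {cluster_cpu_usage():.2f}")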

Microservices and Service Mesh Support
While microservices and service meshes are often deployed on Kubernetes or Cloud Foundry, function-as-a-service (FaaS) solutions—such as Amazon Web Services (AWS) Lambda—require different abilities to track performance and provide valuable insight on how to correct performance problems. Not all tools support load testing of microservices or service meshes, but as use of Kubernetes, serverless, and FaaS grows, this capability will move closer to becoming a key criterion. Today, it's still a smaller segment of the market that needs to test these independently of the larger business product (application). Testing microservices may require a proxy that lets an internet load generator tunnel down to the ingress (entry) point of the microservice. The network traffic patterns and the methods of obtaining metrics differ from those of a website or internet-exposed APIs, requiring a tool that specifically supports this type of testing.
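
When a microservice is not exposed to the internet, one common workaround is to open a temporary tunnel to its ingress point and drive load through it. The sketch below assumes a tunnel opened beforehand with kubectl port-forward; the service name, port, and endpoint are placeholders, and a production-grade test would run distributed load rather than a single loop.

    # Sketch: exercise an internal microservice through a temporary tunnel, for
    # example one opened beforehand with:
    #   kubectl port-forward svc/orders 8080:80
    import time

    import requests  # pip install requests

    LOCAL_INGRESS = "http://localhost:8080"

    def hammer_endpoint(path: str, calls: int = 200) -> dict:
        latencies, errors = [], 0
        for _ in range(calls):
            start = time.perf_counter()
            try:
                requests.get(f"{LOCAL_INGRESS}{path}", timeout=5).raise_for_status()
            except requests.RequestException:
                errors += 1
            latencies.append((time.perf_counter() - start) * 1000)  # ms
        latencies.sort()
        return {"p95_ms": round(latencies[int(0.95 * len(latencies)) - 1], 1),
                "errors": errors}

    if __name__ == "__main__":
        print(hammer_endpoint("/orders/recent"))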

Cloud-Based As-a-Service Testing or Integration Support
When applications connect with or receive traffic from third-party software as a service (SaaS) solutions, such as Salesforce, the tool needs to understand the processes and limits that apply when testing or sending traffic to the service. Some testing tools can simulate third-party services to load test the application without impacting the third party or violating SLA or contractual terms related to load testing. When applied to no-code or low-code scenarios, robotic process automation (RPA), integration platform as a service (iPaaS), and other SaaS-based application services, the ability of the IT function to load test those systems to ensure they do not cause a problem for the rest of the IT portfolio is an emerging use case.
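
One way to avoid sending load to a real third-party SaaS is to point the application under test at a lightweight stub that mimics the service's responses and rate limits. The sketch below uses only the Python standard library; the endpoint, payload, and quota are invented for illustration.

    # Sketch of a stand-in for a third-party SaaS API so load tests never touch
    # the real service, its rate limits, or its contractual terms.
    import json
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REQUESTS_PER_SECOND_LIMIT = 50
    _window = {"start": time.time(), "count": 0}

    class FakeSaaSHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Consume the request body so the connection is handled cleanly.
            self.rfile.read(int(self.headers.get("Content-Length", 0)))

            # Crude fixed-window rate limit to mimic the real service's quota.
            now = time.time()
            if now - _window["start"] >= 1.0:
                _window["start"], _window["count"] = now, 0
            _window["count"] += 1
            if _window["count"] > REQUESTS_PER_SECOND_LIMIT:
                self.send_response(429)  # Too Many Requests
                self.end_headers()
                return

            body = json.dumps({"status": "accepted", "id": "stub-123"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, fmt, *args):
            pass  # keep load test output quiet

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8081), FakeSaaSHandler).serve_forever()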

5. Evaluation Metrics

Our assessment of the solution space continues with an exploration of the strategic evaluation metrics we use to evaluate the impact that a cloud performance testing solution might have on an organization. For purposes of this exploration, we consider the following evaluation metrics:

  • Scalability
  • Flexibility
  • Usability
  • Licensing terms
  • Overall ROI and TCO

Scalability
Performance tests should be scalable by nature but also need to encompass the breadth of areas and dimensions required, including parallelization of tests, without causing time or cost overhead. The best systems support geographic load generation and small-scale tests involving just a few transactions per second that can be scaled to reflect more than 100,000 transactions per second. A solution should support a single development group, as well as perhaps hundreds of development groups as required, while aiding in test scheduling to optimize cloud costs.

Flexibility
Testing should cover a range of inputs—such as virtual users, streamed data, transactions, HTTP, and API calls—and span a range of platforms, protocols, and both proprietary and open source third-party solutions commonly found in enterprises. The best tools include either many load methods or support for adding and removing them as needed to manage costs. Ideally, an enterprise would use one load-testing solution that serves all of its needs.

Usability
The tool should offer different interfaces for different stakeholders, including performance testers, developers, and management, and provide capabilities for users at multiple skill levels. It’s not enough to just have predefined user roles; the best solutions will enable users to see any project or part of a project they have access to. For highly skilled developers, the user experience may be contained within their IDE of choice, whereas a manager or security operations (SecOps) staff person may require a more graphical view that enables visibility of a large number of projects as well as key metrics about each, with the ability to drill down and see specific details about a test and its results over time and across versions.

Licensing Terms
Provisioning should scale according to need with commercial terms following suit, offering a per-use model or a low entry point for a reduced set of functions. Different models may be a better fit for some organizations than for others. For example, a pay-as-you-go model may result in financial uncertainty for some due to the unpredictability of the cost of services. On the other hand, a model that uses prepaid tokens that can be checked out to fund a test and then returned to the enterprise pool may spark intra-organizational competition over who gets to use tokens and how entitlement spending is managed over time. The ability to show that a vendor’s licensing model is appropriate for its target audience is critical to achieving high scores for this evaluation metric.

Overall ROI and TCO
Tools should deliver benefits that are significantly greater than their cost. For new buyers, using handcrafted open source tools has both short- and long-term costs. Vendors that can show better time to market often deliver better ROI, while vendors that can show long-term value have a better TCO story. The best vendors are those that are affordable today and will continue to be a good fit in the future.
The tool should be able to consume open source or functional test scripts, providing the flexibility that enables developers or QA testers to use a popular tool of their choice without requiring rework by performance test tool users. The ability to integrate with or run alongside existing open source tools will ease the cultural transition and reduce the pressure to deliver a knife-edge cutover quickly, which also provides longer-term value to buyers.

6. Key Criteria: Impact Analysis

In this section, we provide guidance on how the key criteria features described earlier impact each of the evaluation metrics just defined. Table 2 helps the reader understand the impact that each feature or capability has on each evaluation metric, making it possible to better assess the value a solution may have to an organization.

Table 2. Impact of Key Criteria on Evaluation Metrics

KEY CRITERIA | SCALABILITY | FLEXIBILITY | USABILITY | LICENSING TERMS | OVERALL ROI & TCO
Automated Test Definitions | 3 | 3 | 5 | 1 | 3
Advanced Load Types | 5 | 4 | 4 | 5 | 3
Testing as Code | 5 | 4 | 4 | 2 | 3
Root Cause Analysis | 3 | 5 | 4 | 3 | 5
Performance Insight | 3 | 4 | 4 | 3 | 5
Collaboration | 3 | 2 | 5 | 3 | 3
Deployment Environment Support | 4 | 3 | 5 | 5 | 3

Impact on Scalability
Advanced load types and testing as code are both critical to moving from a few applications tested per month to tens or hundreds of applications tested per day. Advanced load types allow teams to use one tool for multiple types of load testing. Testing as code reduces error and increases the ability to automate testing both as part of application development and as validation of infrastructure changes (such as with patches or version upgrades). When a tool’s value to the business can increase as the company grows, it goes a long way toward future-proofing the investment.

Impact on Flexibility
Root cause analysis has the greatest impact on how a tool scores on this evaluation metric. However, the ability to handle advanced load types is almost as important here as it is to scalability, along with performance insight. For a tool to discover the root cause of an issue, it has to have access to or consume feeds from the components and infrastructure that are used in load testing and correlate that data with what it observes in the test results.

Impact on Usability
Automated test definitions, performance insight, and collaboration features are all important when evaluating the usability of a solution in this space. However, as an evaluation metric, usability has the most significant overall impact on the value a buyer will get from the testing solution. The ability to create tests quickly, understand what’s negatively impacting performance, and communicate that to others is critical for tools to deliver the most value to a business in the shortest amount of time.

Impact on Licensing Terms
Deployment environment support and advanced load types can have an enormous impact on licensing terms. While scale and the number of users often affect the cost of a solution, they don't affect what's included in the licensing agreement. Some vendors have specific licensing terms for protocols used, cloud vendors supported, and types of tests that can be run.

Impact on TCO and ROI
Both performance insight and root cause analysis are considered most important when it comes to overall TCO and ROI. Knowing a test passed or failed is not as valuable as finding out what is limiting performance and whether there is over- or underspending on cloud resources that impacts the value delivered to the business. Showing the performance increase or decrease of each release is critical for businesses to get full value from their cloud spending. The ability to project operational costs early in the development process increases the likelihood of properly determining the cost and evaluating the worth of new application features to the business.

7. Analyst’s Take

This year’s evaluation of performance testing tools looks exclusively at tools that are consumed as a SaaS solution or can be natively deployed to the customer’s own cloud environments and managed by the customer; thus, scoring for the companion Radar report does not take on-premises features into account.

The cloud performance testing tools market is mature, but some open source projects are still being acquired by legacy vendors to fill gaps in their offerings. These offerings cover two major types of cloud-native testing for assessing user experience (UX): browser-based testing and protocol- or API-based testing.

While protocol-level testing can evaluate the UX with some loss of fidelity, it doesn't actually load a browser, and a real browser is the only way to fully assess the experience of API-driven sites. In fact, many mobile and content-rich sites use API calls from the browser to render content. Most browser-based testing defaults to Google Chrome but can also include Microsoft's default Edge browser, which runs on a Chromium engine. The other major browsers—Firefox (Mozilla) and Safari (Apple)—are not tested by most solutions. About 25% of global internet traffic does not use a Chromium-based browser engine, so some buyers may need to load test for those user groups as well.

Browser testing has grown in importance due to its impact on the end-user experience. The ability of an application to create a successful digital experience for end users can directly affect a company’s image.

The movement to shift left and move testing into the development lifecycle rather than post-process—after deployment to a production or stage environment—means that business leaders can understand the performance and cost of a feature earlier in the lifecycle and better prioritize work (backlog items) to ensure the final solution optimizes business value. To this end, a performance tool may need to consume telemetry (including component performance and other types of data) and feed performance metrics to a FinOps or cloud resource optimization tool, ensuring the critical data points are available for the cloud performance testing tool to make the best optimization recommendations.

For larger organizations looking to reduce the number of vendors used, platform players that offer development, cloud resource optimization, FinOps, cloud platform management (automation), monitoring, application performance monitoring (APM), and load testing may offer the best value if the provider has fully integrated those products. Other companies may prefer best-of-breed tools if they aren’t already using products from the load-testing vendor in their cloud environments.

Overall, cloud performance testing solutions continue to evolve to match the growing need to test application scale and performance across many different scenarios and performance metrics. The cloud provides a virtually unlimited pool of resources to perform this testing when needed, making these solutions more cost effective than ever before.

8. About Dana Hernandez

Dana Hernandez is a dynamic, accomplished technology leader focused on the application of technology to business strategy and function. Over the last three decades, she has gained extensive experience with the design and implementation of IT solutions in the areas of Finance, Sales, Marketing, Social Platforms, Revenue Management, Accounting, and all aspects of Airline Cargo, including Warehouse Operations. Most recently, she spearheaded technical teams responsible for implementing and supporting all applications for Global Sales for a major airline, owning the technical and business relationship to help drive strategy to meet business needs.

She has led numerous large, complex transformation efforts, including key system merger efforts consolidating companies onto one platform to benefit both companies, and she’s modernized multiple systems onto large ERP platforms to reduce costs, enhance sustainability, and provide more modern functionality to end users.

Throughout her career, Dana leveraged strong analytical and planning skills, combined with the ability to influence others with the common goal of meeting organizational and business objectives. She focused on being a leader in vendor relationships, contract negotiation and management, and resource optimization.

She is also a champion of agile, leading agile transformation efforts across many diverse organizations. This includes heading up major organizational transformations to product taxonomy to better align business with enterprise technology. She is energized by driving organizational culture shifts that include adopting new mindsets and delivery methodologies.

9. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

10. Copyright

© Knowingly, Inc. 2023 "Key Criteria for Evaluating Cloud Performance Testing Solutions" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.