1. Executive Summary
Transactions are the fundamental underpinning of any organization, and they must be processed with integrity and speed. Not only has transaction volume soared in recent years, but so has the granularity of detail captured with each transaction. For a high-volume business, fast transactions translate directly into operational efficiency, which makes database performance critically important.
There are a variety of databases available for transactional applications. Ideally, any database would have the required capabilities; however, depending on the application’s scale and the chosen cloud, some database solutions can be prone to delays. Recent information management trends show organizations shifting their focus to cloud-based solutions. In the past, the clear choice for most organizations was on-premises data on on-premises hardware. However, the costs of scale have chipped away at the notion that this is the best approach for some, if not all, of a company’s transactional needs. Many factors drive operational and analytical data projects to the cloud, and advantages like data protection, high availability, and scale are realized with infrastructure-as-a-service (IaaS) deployments. In many cases, a hybrid approach serves as an interim step for organizations migrating to a modern, capable cloud architecture.
This report outlines the results from two GigaOm Field Tests (one transactional and the other analytic) derived from the industry-standard TPC Benchmark™ E (TPC-E) and TPC Benchmark™ H (TPC-H) to compare two IaaS cloud database offerings:
- Microsoft SQL Server 2019 Enterprise on Amazon Web Services Windows Server 2022 (AWS) Elastic Cloud Compute (EC2) instances with gp3 volumes
- Microsoft SQL Server 2019 Enterprise on Windows Server 2022 on Azure Virtual Machines (VM) with the new Premium SSD v2 disks
Both are installations of Microsoft SQL Server 2019, and we tested the Windows Server OS using the most recent versions available as a preconfigured machine image.
Data-driven organizations also rely on analytic databases to load, store, and analyze volumes of data at high speed to derive timely insights. Data volumes within modern organizations’ information ecosystems are rapidly expanding, placing significant performance demands on legacy architectures. To fully harness their data for competitive advantage, businesses today need modern, scalable architectures with high levels of performance and reliability to deliver timely analytical insights. Many companies also prefer fully managed cloud services. With as-a-service deployment models, companies can leverage powerful data platforms without the technical debt and the burden of finding talent to manage the resources and architecture in-house. Users pay only for what they use and can stand up a fully functional analytical platform in the cloud with just a few clicks.
The results of the GigaOm Transactional Field Test are valuable to all operational functions of an organization, such as human resource management, production planning, material management, financial supply chain management, sales and distribution, financial accounting and controlling, plant maintenance, and quality management. The Analytic Field Test results are insightful for many of these same departments today using SQL Server, which is frequently the source for interactive business intelligence (BI) and data analysis.
Testing hardware and software across cloud vendors is challenging. Configurations can favor one cloud vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the benchmarking workload itself. Our testing demonstrates a narrow slice of potential configurations and workloads.
During our Transactional Field Test, SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with Premium SSD v2 disks had 57% higher transactions per second (tps) than AWS SQL Server 2019 Enterprise on Windows Server 2022 with gp3 volumes. Azure’s price-performance is 34% less expensive than the price-performance of AWS SQL Server 2019 on Windows Server 2022 without AWS license mobility/Azure Hybrid Benefit. With AWS license mobility and Azure Hybrid Benefit pricing, SQL Server 2019 on Windows Server 2022 on Azure Virtual Machines provided price-performance that was 47% less expensive than AWS SQL Server 2019 on Windows Server 2022. Azure with Hybrid Benefit price-performance is 54% less expensive than the price-performance of AWS with license mobility and a three-year commitment.
During our Analytic Field Test, SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with Premium SSD v2 disks had best queries per hour (QPH); it had 41% higher QPH than AWS SQL Server 2019 Enterprise on Windows Server 2022 with gp3 volumes. The price-performance of SQL Server 2019 on Windows Server 2022 on Azure Virtual Machines without AWS license mobility/Azure Hybrid Benefit proved to be 26% less expensive than AWS SQL Server 2019 on Windows Server 2022 deployments. With license mobility in place, the price-performance advantage for Azure widened to 41%. And for SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with license mobility and a three-year commitment, price-performance was 49% less expensive than AWS SQL Server 2019 Enterprise on Windows Server 2022 deployments.
As the report sponsor, Microsoft selected the particular Azure configuration it wanted to test. GigaOm selected the closest AWS instance configuration for CPU, memory, and disk configuration.
We leave the issue of fairness for the reader to determine. We strongly encourage you to look past marketing messages and discern what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection.
In the spirit of the TPC, price-performance is intended to normalize performance results across different configurations. Of course, this has its shortcomings, but at least one can determine that “what you pay for and configure is what you get.”
The parameters to replicate this test are provided in this report. We used the BenchCraft tool, audited by a TPC-approved auditor who reviewed all updates to BenchCraft. All the information required to reproduce the results is documented in the TPC-E specification. BenchCraft implements the requirements documented in Clauses 3, 4, 5, and 6 of the benchmark specification. Nothing in BenchCraft alters the performance of TPC-E or this TPC-E-derived workload.
The scale factor in TPC-E is defined as the number of required customer rows per single tpsE. We changed the number of initial trading days (ITD). The default value is 300, which is the number of eight-hour business days to populate the initial database. For these tests, we used an ITD of 30 days rather than 300. This reduces the size of the initial database population in the larger tables. The overall workload behaves identically with ITD of 300 or 30 as far as the transaction profiles are concerned. Since the ITD was reduced to 30, any results would not be compliant with the TPC-E specification and, therefore, not comparable to published results. This is the basis for the standard disclaimer that this is a workload derived from TPC-E.
However, BenchCraft is just one way to run TPC-E. All the information necessary to recreate the benchmark is available at TPC.org (this test used the latest version, 1.14.0). Just change the ITD, as mentioned above.
We have provided enough information in the report for anyone to reproduce these tests. You are encouraged to compile your own representative queries, data sets, data sizes, and test compatible configurations applicable to your requirements.
2. Cloud IaaS SQL Server Offerings
Relational databases are a cornerstone of an organization’s data ecosystem. While alternative SQL platforms are growing with the data deluge, and have their place, workload platforming decision-makers usually choose the relational database. This is for a good reason. Since 1989, Microsoft SQL Server has proliferated to near-ubiquity as the relational server of choice for the original database use case—online transaction processing (OLTP)—and beyond. Now, SQL Server is available on fully functional infrastructure offered as a service, taking complete advantage of the cloud. These infrastructure-as-a-service (IaaS) cloud offerings provide predictable costs, savings, fast response times, and strong non-functionals.
As our testing confirms, the main difference between SQL Server 2019 on Windows Server 2022 Azure Virtual Machines and SQL Server 2019 on Windows Server 2022 AWS EC2 instances is storage I/O performance.
Microsoft SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines Storage Options
Azure recommends Premium Managed Disk or Ultra Disk for operationally intensive, business-critical workloads. We chose to test the latest Premium SSD v2 Managed Disks. Premium SSD v2 Managed Disks are high-performance SSDs designed to support I/O intensive workloads and provide high throughput and low latency, but with a lower cost compared to Ultra Disk. Premium SSD v2 Managed Disks are provisioned as a persistent disk with configurable size and performance characteristics. They can also be detached and reattached to different virtual machines.
The cost of Premium SSD v2 Managed Disks depends on the capacity of the disk, number of IOPS, and desired throughput (in MB/second). Several persistent disks attached to a VM can support petabytes of storage per VM. Premium SSD v2 disks can achieve up to 64 TB in capacity, 80,000 IOPS, and 1,200 MB/s per disk. This translates to less than one millisecond latency for read operations; thus, the read/write caching capabilities of Premium SSD disks are no longer needed. Premium SSD v2 Managed Disks are supported by several VM types.
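Since Premium SSD v2 is billed on capacity, provisioned IOPS, and provisioned throughput, the monthly cost of a disk can be sketched as a simple sum of those three meters. Azure documents a free baseline of 3,000 IOPS and 125 MB/s per disk, with amounts above that billed separately; the per-unit dollar rates below are hypothetical placeholders, not published Azure prices.

```python
def premium_ssd_v2_monthly_cost(size_gib, iops, mbps,
                                gib_rate=0.08,    # $/GiB-month (hypothetical rate)
                                iops_rate=0.005,  # $/IOPS-month above baseline (hypothetical)
                                mbps_rate=0.04):  # $/MBps-month above baseline (hypothetical)
    """Monthly cost of one Premium SSD v2 disk under the
    capacity + extra-IOPS + extra-throughput billing model."""
    extra_iops = max(0, iops - 3_000)  # first 3,000 IOPS are included
    extra_mbps = max(0, mbps - 125)    # first 125 MB/s is included
    return size_gib * gib_rate + extra_iops * iops_rate + extra_mbps * mbps_rate

# One of the 1 TB data disks from Table 1: 60,000 IOPS, 1,200 MB/s
baseline_only = premium_ssd_v2_monthly_cost(1024, 3_000, 125)  # capacity cost only
data_disk = premium_ssd_v2_monthly_cost(1024, 60_000, 1_200)
```

The point of the model is that capacity and performance are priced independently, which is why the report could provision small disks with very high IOPS.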
Microsoft SQL Server 2019 Enterprise on Amazon Web Services Windows Server 2022 Elastic Cloud Compute (AWS EC2) Instances Storage Options
Amazon Web Services offers Elastic Block Store (EBS) as an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2). EBS supports a range of workloads, like relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, and file systems. With EBS, AWS customers can choose from several volume types to balance optimal price and performance, and can achieve single-digit millisecond latency for high-performance database workloads.
Amazon EBS offers three types of solid-state drive volumes: General Purpose SSD (gp2 and gp3) and Provisioned IOPS SSD (io1). Provisioned IOPS volumes are more akin to Azure Ultra Disk, so we chose General Purpose SSD (gp3) volumes to balance price and performance for these workloads; AWS recommends this volume type for most workloads. Gp2 tends to underperform gp3 and is better relegated to system boot volumes, virtual desktops, low-latency interactive applications, and development/test environments.
For the test, we chose one of AWS’s Nitro-based instances. Like Azure, for the SQL Server temporary database (tempdb), we used a solid-state drive.
One of our main objectives in this benchmark is to test an I/O intensive workload on Amazon and Azure’s speed-cost balanced SSD volume types head to head. We want to understand both the performance and price-per-performance differences between two leading cloud vendors’ SQL Server offerings.
3. Field Test Setup
GigaOm Transactional Field Test
The GigaOm Transactional Field Test is a workload derived from the well-recognized industry-standard TPC Benchmark™ E (TPC-E). Aspects of the workload, such as transaction mix, were modified from the standard TPC-E benchmark for ease of benchmarking, and as such, the results generated are not comparable to official TPC Results. From tpc.org:
TPC Benchmark™ E (TPC-E) is an OLTP workload. It is a mixture of read-only and update-intensive transactions that simulate the activities found in complex OLTP application environments. The database schema, data population, transactions, and implementation rules have been designed to broadly represent modern OLTP systems. The benchmark exercises a breadth of system components associated with such environments.
The TPC-E benchmark simulates the transactional workload of a brokerage firm with a central database that executes transactions related to the firm’s customer accounts. The data model consists of 33 tables, 27 of which carry the schema’s 50 foreign key constraints. The TPC-E results are valuable to all operational functions of an organization, many of which are driven by SQL Server, frequently the source for operational interactive business intelligence (BI).
Field Test Data
The data sets used in the benchmark were generated based on the information provided in the TPC Benchmark™ E (TPC-E) specification. For this testing, we used the database scaled for 1 million customers, which determined the initial data volume of the database. For example, in a database scaled for 800,000 customers, that customer count is multiplied by 17,280 to determine the number of rows in the TRADE table: 13,824,000,000. All the other tables were scaled according to the TPC-E specification and rules. Besides the scale factor, the test offers a few other “knobs” we turned to determine the database engine’s maximum throughput capability on AWS and Azure.
We completed three runs per test on each platform, with each lasting at least two hours. We then took the average transactions per second for the last 30 minutes of the test runs. A full backup was also restored to reset the database to its original state between each run. The results are shared in the Field Test Results section.
Database Environments
Selecting and sizing the compute and storage for comparison is challenging, particularly across two different cloud vendors’ offerings. There are various offerings between AWS and Azure for transaction-heavy workloads. As you will see in Table 1, there was not an exact match in processors or memory at the time of testing and publication.
We considered the variety of offerings on AWS and selected the memory-optimized R5b family. We used R5b in previous testing and believe it is a solid performer. Its description is similar to the Azure offering. R5b is described as “optimized for memory-intensive and latency-sensitive database workloads, including data analytics, in-memory databases, and high-performance production workloads.”
On the Azure side, we expect mission-critical-minded customers to gravitate toward the Ebdsv5 family, described as delivering “higher remote storage performance in each VM size compared to Ev5 VM series.” The Ebdsv5 allows up to 120,000 IOPS and 4,000 MBps of remote disk storage throughput. Our approach was to find the “nearest neighbor” best fit. The challenge was selecting a balance of both CPU and memory. R5b.8xlarge on AWS has 32 vCPUs and 256 GiB memory. The E32bdsv5 on Azure offers a 32-core instance and has 256 GiB of memory, the same as the r5b.8xlarge. This was our best, most diligent effort at selecting compute hardware compatibility for our testing.
In terms of storage, our objective was to test both Azure Premium SSD v2 Disks and AWS General Purpose (gp3). For both Azure and AWS, we deployed multiple disks for SQL Server data and log files and combined them using Simple Storage Pools (RAID0 disk striping) with Windows. Azure recommended striping the disks because of the design of the platform.
Another configuration difference that may have affected our results: for the Azure virtual machine, we used the locally attached temporary storage for the SQL Server “tempdb” database, while the AWS EC2 r5b instances do not offer locally attached storage, so tempdb was placed on the root drive. The SQL Server tempdb stores internal objects created by the database engine, such as work tables for sorts, spools, hash joins, hash aggregates, and intermediate results. Placing tempdb on local temporary storage usually yields higher I/O performance.
The Azure configuration had 130,000 total IOPS, and AWS had 86,667 total IOPS. With AWS gp3 drives, we arranged the disks to give the maximum allowed IOPS per instance. We worked to employ equivalent configurations for these tests despite the different storage profiles between AWS and Azure.
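The total IOPS figures above can be recomputed from the per-disk values listed in Table 1. Note the AWS sum comes to 86,665 from the rounded per-disk numbers; the report's 86,667 presumably reflects unrounded per-disk provisioning.

```python
# Recompute the Table 1 IOPS totals from the per-disk values listed there.
aws_disks = [(5, 14_733), (1, 10_000), (1, 3_000)]  # gp3: data, log, root
azure_disks = [(2, 60_000), (1, 10_000)]            # Premium SSD v2: data, log

aws_total = sum(count * iops for count, iops in aws_disks)
azure_total = sum(count * iops for count, iops in azure_disks)

print(aws_total)    # 86665 -- the report's 86,667 reflects per-disk rounding
print(azure_total)  # 130000
```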
Results may vary across different configurations, and again, you are encouraged to compile your own representative queries, data sets, data sizes, and test-compatible configurations applicable to your requirements. All told, our testing included two different database environments.
Table 1. Configurations Used for Tests
Cloud | AWS | Azure |
---|---|---|
Database | SQL Server 2019 Enterprise on Windows Server 2022 Datacenter | SQL Server 2019 Enterprise on Windows Server 2022 Datacenter |
Build | Microsoft SQL Server 2019 (RTM-CU12) (KB5004524) - 15.0.4153.1 (X64) Jul 19 2021 15:37:34 Enterprise Edition: Core-based Licensing (64-bit) Windows Server 2022 Datacenter 10.0 <X64> (Build 17763) (Hypervisor) | Microsoft SQL Server 2019 (RTM-CU18) (KB5017593) - 15.0.4261.1 (X64) Sep 28 2022 Enterprise Edition: Core-based Licensing (64-bit) Windows Server 2022 Datacenter 10.0 <X64> (Build 20348) (Hypervisor) |
Region | Oregon | East US |
Instance Type | r5b.8xlarge | E32bds_v5 |
vCPU | 32 | 32 |
RAM (GiB) | 256 | 256 |
Storage Configuration* | 5x 2 TB gp3 (14,733 IOPS, 420 MB/s) data; 1x 1 TB gp3 (10,000 IOPS, 200 MB/s) log; 1x 0.5 TB gp3 (3,000 IOPS, 200 MB/s) root | Premium SSD v2: 2x 1 TB (60,000 IOPS, 1,200 MB/s) data; 1x 1 TB (10,000 IOPS, 120 MB/s) log |
Total IOPS | 86,667 | 130,000 |
Source: GigaOm 2022 |
*At the time of testing AWS, we used five 2 TB gp3 volumes for AWS. When the Premium SSD v2 disks were released on Azure, we only needed two 1 TB Premium SSD v2 disks. Storage capacity does not have an impact on these performance tests, but to be fair in our price-per-performance calculations, we reduced the price of the AWS configuration to reflect five 0.4 TB volumes to match the capacity of the Azure Premium SSD v2 storage capacity.
Other SQL Server settings include:
- Max degree of parallelism: 1
- Max server memory: 235,930 MB, which is 90% of total available system memory (256 GB)
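The max server memory figure above is simply 90% of physical memory expressed in MB, a common SQL Server sizing heuristic that leaves headroom for the OS:

```python
# "Max server memory" derivation: 90% of 256 GiB, expressed in MB.
total_mb = 256 * 1024               # 262,144 MB
max_server_memory_mb = round(total_mb * 0.90)
print(max_server_memory_mb)         # 235930, matching the setting above
```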
GigaOm Analytical Field Test
The setup for this Field Test was informed by the TPC Benchmark™ H (TPC-H) spec validation queries. This is not an official TPC benchmark. The queries were executed using the following setup, environment, standards, and configurations.
Database Environments
The configurations shown in Table 1 were also used for the Analytic Field Test.
Other SQL Server settings include:
- Max degree of parallelism: 32
- Max server memory: 235,930 MB, which is 90% of total available system memory (256 GB)
Benchmark Data
The data sets used in the benchmark were data sets built from the well-recognized industry-standard TPC Benchmark™ H (TPC-H). Aspects of the workload were modified from the standard TPC-H benchmark for ease of benchmarking, and as such, the results generated are not comparable to official TPC Results.
From tpc.org: “The TPC-H is a decision support benchmark. It consists of a suite of business-oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database were chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.”
For more information about the TPC-H, see their specification document.
The following table gives row counts of the database when loaded with 1 TB of TPC-H-like data to provide an idea of the data volumes used in our benchmark:
Table 2. TPC-H Database Row Counts Given 1 TB
TPC-H Table | 1 TB Row Count |
---|---|
Customer | 150,000,000 |
Line Item | 6,000,000,000 |
Orders | 1,500,000,000 |
Part | 200,000,000 |
Supplier | 10,000,000 |
Part Supp | 800,000,000 |
Source: GigaOm 2022 |
Queries
We sought to replicate the TPC-H Benchmark queries modified only by syntax differences required by SQL Server. The benchmark is a fair representation of enterprise query needs. The TPC-H testing suite has 22 queries.
Test Execution
To execute the TPC-H Benchmark queries, we ran the test sequence of Power Run, Power Run, Throughput Run. These were read-only queries for both Power and Throughput. A Power Run is a single user executing 22 queries in a serial stream. A Throughput Run is seven concurrent users each executing a stream of the 22 queries, giving 154 query executions with seven parallel streams. We completed each test sequence three times and took the best result.
Test Metric
We used the Throughput Run to calculate the performance metric of queries per hour (QPH). We used the longest running of the seven concurrent threads as the total execution time of the test. To calculate the QPH, we used the following formula:
QPH = (22 queries ÷ Throughput Run execution time in seconds) × 3,600 seconds per hour
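The metric above can be expressed as a one-line function: 22 queries divided by the wall-clock time of the longest of the seven concurrent streams, scaled to an hour.

```python
def queries_per_hour(longest_stream_seconds, queries=22):
    """QPH: queries in one stream divided by the elapsed time of the
    slowest concurrent stream, scaled to one hour."""
    return queries / longest_stream_seconds * 3_600

qph = queries_per_hour(1_800)  # a stream finishing in 30 minutes -> about 44 QPH
```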
4. Field Test Results
Transactional Field Test Results
This section analyzes the best transactions-per-second (tps) results from the GigaOm Transactional Field Test runs described above. A higher tps is better, meaning that more transactions are processed every second.
Figure 1 shows that SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines best tps was 57% higher than AWS SQL Server 2019 Enterprise on Windows Server 2022.
Figure 1. Transactions per Second: SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machine’s Premium SSD v2 Managed Disks vs. AWS SQL Server 2019 Enterprise on Windows Server 2022 EBS General Purpose SSD (gp3). Higher is better.
Analytic Field Test Results
Figure 2 reveals that SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machine’s best queries per hour (QPH) was 41% higher than AWS SQL Server 2019 Enterprise on Windows Server 2022.
Figure 2. Queries per Hour: SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machine’s Premium SSD v2 Managed Disks vs. AWS SQL Server 2019 Enterprise on Windows Server 2022 EBS General Purpose SSD (gp3). Higher is better.
5. Price Per Performance
The price-performance metric is price/throughput (tps). This is defined as the cost of running each cloud platform continuously for three years divided by the transactions per second throughput uncovered in the previous tests. The calculation is as follows:
Price Per Performance = $/tps =
[(Compute with on-demand SQL Server hourly rate × 24 hours/day × 365 days/year × 3 years)
+ (Data disk monthly cost per disk × number of data disks × 12 months × 3 years)
+ (Log disk monthly cost per disk × number of log disks × 12 months × 3 years)] ÷ tps
When evaluating price per performance, the lower the number, the better. This means you get more compute power, storage I/O, and capacity for your budget.
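The three-year price-performance formula above translates directly into code. All dollar figures in the example call are hypothetical placeholders, not the rates used in the report; substitute current cloud list prices.

```python
def price_per_tps(compute_hourly, data_disk_monthly, n_data_disks,
                  log_disk_monthly, n_log_disks, tps, years=3):
    """Three-year cost of compute plus storage, divided by sustained tps.
    Lower is better."""
    compute = compute_hourly * 24 * 365 * years
    storage = (data_disk_monthly * n_data_disks
               + log_disk_monthly * n_log_disks) * 12 * years
    return (compute + storage) / tps

# Hypothetical example: $20/hr compute, 2 data disks at $500/mo,
# 1 log disk at $100/mo, and 2,000 tps sustained
example = price_per_tps(20.0, 500.0, 2, 100.0, 1, 2_000)
print(round(example, 2))  # 282.6
```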
Pricing Used:
We performed this calculation across two different pricing structures:
- Azure pay-as-you-go versus AWS on-demand
- Azure three-year reserved versus AWS standard three-year term reserved
The prices were at the time of testing and reflect the Oregon (US West 2) region on AWS and East US region on Azure. The compute prices include both the actual AWS EC2/Azure VM hardware itself and the license costs of the operating system and Microsoft SQL Server Enterprise Edition. We also included Azure Hybrid Benefit versus AWS License Mobility rates for existing SQL Server license holders. Rate details are in the Appendix.
Be aware that prices do not include support costs for either Azure or AWS. Each platform has different pricing options. Buyers should evaluate all of their pricing choices, not just those presented in this paper.
Note: At the time of testing AWS, we used five 2 TB gp3 volumes for AWS. When the Premium SSD v2 disks were released on Azure, we only needed two 1 TB Premium SSD v2 disks. Storage capacity does not have an impact on these performance tests, but to be fair in our price-per-performance calculations, we reduced the price of the AWS configuration to reflect five 0.4 TB volumes to match the capacity of the Azure Premium SSD v2 storage capacity.
Transactional Field Test Price-Performance
Figure 3 shows that SQL Server 2019 on Windows Server 2022 Azure Virtual Machine’s price-performance is 34% less expensive than the price-performance of AWS SQL Server Enterprise 2019 on Windows Server 2022 for pay-as-you-go/on-demand price-performance. Note that in this chart a lower price-performance is better—meaning that it costs less to complete the same workload.
Figure 3. Price-Performance, Transactions per Second: SQL Server 2019 on Windows Server 2022 Azure Virtual Machines vs. AWS SQL Server 2019 on Windows Server 2022, Pay-As-You-Go Without License Mobility. Lower is better.
Figure 4 reveals that the price-performance of SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with pay-as-you-go pricing and AWS license mobility and Azure Hybrid Benefit is 47% less expensive than AWS SQL Server 2019 Enterprise on Windows Server 2022.
Figure 4. Price-Performance, Transactions per Second: SQL Server 2019 Enterprise on Windows Server 2022 vs. AWS SQL Server 2019 Enterprise on Windows Server 2022, Pay-As-You-Go with License Mobility. Lower is better.
We also tested SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with a three-year commitment and SQL Server License Mobility. As shown in Figure 5, SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machine’s price-performance proved to be 54% less expensive than the price-performance of AWS SQL Server 2019 on Windows Server 2022 with license mobility and a three-year commitment.
Figure 5. Price-Performance, Transactions per Second: SQL Server 2019 Enterprise on Windows Server 2022 with Hybrid Benefit vs. AWS SQL Server 2019 Enterprise on Windows Server 2022 with License Mobility and 3-Year Commitment. Lower is better.
Analytic Field Test Price-Performance
Starting with Figure 6, the focus shifts to explore price-performance of Azure and AWS SQL Server deployments based on queries per hour. In this first test, the price-performance of SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with pay-as-you-go pricing and without license mobility proved to be 26% less expensive than AWS SQL Server 2019 Enterprise on Windows Server 2022. Azure with its lower price-performance value shows that it costs less to complete the same workload on Azure than on AWS.
Figure 6. Price-Performance, Queries per Hour: SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines vs. AWS SQL Server 2019 on Windows Server 2022, Pay-As-You-Go Queries per Hour Without License Mobility. Lower is better.
Next, Figure 7 compares the price-performance of Azure and AWS SQL Server deployments, based on Windows 2022 and with license mobility and pay-as-you-go/on-demand pricing. Here, SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machine’s price-performance proved 41% less expensive than AWS SQL Server 2019 Enterprise on Windows Server 2022.
Figure 7. Price-Performance, Queries per Hour: SQL Server 2019 Enterprise on Windows Server 2022 vs. AWS SQL Server 2019 Enterprise on Windows Server 2022 with License Mobility and Pay-As-You-Go/On-Demand Pricing. Lower is better.
Finally, Figure 8 explores the price-performance of Azure and AWS SQL Server deployments on Windows 2022 with license mobility and a three-year license. The testing reveals that SQL Server 2019 Enterprise on Windows Server 2022 price-performance is 49% less expensive than AWS SQL Server 2019 Enterprise on Windows Server 2022.
Figure 8. Price-Performance, Queries per Hour: SQL Server 2019 Enterprise Edition on Windows Server 2022 vs. AWS SQL Server 2019 Enterprise on Windows Server 2022 with License Mobility and 3-Year Commitment. Lower is better.
6. Conclusion
This report outlines the results from a GigaOm Transactional Field Test and a GigaOm Analytical Field Test comparing comparable SQL Server infrastructure-as-a-service (IaaS) offerings from two cloud vendors: Microsoft SQL Server on Amazon Web Services (AWS) Elastic Cloud Compute (EC2) instances and Microsoft SQL Server on Azure Virtual Machines (VMs).
We have learned that the database, the cloud platform, and the storage configuration all matter to latency, which can be a killer for business-critical transactional applications. Microsoft Azure presents a powerful cloud infrastructure offering for modern transactional and analytical workloads.
During our Transactional Field Test, Azure SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with Premium SSD v2 disks had 57% higher transactions per second (tps) than AWS SQL Server 2019 Enterprise on Windows Server 2022 with gp3 volumes. Azure’s SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machine’s price-performance is 34% less expensive than the price-performance of AWS SQL Server 2019 Enterprise on Windows Server 2022 without AWS license mobility/Azure Hybrid Benefit. With AWS license mobility and Azure Hybrid Benefit pricing, Azure SQL Server 2019 Enterprise on Windows Server 2022 provided price-performance that was 47% less expensive than AWS SQL Server 2019 Enterprise on Windows Server 2022. SQL Server 2019 Enterprise Edition with Windows Server 2022 on Azure Virtual Machine’s with Hybrid Benefit price-performance is 54% less expensive than the price-performance of AWS SQL Server 2019 Enterprise Edition on Windows Server 2022 with license mobility and a three-year commitment.
During our Analytic Field Test, Azure SQL Server 2019 Enterprise on Windows Server 2022 Azure Virtual Machines with Premium SSD v2 disks produced the most queries per hour and had 41% higher QPH than AWS SQL Server 2019 Enterprise on Windows Server 2022 with gp3 volumes. The price-performance of Azure SQL Server 2019 Enterprise Edition on Windows Server 2022 Azure Virtual Machines without AWS license mobility/Azure Hybrid Benefit proved to be 26% less expensive than AWS SQL Server 2019 Enterprise Edition on Windows Server 2022. With license mobility in place, the price-performance advantage for Azure widened to 41%. And for Azure SQL Server 2019 Enterprise Edition on Windows Server 2022 Azure Virtual Machines with license mobility and a three-year commitment, price-performance was 49% less expensive than AWS SQL Server 2019 Enterprise Edition on Windows Server 2022.
Keep in mind that tests are configured to get the best from each platform according to publicly documented best practices. Optimizations on both platforms would be possible as their offerings evolve or internal tests point to different configurations.
7. Disclaimer
Performance is important, but it is only one criterion for selecting a business-critical database platform. This test is a point-in-time check of specific performance characteristics. There are numerous other factors to consider in selection, including administration, integration, workload management, user interface, scalability, vendor, and reliability. It is also our experience that performance changes over time and differs competitively across workloads. Moreover, a performance leader can run up against the point of diminishing returns, and viable contenders can quickly close the gap.
The benchmark setup was informed by the TPC Benchmark™ E (TPC-E) and the TPC Benchmark™ H (TPC-H) specification. The workloads were derived from TPC-E and TPC-H and are not official TPC benchmarks nor may the results be compared to official TPC-E or TPC-H publications.
GigaOm runs all of its performance tests to strict ethical standards. The results of the report are the objective results of the application of queries to the simulations described in the report. The report clearly defines the selected criteria and process used to establish the field test. The report also clearly states the data set sizes, the platforms, the queries, etc. used. The reader is left to determine for themselves how to qualify the information for their individual needs. The report does not make any claim regarding the third-party certification and presents the objective results received from the application of the process to the criteria as described in the report. The report strictly measures performance and does not purport to evaluate other factors that potential customers may find relevant when making a purchase decision.
This is a sponsored report. Microsoft chose the competitors, the test, and the Microsoft configuration. GigaOm chose the most compatible configurations for the other tested platform and ran the testing workloads. Choosing compatible configurations is subject to judgment. We have attempted to describe our decisions in this paper.
8. About Microsoft
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
Microsoft offers SQL Server on Azure. To learn more about Azure SQL Database visit https://azure.microsoft.com/en-us/services/sql-database/.
9. About William McKnight
William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.
Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.
10. About Jake Dolezal
Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.
11. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
12. Copyright
© Knowingly, Inc. 2023 "SQL Transaction Processing and Analytic Performance Price-Performance Testing" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.