Table of Contents
- Summary
- The Latest IaaS Offerings for SQL Server
- Field Test Setup
- Field Test Results
- Price Per Performance
- Conclusion
- Appendix
- Disclaimer
- About Microsoft
- About William McKnight
- About GigaOm
- Copyright
1. Summary
In terms of transactional data, relational databases are still the platform of choice for most organizations and most use cases. Arguably, the most prevalent relational database engine of the past couple of decades is Microsoft SQL Server. Since 1989, Microsoft SQL Server has been widely adopted for day-in, day-out On-Line Transaction Processing (OLTP) and many other uses. It has proven itself a useful and powerful database, and we see it underpin a variety of operational and reporting processes in companies across every industry vertical.
Today, with cloud computing increasingly prevalent, SQL Server is even easier to deploy and use. A SQL Server instance can be stood up quickly on infrastructure offered as a service, taking full advantage of the cloud. SQL Server Infrastructure as a Service (IaaS) cloud offerings provide predictable costs, cost savings, fast response times, and strong non-functional characteristics.
But does it matter which public cloud you choose? If you deploy SQL Server on one cloud provider’s infrastructure versus another’s, is there a significant difference? We sought to answer the question: does one cloud vendor’s infrastructure support SQL Server better than another’s? We conducted testing recently to find out.
Since Microsoft SQL Server is offered on both AWS and Azure, we wanted to see whether deploying on Azure gives SQL Server a better infrastructure foundation for transactional processing. We aligned the hardware configuration between the two clouds as closely as reasonably possible, though assuring sameness and fairness across configurations is a difficult task.
To test this hypothesis, we conducted a GigaOm Transactional Field Test, derived from the industry-standard TPC Benchmark™ E (TPC-E), which compared:
- Microsoft SQL Server 2019 on an Amazon Web Services (AWS) r5a.8xlarge Elastic Cloud Compute (EC2) instance with General Purpose (gp2) volumes
- Microsoft SQL Server 2019 on an Azure E32as_v4 Virtual Machine (VM) with P30 Premium Storage drives
Both the AWS R5a and the Azure Eas_v4 are the latest-generation instance types. The r5a.8xlarge and E32as_v4 instances each have 32 vCPUs and 256GB of RAM. Both setups ran Microsoft SQL Server 2019 on Windows Server 2019 Datacenter Edition.
With the Azure local cache feature enabled, Microsoft SQL Server on Microsoft Azure Virtual Machines (VM) delivered 3.6x more transactional throughput on Windows than Microsoft SQL Server on Amazon Web Services (AWS) Elastic Cloud Compute (EC2) instances.
Using the transaction-based price-performance formula, SQL Server on Microsoft Azure Virtual Machines (VM) showed up to 84.2% better price-performance when comparing Azure Hybrid Benefit to AWS License Mobility with three-year reservations, and up to 71.6% better price-performance when comparing Azure pay-as-you-go pricing to AWS on-demand rates.
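For clarity, the percentages above follow from a cost-per-throughput calculation. The sketch below shows the general form we assume: price-performance is the total cost of a configuration over the pricing term divided by its measured throughput in transactions per second (tpsE), and the improvement is the relative reduction in that ratio. The symbols are illustrative; the actual prices and throughput figures appear later in the report.

\[
P \;=\; \frac{C_{\text{term}}}{\text{tpsE}}
\qquad\text{and}\qquad
\text{improvement} \;=\; \frac{P_{\text{AWS}} - P_{\text{Azure}}}{P_{\text{AWS}}} \times 100\%
\]

Here \(C_{\text{term}}\) is the total compute, storage, and licensing cost over the pricing term, and \(P\) is the resulting price per unit of performance (dollars per tpsE), where lower is better.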
Testing hardware and software across cloud vendors is very challenging. Configurations can favor one cloud vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the benchmarking workload itself. Our testing demonstrates a narrow slice of potential configurations and workloads.
As the sponsor of the report, Microsoft selected the particular Azure configuration it wanted to test. GigaOm selected the AWS instance that was closest in CPU and memory configuration.
We leave the issue of fairness for the reader to determine. We strongly encourage you, as the reader, to look past marketing messages and discern for yourself what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection.
Also, in the same spirit as the TPC, price-performance is intended to normalize performance results across different configurations.
The parameters to replicate this test are provided. We used the BenchCraft tool, which was audited by a TPC-approved auditor, who has also reviewed all updates to BenchCraft. All the information required to reproduce the results is documented in the TPC-E specification. BenchCraft implements the requirements documented in Clauses 3, 4, 5, and 6 of the benchmark specification. There is nothing in BenchCraft that alters the performance of TPC-E or this TPC-E derived workload.
The scale factor in TPC-E is defined as the number of required customer rows per single tpsE. We did, however, change the number of Initial Trading Days (ITD). The default value is 300, which is the number of 8-hour business days used to populate the initial database. For these tests, we used an ITD of 30 days rather than 300, which reduces the size of the initial database population in the larger tables. The overall workload behaves identically with an ITD of 300 or 30 as far as the transaction profiles are concerned. Because the ITD was reduced to 30, any results obtained are not compliant with the TPC-E specification and, therefore, are not comparable to published results. This is the basis for the standard disclaimer that this is a workload derived from TPC-E.
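To illustrate the effect of this change, the rough sketch below assumes the initial trade population grows in proportion to the number of simulated trading seconds, which is what the specification's definitions of ITD and scale factor suggest; the exact row counts are governed by the TPC-E specification itself.

\[
\text{initial trade rows} \;\approx\; \text{ITD} \times 8\,\text{h} \times 3600\,\tfrac{\text{s}}{\text{h}} \times \frac{\text{customer rows}}{\text{scale factor}}
\]

Under that assumption, cutting ITD from 300 to 30 shrinks the trade-related tables by roughly a factor of ten while leaving the runtime transaction mix unchanged.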
However, BenchCraft is just one way to run TPC-E. All the information necessary to recreate the benchmark is available at TPC.org (this test used the latest specification, version 1.14.0). Simply change the ITD, as mentioned above.
We have provided enough information in this report for anyone to reproduce the test. You are encouraged to compile your own representative queries, data sets, and data sizes, and to test compatible configurations applicable to your requirements.