1. Executive Summary
This GigaOm Benchmark Report was commissioned by Microsoft.
Windows Server 2022: Datacenter – Azure Edition offers numerous features exclusive to the Azure platform that make a compelling case for migrating workloads to the cloud. Features like Azure Extended Network, Server Message Block (SMB) over Quick UDP Internet Connections (QUIC), Hotpatch for Windows Server, and Compression for Storage Replication allow organizations to simplify cloud migration or hybrid workload strategies and reduce architectural complexity, reducing the cost of migration.
What’s more, once workloads are hosted in the cloud on Windows Server 2022: Datacenter – Azure Edition, they benefit from superior performance and lower cost compared to AWS. In our testing with the GigaOm Transactional Field Test, we found that Microsoft SQL Server Standard Edition delivers impressive performance on Azure, especially for transaction processing workloads. Azure outperformed AWS in our throughput tests by about 50%. This performance, combined with lower costs, produced a sizable price-performance advantage over AWS.
The GigaOm Transactional Field Test simulates workloads in a financial environment where users perform various transactions such as placing stock orders, checking account balances, and researching market data.
Figure 1 shows the results of our tests with Microsoft SQL Server Standard Edition and the new Azure Easv6 instances powered by AMD’s 4th Generation EPYC™ 9004 processors. AWS was tested with R7a EC2 instances, also using 4th Generation AMD processors. Azure produced a price-performance figure of $435.05, while AWS produced price-performance that was 65% more expensive, at $719.70.
Figure 1: AWS vs Azure: GigaOm Transactional Field Test Price-Performance Comparison
The financial outlook for workloads on Azure is even more compelling when Azure Hybrid Benefit (AHB) licensing is considered. AHB provides deep discounts for on-premises licenses extended to the cloud.
Our hands-on assessment of key tooling reveals that while Azure and AWS are both exceptional platforms, Azure offers unmatched value and capability, especially for organizations aligned on Microsoft technologies. In our hands-on testing, both Azure Migrate and AWS Application Migration Service worked as expected and moved workloads from point A to point B. While we generally preferred the user experience of the Azure Migrate solution, we do not regard this alone as a decisive differentiator for adoption.
These products and features combine to form a strong ecosystem with which to run your enterprise applications and allow your organization to leverage the benefits of cloud computing. Once in the cloud, opportunities emerge to improve efficiency, increase rate of innovation, and leverage new technologies, such as artificial intelligence, to improve customer experience.
In conclusion, we found that Azure offers higher performance, lower costs, and a superior ecosystem of management tools compared to AWS.
2. About GigaOm Benchmarks
GigaOm Benchmarks consist of lab-based performance and user experience tests designed to reflect real-world scenarios and assess vendor claims. Our Benchmark reports inform technology buyers with transparent, repeatable tests and results, backed by GigaOm’s expert analysis. Where quantitative metrics may not fully describe an experience, qualitative metrics and analyst commentary may be used to provide context and product positioning in the market.
While no testing environment can fully meet the complexity of production implementations, we design benchmark test suites to validate a set of hypotheses that have been carefully selected to demonstrate each product’s core business value and differentiation against its competitors.
3. Field Test Overview
To test Windows Server 2022: Datacenter – Azure Edition, we conducted hands-on tests to assess server migration and operating system patching, as well as performance-based testing of Microsoft SQL Server. We also reviewed the platform features that could not be tested in our environment for feasibility and potential impact on a migration approach.
Migration
The process of moving workloads onto the cloud can be fraught, which is why both Azure and AWS provide their own migration solutions. These help IT organizations safely transition on-premises workloads to the cloud, where they can take better advantage of cloud capabilities and infrastructure.
When planning migrations, a common scoping framework is “The 6 R’s,” often expanded to seven approaches: rehost, replatform, repurchase, refactor, replace, retain, and retire. Each approach has significant implications for the cost, difficulty, and resulting value of a migration effort. We explore them briefly here:
- Rehost is the simplest method, often referred to as a “Lift and Shift.” This involves simply moving your VMs from one hosting provider to another, such as your on-premises data center to AWS or Azure. While this is the lowest-cost option, it also gives the least opportunity for modernizing and reducing technical debt.
- Replatform involves changing the underlying platform that hosts your application, such as moving your VM-based SQL Server workloads to a fully managed cloud-based SQL provider such as Azure SQL or Amazon RDS. This could also refer to moving a VM-based container workload to a containerization service such as Azure Kubernetes Service or Amazon EKS.
- Repurchase would refer to replacing a self-hosted application with a SaaS product, such as replacing your on-premises version control system, like GitLab or Team Foundation Server, with a cloud-based solution such as Azure DevOps or GitHub.
- Refactor is generally the most expensive option and involves rewriting the applications to run natively on a new platform. This could be paired with a move to Serverless architecture using Azure Functions or AWS Lambda functions or rebuilding a monolithic application into containerized microservices behind a service bus.
- Replace is an option where an application is replaced with another. This is often a solution when moving to a competitor’s product and achieving feature parity is not difficult. Replacing a solution can enable an organization to save costs, adopt additional features, or integrate more deeply with the cloud environment they are moving to.
- Retain applies when a solution simply cannot be migrated: it continues to run in its current state. Reasons include dependence on legacy technologies not supported by cloud platforms, a lack of expertise with the solution or its codebase needed to update it, or a replacement being developed on a longer timeline than the migration effort.
- Retire is an often underutilized approach, as organizations are reluctant to change processes even when doing so would reduce technical debt. An excellent example of software to retire is an infrastructure monitoring system for on-premises environments. These simply don’t need to be migrated to the cloud, as this functionality is available out-of-the-box from the cloud platforms.
Migration Test Plan
In our tests, we migrated a Windows Server workload from our on-premises environment into both AWS and Azure. Both platforms provide tools that enable migration of Windows Servers and workloads to the cloud. The AWS Application Migration Service and the Azure Migrate service enable you to assess the applications running on servers, right-size the target VMs, and then replicate your instances to the target platform.
The migration tooling landscape becomes more complicated when databases are brought into the architecture. If you perform a homogeneous migration (not changing the database platform), it’s relatively easy to replicate data from one location to another and cut over. However, if you are considering a heterogeneous migration or changing the database platform, some level of application refactoring may be required. This adds cost and complexity to the migration process.
Supporting Migrations: Extended Network for Azure
Extended Network for Azure stretches an on-premises subnet into Azure, so on-premises virtual machines can keep their existing private IP addresses even after migrating to Azure. Maintaining a static IP address significantly simplifies the migration of mission-critical legacy applications. In the past, IT organizations had to build complicated multidirectional NAT solutions or consider abandoning a migration altogether.
As Microsoft describes, the network is extended using a bidirectional VXLAN tunnel between two Windows Server 2019 VMs acting as virtual appliances, one running on-premises and the other running in Azure, with each then connected to the subnet to be extended. Each expanded subnet requires a pair of appliances, with multiple pairs able to support multiple subnets.
Microsoft recommends that an extended network for Azure should only be used for machines that cannot have their IP address changed when migrating to Azure. Once crossed over, it is always better to change the IP address and connect it to a subnet that wholly exists in Azure, if that is an option.
Patch Management – Operational Readiness
Operating system patching is a critical part of maintaining a strong security posture in enterprise environments. Hot patching is especially valuable because it enables seamless, nondisruptive updates without having to schedule or delay the activity. Hot patching ensures that systems remain fully up-to-date and protected without producing downtime. Among the benefits:
Vulnerability mitigation: Hackers and cybercriminals continually develop new methods to exploit vulnerabilities. Patching addresses known vulnerabilities before attackers can exploit them, reducing the threat posed by malware, ransomware, and other cyber threats.
Compliance requirements: Regular patching of systems is often a requirement for regulatory or certification compliance. Failure to comply with these regulations may result in legal consequences, fines, and other penalties.
System stability and performance: Patches also improve the overall stability and performance of the operating system, which is vital for maintaining a reliable and efficient IT infrastructure to drive business operations.
Network security: A compromised system can provide a foothold for further attacks on the network. Regularly patching systems helps create a more robust and secure network environment, reducing the risk of lateral movement by attackers.
Hotpatch For Windows Server: Azure Edition
Windows Server has historically required a reboot to apply operating system updates, causing administrative toil and downtime for single-node applications. Hotpatch for Windows Server: Azure Edition resolves this issue. Administrators can configure platform-managed automatic patching so that Windows Server workloads are kept up-to-date automatically with no downtime required. Operating system subsystems are patched in-memory, and hotfixes and select updates are applied without restarting the server or impacting the applications running on it.
Performance Test
To measure performance, we performed transaction processing tests using the GigaOm Transactional Field Test. This test workload is derived from the TPC-E Benchmark. Note that the GigaOm Transactional Field Test is not comparable to published TPC-E Benchmark results, as this implementation does not comply with all requirements of the TPC-E Benchmark.
The GigaOm Transactional Field Test is an OLTP workload. It is a mixture of read-only and update-intensive transactions that simulate the activities found in complex OLTP application environments. The database schema, data population, transactions, and implementation rules have been designed to broadly represent modern OLTP systems. The benchmark exercises a breadth of system components associated with such environments.
The Field Test simulates the transactional workload of a brokerage firm with a central database that executes transactions related to the firm’s customer accounts. The data model consists of 33 tables, 27 of which have a total of 50 foreign key constraints. The GigaOm Transactional Field Test results are relevant to the operational functions of an organization, many of which are driven by SQL Server and frequently serve as the source for operational, interactive business intelligence (BI).
For our testing, we deployed the GigaOm Transactional Field Test data set of 800,000 customers with a scale factor of 500 and 300 days of trading history. We conducted throughput testing using Benchcraft, Microsoft’s internal SQL benchmarking tool, and ran each test for four hours. The load generation servers (user and market simulation) were pinned to processors 25-32, and SQL Server was configured with processor affinity for processors 1-24. Note that Microsoft SQL Server Standard is limited to 24 vCores.
System Configuration
Table 1 shows the configuration of each system used for testing. The IOPS for each primary data volume was selected by dividing the total available IOPS in the instance class by 3. Data volumes 1 through 3 were configured as a striped simple volume in Windows Storage Spaces to create the primary SQL Server data volume. Data volumes 4 and 5 were configured the same way to create the SQL logs drive, and data volumes 6 and 7 to create the tempdb drive.
Table 1: System Configuration
| | Azure | AWS |
|---|---|---|
| VM SKU | E32as_v6 | r7a.8xlarge |
| vCPUs | 32 | 32 |
| RAM | 256 GB | 256 GB |
| Processor Generation | 4th Gen AMD EPYC | 4th Gen AMD EPYC |
| Available Storage IOPS | 57,600 | 40,000 |
| OS Disk | 256 GB SSD | 256 GB SSD |
| Data Disks 1, 2, 3 | Premium SSD v2 | EBS gp3 SSD |
| Storage | 15,360 GB | 15,360 GB |
| IOPS | 19,200 | 13,333 |
| Throughput | 300 MB/s | 300 MB/s |
| Data Disks 4, 5, 6, 7 | | |
| Storage | 1,024 GB | 1,024 GB |
| IOPS | 5,000 | 5,000 |
| Throughput | 125 MB/s | 125 MB/s |

Source: GigaOm 2024
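The per-volume IOPS figures in Table 1 follow from the allocation rule described above: the instance's total available storage IOPS is divided by 3 for each primary data volume. A minimal sketch of that arithmetic (the helper name is ours, not a platform API; the figures come from Table 1):

```python
# Sketch of the IOPS allocation rule from the System Configuration section:
# each primary data volume gets one third of the instance's total storage IOPS.

def per_volume_iops(total_instance_iops: int, stripe_width: int = 3) -> int:
    """Divide the instance's available storage IOPS across the striped data volumes."""
    return total_instance_iops // stripe_width

# Figures from Table 1.
azure = per_volume_iops(57_600)  # Azure E32as_v6
aws = per_volume_iops(40_000)    # AWS r7a.8xlarge

print(azure, aws)  # 19200 13333
```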
Total Cost of Ownership Measurement
We use public pricing data to calculate the three-year TCO of the as-tested configurations for the Windows Server workload on each platform.
In addition, we compute the price-performance ratio for each platform, as core-for-core comparisons are not always accurate assessments of the value proposition for each VM class. We then normalize this price-performance ratio based on the on-premises operating costs, showing the dollar-for-dollar performance value of money spent on-premises versus in the cloud.
Pricing is calculated based on the configuration used to run the GigaOm Transactional Field Test workload.
4. Field Test Results
Migration
In our hands-on testing of Azure Migrate and AWS Application Migration Service, each platform’s tooling worked as expected. We generally preferred the user experience of the Azure Migrate solution; however, we did not find this to be a platform differentiator, as AWS Application Migration Service performed ably as well.
Both tools get VMs from point A to point B, and in both cases, this is a one-off task that, after migration, does not materially affect the operational complexity or cost of the environment. There is some overlap between migration tooling and disaster recovery solutions, but for the purpose of this benchmark, we are specifically assessing migration of servers into the cloud.
We recommend that IT organizations take a thoughtful approach to migration, especially if they want to leverage the hot patching capabilities of Azure. Consider a step-based approach, where the bulk of servers are migrated using tooling, and then over time, as a company modernizes to auto-scaling sets and additional cloud-native features, it can redeploy these workloads on Azure-specific virtual machine SKUs that support hot patching.
Operational Readiness – Patch Management
Both platforms provide managed services to orchestrate patch management for servers. They are AWS Systems Manager (SSM) Patch Manager and Azure Update Manager.
Microsoft describes Azure Update Manager as follows:
Azure Update Manager is a service that helps manage updates for all your machines, including those running on Windows and Linux, across Azure, on premises, and on other cloud platforms. Monitor update compliance from a single dashboard. Make updates in real time, schedule updates within a maintenance window, or automatically update during off-peak hours.
AWS describes Systems Manager Patch Manager as follows:
AWS Systems Manager helps you select and deploy operating system and software patches automatically across large groups of cloud or on-premises instances and edge devices. Through patch baselines, you can set rules to auto-approve select categories of patches to be installed, such as operating system or high severity patches, and specify a list of patches that override these rules and are automatically approved or rejected. You can also schedule maintenance windows for your patches so that they are only applied during preset times. Systems Manager helps ensure that your software is up-to-date and meets your compliance policies.
Both solutions allowed us to configure a patch baseline and create a deployment schedule or manually deploy patches.
Azure Update Manager further expands on this functionality, allowing us to enable fully managed patching and updating by the Azure platform. Additionally, for VMs running Windows Server: Azure Edition with Hotpatch, this platform-managed patching can automatically apply updates without requiring restarts to become active.
The best way to deploy Azure Edition servers is by directly deploying them on Azure and then deploying workloads on them. Migrated servers must first be updated to the Azure Edition using an ISO, then a series of scripts run to enable hot patching. Enabling Hotpatch this way comes with a number of caveats:
- Hotpatch configuration isn’t available via Azure Update Manager.
- Hotpatch can’t be disabled.
- Automatic Patching orchestration isn’t available.
- Orchestration must be performed manually (for example, using Windows Update via SConfig).
To enable manual hot patching on our migrated servers, we followed the guide provided by Microsoft: Enable Hotpatch for Azure Edition virtual machines built from ISO.
Figure 2 shows the PowerShell commands used to configure Hotpatch on a Windows Server 2022: Datacenter instance that has been upgraded to Azure Edition using the ISO installer.
Figure 2: Hotpatch Configuration Scripts
Hot patching applies OS security updates on supported Windows Server Datacenter: Azure Edition virtual machines (VMs) without requiring a system reboot after installation. It achieves this by applying patches to the in-memory code of actively running processes, eliminating the need for process restarts.
Figure 3 shows the configuration screen to configure Hotpatch for an Azure VM.
Figure 3: Enable the Hotpatch Service
Hot patching minimizes the impact on system workloads by reducing the frequency of necessary reboots. This translates to improved system availability and performance.
Hotpatch updates are specifically designed for Windows security patches, ensuring quicker installation without the disruption of system restarts. This contributes to a higher level of protection against security threats.
Hot patching helps reduce the time during which systems are exposed to security vulnerabilities and minimizes the required change windows for updates. It also streamlines patch orchestration through Azure Update Manager.
How Hotpatch Works
With Hotpatch, your organization can better maintain its security posture. Staff can now apply critical security updates promptly, without needing to plan for disruptive reboots. Likewise, Hotpatch can reduce the number of reboots required for processing other Windows and application updates. These benefits reduce downtime and simplify administration.
In addition, Azure offers an industry-leading cloud security suite in Microsoft Defender for Cloud. This gives your SOC a single pane of glass to secure cloud-based servers, storage, databases, and more.
Further enhancing the management experience for Windows Server workloads is Azure Arc, which allows flexible management of Azure, hybrid, and multicloud workloads in one place.
The hot patch process operates by initially establishing a baseline using the current Cumulative Update for Windows Server. Periodically, typically every three months, this baseline is refreshed with the latest Cumulative Update. Subsequently, hot patches are issued for the following two months. For instance, if January receives a Cumulative Update, February and March will see hot patch releases.
Figure 4 shows an example annual schedule with a three-month baseline cadence (including example unplanned baselines due to zero-day fixes). See more on Microsoft’s website: Reference.
Figure 4. Example Annual Schedule with a Three-Month Baseline Cadence
Since Hotpatch modifies the in-memory code of running processes without necessitating a process restart, your applications remain unaffected during the patching process. This action is separate from any potential performance or functionality implications associated with the patch itself.
There are two distinct types of baselines:
Planned baselines follow a regular release cadence, interspersed with hot patch releases. These planned baselines incorporate all updates present in a comparable latest cumulative update for that month and necessitate a system reboot.
- The sample image in Figure 4 illustrates four planned baseline releases in a calendar year, with a total of five in the diagram, alongside eight hot patch releases.
Unplanned baselines are released when an essential update, such as a zero-day fix, is made available and cannot be delivered as a hot patch. In such cases, the hot patch release for that month is replaced by an unplanned baseline. Like planned baselines, unplanned baselines include all updates from a comparable Latest Cumulative Update for that month and also require a system reboot.
- The sample schedule illustrates two unplanned baselines that would replace the hot patch releases for those months (the actual number of unplanned baselines in a year isn’t known in advance).
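The cadence described above (a quarterly planned baseline, hot patches in the intervening months, and unplanned baselines displacing hot patches when a zero-day fix lands) can be modeled in a few lines. This is an illustrative sketch only: the helper name, month numbering, and zero-day months are our assumptions, not part of Microsoft's schedule.

```python
# Illustrative model of the Hotpatch release cadence described above.
# Planned baselines land quarterly; the two months after each baseline get
# hot patches, unless a zero-day fix forces an unplanned baseline that month.

def release_type(month: int, zero_day_months=frozenset()) -> str:
    """Classify a month (1-12) in the example Hotpatch schedule."""
    if month % 3 == 1:  # January, April, July, October in this sketch
        return "planned baseline (reboot required)"
    if month in zero_day_months:
        return "unplanned baseline (reboot required)"
    return "hotpatch (no reboot)"

# Example year with hypothetical zero-day fixes in May and November.
for m in range(1, 13):
    print(m, release_type(m, zero_day_months={5, 11}))
```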
Supported Updates
Hot patch functionality covers Windows security updates and keeps parity with the content of security updates delivered through the standard, non-hotpatch Windows Update channel.
Keep some important considerations in mind when running a Windows Server Azure Edition virtual machine (VM) with Hotpatch enabled. Periodic reboots are still required to install updates not covered by the Hotpatch program, and a scheduled reboot is required after each new baseline deployment to stay in sync with the non-security patches in the latest cumulative update.
Patches currently excluded from the Hotpatch program include non-security updates for the Windows operating system, updates to the .NET Framework, and non-Windows updates such as drivers and firmware. These categories may require a reboot even during months in which hot patch updates are applied.
Patch Orchestration Process
Hotpatch extends the Windows Update framework and conventional orchestration procedures. Consider the following when orchestrating Hotpatch updates:
In the Azure environment, virtual machines created within Azure benefit from Automatic VM Guest Patching by default when using a supported Windows Server Datacenter: Azure Edition image. There are a couple of key aspects of Automatic VM Guest Patching in Azure to consider, enumerated below.
For automatic download and application of Critical or Security classified patches on the virtual machine:
- Patch application occurs during off-peak hours according to the VM’s time zone.
- Patch orchestration is managed by Azure, with a primary focus on availability-first principles.
- Continuous monitoring of virtual machine health, utilizing platform health signals, facilitates the early detection of patching failures.
In the Azure Stack HCI context, Hotpatch updates for virtual machines created on Azure Stack HCI are orchestrated through:
- Utilization of Group Policy to configure the Windows Update client settings.
- Configuration of Windows Update client settings or the employment of SCONFIG for Server Core environments.
- Alternatively, the adoption of a third-party patch management solution.
Understanding the Patch Status for Your VM in Azure
The interface shown in Figure 5 displays the hot patch status and lists pending updates, excluding critical and security updates, which are deployed automatically through Automatic VM Guest Patching and require no further user intervention. Noncritical patches, visible under the “Update compliance” tab, require manual oversight and are not installed automatically. The “Update history” view provides a 30-day record of update deployments, including detailed patch installation records.
Automatic VM Guest Patching continually assesses the VM for pending updates, ensuring prompt detection and installation of new patches; the last assessment timestamp is visible in Figure 5. For an immediate evaluation, the “Assess now” option triggers an ad hoc assessment, with results available once it completes.
For direct patch application, the “Install updates now” function permits manual installation of updates, filtered by patch classification or by specific knowledge base articles. Note that on-demand installations may not follow availability-first principles, potentially requiring additional reboots and consequent VM downtime.
For local patch verification, the “Get-HotFix” PowerShell command will display the installed updates. Note that Hotpatch updates cannot be rolled back.
Figure 5. Updates Preview
Performance Testing
To assess performance, we ran our GigaOm Transactional Field Test benchmark on Microsoft SQL Server Standard Edition. Azure employed the new Easv6 instances powered by AMD 4th Generation EPYC™ 9004 processors, while AWS was tested with R7a EC2 instances, also using 4th Generation AMD processors.
As shown earlier in Table 1, the new Azure Easv6 VM class offers a maximum storage IOPS of 57,600, 44% higher than the AWS instance. For storage-based workloads, this directly translates into additional performance per dollar.
Figure 6. GigaOm Transactional Field Test: Transactions per Second
The chart in Figure 6 shows that Azure holds a significant advantage over AWS in overall throughput for the transaction processing workload. Transactions per second (tps) for Azure generally ran about 50% higher than for AWS on similarly configured systems.
This result is higher than the rated 44% IOPS advantage (19,200 vs 13,333, as shown in Table 1) for Azure over AWS and is likely a product of the higher boost clock of the Azure instance compared to the AWS instance. Looking across all runs, the average throughput for Azure was 685.1 tps, which is 51% higher than the throughput for AWS at 453.9 tps.
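Working the cited figures through (throughput averages from our runs, rated IOPS from Table 1):

```python
# Throughput and rated-IOPS advantages cited above.
azure_tps, aws_tps = 685.1, 453.9      # average transactions per second
azure_iops, aws_iops = 19_200, 13_333  # provisioned per-volume IOPS, Table 1

tps_advantage = azure_tps / aws_tps - 1     # measured throughput gap
iops_advantage = azure_iops / aws_iops - 1  # rated storage gap

print(f"throughput advantage: {tps_advantage:.0%}")   # 51%
print(f"rated IOPS advantage: {iops_advantage:.0%}")  # 44%
```

The measured throughput gap exceeding the rated storage gap is what points to the CPU (boost clock) contribution discussed above.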
TCO Analysis
Table 2 depicts the monthly costs of the primary cost components of the cloud systems used in the performance benchmark. Note that cloud environments have numerous ancillary costs, such as data egress, networking services, and other subscriptions, which, at a high level, are largely comparable across environments.
Based on our configuration, the overall TCO for Azure is nearly 9% lower than the TCO for AWS, yielding a three-year net savings of $28,642.94.
Table 2. 3-Year System TCO Breakout
| | Azure | AWS |
|---|---|---|
| E32as_v6 instance | $1,603.77 | |
| r7a.8xlarge instance | | $2,001.57 |
| SQL Server Standard Edition (included with instance) | $2,336.00 | $2,803.20 |
| Data Volumes 1-3 | $3,969.65 | $3,902.40 |
| Data Volumes 4-7 | $369.79 | $367.68 |
| Total Monthly | $8,279.21 | $9,074.85 |
| 3-Year TCO | $298,051.66 | $326,694.60 |

Source: GigaOm 2024
Figure 7 breaks out the three-year spend by component. The chart shows that the bulk of the cost advantage for Azure comes from reduced instance and SQL Server licensing costs.
Figure 7. Three-Year Spend by Component
Note that the cost savings for Azure are even more dramatic for organizations that can take advantage of Azure Hybrid Benefit (AHB), which reduces the cost of licenses for those moving on-premises systems to the cloud. Azure Hybrid Benefit specifically addresses organizations planning to continue their on-premises Windows Server and Microsoft SQL Server licenses with Software Assurance. For more information on AHB, see Microsoft’s website.
For purposes of this analysis, we do not factor in AHB pricing. All prices are based on standard licensing terms and can be accessed from the vendor’s websites.
Price-Performance
Next, we apply the overall performance results from our benchmark tests against the computed TCO to derive a normalized cost per unit of performance between the two platforms. For this, we take the average tps produced from our three series of tests, each consisting of 240 individual runs. For AWS, the average measured throughput was 453.9 tps, while Azure produced a 51% gain in performance with 685.1 tps.
The resulting price-performance analysis yields a significant advantage for Azure over AWS. Azure produced a price-performance figure of $435.05, while AWS produced price-performance that was 65% higher, at $719.70. (Figure 8)
Figure 8. Overall Price-Performance
One of the key contributors to this superior price performance is Azure Boost, a technology that offloads virtualization processes onto specialized hardware.
Finally, Table 3 outlines the calculations used to estimate the price to performance relationship between the Azure and AWS environments.
Table 3. Price to Performance Comparison Chart
| | Azure | AWS | Cost Savings vs AWS |
|---|---|---|---|
| 3-Year TCO (USD) | $298,051.66 | $326,694.60 | $28,642.94 |
| Transactions per Second | 685.100 | 453.931 | |
| Price-Performance (USD) | $435.05 | $719.70 | $284.65 |
| % Price-Performance Savings vs. AWS | | | 39.55% |

Source: GigaOm 2024
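The price-performance figures follow directly from the three-year TCO and the average throughput results; the arithmetic can be reproduced as follows:

```python
# Reproduce the price-performance figures from the 3-year TCO and the
# average transactions-per-second results reported above.
azure_tco, aws_tco = 298_051.66, 326_694.60  # 3-year TCO (USD)
azure_tps, aws_tps = 685.100, 453.931        # average throughput (tps)

azure_pp = azure_tco / azure_tps  # dollars per unit of sustained tps
aws_pp = aws_tco / aws_tps

tco_savings = aws_tco - azure_tco                  # absolute 3-year savings
pp_savings_pct = (aws_pp - azure_pp) / aws_pp      # relative price-performance gap

print(f"Azure price-performance: ${azure_pp:,.2f}")  # $435.05
print(f"AWS price-performance:   ${aws_pp:,.2f}")    # $719.70
print(f"3-year TCO savings: ${tco_savings:,.2f}")    # $28,642.94
print(f"price-performance savings vs AWS: {pp_savings_pct:.2%}")  # 39.55%
```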
5. Conclusion
Choosing a cloud platform represents a complex decision matrix that involves dozens of stakeholders, variable costs, and critical tradeoffs between product feature sets. We explored the two leading platforms for cloud-based migration, Microsoft Azure and AWS, to better understand how they compare against each other as candidates for hosting Windows Server workloads.
Our finding: Windows Server 2022: Datacenter – Azure Edition running on Microsoft Azure provides lower costs, higher performance, and exclusive features compared to AWS. Our benchmark testing shows that Azure enjoys a 50% advantage in transaction processing throughput over AWS while offering three-year TCO that is 9% less expensive.
The result: Azure produced a significant price-performance advantage over AWS, delivering approximately 65% more performance per dollar for transaction processing workloads.
For organizations relying heavily on Windows Server workloads, we have identified that there is a strong value proposition to migrate and deploy these workloads on Azure for the platform-exclusive features available through Windows Server 2022: Datacenter – Azure Edition.
Features available only on Windows Server: Azure Edition are significant to both operational efficiency and security posture, and if relevant to your architecture, they could significantly impact the overall cost of a cloud migration. Additionally, Azure has strong licensing benefits for both Windows Server and Microsoft SQL server, which can tip the scales in Microsoft’s favor for the performance-normalized total cost of ownership for an environment.
6. Appendix
Microsoft SQL Server Enterprise Results
In addition to testing transaction throughput on SQL Server Standard Edition on Azure and AWS, we ran an additional GigaOm Transactional Field Test on SQL Server Enterprise Edition using the same hardware configuration; however, the settings differed slightly, so the results are not directly comparable with SQL Server Standard results. As Figure 9 shows, Microsoft SQL Server Enterprise produced a similar performance advantage for Azure, with 58% higher throughput than AWS.
Figure 9. SQL Server Enterprise Results
The results from SQL Server Enterprise Edition are lower than those from the SQL Server Standard Edition testing for two reasons:
- SQL Server Enterprise Edition uses all 32 processor cores, so our load generation servers, which were previously running on dedicated processor cores on the same machine, now have other workloads to contend with.
- SQL Server Enterprise Edition was tested with a full-scale dataset, using 500 scale factor and 300 days of history, while SQL Server Standard Edition was tested with only 30 days of history. Each transaction required more work to complete compared to the abridged data set.
Links
Links to pricing and other relevant sources:
- Easv6 and Eadsv6-series (Preview)
- Amazon EC2 R7a Instances
- Azure Pricing Calculator (Using v5 VM as v6 is not yet available)
- SQL Server Standard Virtual Machines pricing (Azure Easv6 Pricing)
- AWS Pricing Calculator
Links to information about the SMB over QUIC protocol:
Link to database migration types:
https://learn.microsoft.com/en-us/windows-server/get-started/hotpatch#how-hotpatch-works
7. About Eric Phenix
Eric Phenix is Engineering Manager at GigaOm and responsible for our cloud platforms and guiding the engineering behind our research. He has worked as a senior consultant for Amazon Web Services, where he consulted for and designed both systems and teams for over 20 Fortune 1000 enterprises; and as cloud architect for BP, where he helped BPX Energy migrate their process control network from on-premises to AWS, creating the first 100% public cloud control network, operating over $10 billion in energy assets in the Permian Basin.
8. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
9. Copyright
© Knowingly, Inc. 2025 "GigaOm Benchmark: Migrating Windows Server Workloads to Azure" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.