This GigaOm Research Reprint Expires Mar 5, 2025

High-Volume Data Replication v1.0

Evaluating Fivetran HVR and Qlik Replicate

1. Executive Summary

This report was commissioned by Fivetran.

Whether for operational or analytical purposes, databases are the backbone of how many businesses run, from collecting consumer behavior on your website to processing IoT data across your supply chain and much more. Accessing and replicating massive volumes of database content is key to business success, and the responsibility of managing this crucial element of your infrastructure falls to data leaders and their teams.

Ensuring your solution for database replication can keep up with your business is a pressing need for every data leader across every industry and company size. In this report, we investigate two major vendors in database replication and put them to the test in terms of speed and cost.

Behind the Scenes: How it Works
The process of locating and recording modifications to data in a database and instantly sending those updates to a system or process downstream is known as data replication or change data capture (CDC).

Data is extracted from a source, optionally transformed, and then loaded into a target repository—such as a data lake or data warehouse. Ensuring that all transactions in a source database are recorded and instantly transferred to a target keeps the systems synchronized and facilitates dependable movement of data between on-premises sources and the cloud with minimal to no downtime.

CDC, an incredibly effective method for moving data across technologies, is essential to modern cloud architectures. The real-time data transfer accelerates analytics and data science use cases. Enterprise data architectures utilize CDC to efficiently power continuous data transport between systems. Log-based CDC uses a database’s transaction log to capture changes and replicate them downstream.

Using competing technologies Fivetran HVR and Qlik Replicate, our scenario assessed the replication latency and the total cost of ownership (TCO) of syncing 50 GB to 200 GB per hour of change data between a source Oracle database and a target Snowflake data warehouse using log-based CDC on the source. These tests simulate scenarios commonly encountered by large enterprises when utilizing technologies for log-based CDC.

At 200 GB/hour, Fivetran HVR produced 27x lower latency and proved 63% less costly than Qlik Replicate.

In this study, we found significant differences in replication latency and total cost of ownership between Fivetran HVR and Qlik Replicate.

  • Fivetran HVR showed a flat linear trend in replication latency as volumes increased, while Qlik Replicate showed an accelerated growth trend in replication latency with larger redo log change data volumes. The replication latency difference between Fivetran HVR and Qlik Replicate increased with greater change data volumes: Fivetran HVR showed 27 times lower latency at 200 GB/hour than Qlik Replicate.
  • Total Cost of Ownership (TCO) calculations reveal that Fivetran HVR is less expensive than Qlik Replicate across all tested volumes. When using these two high-volume data replication platforms for a year, Fivetran HVR is 63% less expensive than Qlik Replicate at 200 GB/hour.

2. Platform Summary

When examining change data capture technologies, here are the three most important criteria to consider:

  • Ability to maintain consistency and ensure recoverability across systems via CDC transaction processing.
  • Support for asynchronous log-based CDC, the ideal method to capture changes without impacting database performance.
  • Total cost of ownership, covering not only solution licensure but also infrastructure requirements and compute or other costs associated with the target software.

CDC enables many use cases, such as real-time analytics; data consolidation and centralization; and backup, failover, and data recovery. Finding a platform that supports short-term and long-term business needs via log-based CDC is important. In this study, we examine two viable CDC technologies: Fivetran HVR and Qlik Replicate.

Fivetran HVR
Fivetran HVR is an enterprise-grade data replication and validation solution that works across complex, heterogeneous environments. It works with major on-premises and cloud technologies, data lake and data warehouse solutions, and various data formats. This flexibility makes it ideal for a wide range of analytical, operational, and transactional use cases, such as AI/ML workloads, predictive analytics, and cloud migration. Additionally, Fivetran HVR supports a broad array of topologies, including uni-directional, broadcast, consolidation, and cascading.

Fivetran HVR improves the efficiency of data replication by capturing only relevant changes from the source and compressing the data as soon as it is captured. Data always moves in a compressed format from the source and is only decompressed upon delivery into the target. To deliver data into a target, Fivetran HVR uses high-speed native technologies, including clustered file systems, staging tables, and other technology-appropriate options for maximum performance. With end-to-end encryption, cross-platform data validation and repair, and rich monitoring and reporting, Fivetran HVR is a reliable, secure, and scalable data platform built for the enterprise.

Qlik Replicate
Qlik Replicate (formerly Attunity Replicate) replicates data from a source database to a target database, and the configuration of these replication jobs can vary. Qlik Replicate enables the creation of replicas of production endpoints, the efficient and rapid loading of data to operational data stores and warehouses, and the distribution of data among endpoints.

Using a scalable multi-server, multi-task, and multi-threaded architecture, Qlik Replicate is designed to support and grow with large-scale enterprise data replication scenarios. The system includes a replication server and an online console for transferring data between heterogeneous and homogeneous data sources, and it gives users instant access to current and historical task, status, performance, and resource usage data.

3. Test Setup

The setup for this field test was informed by our field experience in the data integration market. The tests were executed using the following setup, environment, standards, and configurations.

Testing Design and Measurements

The purpose of our testing was to measure the comparative performance of both platforms at high rates of change data per hour. We designed a field test that executes a workload on our source database, generating a large amount of change data over a short period, and takes several measurements:

  • Replication latency (lag): The time delta between when the source (Oracle) data change workload stopped and when replication to our target database (Snowflake) completed.
  • Rate of change data: The growth of the Oracle redo log in GB per hour during the change data workload (a measurement sketch follows this list).
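For readers who want to reproduce the redo-rate measurement, the following sketch shows one way to derive it from Oracle’s cumulative “redo size” statistic in v$sysstat by snapshotting the value before and after the workload. The redo_snap table and the overall approach are our own illustration, not part of either vendor’s tooling.

-- Illustrative only: snapshot the cumulative "redo size" statistic around the workload.
create table redo_snap (label varchar2(10), snap_time timestamp, redo_bytes number);

insert into redo_snap
  select 'start', systimestamp, value from v$sysstat where name = 'redo size';

-- ... run the 15-minute HammerDB workload here ...

insert into redo_snap
  select 'end', systimestamp, value from v$sysstat where name = 'redo size';

-- Redo generated during the run, scaled to GB/hour (0.25 hours for a 15-minute run).
select round((e.redo_bytes - s.redo_bytes) / power(1024, 3), 1)        as redo_gb,
       round((e.redo_bytes - s.redo_bytes) / power(1024, 3) / 0.25, 1) as redo_gb_per_hour
from   redo_snap s, redo_snap e
where  s.label = 'start' and e.label = 'end';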

These metrics became the basis for our performance and TCO comparison between these two products. To meet our testing objectives, we used the following architecture:

Figure 1. Testing Architecture

We installed Oracle databases on separate AWS EC2 instances and used the HammerDB benchmarking tool to create a TPC-C-like database. We also created additional wide “detail” tables and triggers to expand the amount of change data produced by each HammerDB transaction (see Appendix). We installed each platform’s software on its own EC2 instance and installed a Fivetran High-Volume Agent on the Oracle database server, which Fivetran HVR was configured to use to capture and synchronize changes. We created separate target databases and warehouses in our Snowflake account to receive the change data from each platform.

Testing Workloads

We used HammerDB to generate high volumes of redo log changes in our source Oracle databases. As the name implies, HammerDB is often used to hit the database as hard as possible. It does not have a transaction rate limiter. However, we achieved variable redo log growth rates by adjusting the number of HammerDB threads (i.e., virtual users) and TPC-C scale factor (i.e., warehouses). We found a linear correlation between TPC-C transactions per minute and redo log growth per hour.

Figure 2. HammerDB TPC-C Transactions Per Minute vs. Oracle Redo Log Growth

We also added multiple database schemas (i.e., Oracle tablespaces) to divide the transaction workload to achieve lower redo log change rates. For example, we found that we produced roughly 100 GB/hour of redo log changes with TPC-C warehouses at a scale factor of 10 and HammerDB set to 1 virtual user thread. We could then divide that work in half between two Oracle tablespaces (50 GB/hour each) and sync only one of the tablespaces to achieve approximately 50 GB/hour.
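A minimal sketch of that split follows. The tablespace and schema names, the password, and the datafile sizing are illustrative only, and the datafile clauses assume Oracle-managed files; HammerDB is then pointed at both schemas while only one is added to the replication channel.

-- Illustrative sketch of splitting the TPC-C workload across two schemas/tablespaces.
-- Names and sizes are examples only; omitting file names assumes Oracle-managed files.
create bigfile tablespace tpcc2 datafile size 100g autoextend on;

create user tpcc2 identified by "ChangeMe_123"
  default tablespace tpcc2 quota unlimited on tpcc2;
grant connect, resource to tpcc2;

-- HammerDB builds and drives a TPC-C schema in each user/tablespace, while only
-- one schema (for example, TPCC2) is added to the replication channel, roughly
-- halving the change volume that is actually synced.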

For readers wishing to replicate our testing, we iterated through the HammerDB and Oracle configurations detailed in Table 1 to run our tests. We achieved redo log growths as low as 24 GB/hour and as high as 212 GB/hour using the extended TPC-C database (detail provided in the Appendix). For more information about Snowflake warehouse sizing, go to this link: https://docs.snowflake.com/en/user-guide/warehouses-overview.

Table 1. HammerDB and Oracle Test Iterations

Configuration | Iterations
Oracle EC2 Instance Sizes | r5b.4xlarge (16 vCPU, 128 GB RAM); r5b.8xlarge (32 vCPU, 256 GB RAM)
Snowflake Warehouse Size | Small (2 credits/hour)
Oracle Tablespaces | 1, 2, and 3
TPC-C Scale Factors | 10, 20, and 30 warehouses
HammerDB Virtual User Threads | 1, 2, 5, 10, 20, and 30
Test Durations | 15 and 60 minutes
Source: GigaOm 2024

Other Oracle configurations we used were:

  • Version 19c
  • Oracle Linux 8
  • Total memory size set to 90% of total EC2 instance memory
  • Redo log size set to 1 GB
  • Supplemental log data added (see the SQL sketch following this list)
  • Single tenant (non-container database without pluggable databases)
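For reference, the supplemental logging and redo log settings in the list above correspond to statements along the following lines; the group numbers are illustrative, and omitting the file specifications assumes Oracle-managed files.

-- Illustrative statements for the settings above; group numbers and file placement
-- depend on the instance, and omitting file names assumes Oracle-managed files.
alter database add supplemental log data;
alter database add logfile group 4 size 1g;
alter database add logfile group 5 size 1g;
alter database add logfile group 6 size 1g;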

All other setups were completed following vendor documented steps and recommendations so that configurations were as close to “out of the box” as possible.

Testing Specifications

Table 2 details the testing environment configurations. Installation downloads were obtained from the AWS Marketplace (Qlik Replicate) and the Fivetran Account Dashboard (Fivetran HVR).

Table 2. Test Configurations

Configuration | Qlik Replicate | Fivetran HVR
Version | November 2023 | 6.1.5
Release Date | November 14, 2023 | November 21, 2023
Installation Download | AWS Marketplace | Fivetran Account Dashboard
Hub Operating System | Microsoft Windows Server 2019 Base | Ubuntu 22.04
EC2 Instance Size | m5a.4xlarge | c6id.4xlarge
EC2 CPU + Memory | 16 vCPU, 64 GB RAM | 16 vCPU, 32 GB RAM
EC2 On-Demand Rate (per hour) | $0.6880 | $0.8064
Hub Disk Storage | EBS gp3, 3,000 IOPS | EC2 locally attached SSD
Additional Settings | N/A | CycleByteLimit=0
Source: GigaOm 2024

Note that the Qlik Replicate offering in the AWS Marketplace does not have an option for an EC2 instance with locally-attached SSD storage.

4. Performance Test Results

The following section details the results of our performance testing using the methods and configurations detailed above. We took many measurements during our testing, and the most relevant and revealing was replication latency, representing the lag between when the HammerDB workload completed and when replication to our Snowflake databases completed.

While testing, we specifically targeted redo log rates of 50, 100, and 200 GB/hour. Without a rate limiter, and given the inherent variability of HammerDB results, it was virtually impossible to produce exact redo log change rates. Therefore, we completed nearly 30 test runs that produced changes in the range of 50 to 200 GB/hour.

In preliminary testing, we found that 15-minute and 60-minute test durations produced linearly correlated results, so we used a 15-minute test duration for the total cost of ownership analysis in the next section.

Figure 3 is a scatter plot of all our results, comparing the measured rate of redo log changes against the replication latency (lag) after the 15-minute HammerDB workload had completed.

Figure 3. Latency vs. Redo Log Rate

We noticed a clear pattern in the data and drew trendlines to highlight it. Fivetran HVR had a flat linear trend, with minor latency increases as we raised the rate of redo log changes. Conversely, with Qlik Replicate we found that latency not only increased with the redo log change rate, it accelerated. As shown in Figure 3, a power series curve provided the best fit. Table 3 summarizes our findings at our target redo log change rates.

Table 3. Latency at 50, 100, and 200 GB/hour Redo Log Changes

Redo Log Rate | Qlik Lag | Fivetran Lag | Fivetran Advantage
50 GB/hour | 2.8 min | 2.1 min | 1.3x
100 GB/hour | 15.3 min | 2.5 min | 6x
200 GB/hour | 83.4 min | 3.1 min | 27x
Source: GigaOm 2024
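The advantage column is simply the ratio of the two lag figures. At 200 GB/hour, for example, 83.4 minutes divided by 3.1 minutes is roughly 27, which is where the 27x figure cited throughout this report comes from.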

5. Total Cost of Ownership

TCO for a data integration platform refers to the comprehensive cost of acquiring, implementing, and maintaining the platform throughout its lifecycle, including initial costs, ongoing expenses, and potential hidden costs. It encompasses factors such as licensing, infrastructure, training, support, and other expenses associated with the platform to provide a holistic view of its economic impact on an organization. Since our study uses cloud infrastructure platforms with consumption-based pricing, we chose to zero in on total usage costs within the overall TCO umbrella. Our total usage costs included the EC2 instances, replication software, and Snowflake compute detailed in the calculations below.

Calculations

Assumptions
Our total usage cost is based on a number of assumptions. First, as noted in the Performance Test Results section, we found that 15-minute and 60-minute test durations produced linearly correlative results. As such, we projected the 15-minute test duration when calculating figures for the once-per-hour, 24-hours/day pattern. The following scenario and assumptions were made:

  • One-year period
  • Data replication is performed once an hour, 24 hours per day
  • The data replication job must complete within the hour; otherwise, an additional hub is counted to partition the workload
  • Snowflake warehouse set to automatically suspend after five minutes of inactivity (see the SQL sketch after this list)
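The suspension behavior in the last assumption is set on the warehouse itself; a minimal sketch follows, in which the warehouse name REPL_WH is an example rather than the one used in testing.

-- Illustrative: suspend the target warehouse after 300 seconds (five minutes) of
-- inactivity and let it resume automatically when the next sync arrives.
alter warehouse repl_wh set auto_suspend = 300 auto_resume = true;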

EC2 Instances
Table 4 details the EC2 instance costs for running each competitor’s hub software using reserved instance one-year all-upfront pricing in the US West 2 region.

Table 4. AWS EC2 Instance Costs

AWS EC2 Instance Cost | Qlik Replicate | Fivetran HVR
EC2 Instance Size | m5a.4xlarge | c6id.4xlarge
1-Year Hardware Cost | $4,155 | $3,544
Source: GigaOm 2024

While testing, we found that our Fivetran HVR hub had plenty of CPU and memory headroom remaining even during the 200 GB/hour tests, so we could have used a smaller, cheaper EC2 instance size. For the sake of fairness, we kept the tested instance sizes for both Qlik Replicate and Fivetran HVR in our cost calculations rather than downsizing the Fivetran HVR hub.

Qlik Replicate
For Qlik Replicate, we used the annual software license offering from the AWS Marketplace with a 21% discount over the standard hourly rate. At 200 GB/hour, Qlik Replicate was unable to complete the sync in under one hour, so we priced in an additional instance. Table 5 shows this.

Table 5. Qlik Replicate Costs

Qlik Replicate Costs | 50 GB/hour | 100 GB/hour | 200 GB/hour
EC2 Instance Size | m5a.4xlarge | m5a.4xlarge | m5a.4xlarge
Quantity | 1 | 1 | 2
1-Year Software Cost | $41,320 | $41,320 | $82,640
Source: GigaOm 2024

Fivetran HVR
Fivetran uses Monthly Active Rows (MAR) to calculate pricing. We calculated the number of active rows by using the Fivetran Dashboard, which is connected to our hub and records the MAR for billing purposes. The results are in Table 6.

Table 6. Fivetran HVR Costs

Fivetran HVR Costs | 50 GB/hour | 100 GB/hour | 200 GB/hour
Monthly Active Rows | 2,969,530 | 5,939,061 | 11,878,122
Pricing Figure | Figure 4 | Figure 5 | Figure 6
1-Year Software Cost | $21,053 | $33,600 | $52,710
Source: GigaOm 2024

Figures 4, 5, and 6 show the Fivetran pricing curve at 50 GB, 100 GB, and 200 GB per hour, respectively, as taken from the fivetran.com/pricing page. The charts show how the number of monthly active rows processed by Fivetran affects the cost per million MAR, with the cost per million declining as the number of monthly active rows goes up. The cost per million MAR is $586.80 at 50 GB/hour, $468.23 at 100 GB/hour, and $371.30 at 200 GB/hour.
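As a rough cross-check of Table 6, the annual software cost is approximately the monthly active rows divided by one million, multiplied by the blended cost per million MAR, multiplied by 12 months. At 50 GB/hour, for example, (2,969,530 / 1,000,000) × $586.80 × 12 ≈ $20,910, in the same range as the $21,053 reported in Table 6; the reported figures were taken from the Fivetran pricing page, so this simplified product is only an approximation.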

Figure 4. Fivetran Pricing at 50 GB/hour

Figure 5. Fivetran Pricing at 100 GB/hour

Figure 6. Fivetran Pricing at 200 GB/hour

Snowflake
For our target data warehouse costs, we used Snowflake Enterprise Edition pricing at $3.00 per credit. We used a small warehouse size, which consumes two credits per hour, for a total of $6.00 per hour of active time. The annual cost is then determined by the total amount of time over the course of a year that the warehouse is active. Since we are only syncing data once per hour, the Snowflake warehouse is up and running during the sync and then automatically suspends itself after five minutes of inactivity. Thus, the faster the sync runs, the more time the warehouse stays in a suspended, no-billing state. Table 7 shows the results with Qlik Replicate.

Table 7. Snowflake Costs with Qlik Replicate

Qlik Replicate Snowflake Costs | 50 GB/hour | 100 GB/hour | 200 GB/hour
Latency | 2.8 min | 15.3 min | 83.4 min
Snowflake Active Time Per Hour | 7.8 min | 20.3 min | 93.4 min
1-Year Snowflake Cost | $6,852 | $17,818 | $81,802
Source: GigaOm 2024
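As a sanity check on the annual figures in Table 7 (and in Table 8 below), the cost is approximately the Snowflake active minutes per hour divided by 60, multiplied by $6.00 per active hour, multiplied by the 8,760 hours in a year. For the 50 GB/hour column, for example, (7.8 / 60) × $6.00 × 8,760 ≈ $6,833, in line with the $6,852 shown; the small gap reflects rounding of the active time to one decimal place.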

Note that at 200 GB/hour, Qlik Replicate was unable to complete the sync in under one hour, so we priced in a second Snowflake warehouse, divided the processing time between them, and added the five-minute suspension timeout for both. Table 8 shows the results with Fivetran HVR, while Figure 7 compares the one-year cost between Qlik and Fivetran.

Table 8. Snowflake Costs with Fivetran HVR

Fivetran HVR Snowflake Costs | 50 GB/hour | 100 GB/hour | 200 GB/hour
Latency | 2.1 min | 2.5 min | 3.1 min
Snowflake Active Time Per Hour | 7.1 min | 7.5 min | 8.1 min
1-Year Snowflake Cost | $6,233 | $6,531 | $7,127
Source: GigaOm 2024

Figure 7. One-Year Snowflake Cost

Total Usage Cost

Adding our calculations together, we arrive at the total usage costs of using these two high-volume data replication platforms for one year, as shown in Table 9 and Figure 8.

Table 9. Total Usage Cost

Total Usage Costs | 50 GB/hour | 100 GB/hour | 200 GB/hour
Qlik Replicate | $51,716 | $62,682 | $171,530
Fivetran HVR | $31,441 | $44,286 | $63,991
Fivetran Savings vs. Qlik Replicate | 39% | 29% | 63%
Source: GigaOm 2024
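The savings row is computed as the difference between the two platforms’ costs divided by the Qlik Replicate cost. At 200 GB/hour, for example, ($171,530 - $63,991) / $171,530 ≈ 0.63, or 63%.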

Figure 8. One-Year Total Usage Cost

At 50 GB/hour, Fivetran HVR is 39% less expensive and at 100 GB/hour it is 29% cheaper than Qlik Replicate. The biggest difference came when we scaled to 200 GB/hour, where Fivetran HVR is 63% less expensive than Qlik Replicate. These are savings that go directly to the bottom line.

6. Conclusion

Data replication, or change data capture, is an important part of any enterprise data architecture. Replication latency can inhibit real-time strategies, and replication costs recur frequently, making it imperative that both be examined with scrutiny.

Examining these two high-volume database replication technologies, Fivetran HVR and Qlik Replicate, produced useful findings related to replication latency and total cost of ownership. Replication latency, or the difference in time between the completion of the HammerDB workload and the completion of replication to our Snowflake databases, was the most significant and informative metric we measured during our testing.

The results revealed a distinct pattern. Despite our increasing pace of redo log updates, Fivetran HVR showed a flat linear trend, with latency that hardly changed. On the other hand, we discovered that using Qlik Replicate, latency increased and even accelerated with growing redo log change rates.

As the redo log rate increased, so did the replication delay differential. Fivetran HVR was 27 times faster than Qlik Replicate at a rate of 200 GB/hour.

Fivetran HVR, at 50, 100, and 200 GB per hour, produced savings of 39%, 29%, and 63%, respectively, compared to Qlik Replicate.

Ultimately, we calculated the total usage expenses for one year of using these two high-volume data replication technologies by including the costs of Snowflake, AWS, and licensure. Fivetran HVR is 39% less expensive than Qlik Replicate at 50 GB/hour and 29% less expensive than Qlik Replicate at 100 GB/hour. When we grew to 200 GB/hour, Fivetran HVR was 63% less expensive than Qlik Replicate. This was the biggest difference.

When considering the acquisition of a new database replication technology to support your data pipelines, there are many considerations across features, performance, and cost. From the results of this study comparing Fivetran HVR and Qlik Replicate, Fivetran HVR has lower latency and a lower cost of ownership across the tested spectrum of redo log change data volumes.

7. Appendix

To reach our change volume targets, we used the following Oracle SQL DDL script to modify and expand the base TPC-C tables, which increased the amount of change data during our tests.

create table customer_detail
( c_id number(5) not null, c_d_id number(2) not null, c_w_id number(4) not null, c_first varchar2(16) not null, c_middle char(2), c_last varchar2(16) not null, c_street_1 varchar2(20), c_street_2 varchar2(20), c_city varchar2(20), c_state char(2), c_zip char(9), c_phone char(16), c_since date, c_credit char(2), c_credit_lim number(12,2), c_discount number(4,4), c_balance number(12,2), c_ytd_payment number(12,2), c_payment_cnt number(8), c_delivery_cnt number(8), c_data varchar2(500), c_id2 number(5) not null, c_d_id2 number(2) not null, c_w_id2 number(4) not null, c_first2 varchar2(16) not null, c_middle2 char(2), c_last2 varchar2(16) not null, c_street_12 varchar2(20), c_street_22 varchar2(20), c_city2 varchar2(20), c_state2 char(2), c_zip2 char(9), c_phone2 char(16), c_since2 date, c_credit2 char(2), c_credit_lim2 number(12,2), c_discount2 number(4,4), c_balance2 number(12,2), c_ytd_payment2 number(12,2), c_payment_cnt2 number(8), c_delivery_cnt2 number(8), c_data2 varchar2(500), c_id3 number(5) not null, c_d_id3 number(2) not null, c_w_id3 number(4) not null, c_first3 varchar2(16) not null, c_middle3 char(2), c_last3 varchar2(16) not null, c_street_13 varchar2(20), c_street_23 varchar2(20), c_city3 varchar2(20), c_state3 char(2), c_zip3 char(9), c_phone3 char(16), c_since3 date, c_credit3 char(2), c_credit_lim3 number(12,2), c_discount3 number(4,4), c_balance3 number(12,2), c_ytd_payment3 number(12,2), c_payment_cnt3 number(8), c_delivery_cnt3 number(8), c_data3 varchar2(500), last_updated timestamp, primary key (c_id, c_w_id, c_d_id, last_updated)
) ;
insert /*+ append */ into customer_detail
select c_id, c_d_id, c_w_id, c_first, c_middle, c_last, c_street_1, c_street_2, c_city, c_state, c_zip, c_phone, c_since, c_credit, c_credit_lim, c_discount, c_balance, c_ytd_payment, c_payment_cnt, c_delivery_cnt, c_data, c_id, c_d_id, c_w_id, c_first, c_middle, c_last, c_street_1, c_street_2, c_city, c_state, c_zip, c_phone, c_since, c_credit, c_credit_lim, c_discount, c_balance, c_ytd_payment, c_payment_cnt, c_delivery_cnt, c_data, c_id, c_d_id, c_w_id, c_first, c_middle, c_last, c_street_1, c_street_2, c_city, c_state, c_zip, c_phone, c_since, c_credit, c_credit_lim, c_discount, c_balance, c_ytd_payment, c_payment_cnt, c_delivery_cnt, c_data, systimestamp
from customer ;
create or replace trigger customer_bru
before update on customer
for each row
begin
  insert into customer_detail
  values
  ( :new.c_id, :new.c_d_id, :new.c_w_id, :new.c_first, :new.c_middle, :new.c_last, :new.c_street_1, :new.c_street_2, :new.c_city, :new.c_state, :new.c_zip, :new.c_phone, :new.c_since, :new.c_credit, :new.c_credit_lim, :new.c_discount, :new.c_balance, :new.c_ytd_payment, :new.c_payment_cnt, :new.c_delivery_cnt, :new.c_data, :new.c_id, :new.c_d_id, :new.c_w_id, :new.c_first, :new.c_middle, :new.c_last, :new.c_street_1, :new.c_street_2, :new.c_city, :new.c_state, :new.c_zip, :new.c_phone, :new.c_since, :new.c_credit, :new.c_credit_lim, :new.c_discount, :new.c_balance, :new.c_ytd_payment, :new.c_payment_cnt, :new.c_delivery_cnt, :new.c_data, :new.c_id, :new.c_d_id, :new.c_w_id, :new.c_first, :new.c_middle, :new.c_last, :new.c_street_1, :new.c_street_2, :new.c_city, :new.c_state, :new.c_zip, :new.c_phone, :new.c_since, :new.c_credit, :new.c_credit_lim, :new.c_discount, :new.c_balance, :new.c_ytd_payment, :new.c_payment_cnt, :new.c_delivery_cnt, :new.c_data, systimestamp
  ) ;
end ;
/
create table orders_detail
( o_id number not null, o_w_id number not null, o_d_id number not null, o_c_id number not null, o_carrier_id number, o_ol_cnt number, o_all_local number, o_entry_d date, o_id2 number not null, o_w_id2 number not null, o_d_id2 number not null, o_c_id2 number not null, o_carrier_id2 number, o_ol_cnt2 number, o_all_local2 number, o_entry_d2 date, o_id3 number not null, o_w_id3 number not null, o_d_id3 number not null, o_c_id3 number not null, o_carrier_id3 number, o_ol_cnt3 number, o_all_local3 number, o_entry_d3 date, o_id4 number not null, o_w_id4 number not null, o_d_id4 number not null, o_c_id4 number not null, o_carrier_id4 number, o_ol_cnt4 number, o_all_local4 number, o_entry_d4 date, o_id5 number not null, o_w_id5 number not null, o_d_id5 number not null, o_c_id5 number not null, o_carrier_id5 number, o_ol_cnt5 number, o_all_local5 number, o_entry_d5 date, primary key (o_id, o_w_id, o_d_id)
) ;
insert /*+ append */ into orders_detail
select  o_id, o_w_id, o_d_id, o_c_id, o_carrier_id, o_ol_cnt, o_all_local, o_entry_d, o_id, o_w_id, o_d_id, o_c_id, o_carrier_id, o_ol_cnt, o_all_local, o_entry_d, o_id, o_w_id, o_d_id, o_c_id, o_carrier_id, o_ol_cnt, o_all_local, o_entry_d, o_id, o_w_id, o_d_id, o_c_id, o_carrier_id, o_ol_cnt, o_all_local, o_entry_d, o_id, o_w_id, o_d_id, o_c_id, o_carrier_id, o_ol_cnt, o_all_local, o_entry_d
from orders ;
create or replace trigger orders_bri
before insert on orders
for each row
begin
  insert into orders_detail
  values
  ( :new.o_id, :new.o_w_id, :new.o_d_id, :new.o_c_id, :new.o_carrier_id, :new.o_ol_cnt, :new.o_all_local, :new.o_entry_d, :new.o_id, :new.o_w_id, :new.o_d_id, :new.o_c_id, :new.o_carrier_id, :new.o_ol_cnt, :new.o_all_local, :new.o_entry_d, :new.o_id, :new.o_w_id, :new.o_d_id, :new.o_c_id, :new.o_carrier_id, :new.o_ol_cnt, :new.o_all_local, :new.o_entry_d, :new.o_id, :new.o_w_id, :new.o_d_id, :new.o_c_id, :new.o_carrier_id, :new.o_ol_cnt, :new.o_all_local, :new.o_entry_d, :new.o_id, :new.o_w_id, :new.o_d_id, :new.o_c_id, :new.o_carrier_id, :new.o_ol_cnt, :new.o_all_local, :new.o_entry_d
  ) ;
end ;
/
create or replace trigger orders_bru
before update on orders
for each row
begin
  update orders_detail
  set o_c_id = nvl(:new.o_c_id,o_c_id), o_carrier_id = nvl(:new.o_carrier_id,o_carrier_id), o_ol_cnt = nvl(:new.o_ol_cnt,o_ol_cnt), o_all_local = nvl(:new.o_all_local,o_all_local), o_entry_d = nvl(:new.o_entry_d,o_entry_d), o_c_id2 = nvl(:new.o_c_id,o_c_id2), o_carrier_id2 = nvl(:new.o_carrier_id,o_carrier_id2), o_ol_cnt2 = nvl(:new.o_ol_cnt,o_ol_cnt2), o_all_local2 = nvl(:new.o_all_local,o_all_local2), o_entry_d2 = nvl(:new.o_entry_d,o_entry_d2), o_c_id3 = nvl(:new.o_c_id,o_c_id3), o_carrier_id3 = nvl(:new.o_carrier_id,o_carrier_id3), o_ol_cnt3 = nvl(:new.o_ol_cnt,o_ol_cnt3), o_all_local3 = nvl(:new.o_all_local,o_all_local3), o_entry_d3 = nvl(:new.o_entry_d,o_entry_d3), o_c_id4 = nvl(:new.o_c_id,o_c_id4), o_carrier_id4 = nvl(:new.o_carrier_id,o_carrier_id4), o_ol_cnt4 = nvl(:new.o_ol_cnt,o_ol_cnt4), o_all_local4 = nvl(:new.o_all_local,o_all_local4), o_entry_d4 = nvl(:new.o_entry_d,o_entry_d4), o_c_id5 = nvl(:new.o_c_id,o_c_id5), o_carrier_id5 = nvl(:new.o_carrier_id,o_carrier_id5), o_ol_cnt5 = nvl(:new.o_ol_cnt,o_ol_cnt5), o_all_local5 = nvl(:new.o_all_local,o_all_local5), o_entry_d5 = nvl(:new.o_entry_d,o_entry_d5)
  where o_id = :new.o_id
  and o_w_id = :new.o_w_id
  and o_d_id = :new.o_d_id ;
end ;
/
create table history_detail
( h_c_id number, h_c_d_id number, h_c_w_id number, h_d_id number, h_w_id number, h_date date, h_amount number(6,2), h_data varchar2(24), h_c_id2 number, h_c_d_id2 number, h_c_w_id2 number, h_d_id2 number, h_w_id2 number, h_date2 date, h_amount2 number(6,2), h_data2 varchar2(24), h_c_id3 number, h_c_d_id3 number, h_c_w_id3 number, h_d_id3 number, h_w_id3 number, h_date3 date, h_amount3 number(6,2), h_data3 varchar2(24), h_c_id4 number, h_c_d_id4 number, h_c_w_id4 number, h_d_id4 number, h_w_id4 number, h_date4 date, h_amount4 number(6,2), h_data4 varchar2(24), h_c_id5 number, h_c_d_id5 number, h_c_w_id5 number, h_d_id5 number, h_w_id5 number, h_date5 date, h_amount5 number(6,2), h_data5 varchar2(24), h_c_id6 number, h_c_d_id6 number, h_c_w_id6 number, h_d_id6 number, h_w_id6 number, h_date6 date, h_amount6 number(6,2), h_data6 varchar2(24), h_c_id7 number, h_c_d_id7 number, h_c_w_id7 number, h_d_id7 number, h_w_id7 number, h_date7 date, h_amount7 number(6,2), h_data7 varchar2(24), h_c_id8 number, h_c_d_id8 number, h_c_w_id8 number, h_d_id8 number, h_w_id8 number, h_date8 date, h_amount8 number(6,2), h_data8 varchar2(24), h_c_id9 number, h_c_d_id9 number, h_c_w_id9 number, h_d_id9 number, h_w_id9 number, h_date9 date, h_amount9 number(6,2), h_data9 varchar2(24), h_c_id10 number, h_c_d_id10 number, h_c_w_id10 number, h_d_id10 number, h_w_id10 number, h_date10 date, h_amount10 number(6,2), h_data10 varchar2(24)
) ;
insert /*+ append */ into history_detail
select h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data, h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data
from history ;
create or replace trigger history_bri
before insert on history
for each row
begin
  insert into history_detail
  values
  ( :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data, :new.h_c_id, :new.h_c_d_id, :new.h_c_w_id, :new.h_d_id, :new.h_w_id, :new.h_date, :new.h_amount, :new.h_data
  ) ;
end ;
/
create table order_line_detail
( ol_w_id number not null, ol_d_id number not null, ol_o_id number not null, ol_number number not null, ol_i_id number, ol_delivery_d date, ol_amount number, ol_supply_w_id number, ol_quantity number, ol_dist_info char(24), ol_w_id2 number not null, ol_d_id2 number not null, ol_o_id2 number not null, ol_number2 number not null, ol_i_id2 number, ol_delivery_d2 date, ol_amount2 number, ol_supply_w_id2 number, ol_quantity2 number, ol_dist_info2 char(24), ol_w_id3 number not null, ol_d_id3 number not null, ol_o_id3 number not null, ol_number3 number not null, ol_i_id3 number, ol_delivery_d3 date, ol_amount3 number, ol_supply_w_id3 number, ol_quantity3 number, ol_dist_info3 char(24), ol_w_id4 number not null, ol_d_id4 number not null, ol_o_id4 number not null, ol_number4 number not null, ol_i_id4 number, ol_delivery_d4 date, ol_amount4 number, ol_supply_w_id4 number, ol_quantity4 number, ol_dist_info4 char(24), ol_w_id5 number not null, ol_d_id5 number not null, ol_o_id5 number not null, ol_number5 number not null, ol_i_id5 number, ol_delivery_d5 date, ol_amount5 number, ol_supply_w_id5 number, ol_quantity5 number, ol_dist_info5 char(24), ol_w_id6 number not null, ol_d_id6 number not null, ol_o_id6 number not null, ol_number6 number not null, ol_i_id6 number, ol_delivery_d6 date, ol_amount6 number, ol_supply_w_id6 number, ol_quantity6 number, ol_dist_info6 char(24), ol_w_id7 number not null, ol_d_id7 number not null, ol_o_id7 number not null, ol_number7 number not null, ol_i_id7 number, ol_delivery_d7 date, ol_amount7 number, ol_supply_w_id7 number, ol_quantity7 number, ol_dist_info7 char(24), ol_w_id8 number not null, ol_d_id8 number not null, ol_o_id8 number not null, ol_number8 number not null, ol_i_id8 number, ol_delivery_d8 date, ol_amount8 number, ol_supply_w_id8 number, ol_quantity8 number, ol_dist_info8 char(24), primary key (ol_w_id, ol_d_id, ol_o_id, ol_number)
) ;
insert /*+ append */ into order_line_detail
select ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info, ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info
from order_line ;
create or replace trigger order_line_bri
before insert on order_line
for each row
begin
  insert into order_line_detail
  values
  ( :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info, :new.ol_w_id, :new.ol_d_id, :new.ol_o_id, :new.ol_number, :new.ol_i_id, :new.ol_delivery_d, :new.ol_amount, :new.ol_supply_w_id, :new.ol_quantity, :new.ol_dist_info
  ) ;
end ;
/
create or replace trigger order_line_bru
before update on order_line
for each row
begin
  update order_line_detail
  set ol_i_id = nvl(:new.ol_i_id,ol_i_id), ol_delivery_d = nvl(:new.ol_delivery_d,ol_delivery_d), ol_amount = nvl(:new.ol_amount,ol_amount), ol_supply_w_id = nvl(:new.ol_supply_w_id,ol_supply_w_id), ol_quantity = nvl(:new.ol_quantity,ol_quantity), ol_dist_info = nvl(:new.ol_dist_info,ol_dist_info), ol_i_id2 = nvl(:new.ol_i_id,ol_i_id2), ol_delivery_d2 = nvl(:new.ol_delivery_d,ol_delivery_d2), ol_amount2 = nvl(:new.ol_amount,ol_amount2), ol_supply_w_id2 = nvl(:new.ol_supply_w_id,ol_supply_w_id2), ol_quantity2 = nvl(:new.ol_quantity,ol_quantity2), ol_dist_info2 = nvl(:new.ol_dist_info,ol_dist_info2), ol_i_id3 = nvl(:new.ol_i_id,ol_i_id3), ol_delivery_d3 = nvl(:new.ol_delivery_d,ol_delivery_d3), ol_amount3 = nvl(:new.ol_amount,ol_amount3), ol_supply_w_id3 = nvl(:new.ol_supply_w_id,ol_supply_w_id3), ol_quantity3 = nvl(:new.ol_quantity,ol_quantity3), ol_dist_info3 = nvl(:new.ol_dist_info,ol_dist_info3), ol_i_id4 = nvl(:new.ol_i_id,ol_i_id4), ol_delivery_d4 = nvl(:new.ol_delivery_d,ol_delivery_d4), ol_amount4 = nvl(:new.ol_amount,ol_amount4), ol_supply_w_id4 = nvl(:new.ol_supply_w_id,ol_supply_w_id4), ol_quantity4 = nvl(:new.ol_quantity,ol_quantity4), ol_dist_info4 = nvl(:new.ol_dist_info,ol_dist_info4), ol_i_id5 = nvl(:new.ol_i_id,ol_i_id5), ol_delivery_d5 = nvl(:new.ol_delivery_d,ol_delivery_d5), ol_amount5 = nvl(:new.ol_amount,ol_amount5), ol_supply_w_id5 = nvl(:new.ol_supply_w_id,ol_supply_w_id5), ol_quantity5 = nvl(:new.ol_quantity,ol_quantity5), ol_dist_info5 = nvl(:new.ol_dist_info,ol_dist_info5), ol_i_id6 = nvl(:new.ol_i_id,ol_i_id6), ol_delivery_d6 = nvl(:new.ol_delivery_d,ol_delivery_d6), ol_amount6 = nvl(:new.ol_amount,ol_amount6), ol_supply_w_id6 = nvl(:new.ol_supply_w_id,ol_supply_w_id6), ol_quantity6 = nvl(:new.ol_quantity,ol_quantity6), ol_dist_info6 = nvl(:new.ol_dist_info,ol_dist_info6), ol_i_id7 = nvl(:new.ol_i_id,ol_i_id7), ol_delivery_d7 = nvl(:new.ol_delivery_d,ol_delivery_d7), ol_amount7 = nvl(:new.ol_amount,ol_amount7), ol_supply_w_id7 = nvl(:new.ol_supply_w_id,ol_supply_w_id7), ol_quantity7 = nvl(:new.ol_quantity,ol_quantity7), ol_dist_info7 = nvl(:new.ol_dist_info,ol_dist_info7), ol_i_id8 = nvl(:new.ol_i_id,ol_i_id8), ol_delivery_d8 = nvl(:new.ol_delivery_d,ol_delivery_d8), ol_amount8 = nvl(:new.ol_amount,ol_amount8), ol_supply_w_id8 = nvl(:new.ol_supply_w_id,ol_supply_w_id8), ol_quantity8 = nvl(:new.ol_quantity,ol_quantity8), ol_dist_info8 = nvl(:new.ol_dist_info,ol_dist_info8)
  where ol_w_id = :new.ol_w_id
  and ol_d_id = :new.ol_d_id
  and ol_o_id = :new.ol_o_id
  and ol_number = :new.ol_number ;
end ;
/
create table stock_detail
( s_i_id number(6) not null, s_w_id number(4) not null, s_quantity number(6), s_dist_01 char(24), s_dist_02 char(24), s_dist_03 char(24), s_dist_04 char(24), s_dist_05 char(24), s_dist_06 char(24), s_dist_07 char(24), s_dist_08 char(24), s_dist_09 char(24), s_dist_10 char(24), s_ytd number(10), s_order_cnt number(6), s_remote_cnt number(6), s_data varchar2(50), s_i_id2 number(6) not null, s_w_id2 number(4) not null, s_quantity2 number(6), s_dist_012 char(24), s_dist_022 char(24), s_dist_032 char(24), s_dist_042 char(24), s_dist_052 char(24), s_dist_062 char(24), s_dist_072 char(24), s_dist_082 char(24), s_dist_092 char(24), s_dist_102 char(24), s_ytd2 number(10), s_order_cnt2 number(6), s_remote_cnt2 number(6), s_data2 varchar2(50), s_i_id3 number(6) not null, s_w_id3 number(4) not null, s_quantity3 number(6), s_dist_013 char(24), s_dist_023 char(24), s_dist_033 char(24), s_dist_043 char(24), s_dist_053 char(24), s_dist_063 char(24), s_dist_073 char(24), s_dist_083 char(24), s_dist_093 char(24), s_dist_103 char(24), s_ytd3 number(10), s_order_cnt3 number(6), s_remote_cnt3 number(6), s_data3 varchar2(50), s_i_id4 number(6) not null, s_w_id4 number(4) not null, s_quantity4 number(6), s_dist_014 char(24), s_dist_024 char(24), s_dist_034 char(24), s_dist_044 char(24), s_dist_054 char(24), s_dist_064 char(24), s_dist_074 char(24), s_dist_084 char(24), s_dist_094 char(24), s_dist_104 char(24), s_ytd4 number(10), s_order_cnt4 number(6), s_remote_cnt4 number(6), s_data4 varchar2(50), s_i_id5 number(6) not null, s_w_id5 number(4) not null, s_quantity5 number(6), s_dist_015 char(24), s_dist_025 char(24), s_dist_035 char(24), s_dist_045 char(24), s_dist_055 char(24), s_dist_065 char(24), s_dist_075 char(24), s_dist_085 char(24), s_dist_095 char(24), s_dist_105 char(24), s_ytd5 number(10), s_order_cnt5 number(6), s_remote_cnt5 number(6), s_data5 varchar2(50), s_i_id6 number(6) not null, s_w_id6 number(4) not null, s_quantity6 number(6), s_dist_016 char(24), s_dist_026 char(24), s_dist_036 char(24), s_dist_046 char(24), s_dist_056 char(24), s_dist_066 char(24), s_dist_076 char(24), s_dist_086 char(24), s_dist_096 char(24), s_dist_106 char(24), s_ytd6 number(10), s_order_cnt6 number(6), s_remote_cnt6 number(6), s_data6 varchar2(50), s_i_id7 number(6) not null, s_w_id7 number(4) not null, s_quantity7 number(6), s_dist_017 char(24), s_dist_027 char(24), s_dist_037 char(24), s_dist_047 char(24), s_dist_057 char(24), s_dist_067 char(24), s_dist_077 char(24), s_dist_087 char(24), s_dist_097 char(24), s_dist_107 char(24), s_ytd7 number(10), s_order_cnt7 number(6), s_remote_cnt7 number(6), s_data7 varchar2(50), s_i_id8 number(6) not null, s_w_id8 number(4) not null, s_quantity8 number(6), s_dist_018 char(24), s_dist_028 char(24), s_dist_038 char(24), s_dist_048 char(24), s_dist_058 char(24), s_dist_068 char(24), s_dist_078 char(24), s_dist_088 char(24), s_dist_098 char(24), s_dist_108 char(24), s_ytd8 number(10), s_order_cnt8 number(6), s_remote_cnt8 number(6), s_data8 varchar2(50), s_i_id9 number(6) not null, s_w_id9 number(4) not null, s_quantity9 number(6), s_dist_019 char(24), s_dist_029 char(24), s_dist_039 char(24), s_dist_049 char(24), s_dist_059 char(24), s_dist_069 char(24), s_dist_079 char(24), s_dist_089 char(24), s_dist_099 char(24), s_dist_109 char(24), s_ytd9 number(10), s_order_cnt9 number(6), s_remote_cnt9 number(6), s_data9 varchar2(50), s_i_id10 number(6) not null, s_w_id10 number(4) not null, s_quantity10 number(6), s_dist_0110 char(24), s_dist_0210 char(24), s_dist_0310 char(24), 
s_dist_0410 char(24), s_dist_0510 char(24), s_dist_0610 char(24), s_dist_0710 char(24), s_dist_0810 char(24), s_dist_0910 char(24), s_dist_1010 char(24), s_ytd10 number(10), s_order_cnt10 number(6), s_remote_cnt10 number(6), s_data10 varchar2(50), s_i_id11 number(6) not null, s_w_id11 number(4) not null, s_quantity11 number(6), s_dist_0111 char(24), s_dist_0211 char(24), s_dist_0311 char(24), s_dist_0411 char(24), s_dist_0511 char(24), s_dist_0611 char(24), s_dist_0711 char(24), s_dist_0811 char(24), s_dist_0911 char(24), s_dist_1011 char(24), s_ytd11 number(10), s_order_cnt11 number(6), s_remote_cnt11 number(6), s_data11 varchar2(50), s_i_id12 number(6) not null, s_w_id12 number(4) not null, s_quantity12 number(6), s_dist_0112 char(24), s_dist_0212 char(24), s_dist_0312 char(24), s_dist_0412 char(24), s_dist_0512 char(24), s_dist_0612 char(24), s_dist_0712 char(24), s_dist_0812 char(24), s_dist_0912 char(24), s_dist_1012 char(24), s_ytd12 number(10), s_order_cnt12 number(6), s_remote_cnt12 number(6), s_data12 varchar2(50), last_updated timestamp, primary key (s_i_id, s_w_id, last_updated)
) ;
insert into stock_detail
select s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_ytd, s_order_cnt, s_remote_cnt, s_data, systimestamp
from stock ;
create or replace trigger stock_bru
before update on stock
for each row
begin
  insert into stock_detail values
  ( :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, :new.s_i_id, :new.s_w_id, :new.s_quantity, :new.s_dist_01, :new.s_dist_02, :new.s_dist_03, :new.s_dist_04, :new.s_dist_05, :new.s_dist_06, :new.s_dist_07, :new.s_dist_08, :new.s_dist_09, :new.s_dist_10, :new.s_ytd, :new.s_order_cnt, :new.s_remote_cnt, :new.s_data, systimestamp
  ) ;
end ;
/


8. Disclaimer

Performance is important but is only one criterion for a data warehouse platform selection. This is only one point-in-time check into specific performance. There are numerous other factors to consider in selection across factors of administration, integration, workload management, user interface, scalability, vendor, reliability, and numerous other criteria. It is also our experience that performance changes over time and is competitively different for different workloads. A performance leader can hit up against the point of diminishing returns and viable contenders can quickly close the gap.

GigaOm runs all of its performance tests to strict ethical standards. The results of the report are the objective results of the application of queries to the simulations described in the report. The report clearly defines the selected criteria and process used to establish the field test. The report also clearly states the data set sizes, the platforms, the queries, etc. used. The reader is left to determine for themselves how to qualify the information for their individual needs. The report does not make any claim regarding third-party certification and presents the objective results received from the application of the process to the criteria as described in the report. The report strictly measures performance and does not purport to evaluate other factors that potential customers may find relevant when making a purchase decision.

This is a commissioned report. Fivetran chose the competitors, the test, and the Fivetran configuration. GigaOm chose the most compatible configurations for the other tested platforms and ran the queries. Choosing compatible configurations is subject to judgment. We have attempted to describe our decisions in this paper.

In this writeup, all the information necessary is included to replicate this test. You are encouraged to compile your own representative queries, data sets, data sizes and compatible configurations and test for yourself.

9. About Fivetran

Fivetran, the industry leader in data integration, enables enterprises to power AI workloads like predictive analytics, AI/ML applications and generative AI, and accelerate cloud migration. The Fivetran platform reliably and securely centralizes data from hundreds of SaaS applications and databases into a variety of destinations—whether deployed on-premises, in the cloud, or in a hybrid environment. Thousands of global brands, including Autodesk, Condé Nast, JetBlue, and Morgan Stanley, trust Fivetran to move their most valuable data assets to fuel analytics, drive operational efficiencies, and power innovation. For more info, visit fivetran.com.

10. About William McKnight

William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.

Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.

11. About Jake Dolezal

Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.

12. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

13. Copyright

© Knowingly, Inc. 2024 "High-Volume Data Replication" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.