- Report Methodology
- Key Criteria Definitions
- Vendor Review
- About Andrew Brust
In the enterprise sphere, sources of data are multiplying and data volumes are growing at dizzying speed. Moreover, this rate of growth will continue to accelerate as data is increasingly seen as a crucial organizational asset, a competitive differentiator, and a key component of business success.
In response, the way data is ingested, processed, loaded, and orchestrated has been changing as well. While traditional data movement and transformation tools are hardly fading away, a new breed of data pipeline platforms has risen to compete with the classic stalwarts. As a result, the market now offers a broad array of solutions for moving and processing data. Users considering a data pipeline implementation will need to weigh several factors closely, balancing the need for stability against the desire to take advantage of new capabilities.
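The ingest, process, load, and orchestrate stages described above can be sketched generically. This is a minimal illustration only; the function names and in-memory "warehouse" below are assumptions for demonstration, not any vendor's actual API.

```python
# Minimal sketch of a data pipeline's stages: ingest -> process -> load,
# with a simple orchestration step running them in order.
# All names and data here are illustrative, not tied to any product.

def extract(source_rows):
    """Ingest: pull raw records from a source (here, an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Process: clean and reshape records; drop incomplete ones."""
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in rows
        if r.get("name") and r.get("amount") is not None
    ]

def load(rows, target):
    """Load: append processed records to a target store (here, a list)."""
    target.extend(rows)
    return len(rows)

# Orchestration: wire the stages together and run them.
warehouse = []
raw = [
    {"name": "  ada lovelace ", "amount": "120.5"},
    {"name": None, "amount": "10"},  # incomplete record, filtered out
]
loaded = load(transform(extract(raw)), warehouse)
```

Real platforms differ mainly in how these stages are connected (batch vs. streaming), where the transformation runs (before or after loading), and how orchestration handles scheduling and failure.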
In this report, we explore the major data pipeline platforms, outline the key criteria by which they should be evaluated, and assess each platform on how well it covers those criteria.
- The marketplace for classic on-premises data pipeline solutions is mature and stable. Most new development targets newer, cloud-native environments
- Most offerings provide significant ease of use. Combined with newer, more flexible approaches to data movement and transformation, this gives many non-technical users access to data previously considered out of reach and lets them generate insights through easy exploration
- The most complete offerings provide connectivity to on-premises relational databases, file systems, and full-blown data lakes, as well as to cloud data warehouses and applications
- We have grouped the platforms in this report into two distinct categories:
- Traditional ETL offerings from the industry’s incumbent vendors
- Cloud-native and hybrid offerings from public cloud providers and newer, pure-play vendors