- Drivers of CI/CD Evolution
- Core Concepts of the End-to-End CI/CD Pipeline
- Evaluating Providers and Delivering on CI/CD
- Table Stakes, Elements Specific to ‘Core’ CI/CD
- Conclusion: Start with Visibility and Best Practice
- About Jon Collins
DevOps, the philosophy ‘du jour’ for software development and operations, has just celebrated its tenth anniversary. At its heart, it proposes a culture of collaboration, with speed of innovation a consequence of better-managed, automated software delivery pipelines. While DevOps does not seek to be prescriptive, the core of any DevOps pipeline depends on the joint practices of Continuous Integration and Continuous Delivery (CI/CD), which supply the tooling. Today, most software development organizations rely on some kind of CI/CD toolset, or are looking to do so as they move away from linear, ‘waterfall’ approaches to software development and towards models that better support their innovation and digital transformation goals.
So, where did CI/CD start? Without delving too deeply, a clear moment was reached when the tools designed to support software delivery could no longer keep up with faster-moving, ‘agile’ development practices. These first emerged as the spiral, prototype-driven approaches of the late 1980s, moved through the rapid and dynamic methodologies of the 1990s, and ultimately arrived at a watershed when Kent Beck conceived eXtreme Programming (XP) and made it cool to be a developer again.
Even as the need for speed increased, programmers still relied on custom build scripts that were complex and difficult to change. As XP took hold, tools appeared that enabled software builds to be defined and changed more easily, supporting the practice of building code early and often, and bringing in features that enabled more rapid delivery – such as diagnostics and re-execution should a build fail. Open source played its part, leading to the creation of the Jenkins CI tool – while Jenkins can claim a de facto standard position in CI, many other tools have emerged and are in wide use today.
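To make this concrete, the practices described above – builds defined as easily changed configuration, stages run on every integration, and diagnostics captured when a build fails – can be sketched as a minimal declarative Jenkins pipeline. This is an illustrative fragment only: the `make build` and `make test` targets and the `logs/**` path are placeholder assumptions, not details taken from this report.

```groovy
// Minimal declarative Jenkinsfile: build and test on every change,
// and archive diagnostic logs if any stage fails.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'    // placeholder test command
            }
        }
    }
    post {
        failure {
            // Capture diagnostics for investigation and re-execution
            archiveArtifacts artifacts: 'logs/**', allowEmptyArchive: true
        }
    }
}
```

Because the pipeline definition lives in version control alongside the code, it can be reviewed and changed as easily as the software it builds – the key shift away from the opaque custom build scripts that preceded CI tooling.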
Thus, Continuous Integration appeared both through necessity (the mother of invention) and because the very presence of tools created a market landscape for them. The arrival of standardized software stacks and cloud platforms took the relationship between tools and aspirational goals one step further, with the advent of Continuous Delivery – the ability to deploy executable code to a target environment (we can add Continuous Deployment as another use of ‘CD’, with the caveat that we should not get hung up on terminology).
Since notions of CI/CD were first proposed, the need to deliver software quickly and efficiently has only increased. Some have started to talk about ‘continuous everything’, but realistically, while many enterprises may be looking to apply more advanced concepts such as DevOps, they can struggle to deliver on the basics of continuous anything (see also: “continuous in name only”). Meanwhile, many CI/CD tools still in use today were designed for a different time: cloud-based architectures have evolved, languages have proliferated, and software development approaches have matured still further.
As a result, it is more important than ever for enterprises to put in place a CI/CD foundation that accelerates their software delivery efforts, based on modern tools that fit the architectural norms of today and that take both current and future needs into account. In this report we consider:
- The drivers of the continued evolution of CI/CD
- The core concepts of the end-to-end CI/CD pipeline
- Evaluation criteria for the tooling available and entering the market
Overall, we set out pointers that organizations need as they look beyond traditional CI/CD and towards delivering on technology-based innovation at scale.