This is the third and final post in a series exploring the relationship between observability and a set of SDLC practices.
In this series, I’ve explored how observability relates to the software delivery lifecycle practices that help organizations adopt DevOps and shift their ways of working from project-centric to product-centric. I started with Site Reliability Engineering, then considered Value Stream Management (VSM), and I finish with this post on Continuous Integration and Continuous Delivery (CI/CD).
Defining Continuous Integration
To achieve Continuous Integration, a team needs a version control system, and all developers must merge their changes into trunk at least daily. Every merge triggers a series of automated tests: ideally unit, integration and user acceptance tests. If all tests pass, the change is accepted into the main development trunk. This practice of Continuous Integration is the basis for performing Continuous Delivery.
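The merge gate described above can be sketched as a small piece of logic: staged test suites run in order, and the change only lands on trunk if every stage passes. This is a minimal illustration, not any specific CI server’s API; the stage names and runner function are invented for the example.

```python
# Minimal sketch of a CI merge gate: every merge triggers staged test
# suites, and the change is only accepted into trunk if all stages pass.

def run_ci_gate(stages):
    """Run each test stage in order, stopping at the first failure.

    stages: list of (name, callable) pairs; the callable returns True on pass.
    Returns (accepted, results), where results maps stage name to outcome.
    """
    results = {}
    for name, run in stages:
        passed = run()
        results[name] = passed
        if not passed:
            return False, results  # fail fast: the change is rejected
    return True, results  # all green: the change is accepted into trunk


# Example: unit and integration pass, but user acceptance fails,
# so the change is rejected before it reaches trunk.
stages = [
    ("unit", lambda: True),
    ("integration", lambda: True),
    ("user_acceptance", lambda: False),
]
accepted, results = run_ci_gate(stages)
print(accepted, results)
```

In a real pipeline, each callable would shell out to the relevant test runner; the fail-fast ordering (cheap unit tests first, slow acceptance tests last) keeps feedback quick.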
Defining Continuous Delivery and Deployment
Performing Continuous Integration gives teams the confidence that their (trunk) software is always in a releasable state. They can then deliver those changes to their customers whenever they want; not on a quarterly release schedule, not even at the end of the sprint, but on demand – whenever they feel ready. The word ‘continuous’ is ubiquitous in the DevOps world, since a basic tenet is small, incremental and frequent changes. A related term is Continuous Deployment: the process of automatically pushing code changes that pass the tests in the CI/CD pipeline straight into production. Continuous Delivery, by contrast, retains a manual approval step before release.
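The distinction between the two ‘continuous’ terms can be expressed as a pipeline policy. This is an illustrative sketch with invented names, not a real tool’s configuration: the only difference between delivery and deployment is whether a green build needs a human’s approval to go live.

```python
# Sketch of Continuous Delivery vs Continuous Deployment as a release policy.
# Function and parameter names are illustrative, not from any real CI tool.

def promote_to_production(tests_passed, mode, approved=False):
    """Decide whether a releasable build goes live.

    mode: "deployment" pushes every green build straight to production;
          "delivery" keeps the build releasable and waits for approval.
    """
    if not tests_passed:
        return False  # not releasable at all
    if mode == "deployment":
        return True   # automatic push to production
    return approved   # delivery: a human releases on demand

print(promote_to_production(True, "deployment"))               # True
print(promote_to_production(True, "delivery"))                 # False
print(promote_to_production(True, "delivery", approved=True))  # True
```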
Culture and Humans
To reduce the risk of human failure, we create systems. Unfortunately, these systems frequently become bureaucratic and interfere with DevOps’ goal of the fast flow of planned work. They include release managers and release teams, release calendars and Change Approval/Advisory Boards (CABs). Research has shown that heavyweight change processes (i.e. anything beyond peer review) correlate negatively with Continuous Delivery capability and, in turn, with technology and organizational performance.
Humans must streamline these processes with confidence. The reliability of automation is a key lever for building the trust that allows us to dismantle bureaucracy, break silos and dependencies, and accelerate the flow of value to customers. Trust is built through data-driven conversations. CI/CD pipelines should therefore have observability built in, collecting data that gives teams insight into their performance through metrics such as deployment frequency, lead time (from code commit to live in production) and change fail rate. This observability data powers data-driven conversations within teams, between teams and throughout the organization – conversations that show improvements and provide a basis for learning. It can also show how well teams and the organization are progressing in adopting new ways of working, which is essential for leadership to justify past and future investment in digital and DevOps transformation efforts.
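The three metrics named above can all be derived from basic pipeline telemetry: a record per deployment of when the code was committed, when it went live, and whether it caused a failure. The record format and numbers below are invented for illustration.

```python
# Illustrative calculation of deployment frequency, lead time (commit to
# production) and change fail rate from per-deployment telemetry records.
# The data is made up; a real pipeline would emit these events itself.
from datetime import datetime, timedelta

deployments = [
    # (commit time, deploy time, caused a production failure?)
    (datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 1, 11, 0), False),
    (datetime(2021, 6, 1, 14, 0), datetime(2021, 6, 2, 10, 0), True),
    (datetime(2021, 6, 3, 8, 0), datetime(2021, 6, 3, 9, 30), False),
]

period_days = 7  # the observation window for frequency
deployment_frequency = len(deployments) / period_days  # deploys per day

lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

change_fail_rate = sum(failed for *_, failed in deployments) / len(deployments)

print(round(deployment_frequency, 2), avg_lead_time, round(change_fail_rate, 2))
```

Trends in these numbers over time, rather than any single snapshot, are what power the data-driven conversations described above.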
CI says that unit, integration and user acceptance tests should be completed, but this is both technically and culturally challenging for many teams and organizations. DevOps and agile promote small, multi-functional, autonomous teams, meaning that a single team contains all the capabilities and skills to build and own a long-lived product, including product ownership, business analysis, development, testing, operations, security and support. This means that testing silos should be broken and testing resources spread across the teams, which can be hard for traditional managers to accept. It also introduces challenges around ensuring that test best practices continue to be shared across testing professionals; this is why Spotify invented Chapters. Using observability across the testing process builds the case for changes in organizational design and in ways of working. Why? Because the data shows that teams that collaborate more closely and run more automated tests sooner spend less of their valuable time on unplanned work.
Value Streams and Processes
A value stream is anything that delivers a product or a service, and thinking in value streams is essential to working in a product-centric way. In the digital world, a technology value stream can be a single application (e.g. a mobile app) or be constructed from several applications or microservices, and it can support a business value stream such as mortgages for a bank, vehicle insurance for an insurer or a catalog line for a retailer.
Every value stream starts with an idea, as does every value stream enhancement. That idea then passes through a number of lifecycle stages as it metamorphoses into something of material value that can be released to the customer for feedback and, hopefully, delight. A CI/CD pipeline can be integrated into a product backlog (where all the requirements are managed) and, even earlier, into the ideation, planning and portfolio management process, allowing for traceability of a work item using observability capabilities across the tools. However, many CI/CD and DevOps toolchain implementations stop at the go-live moment and fail to consider either the impact or outcome of the code changes released, or the ongoing maintenance of the product.
True end-to-end value stream observability also takes advantage of service desk ticketing capabilities and monitoring in the live environment so that the flow from idea to value realization is fully understood. AIOps builds on this and allows us to also understand the impact of the value (using metrics such as page conversion and bounce rates) and the stability of the product (using metrics such as MTTR).
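A stability metric like MTTR comes from the live-environment side of this feedback loop: incident records with a detection time and a restoration time. The sketch below uses invented data to show the calculation itself.

```python
# Illustrative MTTR (mean time to restore) calculation from incident
# records – the kind of stability signal described above. Data is made up.
from datetime import datetime, timedelta

incidents = [
    # (detected, service restored)
    (datetime(2021, 6, 1, 10, 0), datetime(2021, 6, 1, 10, 45)),  # 45 min
    (datetime(2021, 6, 4, 22, 0), datetime(2021, 6, 4, 23, 30)),  # 90 min
]

restore_times = [restored - detected for detected, restored in incidents]
mttr = sum(restore_times, timedelta()) / len(incidents)
print(mttr)  # the average time from detection to restoration
```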
Having small, multifunctional, autonomous teams supports the goal of value stream-centric thinking. Bringing testers and developers closer together in the process supports the goal of shifting testing (including security) left; that is, performing it as early as possible in the development lifecycle. Teams frequently find that developers and testers cross-skill, blurring the gap between these roles. Developers are usually the ones who write the unit tests, and as testers write more automated tests, they find their development skills growing. The team may also begin practicing test-driven development (TDD), where requirements are written as tests, and may embrace observability-driven development (ODD), where they understand the code as it’s written, include telemetry when it’s shipped and regularly review that it continues to operate as expected.
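“Requirements written as tests” can be made concrete with a tiny example. Here a hypothetical pricing requirement for the vehicle-insurance value stream mentioned earlier is stated as executable tests; in TDD, the tests would be written first and fail until the function below is implemented. Every name and business rule here is invented for illustration.

```python
# TDD illustration: the requirement is expressed as tests. The pricing
# rule and function name are hypothetical, invented for this example.

def quote_premium(vehicle_value, driver_age):
    """Hypothetical rule: premium is 2% of vehicle value,
    with a 50% loading for drivers under 25."""
    premium = vehicle_value * 0.02
    if driver_age < 25:
        premium *= 1.5
    return premium


# The requirement, written as tests (these would exist before the
# implementation and fail until it is written):
def test_base_rate():
    assert quote_premium(10_000, 40) == 200.0

def test_young_driver_loading():
    assert quote_premium(10_000, 21) == 300.0


test_base_rate()
test_young_driver_loading()
print("requirement tests passed")
```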
Automation and Tooling
Building a CI/CD pipeline is neither a quick nor an easy job; there are many moving parts, starting with a version control system and an artifact repository, which together form the single source of truth. A CI server provides the engine for managing commits and merges to trunk. In DevOps, teams are advised to aim for trunk-based development, where no feature branch lasts for more than a day. This discipline reduces the risk of merge conflicts down the line, but it may require significant behavioral adjustment from developers.
The CI server also coordinates the tests which, when passed, make the change available in the trunk and releasable. As the team builds its test coverage, it will add more and more testing tools for unit, integration and user acceptance testing (functional) and for security and performance (non-functional). At this point the CI/CD pipeline may be considered complete, but the DevOps toolchain is not: it aims for end-to-end value stream coverage, so the portfolio, planning and product backlog tools are included at the front end of the process (or start of the cycle), while the service desk and monitoring tools sit at the end of the feedback loop, providing data for adaptation back into the ideation tools. All of this needs to connect to the collaboration and ChatOps tools. Layering value stream management tooling over the top is a smart way to achieve the necessary integrations, to access observability insights around flow and cycle time, and to create a plug-and-play capability in the toolchain.
The CI/CD pipeline and DevOps toolchain should themselves be observed, as they rapidly become a critical piece of infrastructure without which the business cannot operate, and they should be continually improved. Since the pipeline effectively becomes a product or service in its own right, it may be best to treat it as a value stream and, rather than have teams build and run their own, provide it as a shared service (with customizations to cater to teams’ unique requirements).
How Observability Helps Organizations Adopt CI/CD Practices
- Making CI/CD metrics available for data-driven conversations builds trust within and between teams and across the organization for continued DevOps investment
- Reducing the risks associated with test and release failures drives test automation coverage
- Using ODD drives developer behaviors based on feedback and the wisdom of production
- Providing insights into the flow of work and feedback on value realization, particularly alongside value stream management
- Building resilience as the CI/CD pipeline and DevOps toolchain become business critical infrastructure
What to Do Next
- Sign up for a Moogsoft trial
- Watch the on-demand webinar by Helen Beal and Adam Frank, ‘Telemetry Everywhere: Observability and AIOps in the DevOps Cosmos’
- Download the free eBook ‘Observability with AIOps for Dummies’ by Adam Frank
About the author
Helen Beal is a DevOps and Ways of Working coach, Chief Ambassador at DevOps Institute and an Ambassador for the Continuous Delivery Foundation. She provides strategic advisory services to DevOps industry leaders and is an analyst at Accelerated Strategies Group. She hosts the Day-to-Day DevOps webinar series for BrightTalk, speaks regularly on DevOps topics, is a DevOps editor for InfoQ and also writes for a number of other online platforms. Outside of DevOps she is an ecologist and novelist.