Continuous Delivery Pitfalls
[Editor's note: This article is featured in DZone's 2014 Guide to Continuous Delivery]
There is little question that Continuous Delivery provides a compelling business story. Everyone is looking for ways to increase their own business agility with cloud-based builds, built-in unit and integration testing, and fully automated deployments, all instrumented with monitoring utilities. If there weren't several high-profile companies discussing both their Continuous Delivery successes and failures so publicly, it might all seem like a fairy tale. The incredible complexity of software production makes it extremely difficult to create a clean path for software to travel from a developer’s keyboard all the way to the customer’s hands.
For every one of the "Continuous Delivery unicorns," like Etsy and Netflix, there are at least ten organizations that struggle significantly to implement CD. Many of those will ultimately fail entirely to deliver on CD's promises.
Like any new development methodology, implementing Continuous Delivery has a number of pitfalls that can trip up even the most mature organizations. Successful Continuous Delivery processes are predicated on a number of technical and cultural assumptions, and in many cases organizations don’t have the foundation necessary for CD. Organizations often try to copy the practices of a company like Netflix wholesale after reading some of their resources, ignoring the differences in their own products or market. These examples are just a few of the reasons why companies find themselves in trouble when adopting Continuous Delivery. Below are four of the most common pitfalls to avoid when implementing Continuous Delivery.
1. Attempting to Build Continuous Delivery on Top of an Unstable (or Non-Existent) Continuous Integration Foundation
Continuous integration is a foundational requirement for Continuous Delivery. The constantly referenced Continuous Delivery deployment pipeline is really just the practices of continuous integration extended to infrastructure management and the production environment. Thus, any successful CD implementation begins with a CI system that is stable, operationalized, and able to provide actionable data. The organization should also foster a company culture that knows how to react to that data.
Many environments, however, have no continuous integration currently deployed or a poor CI foundation for CD. Common issues with CI infrastructures include:
Operating system configuration, library, and tool inconsistencies between developer and build/test/production environments.
Inconsistent configuration among CI agents (despite how heavily the DevOps movement emphasizes this issue, it remains a disturbingly common problem).
CI master servers or agent hosts running on individual employees' computers or in unofficial/personal cloud accounts. This is often observed in organizations that need to support Mac OS X.
CI masters with no access control, where job configurations can be modified without notification or an audit trail.
Unactionable CI, caused either by insufficient communication mechanisms from the CI infrastructure (e.g. email notifications that everyone promptly filters) or by a cultural barrier, where not everyone in the organization considers CI errors or build/test failures worthy of a timely response.
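The first two problems above, environment and agent drift, are the easiest to catch mechanically. As a minimal sketch (the environment names and version data here are hypothetical; in practice each mapping would be collected from the host itself, for example by running each tool's version command over SSH or through a CI agent API), one can compare the tool versions each environment reports and flag any disagreement:

```python
# Sketch: detect tool-version drift across build environments.
# The environments and versions below are illustrative sample data.

def find_drift(environments):
    """Return {tool: {env: version}} for every tool whose reported
    version differs (or is missing) between any two environments."""
    tools = set()
    for versions in environments.values():
        tools.update(versions)
    drift = {}
    for tool in tools:
        seen = {env: versions.get(tool, "<missing>")
                for env, versions in environments.items()}
        if len(set(seen.values())) > 1:
            drift[tool] = seen
    return drift

environments = {
    "developer":  {"java": "1.8.0", "maven": "3.2.1"},
    "ci-agent":   {"java": "1.8.0", "maven": "3.0.5"},
    "production": {"java": "1.7.0", "maven": "3.2.1"},
}

for tool, seen in sorted(find_drift(environments).items()):
    print(f"{tool}: {seen}")
```

Running a check like this on every pipeline execution turns silent drift into an actionable CI failure instead of a mystery discovered in production.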
In the worst cases, organizations get excited about CD and try to skip over CI.
The bad news: the solutions to these problems are varied. Some are simple, while others (especially those involving company culture) may be tougher to overcome.
The good news: once your organization has achieved stable, operationalized, actionable continuous integration, you've already done half of the work needed to build a legitimate Continuous Delivery environment.
2. Confusing Continuous Delivery with Continuous Deployment
Since Continuous Delivery and continuous deployment have very similar themes, it's easy to understand the origin of this confusion. Continuous Delivery is the discipline and infrastructure around the ability to deploy changes whenever it makes sense for the business to do so. It does not require that every developer commit be pushed straight to production in real time.
Getting every single improvement out to customers the moment that it is checked in is so tantalizing that business stakeholders often believe that this is the ultimate goal of Continuous Delivery. The metric of achieving commit-to-release for all situations starts to be overly emphasized, leading to the Hawthorne Effect, where people improve performance on the metrics they know they're being measured by. Unfortunately, the effect is often temporary.
To see the difference between Continuous Delivery and continuous deployment, consider that Continuous Delivery is possible even in high-risk environments with a low tolerance for change, like a nuclear power plant. In such situations, the focus isn't on continuously deploying updates to plant software and infrastructure, but rather on continuously integrating changes and deploying them to staging infrastructure, so that even infrequent production deployments have a higher probability of success.
To be clear, continuous deployment is not a fantasy: organizations who have long focused on their Continuous Delivery pipelines and processes, like Etsy and Facebook, do deploy changes at an attractively high rate. But for organizations starting with Continuous Delivery, focusing on the path commits take through the delivery pipeline, as opposed to the rate at which they're deployed, will produce better initial results and a higher chance of permanent adoption.
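The distinction can be made concrete in the shape of the pipeline itself. In the toy sketch below (stage names and functions are illustrative, not a real CI/CD API), every commit flows automatically through build, test, and staging, but the production deploy is a separate business decision; wiring that decision to "always yes" is what turns Continuous Delivery into continuous deployment:

```python
# Toy model of a delivery pipeline with a release gate.

def run_pipeline(commit, stages, release_approved):
    """Run each automated stage in order; deploy to production only
    if the business has approved a release (Continuous Delivery).
    Passing release_approved=True for every commit would make this
    continuous *deployment* instead."""
    for name, stage in stages:
        if not stage(commit):
            return f"failed at {name}"
    if release_approved:
        return "deployed to production"
    return "release candidate ready (awaiting business decision)"

stages = [
    ("build", lambda c: True),
    ("unit tests", lambda c: True),
    ("deploy to staging", lambda c: True),
]

print(run_pipeline("abc123", stages, release_approved=False))
print(run_pipeline("abc123", stages, release_approved=True))
```

The point of the gate is that the *capability* to deploy is always present; how often it is exercised is a business choice.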
3. Complicated or Inconsistent Source Code Workflows
Continuous Delivery father Jez Humble often conducts an informal survey during talks. He'll ask the audience to raise their hand if they do continuous integration. Most hands go up. His second question: "Put them down if all the devs on your team don't check into trunk/mainline/master at least once a day." Many of the hands invariably go down.
This survey suggests that feature branches and complex branching/merging processes are an enemy of continuous integration, and therefore of CD. In fact, Humble argues that in its purest form, continuous integration requires no branch other than (what Git calls) the master branch.
This is especially hard for Git users to accept, because Git makes it so easy to create and publish any number of feature branches. But overuse of, and reliance on, unmerged codelines can make it difficult for anyone to track how code flows through a delivery pipeline. Adding release branches and other automation-related branches can quickly create enough complexity that even simple requests, like "show me all the commits that are in this release, but not in the integration branch," become almost impossible to answer.
This doesn’t mean that Git is a bad tool for CD, but users should know that complicated branching/merging and different individual workflows will make Continuous Delivery very difficult, regardless of the tools used.
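In a simple two-branch world, that question does have a direct answer: Git's two-dot range syntax lists commits reachable from one ref but not the other. A small sketch (the branch names `release` and `integration` are hypothetical) shows the query; the trouble begins when dozens of long-lived branches make it unclear which pair of refs to even compare:

```python
import subprocess

def commits_in(release, integration, repo="."):
    """List commits reachable from `release` but not from
    `integration`, using git's two-dot range syntax."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--oneline",
         f"{integration}..{release}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()
```

With trunk-based development the equivalent question is trivially answerable from a single history, which is part of Humble's argument.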
4. Attempting to Switch to Continuous Delivery Without any Supporting Infrastructure
The ability to deploy code to production at a moment's notice requires visibility into the state of your application, which in turn may require configuration management, monitoring and alarms, and well-defined rollback conditions. Setting up this infrastructure is not very exciting, and new features often get priority and investment instead of infrastructure support. This is a mistake that will make implementing CD more difficult.
A great example of this is Mozilla Corporation's move from 18+ month release cycles for its Firefox web browser to six week release trains. The move required a massive investment in QA process and infrastructure, including a cultural change so that every component of the browser and every bug fixed had a comprehensive suite of unit tests, ensuring that any mistakes that would "break the web" would be easy to spot as the rate of change to the code-base increased. It required an investment in build/release infrastructure to support running all of these tests on the supported platforms and mobile devices at a scale previously not encountered.
In short, it required that the business invest heavily in infrastructure which, to put it frankly, isn't sexy and usually doesn't provide direct business value. In many software shops, infrastructure has already been chronically underinvested in, so they begin their CD initiative at a disadvantage. Successful Continuous Delivery stories commonly emphasize the organization's commitment to investing in infrastructure throughout the CD transformation and beyond.
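The kind of infrastructure in question (monitoring, alarms, and rollback conditions) can be sketched in miniature. In the sketch below, `deploy`, `health_check`, and `rollback` are hypothetical stand-ins for whatever real deployment and monitoring hooks an organization uses; the point is that a deploy is never "fire and forget":

```python
# Sketch: a deploy step guarded by a rollback condition.
import time

def deploy_with_rollback(deploy, health_check, rollback,
                         checks=3, interval=0.0):
    """Deploy, then poll the application's health; roll back on the
    first failed check instead of leaving a broken release live."""
    deploy()
    for _ in range(checks):
        time.sleep(interval)
        if not health_check():
            rollback()
            return "rolled back"
    return "deploy healthy"

# e.g. a release whose second post-deploy health check fails:
results = iter([True, False])
status = deploy_with_rollback(
    deploy=lambda: None,
    health_check=lambda: next(results),
    rollback=lambda: None,
)
print(status)
```

None of this ships a feature, which is exactly why it is so often underfunded, and exactly why skipping it undermines CD.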
5. Difficult, but Worth It
Moving to Continuous Delivery as the release mechanism for your software is not a trivial undertaking. Similar to implementing DevOps, it requires committed investment in new tooling and a re-examination of the organizational culture across all technical and business teams before any benefits can be gained. This can be very difficult if the cultural foundation for Continuous Delivery is initially unstable. It's even harder if you've fallen into one of the pitfalls mentioned in this article and still believe that you're going down the right path. It takes work, humility, perseverance, and a commitment to balancing feature development with infrastructure development. If your organization is willing to put in the extra effort to get things right, you will put your business on the path to being able to reliably ship software at any time.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)