
I am an author, speaker, and loud-mouth on the design of enterprise software. I work for ThoughtWorks, a software delivery and consulting company.

Continuous Delivery - TW Live 2011

07.06.2011
Agile project management and engineering practices have made great inroads in increasing the productivity and flexibility of development teams. But the ability to rapidly create high-quality software is not sufficient; software needs to be deployed to the production environment in order to realize value. Continuous Delivery is a set of techniques to reduce the time and expense of the "last mile".

In this issue, we look at a number of these techniques. Jez Humble, author of "Continuous Delivery", writes about the impediment to rapid deployment that can result from over-complex change management processes. Martin Fowler and Mike Mason discuss Feature Branching, a common development discipline that dramatically reduces the ability to create always-ready-to-release software. Steve Morgan reports on TW Live 2011, the first ThoughtWorks customer conference focused on Continuous Delivery. Lastly, we present a case study illustrating how we implemented an automated deployment pipeline for a global retailer.


Face to face

ThoughtWorkers Mike Mason (Head of Technology, Americas) and Martin Fowler (Chief Scientist) talk to Nick Hines (Global CTO - Innovation) about the pros and cons of Feature Branching.
Case study

The Client: Global retailer

A global retailer engaged ThoughtWorks to overhaul its software deployment pipeline. Within weeks, reliability had improved significantly; within a matter of months, releases were predictable and frequent, enabling the business to achieve a key strategic goal.

Changes
The transformation began with the introduction of Continuous Integration and with leveraging automation for both testing and environment configuration. We modularized multiple large, monolithic builds, which sometimes took days to complete or to uncover problems, into five smaller builds that run in a logical, predetermined order. These smaller stages (Commit, Assemble, Package, Deployment, and Regression) have meant faster, less risky builds. The technical staff is better able to divide responsibility for maintaining successful builds, increasing ownership across the IT organization, and the root cause of a failure is easier to pinpoint, correct, and prevent.
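To make the staging concrete, here is a minimal Python sketch of a pipeline that runs the five stages in order and fails fast. The stage names come from the case study, but the commands are hypothetical stand-ins; the article does not name the client's actual build tools.

```python
# Minimal sketch of the five-stage pipeline described above. The stage
# names follow the case study; the commands are hypothetical stand-ins.
import subprocess
import sys

STAGES = [
    ("Commit",     "./gradlew test"),                    # fast developer-facing checks
    ("Assemble",   "./gradlew assemble"),                # compile and link the modules
    ("Package",    "./gradlew distZip"),                 # produce a deployable artifact
    ("Deployment", "./scripts/deploy-to-test-env.sh"),   # push to a test environment
    ("Regression", "./scripts/run-regression-suite.sh"), # end-to-end regression tests
]

def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"--- {name}: {command}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # Failing fast localizes the root cause to one small stage.
            sys.exit(f"{name} stage failed (exit code {result.returncode})")
    print("All stages green: build is a release candidate")

if __name__ == "__main__":
    run_pipeline()
```

Because each stage is small and runs in a fixed order, a red build points directly at the stage, and therefore the kind of work, that broke.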

Scripts now automate environment and configuration management to speed these complicated and lengthy processes, remove errors, and ensure consistency. Virtualization is a key component of ongoing improvements, allowing any application to be dynamically provisioned, configured, and made available for testing based on the specific code modules being changed.
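As a rough illustration of what scripted configuration buys you, the sketch below applies a desired configuration idempotently: re-running it converges an environment on a known state instead of accumulating drift. The file path and settings are invented for illustration; the client's actual scripts are not shown in the article.

```python
# Idempotent configuration sketch: converge a config file on a desired
# state. The path and settings below are invented for illustration.
import json
from pathlib import Path

DESIRED = {
    "db_host": "test-db.internal",            # hypothetical values
    "cache_size_mb": 512,
    "feature_flags": {"new_checkout": False},
}

def configure(path: Path) -> bool:
    """Write the desired config only if it differs; return True if changed."""
    current = json.loads(path.read_text()) if path.exists() else None
    if current == DESIRED:
        return False  # already consistent, so re-running is harmless
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(DESIRED, indent=2))
    return True

if __name__ == "__main__":
    print("updated" if configure(Path("env/app-config.json")) else "already consistent")
```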

In addition to infrastructure and code integration, testing is now automated to a much higher degree than before, and infrastructure, not just code, is part of what gets tested. Virtualization allows the building and testing of components to be parallelized to a great extent. The pipeline has also become increasingly intelligent: when a stage completes successfully, it communicates what happened to other stages, which may trigger further testing, or cut short needless testing if a problem is raised.
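The sketch below shows one way such stage-to-stage communication might work: an upstream stage publishes its outcome, and downstream test jobs are triggered or skipped based on it. The stage names, job names, and "changed modules" payload are all invented for illustration.

```python
# Sketch of stage-to-stage communication: a stage publishes its outcome,
# and downstream jobs are triggered or skipped. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class StageResult:
    stage: str
    succeeded: bool
    changed_modules: set[str] = field(default_factory=set)

def plan_downstream(result: StageResult) -> list[str]:
    """Choose which downstream test jobs to run, given an upstream result."""
    if not result.succeeded:
        return []  # cut short needless testing when a problem is raised
    jobs = ["smoke-tests"]  # always worth running after a green stage
    if "payments" in result.changed_modules:
        jobs.append("payments-regression")  # extra tests only where code changed
    return jobs

print(plan_downstream(StageResult("Package", True, {"payments"})))
# -> ['smoke-tests', 'payments-regression']
print(plan_downstream(StageResult("Package", False)))
# -> []
```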

Every night, an automatic deploy process kicks off for all applications. Regression test suites are triggered and the results are distributed and displayed on dashboards available not only to IT but to business representatives. And because the infrastructure itself is part of the test suites, the client is assured that a successful deployment test means the code can be delivered into production with no errors, no conflicts – no surprises. Everyone knows what is deployable, and what is not.
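A hedged sketch of what such a nightly job might look like: deploy each application, run its regression suite, and publish a pass/fail summary that a dashboard can render for both IT and the business. The application names and the deploy_and_test helper are hypothetical; the real deploy and regression steps are elided.

```python
# Sketch of the nightly job: deploy every application, run regression,
# and publish a summary a dashboard can display. Names are hypothetical.
import datetime
import json

APPLICATIONS = ["storefront", "inventory", "payments"]  # invented names

def deploy_and_test(app: str) -> bool:
    """Placeholder for the real deploy + regression run for one application."""
    # The actual pipeline invocation is elided; assume success here.
    return True

def nightly_run() -> None:
    summary = {
        "date": datetime.date.today().isoformat(),
        "results": {app: "deployable" if deploy_and_test(app) else "blocked"
                    for app in APPLICATIONS},
    }
    print(json.dumps(summary, indent=2))  # a dashboard would render this

if __name__ == "__main__":
    nightly_run()
```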

Outcomes
Within a few months of the start of the engagement, the client began realizing benefits. It was able to launch a new brand online in a timeframe that would have been impossible to achieve before. In a few more months, the release cycle was cut from yearly to monthly with increased quality, and production rollbacks, once typical, became a rare exception.

The business now has confidence that it can get changes released with a drastically shorter lead time. Overall value from IT has increased as timeframes have decreased. Going forward, advanced dynamic virtualization techniques will cut the testing cycle time by a further 50 percent or more, bringing the client close to a true continuous delivery capability.

From http://www.thoughtworks.com/perspectives/30-06-2011-continuous-delivery

Published at DZone with permission of Martin Fowler, author and DZone MVB.

