
Kief is a software delivery consultant with ThoughtWorks in London, specializing in tools, practices, and processes for the continuous delivery of software.

Extreme Change Management

10.16.2013

We need to change the way we talk about change management. New technologies, practices, and commercial pressures have made traditional change management approaches difficult to apply effectively. Traditionalists view these new ways of working as irresponsible and inapplicable in an enterprise environment. Others have decided that change management is obsolete in a world where organizations need to be highly responsive to commercial realities.

Both of these are wrong.

There is no tradeoff between rapid delivery and reliable operations. New technologies such as cloud and infrastructure automation, plus agile approaches like DevOps and Continuous Delivery, allow changes to be made even more reliably, with far more rigorous control, than traditional approaches. There’s a useful parallel to Extreme Programming (XP), which takes quality assurance for software development to “the extreme”, with automated tests written and run in conjunction with the code they test, and run repeatedly and continuously throughout the development process.

The same is true with modern IT infrastructure approaches. Automation and intensive collaboration enable the business to make small changes safely, thoroughly validate the safety of each change before rolling it out, immediately evaluate its impact once live, and rapidly decide on and implement new changes in response to this information. The goal is to maximize both business value and operational quality.

The key is very tight loops involving very small changesets. When making a very small change, it’s easy to understand how to measure its impact on operational stability. The team can add monitoring metrics and alerts as appropriate for the change, deploy it to accurate testing environments for automated regression testing, and carry out whatever human testing and auditing can’t be automated. When this is done frequently, the work for each change is small, and the team becomes very efficient at carrying out the change process.
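As a rough illustration of this loop, here is a minimal sketch of a per-change pipeline. All of the stage names and checks are hypothetical, not a real pipeline definition; the point is that every small changeset passes through the same ordered validation stages, and a failure in a small change is easy to pinpoint.

```python
# Sketch of a tight change loop: each small changeset runs through every
# validation stage in order, stopping at the first failure.
# Stage names and checks below are illustrative only.

def run_pipeline(change, stages):
    """Run a changeset through validation stages; stop at the first failure."""
    results = []
    for name, check in stages:
        passed = check(change)
        results.append((name, passed))
        if not passed:
            break  # with a small changeset, the failing stage is easy to diagnose
    return results

# A tiny changeset and three illustrative stages.
change = {"id": "chg-042", "files_touched": 2}

stages = [
    ("unit-tests", lambda c: True),                       # automated regression tests
    ("small-enough", lambda c: c["files_touched"] <= 5),  # keep changesets small
    ("staging-smoke", lambda c: True),                    # deploy to test env, smoke-check
]

print(run_pipeline(change, stages))
# [('unit-tests', True), ('small-enough', True), ('staging-smoke', True)]
```

Because each run does so little, the team exercises the full process many times a day and gets very good at it, which is exactly the efficiency effect described above.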

It’s good to validate changes before and after applying them to ensure they won’t cause operational problems. So, it must be even better to do this validation continuously, as each change is being worked out, rather than periodically (monthly or weekly).

It’s good to test that disaster recovery works correctly. So it must be even better to use disaster recovery techniques routinely as a part of normal processes for deploying changes, using Phoenix Servers or even Immutable Servers.
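The immutable-server idea can be sketched in a few lines. Everything here (`Server`, `deploy`, the version strings) is made up for illustration; the pattern is simply that a change is applied by building a replacement from a versioned image and swapping it in, never by modifying the running server — so every routine deploy rehearses the rebuild path that disaster recovery depends on.

```python
# Hedged sketch of the immutable-server pattern: never mutate a running
# server; replace it with one built from a new versioned image.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the server record cannot be changed in place
class Server:
    image_version: str

def deploy(current: Server, new_version: str) -> Server:
    """Apply a change by replacing the server entirely."""
    replacement = Server(image_version=new_version)
    # in a real system: cut traffic over to the replacement,
    # then destroy the old server
    return replacement

server = Server(image_version="v1")
server = deploy(server, "v2")
print(server.image_version)  # v2
```

Attempting to assign to `server.image_version` raises an error, which mirrors the discipline: changes arrive only through the rebuild-and-replace path.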

If it’s good to have a documented process that people should follow for making changes to systems, it must be even better to have the change process captured in a script. Unlike documentation, a script doesn’t drift out of date from actual practice, and it won’t skip steps, mistype commands, or omit key steps that certain people “just know”.
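To make that concrete, here is a toy runbook captured as code rather than as a document. The step names are invented for the example; what matters is that the script runs every step, in order, every time, and can itself be version-controlled and reviewed like any other change.

```python
# Sketch: a change runbook as a script instead of a wiki page.
# Step names below are hypothetical placeholders.

def backup_config():   return "config backed up"
def apply_change():    return "change applied"
def verify_service():  return "service healthy"

RUNBOOK = [backup_config, apply_change, verify_service]

def execute_runbook(steps):
    """Run each step in order and record what happened; no step can be skipped."""
    log = []
    for step in steps:
        log.append((step.__name__, step()))
    return log

for name, outcome in execute_runbook(RUNBOOK):
    print(f"{name}: {outcome}")
```

The returned log doubles as a record of exactly what was done, which is where the next point about auditing comes in.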

If it’s good to be able to audit changes that are made to a system, it must be even better to know that each change is automatically logged and traceable.
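A minimal sketch of what automatic traceability can look like, with entirely hypothetical names: every change is funneled through one function that appends a timestamped entry to an audit log, so the trail is a by-product of making the change rather than a separate chore someone might forget.

```python
# Sketch of automatic change auditing: applying a change and logging it
# are the same operation, so nothing can be changed without a trace.
# `apply_change` and its fields are illustrative, not a real API.

import datetime

AUDIT_LOG = []

def apply_change(target, description, actor):
    entry = {
        "target": target,
        "description": description,
        "actor": actor,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    AUDIT_LOG.append(entry)  # recorded as part of making the change
    # ... the actual change to `target` would happen here ...
    return entry

apply_change("web-01", "bump nginx worker_processes to 4", "alice")
print(len(AUDIT_LOG))  # 1
```

An auditor can then answer "who changed what, and when" by reading the log, instead of reconstructing events from memory and tickets.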

If it’s useful to have handovers so that the people responsible for operations and support can review changes and make sure they understand them, it must be even better to have continuous collaboration. This ensures those people not only fully understand the changes, but have also shaped them to meet their requirements.



Published at DZone with permission of Kief Morris, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)