4 Signposts Towards a DevOps-Friendly SDLC
In last week’s “DevOps Imperative” webcast, I mentioned that DevOps is more directional than prescriptive: the key directions to follow are more collaboration and automation, enabling your team to deliver more frequently with less risk.
Four of the principles and laws we cite most frequently in white papers and webcasts can help reinforce this direction and provide some needed checks as you transform your organization’s path from idea to value (the software development lifecycle, or SDLC, in stodgy terms) into something more DevOps friendly.
Conway’s Law
What it is: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”
Why you care: Conway’s Law points out that when we design a solution and put three teams on the project, the solution will likely have three sub-systems, and the interaction of those systems will reflect how well the three teams communicate. This is a clear warning to organizations trying to solve problems that span silos: their solution quality will be bound by the ability of the siloed teams to communicate with each other.
Often, communication and collaboration must be improved as a prerequisite to other solutions. Any tool or process you put in place that improves visibility and communication across silos will help future problem solving.
Read more: Conway’s Law on Wikipedia
Dude’s Law
What it is: Dude’s Law is modeled on Ohm’s law, which states that current equals voltage divided by resistance. Dude’s Law states that the flow of Value = Why / How.
Why you care: How we get stuff done is important. In a circuit, we need conductors to move power from point A to point B, but we should streamline the path and lower its resistance. Focusing on the “Why” and trimming out the extra resistance on the way there is key.
Teams go wrong here in different ways. Sometimes the Why is over-emphasized, especially around audit, and slow, painful processes are put in place; the How needs to improve there. We also see badly run Lean initiatives optimizing the How of dumb processes. We really need to pay attention to both Why and How to maximize the flow of Value.
When adjusting an SDLC process, start with a deep analysis of why something needs to be done; I recommend the 5 Whys technique. Only once you really understand the root needs can you tackle how the team can meet them with the right low-resistance process.
Read more: Original blog post from the Dude (David Hussman of DevJam)
Reuse/Release Equivalence Principle (REP)
What it is: “The granule of reuse is the granule of release. Only components that are released through a tracking system can be effectively reused.” Robert C. Martin’s C++ Report (1996)
Why you care: This principle has long explained good software reuse: it’s better to build libraries and know what version of a library we’re using than to copy code from project A to project B. Today, we can apply the same principle to configuration (we should know that Tomcat Server A is using server.xml configuration version 12) as well as infrastructure: we should know the version of the VM image that server is running on.
Many DevOps aficionados recommend storing all your stuff in a source control repository. This is on the right track: everything that goes out should be versioned. An SCM is the right place for source code, and an OK place for some other items; a package repository is better. We’ll be talking more about binary/artifact/package repositories in an upcoming webcast.
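Applied to configuration, REP suggests you should always be able to tell which released version a deployed file corresponds to. Here is a minimal sketch of that idea in Python, using a content hash as the version fingerprint; the manifest contents and the “server.xml v12” label are hypothetical, standing in for whatever your release tracking system produces:

```python
import hashlib

# Hypothetical released config content; in practice this comes from
# your package repository at release time.
RELEASED_CONFIG = b"<Server port='8005' shutdown='SHUTDOWN'/>\n"

# Manifest mapping the content hash of each released config to its version label.
KNOWN_VERSIONS = {
    hashlib.sha256(RELEASED_CONFIG).hexdigest(): "server.xml v12",
}

def identify_version(content: bytes) -> str:
    """Map a deployed file's content back to its released version,
    or flag it as untracked if it never went through a release."""
    return KNOWN_VERSIONS.get(hashlib.sha256(content).hexdigest(), "untracked")
```

Running `identify_version(RELEASED_CONFIG)` returns `"server.xml v12"`, while any hand-edited copy returns `"untracked"`; that drift is exactly what a tracked release process makes visible.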
Read more: Reuse maturity model
People Tend to Inconsistency
What it is: This is a key observation from Alistair Cockburn’s review of people and how they impact software development. While people are good at taking initiative and problem solving, they “…tend to inconsistency. The prediction is that methodologies requiring disciplined consistency are fragile in practice.”
Why you care: When working on any process in the SDLC, you should have a grave distrust of any step that requires people to do things correctly every time. The more things they have to do correctly, the worse the “smell” of the process. A manual deployment process with dozens of steps? Run from it screaming. Likewise, having test engineers run through detailed test cases repeatedly is not a promising strategy. Tasks that must be executed precisely according to a script (boring tasks) should be delegated to machines, while people work on creative problem solving.
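One way to act on this observation is to move the checklist itself into code, so the machine rather than a person is responsible for consistency. A minimal sketch, assuming a deployment reduced to ordered steps; the step names and no-op actions here are placeholders for real deployment tooling:

```python
def run_checklist(steps, log):
    """Execute each (description, action) pair in order, recording progress.
    If any action raises, the log shows exactly how far the run got."""
    for description, action in steps:
        log.append(f"start: {description}")
        action()
        log.append(f"done: {description}")

# Placeholder steps; real actions would invoke your deployment tools.
steps = [
    ("stop service", lambda: None),
    ("copy artifact", lambda: None),
    ("update config", lambda: None),
    ("start service", lambda: None),
]
log = []
run_checklist(steps, log)
```

Every run executes the same steps in the same order and leaves an audit trail, which is precisely the disciplined consistency people are bad at and machines are good at.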
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)