
Ranjib is a system administrator at Google. Prior to Google, Ranjib was a senior consultant with ThoughtWorks, working on private cloud implementation strategies, cloud adoption, system automation, etc. He has worked on both application development and system administration for the past six years. Prior to ThoughtWorks, Ranjib was with Persistent Systems. Ranjib did his graduation in life science and his master's in Bioinformatics, and is a staunch FOSS supporter.

4 Reasons Continuous Delivery is Now Possible

03.19.2012
I joined ThoughtWorks as a sysadmin and started working as a consultant on our client projects. Now, as I work with different offshore development teams across projects, I actually get to experience XP practices. A few days back I was discussing agile software development and its history with Chai (a veteran ThoughtWorker). I was aware of most of the tenets of agile software development and of its implementations like XP, Scrum, etc., but Chai gave me a different perspective. He explained the timing of when agile software development was triggered: though the pain it addresses was not new, the movement, or change, still took place within a certain time frame. His explanation was built around four points:

  1. Moore's law (cheap hardware helped us focus on maintainable code rather than performant code)
  2. Why lessons from large civil engineering projects are not directly applicable to software projects (detailed up-front planning and architecture fail for large software, because software is not governed by the well-defined laws of physics)
  3. OOPSLA (Object-Oriented Programming, Systems, Languages & Applications)
  4. A graph of `cost of change` vs. time, i.e. what it will cost to introduce a code change as time progresses in a software project. It grows exponentially.


This discussion made me think: why are we talking about continuous delivery so much now? Is it time to review why, two years ago, we were not making so much buzz around this?

I have worked on various CI tools and observed a variety of techniques and strategies that can be used to ensure code quality at any given time. We've known for a while that the final bottleneck remained in deployments, and this becomes more evident as the software becomes more enterprise-class and develops more integration points. But only recently have we been able to extend CI all the way to deployment. Why?

  1. Maturity in infrastructure management tools: Be it monitoring solutions or configuration management systems, all infrastructure management tool chains have matured significantly, both in capturing the infrastructure context and in integrating easily with each other.
  2. Rise of cloud computing: With the rise of the cloud, even server provisioning can be triggered programmatically (a minimal sketch follows this list). This helped us join the last dot, i.e. scaling up or down on demand: having an elastic infrastructure.
  3. Infrastructure as code: A mature infrastructure toolchain along with a cloud lets you express your infrastructure as code. Hence you can use standard CI or other tooling to test it in a sandbox environment, just like your application code (see the second sketch below). This also gives you the ability to recreate your infrastructure at will.
  4. The DevOps movement: Being able to code does not mean you can develop a software solution; you need to understand the functional domain. Similarly, you can't really exploit all the infrastructure tooling unless you understand the infrastructure itself. DevOps is a movement that encourages breaking down the silos between operations and development teams, to foster cross-collaboration for better software development. There are a lot of debates about whether it is a culture and whether it should be used as a job title, but at least to me it's a movement that has helped me connect with like-minded people, relevant web content, and awesome tools.
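
To make the cloud point concrete, here is a minimal sketch of programmatic server provisioning, written in Python against the boto3 AWS SDK. Treat it as an illustration under assumptions: the AMI ID and instance type are placeholders, and any cloud provider with an API would serve equally well.

```python
# Minimal sketch: provisioning a server programmatically on AWS with boto3.
# Assumes AWS credentials are already configured in the environment;
# the AMI ID and instance type below are placeholders, not real values.
import boto3

ec2 = boto3.resource("ec2")

# Ask the cloud for a fresh server on demand -- the "elastic" part.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()  # block until the VM is actually up
instance.reload()              # refresh attributes like the public IP
print("Provisioned %s at %s" % (instance.id, instance.public_ip_address))
```

Tearing the server down again is a single `instance.terminate()` call, which is what makes scaling down as cheap as scaling up.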
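
And for the infrastructure-as-code point, a sketch of what testing infrastructure just like application code can look like, here using pytest-style tests with the testinfra library. The SSH target, package, and file path are assumptions for illustration; the point is only that infrastructure state becomes assertable in CI.

```python
# Minimal sketch: asserting on a sandbox server's state with testinfra,
# the same way you would unit-test application code. The SSH target and
# the nginx package/config names are illustrative assumptions.
import testinfra

host = testinfra.get_host("ssh://deploy@sandbox.example.com")

def test_nginx_is_installed():
    assert host.package("nginx").is_installed

def test_nginx_is_running_and_enabled():
    nginx = host.service("nginx")
    assert nginx.is_running
    assert nginx.is_enabled

def test_app_vhost_config_exists():
    assert host.file("/etc/nginx/conf.d/app.conf").exists
```

Run from a CI job after converging the sandbox, a red build now means the infrastructure code regressed, exactly as it would for application code.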



As far as I can tell, these are the critical points that make CD, or continuous delivery, feasible now. Obviously these observations are based entirely on my own experience, and there are cases where complementary factors beyond these have helped organizations enable CD.

An alternate viewpoint from Dave Farley, co-author of the book Continuous Delivery:

Interesting, but I am afraid that I disagree.

I think that much modern infrastructure is less amenable to automation than it used to be. The rise of dedicated admin consoles and configuration wizards happened in the 1990s and early 2000s. Before that, much of this stuff was easier to script. I hope that the pressure to become CD-friendly will push manufacturers in the right direction, but there is a long way to go for many of the big names.

Clouds make CD simpler, but their absence doesn't make it impossible. None of the early implementations of CD that I worked on used clouds or those ideas at all - a nice tool, but not an essential component.

Infrastructure as code, for me, is much more about mind-set than technology. The key to this is configuration management and the will to accomplish it - even when the infrastructure in question is difficult to deal with. Of course it is easier if the infrastructure is amenable, but even then it is much more about the idea than the technology for me.

I think that DevOps is important, but again I think that this used to be better, and our industry went through a bizarre phase where we forgot what was important for a while and tried to make it more bureaucratic. DevOps feels to me more like a return to a sanity that was common-place at one time.

What I am trying to say is that CD as a practice has a long history. I was doing it on ThoughtWorks projects 7 or 8 years ago, and pieces of it before then; others were doing it a long time before that.

I am pleased at its current growing popularity, and I hope that Jez, I, and others have helped by making it more visible and standardizing some of the language, but I don't think that it was an idea that was waiting for a technology.

Versions of CD were common-place decades ago; what is important is the desire and the will to be a bit more thorough, a bit more scientific in approach - the need to think about a problem and fix it rather than assume that the problem is an inevitable consequence of doing work.

If we assume that it depends on the tech, what are the next problems that we will ignore waiting for someone else to fix them?

Just my 2c ;-)



Published at DZone with permission of Ranjib Dey, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

