Carlos Sanchez is Co-Founder & Architect of MaestroDev, a company building a DevOps orchestration engine for Continuous Delivery, Agile development, DevOps, and cloud federation. Highly committed to open source, he is a member of the Apache Software Foundation among other groups, has contributed to a variety of projects, including Apache Maven, Continuum, Archiva, Spring Security, and Fog, and regularly speaks at conferences around the world.

What Developers do Today for Source-to-Deploy is Not Enough

02.16.2012
Developer toolset

From the developer's point of view, there are several kinds of tools involved in the source-to-deploy process:

  • Source control management tools: Subversion, Git, Mercurial, Perforce,…
  • Build tools: Maven, Ant, Ivy, Buildr, Gradle, Rake,…
  • Continuous Integration tools: Continuum, Jenkins, Hudson, Bamboo,…
  • Repository (Artifact) management tools: Archiva, Nexus, Artifactory,…

The #1 programmer excuse for legitimately slacking off: My code is compiling

When everything is wired together, we can have a CI setup that automatically builds changes from the SCM as they are committed, deploys the result of the build to an artifact repository, or sends a notification if there is any error. Everything fully automated: a change is made in SCM, the CI server kicks in, builds, and runs all sorts of tests (unit, functional, integration,…) while you go off for a sword fight with your coworkers.
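The commit-to-artifact flow above can be sketched in a few lines of Ruby — a hypothetical toy model, not any particular CI server's API; the stage names and outcomes are illustrative only:

```ruby
# Toy model of a CI run: each stage is a name plus a step returning
# true on success. On failure the team is notified; if everything is
# green, the artifact goes to the repository.
def run_pipeline(stages)
  stages.each do |name, step|
    unless step.call
      return "notify: #{name} failed"   # CI server alerts the team
    end
  end
  "deploy artifact to repository"       # all stages passed
end

stages = {
  'unit tests'        => -> { true },
  'functional tests'  => -> { true },
  'integration tests' => -> { true },
}
puts run_pipeline(stages)
```

The point is only that the whole decision — test, then publish or alert — happens without a human in the loop.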

Now what? Does somebody email the tarball, zipfile,… to the operations team? Oh no, that would be too crude. Just send them the URL to download it… and even better, send some instructions, a changelog, an upgrade task list,…

What developers do today to specify deployments and target environments is not enough. 

The simplest solutions are often the cleverest. They are also usually wrong.

Using tools like Maven in the Java world or Bundler in Rubyland, you can explicitly list all the dependencies and versions you need. But there are some critical dependencies that are never declared.
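With Bundler, for instance, application-level dependencies are pinned in a Gemfile (the gems and versions here are illustrative):

```ruby
# Gemfile — every gem and its version is declared explicitly
source 'https://rubygems.org'

gem 'rails', '3.2.1'        # exact version
gem 'pg',    '~> 0.12.0'    # any patch release of 0.12
```

Everything at this level is explicit and repeatable — which is exactly what makes the missing levels below it stand out.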

It is just too simple.

Installed packages, C libraries, databases, all sorts of OS- and service-level configuration,… That's the next level of dependencies that should be explicitly listed and automated.

For example, think about versions of libc or PostgreSQL, the number of connections allowed, open ports, the open file descriptor limit,…
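Configuration management tools can push that level down into code as well. A sketch in Chef's Ruby DSL — one tool of this kind, not one named in the article, and the resources here are illustrative (Puppet and similar tools work comparably):

```ruby
# Declare the OS-level dependency instead of assuming it is there
package 'postgresql' do
  version '9.1'
  action  :install
end

# Make sure the service is running and starts on boot
service 'postgresql' do
  action [:enable, :start]
end

# Service-level settings (max_connections, ports, descriptor limits)
# become a managed template rather than a hand-edited file
template '/etc/postgresql/9.1/main/postgresql.conf' do
  source 'postgresql.conf.erb'
end
```

Once these are in code, they can be versioned, reviewed, and applied automatically, just like the application's own dependencies.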

Operations

Requirements

From the point of view of the operations team, the list of requirements is complex: operating system, kernel version, config files, packages installed,…

And then multiply that by the several stage configurations, which most likely won't be exactly the same:

  • dev
  • QA
  • pre-production
  • production
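One way to keep those per-stage differences from living only in people's heads is to capture them as data. The stage names come from the list above; the settings themselves are hypothetical:

```ruby
# Hypothetical per-stage settings; real ones would also cover
# packages, kernel parameters, config files, etc.
ENVIRONMENTS = {
  'dev'            => { app_servers: 1,  db_pool: 5,   debug: true  },
  'QA'             => { app_servers: 2,  db_pool: 10,  debug: true  },
  'pre-production' => { app_servers: 4,  db_pool: 50,  debug: false },
  'production'     => { app_servers: 12, db_pool: 100, debug: false },
}

def settings_for(stage)
  ENVIRONMENTS.fetch(stage)   # fail fast on an unknown stage
end
```

The differences between stages are then explicit, diffable, and checkable, instead of being rediscovered at deploy time.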

Deployment

Deployment of the artifacts produced by the development team is always a challenge:

  • How do I deploy this?
  • Reading the documentation provided by the development team?
  • Executing some manual steps?

That is obviously error-prone.
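The alternative is to turn the documentation into an executable script, so the deployment runs the same way every time. A minimal hypothetical sketch — the step names and bodies are stand-ins, not a real deployment:

```ruby
# Hypothetical deploy: ordered steps replace the emailed instructions.
# Each step returns true on success; the run aborts on the first failure.
DEPLOY_STEPS = [
  ['stop service',     -> { true }],
  ['unpack artifact',  -> { true }],
  ['migrate database', -> { true }],
  ['start service',    -> { true }],
]

def deploy(steps)
  log = []
  steps.each do |name, step|
    raise "deploy aborted at: #{name}" unless step.call
    log << name
  end
  log   # record of what actually ran, in order
end
```

A failed step stops the run with an explicit error instead of leaving someone halfway through a manual checklist.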

Cloud

It’s nothing new, but it has become more pressing with the proliferation of cloud-based environments, which make it easier and easier to run dozens or hundreds of servers at any point in time. Even if you know how to deploy to one server, how do you deploy to all those servers? What connections need to be established between them? How is it going to affect the network?
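Once the single-server deploy is scripted, fanning it out at least becomes mechanical: the same step applied to every host in a list. A hypothetical sketch — real tools would run this over SSH with proper error handling:

```ruby
# Hypothetical fan-out: apply the same deploy action to every host,
# a bounded number at a time.
def deploy_all(hosts, concurrency: 10)
  results = {}
  mutex   = Mutex.new
  hosts.each_slice(concurrency) do |batch|
    batch.map do |host|
      Thread.new do
        outcome = 'deployed'   # stand-in for the real per-host deploy
        mutex.synchronize { results[host] = outcome }
      end
    end.each(&:join)
  end
  results
end

hosts = (1..100).map { |n| "app#{n}.example.com" }
deploy_all(hosts)   # one result per host
```

It doesn't answer the networking questions, but it removes the "log in to a hundred boxes" part of the problem.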


Source:  http://blog.carlossanchez.eu/2012/02/16/devops-how-we-got-here/
Published at DZone with permission of Carlos Sanchez, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)