
A Battle Plan for DevOps in the Enterprise

06.18.2012

Establishing a devops platform in an enterprise environment is challenging because there are a bunch of groups who own different pieces of the puzzle, and they will generally have different ideas on how to move forward. But there’s a way to pull it all together into a coherent, integrated data and tools platform, and this post will explain how my colleagues and I are doing it.

There are three phases:

  1. Establish an initial closed loop around configuration management data and app deployment automation.
  2. Grow your data set by onboarding additional apps into your deployment automation.
  3. Support integration by adopting proper platform discipline.

Let’s look at each of the phases in more detail.

Phase 1: Establish an initial closed loop around your asset inventory and deployment automation

The first goal is to establish a mechanism by which you can collect good configuration management data. This isn’t quite as easy as it sounds, and a lot of things that people try (even big name consultants) flat out don’t work.

The mechanism you want is a closed loop involving configuration management (CM) data and deployment automation. The CM data powers the deployment automation, and deployments matter enough that if there's a problem with the data, someone will correct it in a hurry. At the very least your schema should involve apps, modules (the app broken down into pieces that generate packages), packages, environments, farms and instances, since you need these to support deployment automation, and it turns out that this data will make your asset inventory very useful to others once you actually have it.
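As a concrete illustration, here's a minimal sketch of that schema as Python dataclasses. The entity and field names are assumptions made for the example, not a prescribed model; a real inventory will carry more attributes than this.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical minimal asset-inventory schema; names are illustrative only.

@dataclass
class Package:
    module_name: str
    version: str
    passed_commit_stage: bool = False  # set by CI when the commit stage passes

@dataclass
class Module:
    name: str                          # a piece of the app that generates packages
    packages: List[Package] = field(default_factory=list)

@dataclass
class Instance:
    fqdn: str                          # logged by deployment automation
    ip_address: str
    deployed_package: str = ""         # "module:version" currently on this instance

@dataclass
class Farm:
    environment: str                   # e.g. "test", "stage", "prod"
    instances: List[Instance] = field(default_factory=list)

@dataclass
class App:
    name: str
    modules: List[Module] = field(default_factory=list)
    farms: List[Farm] = field(default_factory=list)
```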

(Figure: the closed loop between the asset inventory / CM data and deployment automation.)

Automate the deployment of some of your apps just to get the whole thing started. Note that your automation should be updating your asset inventory wherever possible. For example, if you have build automation (a good idea), it should be updating packages in your asset inventory following successful builds. Test automation should be updating packages too (e.g., mark the package as having passed the commit stage when the continuous integration commit stage passes). Deployment automation should log the instances it provisions through cloud APIs (e.g., the newly created IP addresses and FQDNs), which packages it deployed to which instances, and so forth.
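To make that concrete, here's a hedged sketch of the kinds of hooks those pipelines might call. The inventory URL, endpoints, and payload shapes are all hypothetical; the point is simply that every stage of the automation writes what it knows back to the asset inventory.

```python
import requests  # assumes the asset inventory exposes a simple REST API

INVENTORY = "https://inventory.example.com/api"  # hypothetical endpoint

def register_package(app, module, version):
    """Called by build automation after a successful build."""
    requests.post(f"{INVENTORY}/apps/{app}/modules/{module}/packages",
                  json={"version": version}, timeout=10)

def mark_commit_stage_passed(app, module, version):
    """Called by the CI pipeline when the commit stage passes."""
    requests.patch(f"{INVENTORY}/apps/{app}/modules/{module}/packages/{version}",
                   json={"passed_commit_stage": True}, timeout=10)

def record_provisioned_instance(app, farm, fqdn, ip):
    """Called by deployment automation after the cloud API returns a new instance."""
    requests.post(f"{INVENTORY}/apps/{app}/farms/{farm}/instances",
                  json={"fqdn": fqdn, "ip_address": ip}, timeout=10)
```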

You’ll find that for the apps you’ve onboarded, you’ll have outstanding CM data.

Now you have the mechanism, but you don’t have a lot of data. So it’s time for phase 2.

Phase 2: Grow your data set by onboarding apps

You have a data collection mechanism in place, but you have to feed it. To do this, you need to onboard more apps onto your deployment automation. You may have to put your marketing hat on here. Give demos to the department and to leadership showing the advantages of deployment automation, ask teams that are already using it to spread the word, etc. Hopefully the automation provides enough value that it mostly sells itself, though there will usually be teams that for whatever reason hold out. That’s OK: focus on the teams that are excited about using it.

Besides enabling faster, more predictable deployments, the benefit of all this onboarding activity will be growth in your dataset, both horizontally and vertically. Horizontally, you’ll be adding more apps, which have more modules, more farms, more instances and so on. Vertically, teams will have other kinds of data they need you to add, like middleware, databases, web services, regions, data centers, load balancers, key pairs, contacts and more. Support as much of it as makes sense given your goals, as it will make your asset data more valuable. Focus on data where you have a good story to tell about how to keep it up to date. E.g., you can keep team data up to date by using it to drive deployment ACLs.
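For example, a deployment ACL check driven by team data might look like the sketch below. The team and ownership records are invented for illustration; the important property is that a stale entry blocks a real deployment, which gives teams a strong incentive to keep the data current.

```python
# Hypothetical team/ownership data pulled from the asset inventory.
TEAMS = {
    "checkout-team": {"alice", "bob"},
}
APP_OWNERS = {
    "checkout-service": "checkout-team",
}

def can_deploy(user: str, app: str, environment: str) -> bool:
    """Allow production deploys only for members of the app's owning team."""
    if environment != "prod":
        return True  # non-prod deploys are open in this sketch
    team = APP_OWNERS.get(app)
    return team is not None and user in TEAMS.get(team, set())

# Example: can_deploy("alice", "checkout-service", "prod") -> True
```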

(Figure: phase 2 — onboarding more apps to grow the asset data, horizontally and vertically.)

Now you have good momentum on your asset data.

Phase 3: Support integration through proper platform discipline

This is where things get to be a lot of fun, because instead of trying to sell all the good work you’ve been doing, now people are coming to you for data. They’ll want it for monitoring, diagnostics, runbooks, change management, patch management, reporting and security, financial analysis, sprawl management, projects, disaster recovery and more. Now you’re in true devops mode: you have good asset data, and this drives data, tool and process integration.

To realize this vision, you need a solid data platform in place. You need web services with well-defined contracts (otherwise you'll keep breaking your customers by mistake), and you'll need to protect yourself from the guy who wants to load all 8,000 instances from your asset inventory every ten minutes with n+1 queries, and so forth. A message bus will be useful in keeping tools decoupled: when someone creates a new app in your asset inventory, for instance, you can publish an event to the bus so the runbook app can provision a new runbook for that app, etc.
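Here's a small, self-contained sketch of that decoupling. It uses an in-process stand-in for the bus (a real deployment would use something like RabbitMQ or Kafka), and the event name and payload are assumptions.

```python
from collections import defaultdict

# In-process stand-in for a message bus; event names and payloads are illustrative.
_subscribers = defaultdict(list)

def subscribe(event_type, handler):
    _subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in _subscribers[event_type]:
        handler(payload)

# The runbook tool stays decoupled from the asset inventory: it only listens.
def provision_runbook(event):
    print(f"Provisioning runbook for new app: {event['app_name']}")

subscribe("app.created", provision_runbook)

# The asset inventory publishes an event when someone creates a new app.
publish("app.created", {"app_name": "checkout-service"})
```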

When all is said and done, you’ll have put together a great foundation for your devops efforts.

Published at DZone with permission of Willie Wheeler, author and DZone MVB.
