Devops: What It Is and Why You Should Be Doing It
Devops is a big deal nowadays, and there’s a variety of ways people describe it and its benefits. Here we’ll move past the fluffy characterization involving developers and operations working together joyously—not to mention the outright wrong characterization of one superrole that does it all—and get to the heart of what devops is really about. Then we’ll go into what’s driving it, and why it’s something you should be doing.
What is devops?
Here’s my take, fluff-free:
Devops is the push to bring the development, test, infrastructure and operations domains under software management and control. It does this through the aggressive pursuit of integration, standardization and automation.
This characterization explains why developers tend to be more excited about devops than ops is. The very last thing ops wants is a bunch of software developers telling them how to manage configuration files, deploy packages or monitor servers. I’m on the development side myself, and for me devops is fun and exciting. But from the other side it has to feel something like a hostile takeover, especially in light of the history of strained relationships between development and IT.
So why does everybody keep talking about dev and ops working together? It’s because both sides have important roles to play:
- First, there’s relevant technical expertise on both sides. We need people who know how to code against the AWS APIs, how to integrate tools using web services and message buses, how to build self-service user interfaces for app deployments and so on. But we also need people who can create reasonable server builds, patch them when there are issues, script against the F5s to support zero-downtime software deployments, etc.
- Second, there’s also relevant domain expertise on both sides. I suspect this is a lot more obvious to developers than to ops: most developers would agree that there’s a point beyond which their understanding of infrastructure becomes hazy. But it turns out that development has its own important infrastructure, such as SCM, configuration repos, CI/build servers, Maven repos, test automation and more. This infrastructure is very much a part of the picture, and here the developers are the experts.
So it’s true that devops involves dev and ops working together, but that’s a byproduct of what’s really going on, which is using software to run increasingly larger parts of the IT show. That makes a lot of people uncomfortable, but it’s happening all over the place, ready or not.
Now let’s see what’s driving the trend.
What’s driving devops?
There are a couple of major forces driving devops.
Cloud and virtualization. The first force is of course the emergence of services and tools that either take advantage of cloud and virtualization technologies, or else offer a way to manage the additional complexity they bring. Examples in the former category include cloud provisioning APIs like Amazon EC2, or SaaS services like New Relic and Loggly, which offer cloud-based turnkey operational capabilities. Examples in the latter category include configuration management tools like Chef and Puppet. As VMs blink in and out of existence, there has to be some way to manage everything that’s happening.
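To make the configuration-management point concrete, here is a minimal sketch of the declarative, idempotent model that tools like Chef and Puppet are built on. The `ensure_line` helper and the file it touches are invented for illustration; they are not part of any real Chef or Puppet API:

```python
from pathlib import Path

def ensure_line(path, line):
    """Idempotent resource in the Chef/Puppet style: declare the desired
    state ("this line must be present in this file") rather than the
    steps to get there. Running it a second time changes nothing."""
    p = Path(path)
    existing = p.read_text().splitlines() if p.exists() else []
    if line in existing:
        return False  # already converged; nothing to do
    p.write_text("\n".join(existing + [line]) + "\n")
    return True       # a change was made
```

Because each run converges toward the declared state instead of blindly replaying steps, the same definition can safely manage a VM that was launched five minutes ago or five months ago.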
Continuous delivery. There’s a second force too: software engineering itself is maturing. We are seeing growing consensus on software development best practices, in terms of both design and development process/methodology. While it would be overstating the case to say that software developers are automatically engineers, it is no exaggeration to say that plenty of developers practice development as an engineering discipline. Proper use of source control, versioned configuration, continuous integration, unit and integration testing, automated deployment—these are all engineering activities designed to make development predictable and repeatable.
This second force drives devops in the following way. As a general rule, forward steps in software engineering lead toward the continuous delivery of software, which is essentially the ability to deliver high-quality software to users within hours or days of starting work on a development task, as opposed to weeks or months. (Obviously it depends on the size of the task, but the idea is that if you can finish the development task in an hour, your delivery capability shouldn’t prevent you from releasing it hours later. It’s fine if your product folks want to prevent it, but it’s not fine if your capability prevents it.) Continuous delivery involves integrating a bunch of different tools into a deployment pipeline, and this pipeline—which includes everything from the developer’s IDE to monitoring and diagnostic tools in the operational environment—spans both the development and operations toolsets. This drive toward continuous delivery in turn drives the need for development and operations to pull together.
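The pipeline idea can be sketched in a few lines: an artifact advances through a sequence of automated gates, and a failure at any gate stops it from reaching production. The stage names and checks below are invented for illustration, not taken from any particular CI tool:

```python
def run_pipeline(artifact, stages):
    """Run `artifact` through each (name, check) stage in order, stopping
    at the first failure. Returns the list of stages that passed."""
    passed = []
    for name, check in stages:
        if not check(artifact):
            break  # failed gate: the artifact never reaches production
        passed.append(name)
    return passed

# Hypothetical stages spanning dev and ops toolsets:
stages = [
    ("commit-build",      lambda a: a["compiles"]),
    ("automated-tests",   lambda a: a["tests_pass"]),
    ("deploy-to-staging", lambda a: True),
]
```

Real pipelines wire many tools together (SCM, build servers, artifact repos, deployment and monitoring tools), but the control flow is exactly this: every stage is automated, and only passing artifacts move forward.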
The previous discussion offers a hint as to why we might want to pursue a devops-style technology practice: it allows us to deliver more quickly to our customers. But there’s more to it than just that.
The devops value proposition
We mentioned earlier the historically strained relationship between development and ops. There are numerous reasons, such as different bodies of best practice (e.g. agile vs. ITIL) and the fact that they’re often separate organizations, which tends to create barriers. Where I work, the IT team rides motorcycles, whereas we on the development side drive Toyotas. I’m not sure why, but that’s how it is.
One of the key points of conflict, however, is different attitudes toward change. In most operational environments, change trades against stability. In other words, the more stuff you push out into production, the more likely you are to break something. Ops doesn’t like it when things break, because they’re the ones who have to deal with the fallout. But developers are under pressure to push changes into production, and in fact to accelerate the rate of change.
These different perspectives on change produce the dysfunctional relationship that often emerges between dev and ops. Development pushes out a bunch of new features but doesn’t have time to update the knowledge base, and when something breaks, ops doesn’t know what to do. Ops calls for a change moratorium, dev misses release targets and blames ops, ops blames dev right back for producing unsupportable software, and so on.
What devops and continuous delivery bring to the table is the ability to replace the change-or-stability dilemma with a world where increased rates of change actually enhance stability, because change surfaces broadly visible issues that admit of a common and permanent fix. For example, if there’s only one deployment tool, and it’s not able to handle the delay between the time an instance comes up and the time SSH becomes available, then everybody will see the problem, and somebody will definitely fix it. And it’s easy to justify putting a correct fix in place (not an ugly hack) because potentially hundreds of apps depend on it. Moreover, a proper devops practice enables speed and stability improvements beyond what is possible without it:
Integration, standardization and automation are transformational, because they create a situation in which change no longer hurts. Dozens or even hundreds of packages flow through the deployment pipeline every single day, and if there’s a problem, it’s often a common problem with a systematic and permanent fix. Dev and ops are aligned around delivering software to customers quickly and at volume, as well as around maintaining a smoothly running operational environment.
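As a concrete instance of the kind of common, permanent fix described above, a deployment tool can close the gap between instance boot and SSH availability with a simple readiness poll. This is a hedged sketch; the `wait_for_port` helper is hypothetical, not taken from any particular tool:

```python
import socket
import time

def wait_for_port(host, port, timeout=120.0, interval=2.0):
    """Poll until a TCP port accepts connections (e.g., sshd on port 22
    of a freshly launched instance), or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True   # port is open; safe to start the deployment
        except OSError:
            time.sleep(interval)  # instance is up but the service isn't ready
    return False
```

Once a fix like this lives in the shared pipeline, every application that deploys through it benefits, which is exactly why a correct fix is easy to justify over a per-team hack.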
Want to learn more?
I haven’t said much about exactly how devops helps. I’ve really made some simple assertions to the effect that it does. That’s fine for an overview, but you may want to hear some details. The purpose of this blog is to explore exactly those details. So if you’re interested, check back as I’ll be posting more.
And for more information on continuous delivery in particular, it’s hard to beat Continuous Delivery, by Jez Humble and David Farley. It explains the what, why and how. It has been the playbook for the automation platform we’re building at work, which currently has 20,000 automated deployments under its belt.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)