Deployment is the new build (part 3)
Earlier this year, I was invited to present a talk at Devopsdays Boston about deployment as the new build: how deployments are carried out now, how they will need to adapt in a more virtualized, on-demand application landscape, and what fundamental improvements will need to come before deployment matures into the invisible, it-just-works™ experience that build is today.
In the previous post, we looked at how Reusable commands, Models and Conventions++ helped turn build from a “black box” process into the “just works” experience we know today.
We then shifted back to deployment and identified Develop a common model, (Re)discover vanilla and Support a “clean build” as three key steps required to achieve a similar transition.
Develop a common model
Before we can advance to the ‘model’ stage, we first…well…need a model. Thankfully, a very simple one can suffice: Packages, Environments and Deployments.
There’s nothing particularly magical about this; indeed, the concepts are commonly found in most organisations. But giving them explicit labels not only helps formalize the ideas and gives developers and vendors something concrete to support; it also creates a shared vocabulary and language around deployment, which is the first step towards shared understanding and reusable functionality.
Indeed, the concepts are so basic that there does not appear to be much to say about them.
Packages capture the components of the versioned item to be released, both artifacts represented by actual files as well as configuration, resource settings and metadata.
In accordance with release management best practice, packages should be stored in a definitive software library (DSL) and should be independent of the target environment, so that the same “gold standard” package runs in Development, Test, QA and Production.
Packages also mean that we can version everything, not just the application binaries but also the related configuration and environment settings.
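To make the idea concrete, here is a minimal sketch (all names and values are hypothetical) of an environment-independent package: the settings are templates, and per-environment values are substituted only at deployment time, so the same versioned package runs everywhere.

```python
# Hypothetical sketch: one environment-independent "gold standard" package
# whose templated settings are resolved against per-environment values.
import string

PACKAGE = {
    "name": "petclinic",
    "version": "2.1.0",
    "artifacts": ["petclinic.ear"],
    # Settings reference environment values via ${...} placeholders.
    "settings": {"datasource.url": "jdbc:oracle:thin:@${db_host}:1521/PET"},
}

ENVIRONMENTS = {
    "Test": {"db_host": "db-test.example.com"},
    "Production": {"db_host": "db-prod.example.com"},
}

def resolve(package, env_name):
    """Return the package settings with environment-specific values filled in."""
    values = ENVIRONMENTS[env_name]
    return {
        key: string.Template(template).substitute(values)
        for key, template in package["settings"].items()
    }

print(resolve(PACKAGE, "Test")["datasource.url"])
# jdbc:oracle:thin:@db-test.example.com:1521/PET
```

The package itself never changes between environments; only the substitution values do, which is exactly what makes the whole thing versionable.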
Development and Test just mentioned are examples of Environments, simply collections of infrastructure – physical, virtual, long-running, on-demand, whatever – that applications run in as they progress through the ALM cycle, potentially with approvals or other checkpoints governing the transition from one to the next.
Deployment, then, is perhaps the one concept not immediately widely understood. A Deployment represents not just the activity of getting a Package running in a certain Environment, with a start and stop time, executing user, status and so forth.
It also documents the way in which the Package’s components have been deployed and, if applicable, customized. For instance, a Deployment will record that a certain EAR file in the package has been deployed to particular target servers or clusters, or that the data source password for this specific environment has been customized and set to a new value.
Recording this information is critical: it is very hard to modify an application’s state intelligently and correctly – when upgrading to a new version, for instance, or adding new servers to the target cluster – if you do not know where, and with which settings, the application is currently running.
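The three concepts and what a Deployment records can be sketched as simple data structures – these class and field names are illustrative assumptions, not any specific vendor’s API:

```python
# A minimal sketch of the Package / Environment / Deployment model.
from dataclasses import dataclass, field

@dataclass
class Package:
    name: str
    version: str
    artifacts: list          # e.g. ["shop.ear"]

@dataclass
class Environment:
    name: str                # e.g. "Test", "Production"
    servers: list

@dataclass
class Deployment:
    package: Package
    environment: Environment
    status: str = "pending"
    # Records where each artifact went and which settings were customized,
    # so a later upgrade knows the application's current state.
    mapping: dict = field(default_factory=dict)
    customizations: dict = field(default_factory=dict)

deployment = Deployment(
    package=Package("shop", "1.4", ["shop.ear"]),
    environment=Environment("Test", ["app-server-1", "app-server-2"]),
)
deployment.mapping["shop.ear"] = ["app-server-1", "app-server-2"]
deployment.customizations["datasource.password"] = "********"
deployment.status = "done"
```

A later upgrade can now consult `deployment.mapping` and `deployment.customizations` instead of guessing at the environment’s current state.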
(Re)discover vanilla
If we are going to achieve hassle-free, push-button deployments, another thing we will have to reconsider is whether we really need to tweak and customize our infrastructure in every way possible. Indeed, some companies almost seem to have a policy that any setting left at its default should be regarded with suspicion and, preferably, changed.
Much as custom project layouts made setting up a build unnecessarily tedious and complicated in a convention- and model-based system, stubbornly refusing to go with infrastructure defaults will make it harder to get hassle-free deployments that truly cover all the steps required.
Sticking with defaults not only encourages reuse, because the chances are much higher that a solution developed for a different scenario will also work in yours; it also improves maintainability and cuts down on the risk of “ripple” changes, where a custom setting on the servers hosting application X requires further changes to the setup of application Y, and so on.
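One way to keep environments “vanilla” is to record only the deviations from the defaults, so that most environments need no configuration at all and every customization is an explicit, visible decision. A small sketch, with made-up setting names:

```python
# Illustrative sketch: vendor defaults plus an explicit, minimal set of
# overrides. A "vanilla" environment supplies no overrides at all.
DEFAULTS = {"heap_size": "512m", "thread_pool": 50, "session_timeout": 30}

def effective_settings(overrides=None):
    """Vendor defaults, with the (hopefully few) explicit deviations applied."""
    settings = dict(DEFAULTS)
    settings.update(overrides or {})
    return settings

# A vanilla environment: nothing to declare, nothing to maintain.
vanilla = effective_settings()

# One deviation stays a one-line, self-documenting change.
tuned = effective_settings({"heap_size": "1024m"})
print(tuned["heap_size"])  # 1024m
```

The size of the overrides dictionary becomes a rough measure of how far an environment has drifted from vanilla.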
Support a “clean build”
When building a large project, we try to cut down on the time taken by recompiling only the source code that has been modified. When deploying applications, we similarly want to save time when upgrading to a new version, especially when this time represents production downtime.
However, we also know that, eventually, some parts of any incremental build will end up going out of sync, causing strange compilation problems, or features or fixes not appearing when they should.
What do we do in such a case? Do we laboriously try to track down the files that are out of sync and rebuild piece by piece? No, we simply run a clean build to start from scratch, because in 99% of cases it’s much quicker to simply rebuild than try to track down the cause of the problem.
In deployment-land, we seldom have the ability to clean build, and this is one of the main causes for the stressful, time- and resource-consuming troubleshooting hunts that are still far too common. Of course, in order to clean build a system we need full versioning of the environment, its configuration and the applications deployed to it. Virtual appliances and virtualization solutions with snapshot capabilities will have a major role to play here.
We also need a known state for durable resources such as databases, which remains challenging but is being addressed by a growing number of products out there.
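A “clean build” deployment, then, means recreating the environment from its versioned description instead of patching it in place. The sketch below assumes the environment image, package version and settings are all fully versioned; the step names are illustrative, standing in for real provisioning and deployment tooling:

```python
# Sketch of a "clean build" deployment: rather than incrementally patching
# a possibly out-of-sync environment, rebuild it from versioned state.
VERSIONED_STATE = {
    "base_image": "appserver-snapshot-2011-10",  # e.g. a VM snapshot
    "package": ("shop", "1.4"),                  # the gold-standard package
    "settings": {"heap_size": "512m"},           # versioned configuration
}

def clean_deploy(state):
    """Rebuild an environment from scratch out of versioned state."""
    steps = []
    steps.append(f"provision {state['base_image']}")            # fresh infrastructure
    steps.append(f"apply settings {sorted(state['settings'])}")  # versioned config
    name, version = state["package"]
    steps.append(f"deploy {name}-{version}")                     # deploy the package
    return steps

for step in clean_deploy(VERSIONED_STATE):
    print(step)
```

Because every input is versioned, the result is reproducible: running the same clean deploy twice yields the same environment, which is precisely what incremental patching cannot guarantee.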
Push button deployments
Taking stock, it’s clear that there is still some way to go. We’re slowly developing a common model, but both “(Re)discover vanilla” and “Support a ‘clean build’” are visions not quite yet on the horizon of most large companies.
In fact, it’s not so much technological advances that are required – many startups are pretty close to push-button deployments and continuous delivery. Indeed, the “poster children” of this movement already have setups where every commit can pass through an entire regression, integration and performance testing suite and potentially go straight to production.
No, the important hurdles to be taken are procedural and mental, changing rusty ways of working and entrenched mindsets. For those that can make it, though, the benefits in terms of accelerated business value are already proving to be game changers.