
An early believer in the ability of Java to deliver "enterprise-grade" software, Andrew Phillips quickly focused on the development of high-throughput, resilient and scalable J2EE applications. Specializing in concurrency and high-performance development, Andrew gained substantial experience of the intricacies, complexities and challenges of enterprise application environments while working for a succession of multinationals. Continuously focused on effectively integrating promising new developments in the Java space into corporate software development, Andrew joined XebiaLabs in March 2009, where he is a member of the development team of their deployment automation product Deployit. Among others, he also contributes to Multiverse, an open-source Java STM implementation, and jclouds, a leading Java cloud library. Andrew is a DZone MVB.

Deployment is the new build (part 3)

06.17.2011

Earlier this year, I was invited to present a talk at Devopsdays Boston about deployment as the new build: how deployments are carried out now, how they will need to adapt in a more virtualized, on-demand application landscape, and what fundamental improvements will need to come before deployment matures into the invisible, “it just works”™ experience that build is today.

In the previous post, we looked at how Reusable commands, Models and Conventions++ helped turn build from a “black box” process into the “just works” experience we know today.

We then shifted back to deployment and identified Develop a common model, (Re)discover vanilla and Support a “clean build” as three key steps required to achieve a similar transition.

Develop a common model

Before we can advance to the ‘model’ stage, we first…well…need a model. Thankfully, a very simple one can suffice: Packages, Environments and Deployments.

There’s nothing particularly magical about this, and indeed the concepts are commonly found in most organisations. But giving these things explicit labels not only helps formalize the ideas and gives developers and vendors something to support; it also creates a shared vocabulary and language around deployment, which is the first step towards shared understanding and reusable functionality.

Indeed, the concepts are so basic that there does not appear to be much to say about them.

Packages capture the components of the versioned item to be released, both artifacts represented by actual files as well as configuration, resource settings and metadata.

In accordance with release management best practice, packages should be stored in a definitive software library (DSL) and should be independent of the target environment, so that you have one “gold standard” package running in Development, Test, QA and Production.

Packages also mean that we can version everything, not just the application binaries but also the related configuration and environment settings.
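To make the idea concrete, here is a minimal sketch of an environment-independent package: artifacts plus configuration whose environment-specific values are left as placeholders to be filled in at deployment time. The names (`Package`, `resolve`, the `{{...}}` placeholder syntax, the `petclinic` example values) are illustrative assumptions, not any real product’s API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Package:
    """A versioned 'gold standard' unit: artifacts plus configuration."""
    app: str
    version: str
    artifacts: tuple = ()                       # e.g. ("petclinic.ear",)
    config: dict = field(default_factory=dict)  # values may contain {{placeholders}}

def resolve(package, env_values):
    """Fill environment-specific placeholders at deployment time, so the
    same package can target Development, Test, QA and Production."""
    resolved = {}
    for key, value in package.config.items():
        for name, val in env_values.items():
            value = value.replace("{{" + name + "}}", val)
        resolved[key] = value
    return resolved

pkg = Package("petclinic", "2.1.0",
              artifacts=("petclinic.ear",),
              config={"datasource.url": "jdbc:mysql://{{db.host}}/petclinic"})

print(resolve(pkg, {"db.host": "db-test.example.com"}))
# {'datasource.url': 'jdbc:mysql://db-test.example.com/petclinic'}
```

Because the package itself never names a concrete host or password, the identical versioned artifact can move unchanged from one environment to the next.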

Development and Test just mentioned are examples of Environments, simply collections of infrastructure – physical, virtual, long-running, on-demand, whatever – that applications run in as they progress through the ALM cycle, potentially with approvals or other checkpoints governing the transition from one to the next.

Deployment, then, is perhaps the one concept not immediately widely understood. A Deployment represents more than just the activity of getting a Package running in a certain Environment, with a start and stop time, executing user, status and so forth.

Rather, a Deployment also documents the way in which the Package’s components have been deployed and, if applicable, customized. For instance, a Deployment will record that a certain EAR file in the package has been deployed to the following target server(s) or cluster(s), or that the data source password for this specific environment has been customized and set to a new value.

Recording this information is critical because it is very hard to be able to intelligently and correctly modify an application’s state – when upgrading to a new version, for instance, or adding new servers to the target cluster – if you do not know where and with which settings the application is currently running.
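A deployment record along these lines can be sketched as a simple data structure. Everything here (the `Deployment` class, its field names, the example cluster and setting names) is a hypothetical illustration of what such a record would capture, not the schema of any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class Deployment:
    """Records not only when/by whom a package was deployed, but where each
    component landed and which settings were customized for this environment."""
    package: str        # e.g. "petclinic/2.1.0"
    environment: str    # e.g. "Test"
    deployed_by: str
    status: str = "pending"
    targets: dict = field(default_factory=dict)         # component -> servers/clusters
    customizations: dict = field(default_factory=dict)  # setting -> env-specific value

d = Deployment("petclinic/2.1.0", "Test", "aphillips")
d.targets["petclinic.ear"] = ["was-cluster-1"]           # where the EAR went
d.customizations["datasource.password"] = "<test-only>"  # what was overridden
d.status = "done"

# An upgrade or scale-out can now consult the record instead of guessing
# where, and with which settings, the application is currently running.
print(d.targets, d.customizations)
```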

(Re)discover vanilla

If we are going to achieve hassle-free, push-button deployments, another thing we will have to reconsider is whether we really need to tweak and customize our infrastructure in every way possible. Indeed, some companies seem to almost have a policy that any setting that might be a default should be regarded with suspicion and, preferably, changed.

Much as custom project layouts made setting up a build unnecessarily tedious and complicated in a convention- and model-based system, stubbornly refusing to go with infrastructure defaults will make it harder to get hassle-free deployments that truly cover all the steps required.

Sticking with defaults not only encourages reusability, because the chances are much higher that a solution developed for a different scenario will also work in yours; it also improves maintainability and cuts down on the risk of “ripple” changes, where a custom value in the settings for the servers hosting application X requires further changes to the setup of application Y, and so on.
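The “go with vanilla” idea can be reduced to a one-function sketch: keep the infrastructure defaults as the single source of truth and record only the explicit deviations per application. The setting names and values below are made up for illustration.

```python
# Infrastructure defaults -- the "vanilla" settings everything starts from.
DEFAULTS = {"jvm.heap": "512m", "http.port": 8080, "session.timeout": 30}

def effective_settings(overrides):
    """Defaults plus the few explicit deviations; nothing else is touched,
    so a solution built against the defaults usually still applies."""
    settings = dict(DEFAULTS)
    settings.update(overrides)
    return settings

# Application X deviates in exactly one place; everything else stays vanilla,
# so its customization cannot "ripple" into unrelated settings.
print(effective_settings({"jvm.heap": "2g"}))
```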

Support a “clean build”

When building a large project, we try to cut down on the time taken by recompiling only the source code that has been modified. When deploying applications, we similarly want to save time when upgrading to a new version, especially when this time represents production downtime.
However, we also know that, eventually, some parts of any incremental build will end up going out of sync, causing strange compilation problems, or features or fixes not appearing when they should.

What do we do in such a case? Do we laboriously try to track down the files that are out of sync and rebuild piece by piece? No, we simply run a clean build to start from scratch, because in 99% of cases it’s much quicker to simply rebuild than try to track down the cause of the problem.

In deployment-land, we seldom have the ability to clean build, and this is one of the main causes for the stressful, time- and resource-consuming troubleshooting hunts that are still far too common. Of course, in order to clean build a system we need full versioning of the environment, its configuration and the applications deployed to it. Virtual appliances and virtualization solutions with snapshot capabilities will have a major role to play here.

We also need a known state for durable resources such as databases, which remains challenging but is being addressed by a growing number of products out there.
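Putting the pieces together, a deployment-side “clean build” amounts to rebuilding the environment from versioned sources rather than hunting down drifted state. The sketch below is purely hypothetical: the step names stand in for whatever provisioning, snapshot and database tooling an organisation actually uses.

```python
def clean_deploy(env_name, env_version, package):
    """Rebuild an environment from scratch, analogous to a clean build:
    every input (infrastructure config, database state, package) is versioned."""
    return [
        f"destroy {env_name}",                                   # drop drifted state
        f"provision {env_name} from snapshot {env_version}",     # versioned infrastructure
        f"restore {env_name} database to known baseline",        # versioned durable state
        f"deploy {package} to {env_name}",                       # the gold-standard package
    ]

for step in clean_deploy("Test", "env-cfg-1.4", "petclinic/2.1.0"):
    print(step)
```

As with `make clean`, the point is not that this is cheap, but that it is predictable: when incremental state goes out of sync, rebuilding is usually faster than troubleshooting.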

Push button deployments

Taking stock, it’s clear that there is still some way to go. We’re slowly developing a common model, but both “(Re)discover vanilla” and “Support a ‘clean build’” are visions not quite yet on the horizon of most large companies.
In fact, it’s not so much technological advances that are required – many startups are pretty close to push-button deployments and continuous delivery. Indeed, the “poster children” of this movement already have setups where every commit can pass through an entire regression, integration and performance testing suite and potentially go straight to production.

No, the important hurdles to be taken are procedural and mental, changing rusty ways of working and entrenched mindsets. For those that can make it, though, the benefits in terms of accelerated business value are already proving to be game changers.

 

From http://blog.xebia.com/2011/06/deployment-is-the-new-build-part-3/

Published at DZone with permission of Andrew Phillips, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Kim David Hagedorn replied on Fri, 2011/06/17 - 10:23am

I think real 'packages' in the meaning of this article would be a nice thing to have, but there are some fundamental problems with this. First, a package provided for deployment should be reusable in different environments and therefore can't provide all configuration details (such as database passwords, hostnames, ...). Second, the widespread configuration paradigm of 'text or xml files spread all over the system' (some in the application directory, some in /etc and so on) is really really hard to put under version control.
So I won't buy into this whole 'continuous deployment' thing until these issues are resolved. (And the linux/.rpm/.deb way of having a bunch of shell scripts is definitely not the way...)

Andrew Phillips replied on Mon, 2011/06/20 - 4:38am

@Kim: Thanks for the good points you raise. I certainly agree that, in today's typical IT setup, at least, it isn't possible to deploy the same package to different environments without customization.

Indeed, customization (of which there are many forms, as e.g. discussed in this previous post) and targeting are arguably the key components of a deployment, so at XebiaLabs these topics unsurprisingly come up a lot.

Furthermore, I think we're in agreement that scripts packaged with the artifacts are not a good approach for application deployments. Firstly, they're complicated to maintain and distribute; but, more importantly, in application deployment the middleware and infrastructure knowledge required to properly deploy the application does not typically rest with the developers.

So it doesn't make much sense to expect them to deliver it as part of a release package - instead, the middleware knowledge should reside in the deployment system you have chosen. Preferably Deployit, of course ;-)
