Good Software Deployment Practices, Part V: "Deploy Your Deployment Process"

This is part of a series of posts on how to make software deployments boring, because Boring is good!

The phrase “deploy your deployment process” is, I admit, a little odd. Let me explain.

Deploying software entails the risk of breaking something, and this whole series is a set of guidelines for minimizing that risk. Part of minimizing it is making sure the deployment process itself, whatever it may be, is tested so you know it works as expected.

 

Test

The logical place to test your deployment process is in your test environments–which means using the same deployment process in test as you do in production. If you use your process in a Continuous Integration environment, your process will be tested every time someone checks in their source code. Nice!
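For instance, a post-build step on the CI machine can invoke the very same deployment script the deployment team uses everywhere else, aimed at the CI environment. Here’s a minimal sketch; the script name, its parameter, and the paths are hypothetical:

```powershell
# Sketch of a hypothetical CI post-build step: run the same deployment
# script the deployment team uses, but aimed at the CI test environment,
# so every check-in exercises the deployment process too.
$deployScript = '\\buildserver\drops\MyApp\latest\Deploy\Deploy-Application.ps1'

try {
    & $deployScript -Environment 'CI'
}
catch {
    Write-Error "Deployment to CI failed: $_"
    exit 1   # a non-zero exit code fails the build
}
```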

 

Control

Just as with the software you’re deploying, you need to know that the process you’re using in production is exactly the same one that you’ve tested, which means you need a way to control it. In practice, that means:

  • You should ensure that any automation code used is in source control.
  • You should version the process.
  • You should know which version of the deployment process is in each environment, and ensure that only that version is used for that environment (one way to track this is sketched after this list).
  • You probably should not be able to run the deployment process from your own machine. If you could, how would you ensure that what you’re using has been checked in? That you’re using the right version for each test and production environment? That someone else would get the same results if they ran the process instead of you?
  • You should have a process for upgrading your deployment process in a given environment from one version to another (in other words, a way of deploying your deployment practice–see what I mean now?).
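Here’s a minimal sketch of the version tracking I have in mind, assuming a file layout, names, and version-stamping scheme that are invented purely for illustration: each environment’s deploy server records which version of the scripts it is approved to run, and the scripts refuse to start on a mismatch.

```powershell
# Sketch: per-environment version gate for the deployment process itself.
# Paths, file names, and the version-stamping scheme are hypothetical.
param(
    [string]$Environment   = 'QA',
    [string]$ScriptVersion = '1.4.2'   # stamped into the scripts at build time
)

$approvedFile = "C:\Deploy\$Environment\approved-version.txt"
$approved     = (Get-Content $approvedFile).Trim()

if ($ScriptVersion -ne $approved) {
    throw "These scripts are version $ScriptVersion, but $Environment is approved for version $approved."
}

Write-Host "Version check passed: running deployment scripts $ScriptVersion in $Environment."
```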

 

Example

I’m lucky enough to work at a company that takes deployments seriously, and I have had a chance to design and implement the process. Here are some elements.

We use automation as much as possible. This makes processes repeatable (and faster). PowerShell is the language of choice, although a couple of portions still use other technologies–for now.

The PowerShell scripts have automated tests, just as the project code being deployed does. Since the scripts touch file systems, multiple databases, and multiple server APIs, each test has to automatically reconfigure the environment before it runs. Unfortunately, developing the tests constitutes the vast majority of the development time, probably by a factor of 10. BUT, I can feel very confident about making changes of any kind. Upgrading to PowerShell 2.0 was not a worry, nor was upgrading the PowerShell Community Extensions. I just rerun all the tests and find out what broke! I’m frustrated with the extra time required to write the tests, but it has been worth it.
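To give a feel for these tests, here’s a hand-rolled sketch; the function under test, its parameters, and the paths are all made up, and the real tests are considerably more involved:

```powershell
# Sketch of one automated test: rebuild a scratch environment, run one
# deployment function, and check the result.  All names are hypothetical.
function Test-CopiesConfigForEnvironment {
    # Arrange: reset a scratch area to a known state.
    $scratch = 'C:\DeployTests\scratch'
    Remove-Item $scratch -Recurse -Force -ErrorAction SilentlyContinue
    New-Item $scratch -ItemType Directory | Out-Null
    Set-Content (Join-Path $scratch 'web.QA.config') '<configuration />'

    # Act: exercise the function under test (hypothetical).
    Copy-ConfigForEnvironment -SourceDir $scratch -Environment 'QA' -TargetDir $scratch

    # Assert.
    if (-not (Test-Path (Join-Path $scratch 'web.config'))) {
        throw 'FAIL: web.config was not put in place for QA.'
    }
    Write-Host 'PASS: Test-CopiesConfigForEnvironment'
}

Test-CopiesConfigForEnvironment
```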

All deployment code is, of course, kept in source control. I’ve organized it all into a single .NET project. The project has an associated “build” in Team Foundation Server (TFS). But–oops–see below.

We use a dedicated “deploy server” for each environment. The server doesn’t need much power; the deployment team merely has to log on to it and run the deployment scripts from there. They don’t log into the servers hosting our company’s software; all the deployment is run from the deploy server. Exception: BizTalk. The BizTalk APIs are all local, meaning you have to remote into each of those servers and deploy from there. I’m investigating doing this via PowerShell remoting just to see if it’s practical enough to be worth doing. It would help me integrate BizTalk deployment with the rest.
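Here’s roughly what I’m picturing, as a sketch only; the server names and the script path are hypothetical:

```powershell
# Sketch: drive the local-only BizTalk deployment steps from the deploy
# server via PowerShell remoting.  Host names and paths are hypothetical.
$biztalkServers = 'bts01', 'bts02'

Invoke-Command -ComputerName $biztalkServers -ScriptBlock {
    param($dropShare)
    # This block runs locally on each BizTalk server, where the APIs live.
    & (Join-Path $dropShare 'Deploy-BizTalkApplication.ps1')
} -ArgumentList '\\deployserver\drop\current'
```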

In the development environment the “deploy server” is one of our “build servers” used by TFS. This not only saves a machine; it also makes Continuous Integration much easier to accomplish.

The intent is to have a simple script that will copy the TFS build’s output from the drop directory into the correct locations on the deploy server–something like the sketch below. The TFS build number would be the means of tracking the version of the deployment scripts. I haven’t gotten this part done yet–not following my own advice in this post–so for now I’m simply copying scripts from my machine onto the build server, and then, once they’ve run successfully there, copying them to the other environments. I rename the old scripts before copying the new ones, so if needed I can roll back to the prior version. I’ve forgotten to copy something a few times. Gotta get this part done.
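The intended script would be something like this sketch; the paths and the build-number format are hypothetical:

```powershell
# Sketch: install a specific TFS build of the deployment scripts onto a
# deploy server, keeping the prior version around for rollback.
param(
    [string]$BuildNumber,                                   # e.g. 'DeployScripts_20100401.3'
    [string]$DropRoot   = '\\tfsbuild\drops\DeployScripts', # hypothetical
    [string]$TargetRoot = 'C:\Deploy\Scripts'               # hypothetical
)

# Rename the current scripts so we can roll back if needed.
if (Test-Path "$TargetRoot.previous") {
    Remove-Item "$TargetRoot.previous" -Recurse -Force
}
if (Test-Path $TargetRoot) {
    Move-Item $TargetRoot "$TargetRoot.previous"
}

# Copy the new version from the build's drop directory.
Copy-Item (Join-Path $DropRoot $BuildNumber) $TargetRoot -Recurse

# Record which build is now installed here: this is the version tracking.
Set-Content (Join-Path $TargetRoot 'build-number.txt') $BuildNumber
```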

So most of what I’m recommending is implemented and working well, with more yet to come. The controlled use of these automated scripts in each environment, promoting new versions to higher environments once they’ve proven themselves in lower ones, has helped make our deployments BORING. Just right.


Good Software Deployment Practices, Part IV: “Deploy The Same Bits”

This is part of a series of posts on how to make software deployments boring, because Boring is good!

In earlier posts I’ve said that everything—everything—getting deployed should come from the output directory of the build.  No exceptions; include even environment-specific configuration information.

Doing so gives us another way to reduce risk.  We do this by deploying the same bits in every environment.  Allow me to explain.

 

The Bad?

I once worked at a company where standard practice was to recompile the source code before deploying into each environment.  I was always surprised by this and didn’t understand it…I didn’t challenge the practice at the time because I was still new to the industry.  But I still don’t understand it.  Maybe someday I’ll get an explanation.

The Worse

I’ve since worked, quite often, in organizations where recompiling was not a formal practice but was often done by default.  (These organizations didn’t have a controlled build environment.)  I think it’s a weak practice, at least for the kind of software and operating environments we tend to have now.

So here’s what to do instead: 

The Good

Every time the software is built, its output should be uniquely identified.  If you have something like Microsoft Team Foundation Server or CruiseControl, this identifier is generated for you.  (If not, you’ll have to do it for yourself.)  Then, keep track of which specific build has been deployed in the test environments.  At my current workplace we have a number of test environments, and we know exactly which build numbers have been deployed in each environment.  This means we know exactly what is being tested…

…which means that if we deploy the same build number into production, then we know that those exact bits have been tested.  We don’t risk introducing errors by recompiling or reassembling the deployment artifacts.
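The tracking itself can be as simple as a shared log that every deployment appends to.  Here’s a minimal sketch; the log location and the build-number format are hypothetical:

```powershell
# Sketch: record which build number went into which environment, so
# "what exactly is in QA right now?" always has an answer.
function Record-Deployment {
    param([string]$Environment, [string]$BuildNumber)
    $entry = "{0}`t{1}`t{2}" -f (Get-Date -Format s), $Environment, $BuildNumber
    Add-Content '\\deployserver\logs\deployments.tsv' $entry
}

Record-Deployment -Environment 'QA'   -BuildNumber 'MyApp_20100401.5'
# Later, promote the very same build number, untouched, to production:
Record-Deployment -Environment 'Prod' -BuildNumber 'MyApp_20100401.5'
```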

Having confidence that we’ve deployed exactly what we’ve tested requires having confidence in these things:

  • Every artifact is in the build output directory (also frequently called the drop directory).
  • The drop directory is controlled—once the artifacts of a build are put in the drop directory, very few people should be able to alter the directory’s contents.
  • We are properly tracking build numbers so we know that the build number we’re currently deploying into production is the one we deployed into our test environments earlier.

This is one more place in which to reduce risk, as we travel on our journey to make deployments boring.

Good Software Deployment Practices, Part III: More on “The Build”

This is part of an ongoing series of posts dealing with the practices that will make software deployments boring.  Because boring is good!

So far I’ve been saying some pretty standard stuff, but this time I want to describe a practice that is less often done well, at least in my opinion.

 

What to do with configuration files?

I’ve noticed that many people generate configuration information during the deploy.  For example, the build output might include a development or master copy of a web.config file, and during deployment the environment-specific stuff gets replaced with the correct information.

I think there’s a better way…

…and that better way is to include the config files for ALL environments in the build output.  Either keep complete copies of each environment’s config in source control, or run the “replace elements in the master with the correct values” process once for each environment during the build.  I would tend to go for the latter, since most config files have a mix of environment-specific and non-environment-specific information, and we don’t want to maintain multiple copies of the non-environment-specific information.  But either way, the build output should contain all the configs.
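As a sketch of the latter approach, with token names, environments, and paths all invented for illustration, the build can stamp one master config into a copy per environment:

```powershell
# Sketch: during the build, stamp environment-specific values into a
# master config, producing one copy per environment in the drop output.
$environments = @{
    'Dev'  = @{ '__DB_SERVER__' = 'devsql01'  }
    'QA'   = @{ '__DB_SERVER__' = 'qasql01'   }
    'Prod' = @{ '__DB_SERVER__' = 'prodsql01' }
}

$master = Get-Content 'web.master.config' | Out-String

foreach ($envName in $environments.Keys) {
    $content = $master
    foreach ($token in $environments[$envName].Keys) {
        $content = $content.Replace($token, $environments[$envName][$token])
    }
    $outDir = "drop\configs\$envName"
    New-Item $outDir -ItemType Directory -Force | Out-Null
    Set-Content (Join-Path $outDir 'web.config') $content
}
```

Once generated, the per-environment configs sit in the drop directory, where they can be inspected and diffed long before deployment day.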

Why?

Earlier I said that every deployment artifact—every artifact—must be generated or assembled during the build and be placed in the output directory of the build.  Handling config files in the manner I’ve described is just an example of that.  And it comes with real benefits:

  • Risk.  Selecting the right config file for an environment and putting it in place is, in my opinion, less risky than walking through a config file making changes to it.  So we do the harder, more error-prone work during the build, and the less-risky portion during the deploy.  We push risk into an earlier step than deployment.
  • Testing/inspection.  This is related to managing risk.  We should test both the build process and the deployment process, but if we generate config information during the deploy, then the first time we can verify the generation of correct production values is after production deployment!  If we generate the configuration information during the build, we can inspect the result at our leisure, as often as we want, with any process we want, before the actual deployment takes place.

 

In other words, handle configuration files, and any other environment-specific items, just as you would anything else.

…which means we follow the practice of assembling ALL deployment artifacts during the build.  Everything that will get deployed should be in the build’s output directory.  The fewer exceptions we make to that rule the more we make our deployments boring.  And that’s good!

Boring Is Good

I find a lot of excitement in being boring.  Really!

But I suppose I should say something about the context of that…

I’ve been spending a lot of time over the last couple years focused on the build/deploy cycle in a nearly-Agile environment (still working on the Agile part).  I’ve been involved in deployments before but for the first time I’m in an organization that really wants to do it well, and is willing to invest in doing so.

We have certain goals:

  • Have productive development and testing environments
  • Have boring deployments into each environment, including production

And here are the characteristics of boring deployments:

  1. Little impact to the end users
    1. Minimal downtime
    2. No errors introduced by the deployment process
    3. Up-to-date help documents, customer service preparation, training infrastructure, etc.
  2. Not a fire drill
    1. Few people needed to run the deployment process
    2. Minimal time spent gearing up for deployment
    3. Quick deployment time
    4. Low risk of having to call anyone else during deployment (or afterward) to deal with issues
  3. WYSIWYP (What You See Is What You Planned)
    1. Expected features are present and working
    2. Bugs (well, most of them) known
  4. Worry-free
    1. Managers come to expect few problems related to deployment (unfortunate side-effect:  they then forget its importance and difficulty)
    2. Developers, including Quality Assurance staff if the organization has such positions, expect the software to work as well in production as it did in all the test environments
    3. People take vacations without having them interrupted or preempted by deployment issues

As you can see, I think that the software deployment process should involve the whole organization, not just IT Developers and/or IT Administrators.  I’m (mostly) a technical guy and expect to write more on what this means to developers and administrators.  But I think it’s good to put it in proper organizational perspective.

And it’s good to be boring!