This is part of a series of posts on how to make software deployments boring, because Boring is good!
The phrase “deploy your deployment process” is, I admit, a little odd. Let me explain.
Deployment of software entails risk of messing something up, and this whole series is a set of guidelines that can help minimize the risk. Minimizing deployment risks means that the process itself, whatever it may be, needs to be tested to ensure that it works as expected.
The logical place to test your deployment process is in your test environments–which means using the same deployment process in test as you do in production. If you use your process in a Continuous Integration environment, your process will be tested every time someone checks in their source code. Nice!
Just as with the software you’re deploying, you need to know that the process you’re using in production is exactly the same one you’ve tested–which means you need a way to control it. Controlling it means:
- You should ensure that any automation code used is in source control.
- You should version the process.
- You should know which version of the deployment process is in each environment, and ensure that only that version is used for that environment.
- You probably will not be able to run the deployment process from your own machine. If you did, how would you ensure that what you’re using has been checked in? That you’re using the right version for each test and production environment? That someone else would get the same results if they ran the process instead of you?
- You should have a process for upgrading your deployment process in a given environment from one version to another (in other words, a way of deploying your deployment practice–see what I mean now?).
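To make the version-control idea concrete, here is a minimal sketch of the kind of check a deployment could start with: confirm that the scripts about to run match the version recorded for that environment. All paths, file names, and the version-file convention are my own illustration, not the author’s actual setup.

```powershell
# Hypothetical sketch: refuse to deploy unless the scripts on the deploy
# server match the version this environment is supposed to be using.
# Paths and file names are illustrative.

param(
    [string]$Environment = "Test",
    [string]$ScriptRoot  = "D:\Deploy\Scripts"
)

# Each environment records which version of the deployment process it expects.
$expected = (Get-Content "D:\Deploy\Config\$Environment.version").Trim()

# The scripts carry the version they were built from.
$actual = (Get-Content (Join-Path $ScriptRoot "version.txt")).Trim()

if ($actual -ne $expected) {
    throw "Scripts are version '$actual' but environment '$Environment' expects '$expected'."
}

Write-Host "Version check passed: $actual"
```

A guard like this answers two of the questions above at once: which version is in each environment, and whether only that version is being used there.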
I’m lucky enough to work at a company that takes deployments seriously, and I have had a chance to design and implement the process. Here are some elements.
We use automation as much as possible. This makes processes repeatable (and faster). PowerShell is the language of choice, although a couple of portions still use other technologies–for now.
The PowerShell scripts have automated tests, just as with the project code being deployed. Since they involve file systems, multiple databases, and multiple server APIs, developing the tests involves automatically reconfiguring the environment to set up each test. Unfortunately, developing the tests constitutes the vast majority of the development time, probably by a factor of 10. BUT, I can feel very confident about making changes of any kind. Upgrading to PowerShell 2.0 was not a worry, nor was upgrading PowerShell Community Extensions. I just rerun all the tests and find out what broke! I’m frustrated with the extra time required to write the tests, but it has been worth it.
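The post doesn’t say what the tests look like, but the arrange-act-assert shape described (reconfigure the environment, run the script, check the result) can be sketched with a hand-rolled assertion helper. Every function and path name here is illustrative.

```powershell
# Hypothetical sketch of a deployment-script test: set up a known state,
# run the function under test, then assert on the outcome.
# Copy-DeploymentArtifacts and all paths are invented for illustration.

function Assert-Equal($expected, $actual, $message) {
    if ($expected -ne $actual) {
        throw "FAIL: $message (expected '$expected', got '$actual')"
    }
    Write-Host "PASS: $message"
}

# Arrange: reconfigure the environment for this test.
$stage = "C:\DeployTests\Stage"
Remove-Item $stage -Recurse -Force -ErrorAction SilentlyContinue
New-Item $stage -ItemType Directory | Out-Null
Set-Content (Join-Path $stage "web.config") "<configuration />"

# Act: run the deployment function under test.
Copy-DeploymentArtifacts -Source $stage -Destination "C:\DeployTests\Target"

# Assert: the file landed where the deployment says it should.
Assert-Equal $true (Test-Path "C:\DeployTests\Target\web.config") "web.config deployed"
```

The environment-reconfiguration step in the middle is where the factor-of-10 cost comes from: each test has to rebuild enough of the world to be trustworthy.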
All deployment code is, of course, kept in source control. I’ve organized it all into a single .NET project. The project has an associated “build” in Team Foundation Server (TFS). But–oops–see below.
We use a dedicated “deploy server” for each environment. The server doesn’t need much power; the deployment team merely has to log on to it and run the deployment scripts from there. They don’t log into the servers hosting our company’s software; all the deployment is run from the deploy server. Exception: BizTalk. The BizTalk APIs are all local, meaning you have to remote into each of the servers and deploy from there. I’m investigating doing this via PowerShell remoting just to see if it’s practical enough to be worth doing. It would help me integrate BizTalk deployment with the rest.
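The remoting approach under investigation would look something like this: run the local-only BizTalk steps on each server via `Invoke-Command` (available as of PowerShell 2.0, with WinRM enabled on the targets). Server names and the script block contents are placeholders.

```powershell
# Hypothetical sketch of driving BizTalk deployment from the deploy server
# over PowerShell remoting, instead of logging on to each box by hand.
# Server names are illustrative.

$bizTalkServers = "BTS01", "BTS02"

Invoke-Command -ComputerName $bizTalkServers -ScriptBlock {
    # This block runs locally on each BizTalk server,
    # where the local-only BizTalk APIs are available.
    Write-Host "Deploying BizTalk artifacts on $env:COMPUTERNAME"
    # ... call the local BizTalk deployment steps here ...
}
```

If this proves practical, the BizTalk exception disappears and everything runs from the deploy server like the rest.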
In the development environment the “deploy server” is one of our “build servers” used by TFS. This not only saves a machine; it also makes Continuous Integration much easier to accomplish.
The intent is to have a simple script that will copy the TFS build’s output from the drop directory into the correct locations on the deploy server. The TFS build number would be the way of keeping track of the version of the deployment scripts. I haven’t gotten this part done yet–not following my own advice in this post–so for now I’m simply copying scripts from my machine onto the build server, and then, when they’ve run successfully there, copying them to the other environments. I rename the old scripts before copying the new, so if needed I can roll back to the prior version. I’ve forgotten to copy something a few times. Gotta get this part done.
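That not-yet-written script might be as simple as the following sketch: rename the current scripts aside for rollback, copy the whole drop folder so nothing gets forgotten, and record the build number the copy came from. The drop path, build-number convention, and `version.txt` file are all assumptions for illustration.

```powershell
# Hypothetical sketch of the intended script: copy a TFS build's drop output
# onto the deploy server, keeping the prior version for rollback.
# Paths and naming conventions are illustrative.

param(
    [string]$DropDir   = "\\tfs\Drops\DeployScripts\DeployScripts_20100115.4",
    [string]$TargetDir = "D:\Deploy\Scripts"
)

# Keep the previous version around so a rollback is just a rename back.
if (Test-Path $TargetDir) {
    $backup = "$TargetDir.previous"
    if (Test-Path $backup) { Remove-Item $backup -Recurse -Force }
    Rename-Item $TargetDir $backup
}

# Copy the entire drop, so nothing is forgotten.
Copy-Item $DropDir $TargetDir -Recurse

# Record which TFS build these scripts came from.
Split-Path $DropDir -Leaf | Set-Content (Join-Path $TargetDir "version.txt")
```

Copying the whole drop folder, rather than hand-picking files, is exactly what prevents the “I’ve forgotten to copy something” failure mode.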
So most of what I’m recommending is implemented and working well, with more yet to come. It has worked out really well so far, and the controlled use of these automated scripts in each environment, pushing new versions to higher environments when they’re satisfactory in lower environments, has helped make our deployments BORING. Just right.