Good Software Deployment Practices, Part V: "Deploy Your Deployment Process"

This is part of a series of posts on how to make software deployments boring, because Boring is good!

The phrase “deploy your deployment process” is, I admit, a little odd. Let me explain.

Deployment of software entails risk of messing something up, and this whole series is a set of guidelines that can help minimize the risk. Minimizing deployment risks means that the process itself, whatever it may be, needs to be tested to ensure that it works as expected.



The logical place to test your deployment process is in your test environments–which means using the same deployment process in test as you do in production. If you use your process in a Continuous Integration environment, your process will be tested every time someone checks in their source code. Nice!



Just as with the software you’re deploying, you need to know that the process you’re using in production is exactly the same one that you’ve tested–which means you need a way to control it. Controlling it means that:

  • You should ensure that any automation code used is in source control.
  • You should version the process.
  • You should know which version of the deployment process is in each environment, and ensure that only that version is used for that environment.
  • You probably should not run the deployment process from your own machine. If you did, how would you ensure that what you’re using has been checked in? That you’re using the right version for each test and production environment? That someone else would get the same results if they ran the process instead of you?
  • You should have a process for upgrading your deployment process in a given environment from one version to another (in other words, a way of deploying your deployment practice–see what I mean now?).
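To make the version check concrete, here’s a minimal sketch of what the top of a deployment script could verify before doing anything else. The file names and share path here are hypothetical; the point is that the scripts themselves confirm they’re the version recorded for the target environment:

```powershell
# Hypothetical sketch: abort unless this copy of the deploy scripts is
# the version recorded for the target environment. VERSION.txt and the
# share path are made-up names for illustration.
$scriptVersion   = (Get-Content (Join-Path $PSScriptRoot 'VERSION.txt')).Trim()
$expectedVersion = (Get-Content '\\configshare\environments\QA\deploy-version.txt').Trim()

if ($scriptVersion -ne $expectedVersion) {
    throw "These scripts are version $scriptVersion, but QA expects version $expectedVersion."
}

# ...the actual deployment steps follow...
```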



I’m lucky enough to work at a company that takes deployments seriously, and I have had a chance to design and implement the process. Here are some elements.

We use automation as much as possible. This makes processes repeatable (and faster). Powershell is the language of choice, although a couple of portions still use other technologies–for now.

The Powershell scripts have automated tests, just as with the project code being deployed. Since they involve file systems, multiple databases, and multiple server APIs, developing the tests involves automatically reconfiguring the environment to set up each test. Unfortunately, developing the tests constitutes the vast majority of the development time, probably by a factor of 10. BUT, I can feel very confident about making changes of any kind. Upgrading to Powershell 2.0 was not a worry, nor was upgrading Powershell Community Extensions. I just rerun all the tests and find out what broke! I’m frustrated with the extra time required to write the tests, but it has been worth it.
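I haven’t named a test framework above, so purely as an illustration, here’s what one such test might look like in Pester-style syntax. `Copy-ReleaseFiles` is a hypothetical function; the point is that each test rebuilds its own scratch environment before running:

```powershell
# Illustration only: a Pester-style test for a hypothetical deployment
# function. Each test reconfigures its environment from scratch.
Describe 'Copy-ReleaseFiles' {
    BeforeEach {
        # Set up a fresh fake "drop directory" for this test.
        New-Item -ItemType Directory -Path 'TestDrive:\drop\bin' -Force | Out-Null
        Set-Content -Path 'TestDrive:\drop\bin\app.dll' -Value 'dummy'
    }

    It 'copies build output into the target directory' {
        Copy-ReleaseFiles -Source 'TestDrive:\drop' -Target 'TestDrive:\deploy'
        'TestDrive:\deploy\bin\app.dll' | Should -Exist
    }
}
```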

All deployment code is, of course, kept in source control. I’ve organized it all into a single .Net project. The project has an associated “build” in Team Foundation Server (TFS). But–oops–see below.

We use a dedicated “deploy server” for each environment. The server doesn’t need much power; the deployment team merely has to log on to it and run the deployment scripts from there. They don’t log into the servers hosting our company’s software; all the deployment is run from the deploy server. Exception: BizTalk. The BizTalk APIs are all local, meaning you have to remote into each of the servers and deploy from there. I’m investigating doing this via Powershell remoting just to see if it’s practical enough to be worth doing. It would help me integrate BizTalk deployment with the rest.
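The remoting experiment would amount to something like the sketch below. The server names and script path are hypothetical, and Invoke-Command requires remoting to be enabled on each BizTalk server first:

```powershell
# Sketch: run the local-only BizTalk deployment steps on each BizTalk
# server from the deploy server. Server names and the script path are
# hypothetical.
$bizTalkServers = 'BTSERVER01', 'BTSERVER02'

Invoke-Command -ComputerName $bizTalkServers -ScriptBlock {
    & 'D:\Deploy\Deploy-BizTalkApplication.ps1' -Application 'OrderProcessing'
}
```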

In the development environment the “deploy server” is one of our “build servers” used by TFS. This not only saves a machine; it also makes Continuous Integration much easier to accomplish.

The intent is to have a simple script that will copy the TFS build’s output from the drop directory into the correct locations on the deploy server. The TFS build number would be the means of tracking the version of the deployment scripts. I haven’t gotten this part done yet–not following my own advice in this post–so for now I’m simply copying scripts from my machine onto the build server and then, once they’ve run successfully there, copying them to the other environments. I rename the old scripts before copying the new ones, so if needed I can roll back to the prior version. I’ve forgotten to copy something a few times. Gotta get this part done.
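For what it’s worth, the missing script would only need to be a few lines. Everything here is hypothetical (the drop share, the target paths), but it shows the shape: keep the previous version for rollback, copy the new one in, and record the build number:

```powershell
# Hypothetical sketch of the planned script: pull a specific TFS build's
# output onto the deploy server, keeping the prior version for rollback.
param(
    [Parameter(Mandatory = $true)]
    [string] $BuildNumber
)

$drop     = "\\tfsbuild\drops\DeployScripts\$BuildNumber"
$current  = 'D:\DeployScripts\current'
$previous = 'D:\DeployScripts\previous'

# Rename the old scripts first, so a rollback is just a rename back.
if (Test-Path $previous) { Remove-Item $previous -Recurse -Force }
if (Test-Path $current)  { Rename-Item $current $previous }

Copy-Item -Path $drop -Destination $current -Recurse

# Record which version of the deployment scripts this environment now has.
Set-Content -Path (Join-Path $current 'VERSION.txt') -Value $BuildNumber
```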

So most of what I’m recommending is implemented and working well, with more yet to come. The controlled use of these automated scripts in each environment, promoting new versions to higher environments once they’re satisfactory in lower ones, has helped make our deployments BORING. Just right.

Powershell + File names + Special Characters <> Frustration (Eventually)

Have you ever tried to work in Powershell with a file or directory name that has unusual characters in it?  When I was new to Powershell I stumbled across this with the standard names my software-development group used for certain directories—they contained square brackets: [ and ].

If you haven’t already come across this, go ahead and try working with such a directory in Powershell.  Use Windows Explorer to create a directory named [PowershellTest].  Then, in Powershell, try to Set-Location (or CD) into your new directory.  No good.  You can try putting quotes around it too, but it won’t help.
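A session looks something like this (I’ve paraphrased the errors in comments, since the exact text varies between Powershell versions):

```powershell
PS C:\> Set-Location [PowershellTest]
# Fails: Powershell complains it cannot find the path.

PS C:\> Set-Location '[PowershellTest]'
# Quoting doesn't help; the same failure.
```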


Wasn’t that fun?  Frustrated yet?

Have no fear; there’s an answer.  The root of the problem is that Powershell treats square brackets as wildcard pattern characters (just like * and ?), so cmdlets that accept a path interpret [PowershellTest] as a pattern rather than a literal name.  It also helps to know that, internally, Windows still keeps track of short file names in the “old fashioned” 8.3 format, and those short names avoid the troublesome characters entirely.

There are several good ways to get around the problem. 

  • Never use such names!  But that’s not always an option.

  • Use the short 8.3 name.  There’s a great article about this online, but one thing in it isn’t made clear.  Its first bullet point says that Windows strips certain characters when it generates the short name.  It does NOT delete them; rather, it changes them to underscores.  So in my case (but it won’t ALWAYS be the same on your machine), the directory name translates into _POWER~1.  And this name works fine!

  • Use the -LiteralPath argument, available on some (but unfortunately not all) commands that work with file names.


  • Download Powershell Community Extensions and use the Get-ShortPath cmdlet.  Note that you’ll still need to use -LiteralPath with this cmdlet, but once you have the short name you can use it for any other operation you want.
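Putting the last three workarounds side by side, using the [PowershellTest] directory from earlier (note: I’m assuming Get-ShortPath returns an object with a ShortPath property; check the PSCX documentation for your version):

```powershell
# Workaround: -LiteralPath, on cmdlets that support it.
Set-Location -LiteralPath 'C:\[PowershellTest]'

# Workaround: the 8.3 short name (see yours with: cmd /c dir /x).
# On my machine this was _POWER~1; yours may differ.
Set-Location 'C:\_POWER~1'

# Workaround: Get-ShortPath from Powershell Community Extensions.
# You still need -LiteralPath here, but the short name it returns works
# everywhere. (The ShortPath property name is an assumption; check your
# PSCX docs.)
$short = (Get-ShortPath -LiteralPath 'C:\[PowershellTest]').ShortPath
Set-Location $short
```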

Hopefully one of these options will work for you!