I’ve worked in plenty of environments in which code is compiled on a team member’s machine, then pushed to production. This raises some important questions:
- What source code was present when the compile happened? Was it the latest version? Did it include something that the developer hadn’t yet checked in to the source control system?
- What is installed on the developer’s machine? Are all relevant service packs at the same level as production? Are production servers missing any required frameworks or APIs that exist on the developer’s machine?
- Does the code used during the compile match the code that has been tested before going to production?
There are other issues, but hopefully you get the idea. The wrong answer to any of these questions can lead to production problems. And in this kind of environment I’ve found that no one actually knows the answers to these questions…
…which brings us to the practice that will resolve these issues: a formal, controlled build process. Now, by formal I don’t mean loaded with paperwork; rather, I mean that certain rules are established, communicated, and enforced.
The rules (there may be more for your environment; these are a good start):
- The “build” (compile and assemble the deployment artifacts—executables, web pages, etc.) occurs on a dedicated machine: the “build server”. This is not a developer’s box. It’s locked down, with only some team members having access to it. Those team members must know exactly what has been installed at any given time and how the configuration compares to all test and production environments.
- The “build” is automated so that it is a consistent, repeatable process: the result does not depend on who or what triggers it, and the manual steps that invite human error are eliminated.
- All artifacts being assembled for deployment come from source control, either directly (such as a text file copied as-is) or indirectly (such as source code that is compiled during the build). No exceptions.
- The deployment artifacts generated/assembled during the build are placed in a secure location, probably a network share with limited access.
- Artifacts deployed to any test or production environment come only from the secure location updated by the build process.
- The development team does not have the network, database, or other permissions necessary to deploy to production, nor to any test environments other than perhaps a low-level shared development environment. Thus, it’s not possible to deploy anything even by accident.
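The pipeline these rules describe can be sketched in a few lines. The function below is a hypothetical illustration, not any particular tool’s API: the checkout command, build command, artifact directory, and share path are all assumptions you would replace with your own.

```python
# Sketch of an automated build: fresh checkout -> build -> publish.
# All names here (commands, paths) are placeholders for illustration.
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_build(checkout_cmd, build_cmd, artifact_dir, secure_share,
              runner=subprocess.run):
    """Check out a fresh copy of the source, build it, and publish the
    deployment artifacts to the secure share. Nothing is taken from a
    developer's machine; every artifact traces back to source control."""
    workspace = Path(tempfile.mkdtemp(prefix="build-"))
    # Step 1: always start from a clean checkout of the latest source,
    # so the build cannot include anything that wasn't checked in.
    runner(checkout_cmd + [str(workspace)], check=True)
    # Step 2: compile/assemble; the same command runs on every build,
    # regardless of who or what triggered it.
    runner(build_cmd, cwd=workspace, check=True)
    # Step 3: copy the artifacts to the secure location that all
    # test and production deployments draw from.
    published = []
    for artifact in (workspace / artifact_dir).glob("*"):
        dest = Path(secure_share) / artifact.name
        shutil.copy2(artifact, dest)
        published.append(dest)
    return published
```

The point is not the specific commands but the shape: one clean path from source control to the secure share, with no step that depends on a particular person’s machine.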
These rules can be scaled up or down to fit the size and budget of most development teams. In a complex environment (such as government contracting) there will probably be much more formality included. In many businesses in the United States, government regulations dictate a clean separation between those who write the code and those who can deploy it. On the other hand, in a very small team the dedicated build server may be a virtual machine administered by the team. In this case the above rules still apply but enforcement may depend on team discipline rather than permissions.
Developer tools can help with some of this. If you have the budget for it, Microsoft Team Foundation Server (TFS) is an excellent system that integrates source control, the build process, and much more. You can define any number of build servers and have the TFS server(s) communicate with them. If this is out of your budget, CruiseControl can accomplish much of the same thing. It doesn’t offer source control or other features, but it integrates well with other source control systems and does a good job managing builds, and it’s free. There are plenty of other tools out there, including those for non-Microsoft environments; these are just the ones I’m familiar with.
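To give a sense of how such a tool ties these rules together, a CruiseControl project definition in its config.xml roughly follows the shape below. Treat this as an approximation from memory rather than a verified configuration, and check the current CruiseControl documentation; the project name and paths are made up.

```xml
<cruisecontrol>
  <project name="myapp">
    <!-- Poll source control; a build is triggered only when something changed. -->
    <modificationset quietperiod="30">
      <svn localWorkingCopy="checkout/myapp"/>
    </modificationset>
    <!-- Run the same Ant build file every time, on a fixed schedule. -->
    <schedule interval="300">
      <ant buildfile="checkout/myapp/build.xml" target="dist"/>
    </schedule>
    <!-- Publish the resulting artifacts to the controlled location. -->
    <publishers>
      <artifactspublisher dir="checkout/myapp/dist" dest="artifacts/myapp"/>
    </publishers>
  </project>
</cruisecontrol>
```

Notice how each element maps to one of the rules above: the build runs on the server from a fresh checkout, it is triggered automatically rather than by hand, and the artifacts land in a single controlled location.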
Follow these practices and you’ll reduce the number of unpleasant surprises that occur when deploying to production. We are shooting for boring deployments: Boring Is Good!