Calling Reporting Services’ CreateReport() from Powershell

The specification for CreateReport() provided by Microsoft is inaccurate, as far as I can tell.  In C# it won’t matter, but it does in Powershell, due to the oddities in how Powershell handles empty arrays.

The specification says that CreateReport() will return an empty array if there are no errors or warnings.  I believe that in SQL Server 2008 it actually returns a null.  Maybe later I’ll find a flaw in my thinking…but maybe not!

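A defensive pattern looks something like this. This is a sketch, not the original post’s code: the proxy setup and variable names ($rs, $definitionBytes, etc.) are illustrative, assuming a web-service proxy against a 2008-era ReportingService2005 endpoint.

```powershell
# Hypothetical: $rs is a proxy created earlier with New-WebServiceProxy
# against the ReportingService2005 endpoint; $definitionBytes holds the RDL.
$warnings = $rs.CreateReport($name, $parent, $false, $definitionBytes, $null)

# The docs say $warnings is an empty Warning[] when all is well, but in
# practice it can come back $null -- so guard before iterating or
# checking .Length:
if ($warnings -ne $null -and $warnings.Length -gt 0) {
    $warnings | ForEach-Object { Write-Warning $_.Message }
}
else {
    Write-Host "Report created with no warnings."
}
```

In C# a foreach over an empty array and a null check both fall out naturally; in Powershell, touching .Length on a null blows up, which is why the explicit null test matters.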

Credentials vs. WindowsIdentity in Powershell

I’ve been implementing some automated deployment processes using PS-Remoting, which means that I’ve also been learning about some security issues. This post is about how I solved a problem, and what I learned along the way.



My Windows login is in domain X, but the machine I run the deployment process from, and the machines being deployed to, are in domain Y. There’s a one-way trust set up, so that I have no troubles logging into the machines in the Y domain, belonging to the local administrator group on each machine, pushing files, etc., using my domain X login–but from a domain Y machine I cannot see or do anything with machines in the X domain. All as it’s meant to be.



The trouble was that remoting didn’t work in domain Y unless I used a domain Y login. We have multiple test environments, with the lower ones (including the lowest: my box) all in domain X, where everything worked fine. We finally got up to the first environment which completely mirrors the security configuration of production, and BOOM! I couldn’t get a remote connection to anything.



OK, first off: HOORAY for having a test environment that mirrors production security! I think that this is always worth doing, and the latest issue is an excellent example. It would not have been fun to catch this on production deploy night.

But now: how did I solve it?

I have an "infrastructure" team at my organization that takes care of the network infrastructure. Working with them, we had a hard time figuring out what to do to resolve the remoting problem. We tried three things:

  1. Figure out why the trust that is set up doesn’t support what we’re trying to do with remoting. We struck out after hours of research. Maybe we’ll get an answer some day.
  2. Short term, we created a special login in the Y domain. I logged in using that Y login, then was able to run the deployment process successfully, remoting to different machines as needed. So this worked, but was not something we wanted to support long term. From a network security standpoint, I am domainX\username, and that’s the only way I should be identified.
  3. A combination of two things gave us a satisfactory long-term solution:
    1. From the domain Y server where we wanted to initiate remoting, we added the target servers to the list of TrustedHosts: Set-Item wsman:\localhost\client\trustedhosts *.domainY -Force. (Note: “*.domainY” means “all machines in the domainY domain”. For this to work, you then need to specify the full domain name of machinename.domainY when creating a remote session.)
    2. We prompt for the deployer’s credentials using Get-Credential, then explicitly pass the credentials, using the -Credential parameter, whenever setting up a remoting session.
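Put together, the long-term solution looks something like this. The machine and account names are hypothetical:

```powershell
# One-time setup on the initiating machine (run from an elevated session):
# trust all machines in the domainY domain for WS-Man client connections.
Set-Item wsman:\localhost\client\trustedhosts *.domainY -Force

# At deployment time, prompt for credentials and pass them explicitly.
# Note the fully qualified machine name -- required for the TrustedHosts
# wildcard above to match.
$cred = Get-Credential -Credential "domainX\username"
$session = New-PSSession -ComputerName "appserver01.domainY" -Credential $cred

Invoke-Command -Session $session -ScriptBlock { hostname }

Remove-PSSession $session
```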

Option three worked for us, although we did notice that the authentication mechanism of the remote session used NTLM. We were unable to make it use Kerberos. I’ll bet that if we could figure that out, we wouldn’t have to use TrustedHosts at all.


What I Learned

Incidentally, I used [Security.Principal.WindowsIdentity]::GetCurrent() to find out about the type of authentication being used. Try this out yourself:
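Something along these lines will show the interesting properties; the exact values depend on how you logged in:

```powershell
# Inspect the current Windows identity, including the authentication
# type (e.g. Kerberos vs. NTLM).
$id = [Security.Principal.WindowsIdentity]::GetCurrent()
$id | Format-List Name, AuthenticationType, IsAuthenticated

# Run the same thing through Invoke-Command inside a remote session to
# see which mechanism that session actually negotiated.
```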


So then the problem became how to get the credentials in the first place. At first I couldn’t figure out why you can’t just automatically (no prompting) get a credentials object for the current user. I’m new to a lot of this and was pretty ignorant about it. But now I understand some of it.

A System.Security.Principal.WindowsIdentity object, which is what you get from a call to [Security.Principal.WindowsIdentity]::GetCurrent(), gives you information about your authenticated identity on the network. As such, we would hope that it’s highly secure. It doesn’t even contain your password.

However, a System.Management.Automation.PSCredential object, which is what you get from a call to Get-Credential, is barely more than a simple data structure. You can enter any kind of user ID and password–no authentication is done when Get-Credential is called. The PSCredential object has a method called GetNetworkCredential(). You might think from its name that it gives you a WindowsIdentity object, but no, it simply breaks your user name into separate Domain and UserName strings and hands you the credential password in clear text. Try it out with a dummy login and password–it works fine. No authentication is done, and although you’ll see a secure string for the password in the PSCredential object, you can see it in clear text by calling GetNetworkCredential().
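For instance, everything here is made up, which is exactly the point:

```powershell
# A PSCredential is just a data holder -- no authentication happens here.
# Both the user name and the password below are completely fake.
$password = ConvertTo-SecureString "not-a-real-password" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential(
    "fakeDomain\fakeUser", $password)

# GetNetworkCredential() does NOT hand you a WindowsIdentity; it just
# splits the name and returns the password in clear text:
$net = $cred.GetNetworkCredential()
$net.Domain     # fakeDomain
$net.UserName   # fakeUser
$net.Password   # not-a-real-password
```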

Clearly, the intended uses of WindowsIdentity and PSCredential objects are very different. One contains secure information about a live, authenticated user on the network. The other is an object that is secure enough to pass over a network connection (since the password is a secure string), and which will be USED LATER to authenticate someone at the time a remote session is created.

Once I got that straight I could easily see why there’s no support for converting from one to the other.

I really do have to prompt for credentials in my deployment process, then pass those credentials when opening a remote session, in order to make remoting work in a mixed-domain environment. I wish I didn’t.  But at least I understand the difference between the two objects!


And remoting is working fine!

New Book May Change My Thinking!

A while ago a friend of mine recommended a new book from Martin Fowler’s signature series: 

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

I finally got a chance to spend a bit of time with it and have now ordered it.  I’ve been blogging about principles of successful deployments and it looks like this covers a lot of those topics and a lot more—and it’s sure to be better written.

I’ll keep blogging on the subject here since it helps clarify my own thinking.  But as I get a chance to read through the book I expect to further develop the things I’ve learned over the years and may even change my mind on some things.  I’ll be sure to mention the book when it has taught me something new.

Thanks, friend, for the recommendation, and thanks, Jez Humble and David Farley for writing the book—it looks fantastic!

Good Software Deployment Practices, Part VI: “Know what you’ve deployed”

This is part of a series of posts on how to make software deployments boring, because Boring is good!  Today I’ll start with an old nursery rhyme I learned as a kid:

There was an old woman who lived in a shoe.

She had so many children, she didn’t know what to do;

She gave them some broth without any bread;

Then whipped them all soundly and put them to bed.

Sometimes it comes to mind when I’m preparing to deploy an update of our software.  I’ll tell you why.

The problem

We have 17 primary components in our software, plus about a dozen (potentially a hundred) subcomponents.  We deploy each of these independently from the others, although version dependencies often require us to deploy a set of components together.

Developers can create a dozen versions of each of these components each day, although usually it’s far fewer.  But in our two-week development cycles, we have between ten and one hundred different versions of each component to keep track of.

We have five test environments plus our production environment.  It’s my job to ensure that the correct version (of possibly a hundred) of each component gets to each environment at the right time.

Can’t I just put them to bed?  No?

Well, then, I need a way to know what’s supposed to go where.


The Solution

My team uses two tracking mechanisms, one supplied by Microsoft Team Foundation Server (TFS) and one that I created.

When a developer checks in a code change, TFS automatically builds the software, assigning the build a unique identifier (the "build number") for tracking.  You can list all your builds in Team Explorer, which is an add-on to Visual Studio for working with TFS.  Each build in the list can be assigned a "build quality".  You can create your own list of build qualities, then choose from that list to assign a particular quality to a particular build.

We use build qualities such as "Move to QA", "QA", "Move to Production", "Production".  You’ll see that some of these express an intent:  we want to move version X of component Y to Production.  Other qualities express the current state:  version W of component Y is currently in Production.

The build quality gives us half of what we need.  We use it to communicate what "should be" in each environment.  It’s great–it’s quick and easy, and it can give you a single, at-a-glance view of which version of what component should be in which environment.  We especially like the qualities that express intent (move version X to Production).  It’s our primary mechanism for communicating between our quality assurance team and our deployment team.

There’s only one problem–build qualities are set manually.  I can’t tell you the number of times I’ve forgotten to update the build quality to indicate that version X HAS BEEN moved to production.

That’s why we have a second mechanism.  At the beginning of each deployment, the automated process updates a file on each server, indicating which version of which component is being deployed to that server.  When deployment is complete, the automated process updates the status and also indicates when the process finished.  This enables us to see, dependably, what really IS on each server in each environment, not just what we intend to have in each environment.
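The status file itself can be very simple. Here is a sketch of the idea; the path, fields, and build number are illustrative, not our actual format:

```powershell
# At the start of the deployment, record what is going onto this server.
$statusFile = "D:\DeploymentStatus\ComponentY.txt"

"Component : ComponentY"  | Set-Content $statusFile
"Build     : 20100615.4"  | Add-Content $statusFile
"Status    : Deploying"   | Add-Content $statusFile
"Started   : $(Get-Date)" | Add-Content $statusFile

# ...deployment steps run here...

# On success, flip the status and stamp the finish time.
(Get-Content $statusFile) -replace '^Status.*', 'Status    : Deployed' |
    Set-Content $statusFile
"Finished  : $(Get-Date)" | Add-Content $statusFile
```

Because the automated process is the only thing that writes the file, reading it tells you what really IS on the server, not what someone remembered to record.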

You might think it odd to have both mechanisms, but I like it.  One quickly and easily expresses intent and can be updated by our quality assurance and operations teams from within Visual Studio.  It’s a nice communication tool.  The other is a highly controlled indicator of exactly which version of any component is installed on a particular server in a given environment.  It’s quickly accessible (to read only) and it is always right.

These two methods combined make it possible to have conversations like this:


OK, we’re ready to go to Production!  
  What’s going?
They’re all marked "Move to Production."  
  Hmm…hey, version X for component Y has a newer build than the one you’ve marked.  Did you mean to include it?
No, we’re holding that one until a developer checks in another fix.  



Or this:

Did that bug fix get out to our Acceptance Test environment?  
  Yes, it was deployed last Tuesday.
Well, the bug is still there, but it’s intermittent.  Are you sure you didn’t skip a web server?  
  Our process should prevent that, but let me check…no, all the servers are at exactly the same build.
Are you sure that’s the right build for the fix?  
  The bug was fixed in version W (you can get this from TFS), and version X is in that environment, so yes, the fix is there.
How was it tested?  
  You’ll have to ask our QA group about that, but it was deployed to the test environment a week ago.

These kinds of conversations help ensure that the right version of each component gets to the right environment at the right time.  And when there’s a problem, it helps us put our finger on the problem a little quicker.  We can quickly determine whether or not it’s a versioning or deployment problem.

One more thing to make deployments boring.

Good Software Deployment Practices, Part V: "Deploy Your Deployment Process"

This is part of a series of posts on how to make software deployments boring, because Boring is good!

The phrase “deploy your deployment process” is, I admit, a little odd. Let me explain.

Deployment of software entails risk of messing something up, and this whole series is a set of guidelines that can help minimize the risk. Minimizing deployment risks means that the process itself, whatever it may be, needs to be tested to ensure that it works as expected.



The logical place to test your deployment process is in your test environments–which means using the same deployment process in test as you do in production. If you use your process in a Continuous Integration environment, your process will be tested every time someone checks in their source code. Nice!



Just as with the software you’re deploying, you need to know that the process you’re using in production is exactly the same one that you’ve tested–which means that you need a means of controlling it. Controlling it means that:

  • You should ensure that any automation code used is in source control.
  • You should version the process.
  • You should know which version of the deployment process is in each environment, and ensure that only that version is used for that environment.
  • You probably should not be able to run the deployment process from your own machine. If you could, how would you ensure that what you’re using has been checked in? That you’re using the right version for each test and production environment? That someone else would get the same results if they ran the process instead of you?
  • You should have a process for upgrading your deployment process in a given environment from one version to another (in other words, a way of deploying your deployment practice–see what I mean now?).
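One small piece of enforcing the version rule can be automated. A sketch, with illustrative paths: the deployment runner refuses to start if the scripts on the deploy server aren’t the version recorded for that environment.

```powershell
# Guard clause at the top of the deployment runner. ExpectedVersion.txt
# is maintained per environment; Version.txt is written when the scripts
# themselves are deployed to this server.
$expected = Get-Content "D:\Deployment\ExpectedVersion.txt"
$actual   = Get-Content "D:\Deployment\Scripts\Version.txt"

if ($actual -ne $expected) {
    throw "Deployment scripts are version $actual, but this environment expects $expected."
}
```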



I’m lucky enough to work at a company that takes deployments seriously, and I have had a chance to design and implement the process. Here are some elements.

We use automation as much as possible. This makes processes repeatable (and faster). Powershell is the language of choice, although there are still a couple portions that use other technologies–for now.

The Powershell scripts have automated tests, just as with the project code being deployed. Since they involve file systems, multiple databases, and multiple server APIs, development of the tests involves automatically reconfiguring the environment to set up each test. Unfortunately, developing the tests constitutes the vast majority of the development time, probably by a factor of 10. BUT, I can feel very confident about making changes of any kind. Upgrading to Powershell 2.0 was not a worry, nor was upgrading Powershell Community Extensions. I just rerun all the tests and find out what broke! I’m frustrated with the extra time required to write the tests, but it has been worth it.

All deployment code is, of course, kept in source control. I’ve organized it all into a single .Net project. The project has an associated “build” in Team Foundation Server (TFS). But–oops–see below.

We use a dedicated “deploy server” for each environment. The server doesn’t need much power; the deployment team merely has to log on to it and run the deployment scripts from there. They don’t log into the servers hosting our company’s software; all the deployment is run from the deployment server. Exception: BizTalk. The BizTalk APIs are all local, meaning you have to remote into each of the servers and deploy from there. I’m investigating doing this via Powershell remoting just to see if it’s practical enough to be worth doing. It would help me integrate BizTalk deployment with the rest.

In the development environment the “deploy server” is one of our “build servers” used by TFS. This not only saves a machine; it also makes Continuous Integration much easier to accomplish.

The intent is to have a simple script that will copy the TFS build’s output from the drop directory into the correct locations on the deploy server. The TFS build number would be the method of keeping track of the version of the deployment scripts. I haven’t gotten this part done yet–not following my own advice in this post–so for now I’m simply copying scripts from my machine onto the build server, and then when they’ve run successfully there copying them to the other environments. I rename the old scripts before copying the new, so if needed I can roll back to the prior version. I’ve forgotten to copy something a few times. Gotta get this part done.
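The simple copy script I have in mind would look roughly like this. Every path and the build number are illustrative; the point is that the drop directory is the only source, and the old version is kept for rollback.

```powershell
# Copy one specific build of the deployment scripts from the TFS drop
# share onto this deploy server, keeping the previous version around.
$buildNumber = "DeploymentScripts_20100615.4"
$dropDir     = "\\buildserver\drops\$buildNumber"
$target      = "D:\Deployment\Scripts"

# Keep exactly one prior version for rollback.
if (Test-Path "$target.previous") {
    Remove-Item "$target.previous" -Recurse -Force
}
if (Test-Path $target) {
    Rename-Item $target "$target.previous"
}

Copy-Item $dropDir $target -Recurse

# Record which build of the scripts this environment is now running.
Set-Content (Join-Path $target "Version.txt") $buildNumber
```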

So most of what I’m recommending is implemented and working well, with more yet to come. It has worked out really well so far, and the controlled use of these automated scripts in each environment, pushing new versions to higher environments when they’re satisfactory in lower environments, has helped make our deployments BORING. Just right.

Good Software Deployment Practices, Part IV: “Deploy The Same Bits”

This is part of a series of posts on how to make software deployments boring, because Boring is good!

In earlier posts I’ve said that everything—everything—getting deployed should come from the output directory of the build.  No exceptions; include even environment-specific configuration information.

Doing so gives us another way to reduce risk.  We do this by deploying the same bits in every environment.  Allow me to explain.


The Bad?

I once worked at a company where standard practice was to recompile the source code before deploying into each environment.  I was always surprised at this and didn’t understand it…at the time I didn’t challenge the practice because I was still new to the industry.  But I still don’t understand it.  Maybe someday I’ll get an explanation. 

The Worse

I’ve since worked, quite often, in organizations where recompiling was not a formal practice but was often done by default.  (These organizations didn’t have a controlled build environment.)  I think it’s a weak practice, at least for the kind of software and operating environments we tend to have now.

So here’s what to do instead: 

The Good

Every time the software is built, its output should be uniquely identified.  If you have something like Microsoft Team Foundation Server or CruiseControl, this identifier is generated for you.  (If not, you’ll have to do it for yourself.)  Then, keep track of which specific build has been deployed in the test environments.  At my current workplace we have a number of test environments, and we know exactly which build numbers have been deployed in each environment.  This means we know exactly what is being tested…

…which means that if we deploy the same build number into production, then we know that those exact bits have been tested.  We don’t risk introducing errors by recompiling or reassembling the deployment artifacts.

Having confidence that we’ve deployed exactly what we’ve tested requires having confidence in these things:

  • Every artifact is in the build output directory (also frequently called the drop directory).
  • The drop directory is controlled—once the artifacts of a build are put in the drop directory, very few people should be able to alter the directory’s contents.
  • We are properly tracking build numbers so we know that the build number we’re currently deploying into production is the one we deployed into our test environments earlier.

This is one more place in which to reduce risk, as we travel on our journey to make deployments boring.

Good Software Deployment Practices, Part III: More on “The Build”

This is part of an ongoing series of posts dealing with the practices that will make software deployments boring.  Because boring is good!

So far I’ve been saying some pretty standard stuff but this time I want to state a practice that is less-often done well, at least in my opinion.


What to do with configuration files?

I’ve noticed that many people generate configuration information during the deploy.  For example, the build output might include a development or master copy of a web.config file, and during deployment the environment-specific stuff gets replaced with the correct information.

I think there’s a better way…

…and that better way is to include the config files for ALL environments in the build.  Either keep them all in source code, or go through the “replace elements in the master with the correct values” process, once for each environment.  I would tend to go for the latter, since most config files have a mix of environment-specific and non-environment-specific information, and we don’t want to have to maintain multiple copies of the non-environment-specific information.  But either way, the build output should contain all the configs.
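As a sketch of the “replace elements in the master” approach, run once per environment during the build: the token name, environments, and values here are illustrative.

```powershell
# Generate every environment's config during the build, so the drop
# directory contains them all. One master template, one pass per
# environment.
$environments = @{
    "QA"         = "Server=qa-db;Database=App"
    "Production" = "Server=prod-db;Database=App"
}

$master = Get-Content "web.master.config" | Out-String

foreach ($envName in $environments.Keys) {
    New-Item "drop\configs\$envName" -ItemType Directory -Force | Out-Null
    $master -replace "__CONNECTION_STRING__", $environments[$envName] |
        Set-Content "drop\configs\$envName\web.config"
}
```

The deploy step then reduces to picking the right file and copying it into place.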


Earlier I said that every deployment artifact—every artifact—must be generated or assembled during the build and be placed in the output directory of the build.  Handling config files in the manner I’ve described is just an example of that.  And it comes with real benefits:

  • Risk.  Selecting the right config file for an environment and putting it in place is in my opinion less risky than walking through a config file making changes to it.  So we can do the harder work that may be more error prone during the build, and do the less-risky portion during the deploy.  We push risk into an earlier step than deployment.
  • Testing/inspection.  This is related to managing risk.  We should test both the build process and the deployment process, but if we generate config information during the deploy, then the first time we can verify the generation of correct production values is after production deployment!  If we generate the configuration information during the build, we can inspect the result at our leisure, as often as we want, with any process we want, before the actual deployment takes place.


In other words, handle configuration files, and any other environment-specific items, just as you would anything else.

…which means we follow the practice of assembling ALL deployment artifacts during the build.  Everything that will get deployed should be in the build’s output directory.  The fewer exceptions we make to that rule the more we make our deployments boring.  And that’s good!

Good Software Deployment Practices, Part II: “The Build”

I’ve worked in plenty of environments in which code is compiled on a team member’s machine, then pushed to production.  This raises some important questions:

  • What source code was present when the compile happened?  Was it the latest version?  Did it include something that the developer hadn’t yet checked in to the source control system?
  • What is installed on the developer’s machine?  Are all relevant service packs at the same level as production?  Are production servers missing any required frameworks or APIs that exist on the developer’s machine?
  • Does the code used during the compile match the code that has been tested before going to production?

There are other issues, but hopefully you get the idea.  The wrong answer to any of these questions can lead to production problems.  And in this kind of environment I’ve found that no one actually knows the answer to the questions…

…which brings us to the practice that will resolve these issues:  a formal, controlled build process.  Now, by formal I don’t mean loaded with paperwork; rather, I mean that certain rules are established, communicated, and enforced.

The rules (there may be more for your environment; these are a good start):

  • The “build” (compile and assemble the deployment artifacts—executables, web pages, etc.) occurs on a dedicated machine:  the “build server”.  This is not a developer’s box.  It’s locked down, with only some team members having access to it.  Those team members must know exactly what has been installed at any given time and how the configuration compares to all test and production environments.
  • The “build” is automated to the extent that it is likely to be a very consistent process; i.e., not dependent on who or what triggers it, not subject to human errors, etc.
  • All artifacts being assembled for deployment come from source code, either directly (such as a text file) or indirectly (such as source code, which then gets compiled during the build).  No exceptions.
  • The deployment artifacts generated/assembled during the build are placed in a secure location, probably a network share with limited access.
  • Artifacts deployed to any test or production environment come only from the secure location updated by the build process.
  • The development team does not have the network, database, or other permissions necessary to deploy to production, nor to any test environments other than perhaps a low-level shared development environment.  Thus, it’s not possible to deploy anything even by accident.

These rules can be scaled up or down to fit the size and budget of most development teams.  In a complex environment (such as government contracting) there will probably be much more formality included.  In many businesses in the United States, government regulations dictate a clean separation between those who write the code and those who can deploy it.  On the other hand, in a very small team the dedicated build server may be a virtual machine administered by the team.  In this case the above rules still apply but enforcement may depend on team discipline rather than permissions.

Developer tools can help with some of this.  If you have the budget for it, Microsoft Team Foundation Server (TFS) is an excellent system that integrates source control, the build process, and much more.  You can define any number of build servers and have the TFS server(s) communicate with them.  If this is out of your budget, CruiseControl can accomplish much of the same thing.  It doesn’t offer source control or other features, but it integrates well with other source control systems and does a good job managing builds—and it’s free.  There are plenty of other tools out there, including those for non-Microsoft environments; these are just the ones I’m familiar with.

Follow these practices and you’ll reduce the number of unpleasant surprises that occur when deploying to production.  We are shooting for boring deployments:  Boring Is Good!

Don’t Clog The Production Pipeline

Here’s a situation that you don’t want your team to be in:

  1. A low or medium priority (but still significant) bug gets reported in your production environment.
  2. A fix for the bug is checked into the branch of source code that represents production.
  3. The fix is not tracked well, and it languishes in a test environment for a while.
  4. A really hot production bug (undoubtedly written by someone else, not you!) gets reported and must be fixed and placed into production right away.  The new bug is in the same general area of the software as the first bug.



The problem, if you haven’t seen it, is that you can’t easily release the critical fix (bug #2) until you know that the less-critical fix (bug #1) is good.  Has it been tested yet?  Is the problem fixed?  Have regression tests shown that it didn’t break anything?  If not, then what do you do?

The bug fix that sat around in the pipeline to production has become like sludge, potentially blocking the way for something more important.  Either bug #2 has to wait until bug #1 is ready for release, or bug #1 has to be backed out.  Or worse yet, perhaps in the rush no one remembers bug #1, so it gets released in an unknown state—pushing the sludge through the pipeline and right into the machinery.  You may find out too late what you’ve broken.

Keep it moving

Make sure that someone in your organization is monitoring the bug list closely.  Make sure they understand the importance of keeping things moving.  If a fix can’t be released fairly soon, then once it’s checked into the code branch it should still be worked on until it is fully ready for release.  If there are not resources to get it production-ready quickly, it should not be checked into the production code branch.  Maybe it should be assigned to the next general release instead.

But don’t assume that someone else is responsible for this.  I currently work in a pretty small shop.  Besides the developers we have a handful of Quality Assurance people.  They’re an excellent group, and I mistakenly thought that we wouldn’t have this problem.  But we did, despite good people and good tracking software.

So, as the “keeper” of the source code repository, I instituted a very simple policy that has made a contribution toward keeping the pipeline moving:  I lock down permissions on a branch once it gets close to production, and I don’t open it up for anyone until a member of the Quality Assurance group makes the request (usually just verbally).  That way I know that the developer and a member of QA are talking with one another, and QA understands that when the branch is opened, they’re responsible for staying on top of the fix until it is ready to go out the door.

Such a policy is not for everyone, and wouldn’t scale well to a big team.  But it adds one more opportunity to ensure that we keep the pipeline clear.  The point is to make your own contribution.  What are you doing?


Good Software Deployment Practices

I earlier mentioned my software deployment goal: Boring Is Good!  Over several posts I will cover some of the things that can help accomplish that goal, starting with the very basics.


Basic practice number one, coming up!

Use Source Control

This one seems obvious to many developers but not to all, especially as we change the discussion from standard (Java, C#) code into database schemas and into other technologies where development is done via point-and-click rather than writing code.  There’s often a “publish” button in the Integrated Development Environment that looks so tempting…and it’s great for local testing and for small projects, which is what it was intended for.  But for larger projects, the first rule is unbreakable:  other than on your own machine, the deployment process begins in the source code library.  Nothing goes to production that didn’t originate there.  That means nothing!

“Do you really mean that if Brad, our technical writer, needs to update our release notes (a .pdf file), he has to email it to someone who can check it into source control before you will deploy it?  Isn’t deployment in this case just a file copy to a web server?”

