Agile Development: The Wrong Analogy (Again)? How about this one?

Analogies are very powerful, either to mislead or to instruct.  For many years people (including me for a short while) have used the analogy of building a house—or a bridge, or a skyscraper, or anything else—to understand how to properly build software.  I recently came across another example at one of my favorite database-related web sites, SQL Server Central.  Here it is, followed by what I think is a better one:


Have You Forgotten A Critical Attribute Of Your Development Project?


Forget about what customers want—or at least, what they say they want—for a minute.

People with software development experience will want to talk about non-functional attributes such as performance, scalability, capacity, usability, flexibility, and maintainability.  They may not be familiar with Tom Gilb’s “Principles of Software Engineering Management,” a favorite book of mine (see Amazon.com), but they would probably agree with his statement on the importance of managing critical, non-functional attributes:


The Achilles’ Heel Principle:

Projects which fail to specify their goals clearly, and fail to exercise control over even one single critical attribute, can expect project failure to be caused by that attribute.


Software architects often struggle to convince other stakeholders that strict attention to non-functional attributes is vital to project success.  But even architects and senior developers often miss one of the critical attributes.  In fact, their regard for it sometimes resembles the sales team’s regard for something as meaningless (to them) as maintainability.

The neglected nonfunctional requirement? 


Operability:  i.e., the ability to actually support the software after it has been launched.


To consider whether operability is being overlooked on your project, think about your answers to some questions—there are more, but here’s a starter list:

  • Are operations specialists (network and database administrators, customer support teams, any others who will be running the show when it’s in the hands of its users) involved throughout the entire development process?
  • Do they gain frequent experience with the software by deploying it into production-like environments?
  • Are you delivering functionality to customers in small increments? (Just as “big bang” doesn’t work well for customers, it’s usually a disaster for operations.)
  • Do developers need permission in the production environment to see data, logs, etc., on a regular basis (or, heaven forbid, to “fix” data or configurations)?
  • Is deployment to production a nail-biting experience?
  • Does customer support know how to use the software?
  • If customers need training, has it been done, and using what materials?  Has the customer support team (and the development team) gotten feedback from these sessions?


Like most of the other nonfunctional requirements, operability provides no "business value" to the marketing team.


Until you have actual customers, that is.

Then we find that everyone wants operability.

New Book May Change My Thinking!

A while ago a friend of mine recommended a new book from Martin Fowler’s signature series: 

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation  (At Amazon.com)

I finally got a chance to spend a bit of time with it and have now ordered it.  I’ve been blogging about principles of successful deployments, and it looks like the book covers a lot of those topics and a lot more—and it’s sure to be better written.

I’ll keep blogging on the subject here since it helps clarify my own thinking.  But as I get a chance to read through the book I expect to further develop the things I’ve learned over the years and may even change my mind on some things.  I’ll be sure to mention the book when it has taught me something new.

Thanks, friend, for the recommendation, and thanks, Jez Humble and David Farley for writing the book—it looks fantastic!

Good Software Deployment Practices, Part II: “The Build”

I’ve worked in plenty of environments in which code is compiled on a team member’s machine, then pushed to production.  This raises some important questions:

  • What source code was present when the compile happened?  Was it the latest version?  Did it include something that the developer hadn’t yet checked in to the source control system?
  • What is installed on the developer’s machine?  Are all relevant service packs at the same level as production?  Are production servers missing any required frameworks or APIs that exist on the developer’s machine?
  • Does the code used during the compile match the code that has been tested before going to production?

There are other issues, but hopefully you get the idea.  The wrong answer to any of these questions can lead to production problems.  And in this kind of environment I’ve found that no one actually knows the answers to these questions…

…which brings us to the practice that will resolve these issues:  a formal, controlled build process.  Now, by formal I don’t mean loaded with paperwork; rather, I mean that certain rules are established, communicated, and enforced.

The rules (there may be more for your environment; these are a good start):

  • The “build” (compile and assemble the deployment artifacts—executables, web pages, etc.) occurs on a dedicated machine:  the “build server”.  This is not a developer’s box.  It’s locked down, with only some team members having access to it.  Those team members must know exactly what has been installed at any given time and how the configuration compares to all test and production environments.
  • The “build” is automated to the extent that it is a very consistent process; i.e., not dependent on who or what triggers it, not subject to human error, etc.  (A rough sketch of what such an automated step can look like follows this list.)
  • All artifacts assembled for deployment come from the source control system, either directly (such as a text file) or indirectly (such as source code that gets compiled during the build).  No exceptions.
  • The deployment artifacts generated/assembled during the build are placed in a secure location, probably a network share with limited access.
  • Artifacts deployed to any test or production environment come only from the secure location updated by the build process.
  • The development team does not have the network, database, or other permissions necessary to deploy to production, nor to any test environments other than perhaps a low-level shared development environment.  Thus, it’s not possible to deploy anything even by accident.
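
To make the “automated” rule concrete, here is a minimal sketch of the kind of script that might run on a build server.  Everything in it (the solution name, the paths, the artifact share, the commands) is hypothetical, and in practice a tool such as TFS or CruiseControl, discussed below, would drive these steps rather than a hand-rolled script.

    #!/usr/bin/env python3
    """A deliberately small sketch of an automated build step.

    All names here are placeholders; the point is only that the sequence is
    scripted, so it runs the same way no matter who or what triggers it.
    """
    import shutil
    import subprocess
    from datetime import datetime
    from pathlib import Path

    WORKSPACE = Path(r"C:\build\workspace")            # source is checked out here
    OUTPUT_DIR = WORKSPACE / "bin" / "Release"         # compiler output lands here
    ARTIFACT_SHARE = Path(r"\\buildserver\artifacts")  # secure, limited-access share

    # Placeholder commands: substitute whatever your source control client
    # and compiler actually require.
    GET_LATEST_CMD = ["git", "pull"]
    COMPILE_CMD = ["msbuild", "MySolution.sln", "/p:Configuration=Release"]

    def run(cmd):
        """Run a command in the workspace and fail the whole build if it fails."""
        print("running:", " ".join(cmd))
        subprocess.run(cmd, cwd=WORKSPACE, check=True)

    def main():
        # 1. Get the latest checked-in source -- nothing from anyone's desktop.
        run(GET_LATEST_CMD)

        # 2. Compile and assemble the artifacts on the locked-down build server.
        run(COMPILE_CMD)

        # 3. Copy the artifacts to the secure share, stamped so every test and
        #    production deployment can name exactly which build it came from.
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        destination = ARTIFACT_SHARE / ("build-" + stamp)
        shutil.copytree(OUTPUT_DIR, destination)
        print("artifacts published to", destination)

    if __name__ == "__main__":
        main()

The details will differ from shop to shop, but the property we are after is the same: the steps run identically no matter who or what kicks off the build, and the output always lands in the one location that deployments are allowed to draw from.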

These rules can be scaled up or down to fit the size and budget of most development teams.  In a complex environment (such as government contracting) there will probably be much more formality included.  In many businesses in the United States, government regulations dictate a clean separation between those who write the code and those who can deploy it.  On the other hand, in a very small team the dedicated build server may be a virtual machine administered by the team.  In this case the above rules still apply but enforcement may depend on team discipline rather than permissions.

Developer tools can help with some of this.  If you have the budget for it, Microsoft Team Foundation Server (TFS) is an excellent system that integrates source control, the build process, and much more.  You can define any number of build servers and have the TFS server(s) communicate with them.  If this is out of your budget, CruiseControl can accomplish much the same thing.  It doesn’t offer source control or the other TFS features, but it integrates well with other source control systems and does a good job managing builds—and it’s free.  There are plenty of other tools out there, including those for non-Microsoft environments; these are just the ones I’m familiar with.

Follow these practices and you’ll reduce the number of unpleasant surprises that occur when deploying to production.  We are shooting for boring deployments:  Boring Is Good!

Don’t Clog The Production Pipeline

Here’s a situation that you don’t want your team to be in:

  1. A low or medium priority (but still significant) bug gets reported in your production environment.
  2. A fix for the bug is checked into the branch of source code that represents production.
  3. The fix is not tracked well, and it languishes in a test environment for a while.
  4. A really hot production bug (undoubtedly written by someone else, not you!) gets reported and must be fixed and placed into production right away.  The new bug is in the same general area of the software as the first bug.

 

Sludge

The problem, if you haven’t seen it, is that you can’t easily release the critical fix (bug #2) until you know that the less-critical fix (bug #1) is good.  Has it been tested yet?  Is the problem fixed?  Have regression tests shown that it didn’t break anything?  If not, then what do you do?

The bug fix that sat around in the pipeline to production has become like sludge, potentially blocking the way for something more important.  Either bug #2 has to wait until bug #1 is ready for release, or bug #1 has to be backed out.  Or worse yet, perhaps in the rush no one remembers bug #1, so it gets released in an unknown state—pushing the sludge through the pipeline and right into the machinery.  You may find out too late what you’ve broken.

Keep it moving

Make sure that someone in your organization is monitoring the bug list closely, and make sure they understand the importance of keeping things moving.  Even if a fix can’t go out in a release right away, once it has been checked into the code branch it should keep being worked on until it is fully ready for release.  If there aren’t the resources to get it production-ready quickly, it should not be checked into the production code branch at all.  Maybe it should be assigned to the next general release instead.

But don’t assume that someone else is responsible for this.  I currently work in a pretty small shop.  Besides the developers we have a handful of Quality Assurance people.  They’re an excellent group, and I mistakenly thought that we wouldn’t have this problem.  But we did, despite good people and good tracking software.

So, as the “keeper” of the source code repository, I instituted a very simple policy that has made a contribution toward keeping the pipeline moving:  I lock down permissions on a branch once it gets close to production, and I don’t open it up for anyone until a member of the Quality Assurance group makes the request (usually just verbally).  That way I know that the developer and a member of QA are talking with one another, and QA understands that when the branch is opened, they’re responsible for staying on top of the fix until it is ready to go out the door.

Such a policy is not for everyone, and wouldn’t scale well to a big team.  But it adds one more opportunity to ensure that we keep the pipeline clear.  The point is to make your own contribution.  What are you doing?

 

Are You Writing Fossilized T-SQL Code Every Day?

There’s a long list of SQL Server features that are deprecated; i.e., features that will be removed in some future version—perhaps even the next one.  See http://msdn.microsoft.com/en-us/library/ms143729.aspx for the list.

Ignoring the list is one of those things that can lock an organization into staying with an old, and eventually unsupported, version of SQL Server—or into retiring a system earlier than it would like, or into spending precious resources rewriting code.

Many items on the list will hit someone, forcing them to revise existing code.  But a few of the features are so widely used that removing them will have an enormous impact.  I think Microsoft will be pressured not to remove them, but for the product to move forward, sometimes that’s what it takes.

The Short List

Here are the ones from the list that I think will cause the most widespread problems:

  • Use of the SQL 2000 and earlier system tables—sysobjects, sysindexes, etc.  Microsoft always warned against using these but I’ve never worked in an organization that didn’t use them somewhere in their code.
  • DBCC DBREINDEX
  • SQL-DMO
  • And now the big one:  not ending T-SQL statements with a semicolon.  Think of all the lines of SQL code you have written, and that your colleagues have written, that are still in use.  What percentage have a semicolon at the end of each statement?

Coding habits die slowly.  Even when presented with this warning from Microsoft, I have had great difficulty getting programming groups to seriously consider adding a semicolon at the end of each statement.  It’s not hard; it could probably become a habit within a few weeks for people who write a lot of T-SQL.  (That’s about how long it took me to start typing it automatically.)  And I think it actually makes the code easier to read.  But most people don’t want to fight the inertia.
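
If you are curious how much fossilized code is sitting in your own projects, even a quick-and-dirty scan can be revealing.  The little Python script below is purely illustrative: the folder layout is an assumption, the pattern list covers only a few items from Microsoft’s page, and a plain text search will happily flag comments and string literals too.  It also makes no attempt to detect missing semicolons, which is much harder to do with a simple search.

    """Rough, illustrative scan of .sql files for a few deprecated constructs."""
    import re
    import sys
    from pathlib import Path

    # Deprecated construct -> a modern replacement to consider.
    DEPRECATED = {
        r"\bsysobjects\b": "sys.objects",
        r"\bsysindexes\b": "sys.indexes (plus the related DMVs)",
        r"\bsyscolumns\b": "sys.columns",
        r"\bDBCC\s+DBREINDEX\b": "ALTER INDEX ... REBUILD",
    }

    def scan(root: Path) -> None:
        """Print every line in every .sql file under root that matches a pattern."""
        for sql_file in root.rglob("*.sql"):
            for lineno, line in enumerate(
                    sql_file.read_text(errors="ignore").splitlines(), start=1):
                for pattern, replacement in DEPRECATED.items():
                    if re.search(pattern, line, flags=re.IGNORECASE):
                        print(f"{sql_file}:{lineno}: matches {pattern!r}; "
                              f"consider {replacement}")

    if __name__ == "__main__":
        scan(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("."))

Treat the output as a conversation starter rather than an audit; even a rough count can help a team decide whether fighting the inertia is worth it.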

How about you?

Moldy Code

“Finished” but unreleased (maybe even untested) code is just like that little container of baked beans that you put in the fridge three or four weeks ago. 

What?  You don’t remember the baked beans?  Well, let’s go look in the fridge…they’re there.  Way in the back.

Now look underneath the beans.  Hmmm, not sure what’s in that container.  And we’re not sure we want to open it.  But since we don’t have time to deal with it right now…well…oh, let’s just leave it there for now.  (If you can’t relate to this, you clearly have NOT been to college!)

Unreleased code is just like those beans (and that, um, whatever underneath them).  It goes bad on you.

When the code was new, chances are that your team knew the requirements.  That you felt confident that the requirements were met by the code.  That the code fit into the rest of the project.  That you knew what assumptions were being made (there are always assumptions).  And that someone could understand the code.

Now that it’s been sitting there for a while, you’re not so sure.  The developer who wrote it doesn’t remember as much about it.  Even if it “works”, does it meet the requirements?  They might have changed by now—including those that were not perfectly documented.

Let that code sit long enough and chances are that no one wants to open it up and have a look.  So it sits even longer.  Eventually the only reasonable thing to do is throw it out.

One of the great things about Agile Development is its emphasis on frequent releases.  Agile Development (specifically, Extreme Programming) also encourages us to use metaphors wherever they’re useful.  So how about this one:  releasing software is like cleaning out your fridge!