Talk About Quality

Tom Harris

Making Continuous Integration Work

It’s not hard to find a list of the better-known prerequisites for Continuous Integration. See, for example, Take Advantage of Continuous Integration, from a 2006 issue of Visual Studio Magazine. The article lists three basics (sketched together just after the list):

  • source code repository
  • fully automated build process
  • automated tests
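
For concreteness, here’s roughly what those three look like wired together: one script that the build server runs on every commit. This is a minimal sketch in Python; the repository URL, make targets, and paths are placeholders of mine, not from the article:

    #!/usr/bin/env python
    # Minimal CI loop: check out, build, test. Illustrative sketch only;
    # the repository URL and make targets are hypothetical placeholders.
    import subprocess
    import sys

    REPO_URL = "https://example.com/svn/project/trunk"  # hypothetical repository
    WORKDIR = "build-workspace"

    def run(cmd):
        # Echo and run one command; check=True raises on any nonzero exit.
        print(">>", " ".join(cmd))
        subprocess.run(cmd, check=True)

    try:
        # 1. Source code repository: always start from a fresh checkout.
        run(["svn", "checkout", REPO_URL, WORKDIR])
        # 2. Fully automated build process: one command, no manual steps.
        run(["make", "-C", WORKDIR, "clean", "all"])
        # 3. Automated tests: the build isn't done until the tests pass.
        run(["make", "-C", WORKDIR, "test"])
    except subprocess.CalledProcessError as err:
        print("BUILD FAILED:", err)
        sys.exit(1)  # nonzero exit is what the dashboard turns into "FAIL"
    print("BUILD PASSED")

The particular tools don’t matter (svn and make here are stand-ins); what matters is that a single command, run the same way every time, ends in an unambiguous pass or fail.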

But slap those together and turn it on, and you may still be left with a build dashboard full of “FAIL” messages, and nobody quite sure how to fix them and keep them fixed.

From my recent experience, there are at least two additional requirements. As usual, it’s the implicit requirements, the ones everyone assumes are already met but really aren’t, that make all the difference. So, continuing the list:

  • reliable builds, identical between developer’s workstation and the central build system

Sounds obvious, right? It’s really two requirements that go together, and neither is so obvious in practice.

Reliable: when a build doesn’t always work exactly the same way on a developer’s machine, the developer can simply re-run it, and may never find the time to troubleshoot why it doesn’t work 100% of the time. But when a central build system runs it, everyone expects a passing build to pass every time. FAIL has to mean “something’s wrong with the code”, not “oops, I guess that build didn’t work this time.”
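
One way to smoke out that kind of flakiness before it reaches the central system is to run the same build twice from a clean state and compare the outputs. A sketch along those lines, where the build command and artifact path are assumptions of mine:

    # Repeatability smoke test: build twice from clean, compare artifact hashes.
    # BUILD_CMD and ARTIFACT are hypothetical; substitute your own.
    import hashlib
    import subprocess

    BUILD_CMD = ["make", "clean", "all"]  # assumed one-step build
    ARTIFACT = "out/app.bin"              # assumed build output

    def build_and_hash():
        subprocess.run(BUILD_CMD, check=True)
        with open(ARTIFACT, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    first = build_and_hash()
    second = build_and_hash()
    if first != second:
        raise SystemExit("Two clean builds differ: %s vs %s" % (first, second))
    print("Build is repeatable; artifact hash:", first)

A build that can’t pass this twice in a row on the developer’s machine will sooner or later fail mysteriously on the central one. (If the hashes differ even though nothing seems wrong, embedded timestamps are a common culprit.)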

Identical: the build results obtained by the developer and by the build system must match. Nothing saps faith in a build system like the “WOMB” (“works on my box”) response. (Thanks to Don Hass for that acronym, in his post Who cares if the build is broken?)
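
“Identical” mostly comes down to identical inputs: same sources, same toolchain, same environment. A small fingerprint script, run on both the developer’s box and the build machine and then diffed, makes any drift visible. The tool list below is illustrative; extend it with whatever your build actually depends on:

    # Print a toolchain fingerprint; run on both machines and diff the output.
    import platform
    import subprocess

    TOOLS = [["gcc", "--version"], ["make", "--version"]]  # assumed toolchain

    print("host:", platform.node(), "|", platform.platform())
    for cmd in TOOLS:
        try:
            out = subprocess.run(cmd, capture_output=True, text=True).stdout
            print("%s: %s" % (cmd[0], out.splitlines()[0]))
        except (FileNotFoundError, IndexError):
            print("%s: NOT FOUND" % cmd[0])

If the two fingerprints don’t diff clean, “works on my box” is already explained before anyone argues about the code.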

Don, by the way, was actually commenting on Derrick Bailey’s original notes, The Build Is Broken… Who Cares? Our last prerequisite for today:

  • Somebody has to care that the build is broken

Derrick concludes that

“The entire team cares.” If we create an atmosphere where a broken build is a bad thing, then we have created a more productive team.

But to create that atmosphere in the first place, there has to be a first person who cares. Someone who looks at the entire build system display, often with multiple components and platforms, identifies who needs to do what to get each build passing, and follows up daily to get the display all green. It can be a thankless job, but it’s well worth it, to ensure that all the builds are reliable and passing, create that self-sustaining teamwork atmosphere, and make Continuous Integration really work.
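
Even that daily legwork can be partly scripted. Here’s a hypothetical sketch of the follow-up report; a real version would read the build server’s status feed rather than this inline snapshot:

    # Daily follow-up report: list every red build and who owns it.
    # The snapshot below is inline for illustration only.

    # (component, platform, status, owner) -- all hypothetical
    BUILDS = [
        ("core", "linux",   "PASS", "alice"),
        ("core", "windows", "FAIL", "alice"),
        ("ui",   "linux",   "FAIL", "bob"),
    ]

    broken = [b for b in BUILDS if b[2] == "FAIL"]
    for component, plat, _, owner in broken:
        print("%s/%s is red -> follow up with %s" % (component, plat, owner))
    print("%d of %d builds green" % (len(BUILDS) - len(broken), len(BUILDS)))

The script is trivial on purpose: the hard part is the person who runs it every morning and actually follows up.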

Written by Tom Harris

August 18, 2008 at 4:05 am

Posted in Agile

3 Responses

  1. I wish I could take full credit for the WOMB acronym, but it was a group effort. Nothing is worse than working on a team (doesn’t matter if it is 2 or 200 people) when you don’t have full buy-in to automated builds and unit testing.

    It’s worst when half of a team is 100% behind the build process but there are a few holdouts (it only takes one) who just see it as “too much work”. In my experience those are the people who just slap code together anyhow and don’t see coding as an art and something you might like to do.

    Holdouts are also, all too often, the ones who will never refactor a single line of code in their lives. Their friend is copy-and-paste, all day long.

    My favorite experience with automated builds and testing is when a developer’s local build stopped working, so they commented out the unit test that was failing, just so they could check in their code. Two weeks later it came back to bite them in the backside! So much for thinking they were smarter than the build!

    Automated builds are also (IMO) important even on a team of one. As long as you can minimize the time and effort for setup, the benefits will save you in the end. Being able to tell a non-developer they can build a new version of an app by clicking on the build system’s web interface and get a new WAR (Java world)… that can be priceless. I have updated code remotely and had others (non-developers) rebuild and deploy applications, all because there was a build system in place.

    Lastly, there is not enough professionalism in our field anymore. When it comes to creating software, too many “developers” expect tools to create the code. They don’t understand the code and don’t see anything wrong with that process. Some days it makes a coder at heart sad!

    Don Hass

    August 18, 2008 at 4:29 am

  2. Giving credit to developers, I have to say that my experience with the team I work with is quite different. (It can be good out there!)

    Developers care about their code, and have been onboard with continuous integration from the start. No cases of intentionally maladjusting the code to make it pass!

    But they are busy and under pressure to get features completed, so they have little time and energy to ensure that the build system matches their results. They just demand that it work. In return for build support’s efforts making it all work, developers have a greater willingness to believe the system and fix broken builds quickly.

    There’s another thing I didn’t mention: cross-component build breaks, where component A’s build fails because component B has a warning or error, and component B is developed at another site in a multinational company. What’s worked there is another simple solution: using the defect tracking system to request the fix. When the developer on component B sees a short, well-written defect entry saying, “please fix your file X because it’s failing my build”, they generally respond pretty quickly. Either out of professional respect, or at least to get the defect off their defects list!

    talkaboutquality

    August 18, 2008 at 4:39 am

  3. The first place I saw this advice highlighted was at CodingHorror:

    http://www.codinghorror.com/blog/archives/000988.html

    We both know of examples where the build process depends on so many things: a perfectly identical version of Cygwin, a network-file-system-based source control system being up and replicated correctly, the precise proper command, a clean command that actually cleans, and a make utility (whether make itself or something else) that actually stops on failures, so as not to use the previous (uncleaned) results.

    If any of these fails, or worse, if all of them are broken… you can imagine (or see first-hand) the chaos that results.

    That said, I think your point about the importance of involving developers is an excellent one. People have to have some pride in their code, and feel part of a team. No code I release is delivered until I’m certain, to the best of my knowledge, that it works. If I don’t approve of it, it doesn’t go out with my name on the commit line, regardless of who tells me they believe it’s OK. As a result, I have a natural inclination to make sure that my code is not the code that breaks something.

    Mike Miller

    August 18, 2008 at 7:32 pm

