Archive for September 2006
When people talk about quality improvement, they tend to talk about process, and to a lesser extent, methods and tools. Let’s put some order in these terms, and see what’s missing in many conversations about quality.
In the software development domain:
Process — How people do things together. Who does what, and in what order. Inputs, outputs, expectations.
Method — Specific way of doing something. E.g., Using UML to express a software design. Or refactoring to improve it.
Tool — Usually software. E.g., an IDE add-on, or a source control system.
Quality management people try to capture and improve the process. ISO 9001:2000 can be good for that: ISO got with it, dumped a lot of paperwork, and basically preaches “Say what you do, do what you say, look at your results, and make changes accordingly.”
Today’s software development community is also abuzz with methods. Sure, there’s Agile, but that’s mostly at the team level. At the individual developer level, refactoring and unit testing are good examples.
And where would we be without tools, which, for developers, usually mean the best IDE and its add-ons. Rightly so — in the end, a developer produces code, and needs the best environment on his or her computer to do it.
So what’s missing?
All of software quality improvement comes down to releasing software that works correctly, as fast as possible. Once people know what they’re supposed to be doing, and have the basic methods and tools, wouldn’t you want people who are really good at it?
To be specific:
In an otherwise well-organized company, the biggest improvement will come from having developers get better at design and coding, every day.
How to do it? Not just courses on technologies, or even on methods. Rather, the way people learn for good: by finding someone better at it and getting guidance on their daily work.
So if you’ve got some really good developers at your company, don’t hide them away: let them work with the rest, to improve everyone’s skills.
On Slow Leadership today they’re talking about Occam’s Razor. Choosing the simplest explanation. And the dangers of setting numerical targets.
But metrics are so easy to collect these days. The temptation to count things is great.
Let’s say you are deploying a new process, or tracking progress on an existing one. What harm could there be in counting, looking at the numbers, and making decisions based on them? Isn’t that sound management?
The problem is assuming that by counting something you learn more about it.
Actually, all you learn is how many of something there were.
You don’t know anything about those somethings, nor why the count came out as it did.
When you count deployment events, there is no positive integer count that tells you much by itself.
Even zero doesn’t tell you unequivocally what happened. I can think of at least three possibilities about the events:
a) They didn’t happen
b) They happened, but insubordination or fear kept the counters from recording them
c) They happened, but the counters misunderstood what to count
Things get worse if the numbers are positive. How do you know whether a number is large or small? More, or less?
For example, trying to measure software development, let’s say you counted 24 high-severity defects last month. Is that high or low? Is it more or less than the month before? Kind of depends, doesn’t it, on how much code you wrote, how much testing was done, how good the testing was, which defects were recorded, and so on.
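To make that concrete, here is a minimal sketch. The monthly figures and the normalization by thousands of lines of code changed (KLOC) are hypothetical assumptions, chosen only to show how a raw count can point one way and a normalized rate the other:

```python
# Hypothetical monthly data: raw high-severity defect counts and
# thousands of lines of code (KLOC) changed in that month.
months = {
    "July":   {"defects": 18, "kloc_changed": 4.0},
    "August": {"defects": 24, "kloc_changed": 12.0},
}

for name, m in months.items():
    density = m["defects"] / m["kloc_changed"]  # defects per KLOC changed
    print(f"{name}: {m['defects']} defects, {density:.1f} per KLOC changed")

# August has more raw defects (24 vs. 18) but a lower density
# (2.0 vs. 4.5 per KLOC) -- the raw count alone would mislead.
```

And even this denominator is only one of the qualifiers listed above; how much testing was done, and how good it was, would each need a qualifier of their own.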
Forget distributions — pie charts and the like. How do you know if people understood which category an event belonged in?
What does all this leave us with? No graphs? No counting? No decision-making based on numbers? Back to the Stone Age?
No. It just means something counter-intuitive in today’s world of computers: knowledge before counting.
First learn the qualitative details of your events. What is a satisfactory defect entry? A relevant test result? A productive code review? Make sure all participants can recognize them.
Then you can start counting.
1, 2, 3 …
P.S. Want extra credit? Read just the first page of What is Mathematics? by Courant and Robbins. The section is called “The Natural Numbers: Introduction”.
In my previous post, Code Review Starter Kit, a reader pointed out that even some of the “minimum” items on my list — the ones that require written records — are more than what people may want to do. That is, if a method requires additional writing beyond the code, it’s too much.
Try Putting Code Review Results In the Code
Code review output is sometimes called “issues”, or even “defects”, but I called them “comments”. Why?
It’s true that external tracking tools (issue trackers, spreadsheets etc.) are more powerful for recording and tracking code reviews. But what if you want to keep everything close to the code — a one-stop shop in your source control system?
Try this: have code reviewers put their short, concise issues and questions right in the code as comments, with the “//” or “/* … */” or other appropriate comment marker.
In a 1:1 or group meeting, the reviewer and author go over the comments. Any comments that the reviewer retracts or the author rejects are deleted. All the accepted comments remain in the code which is then checked in.
Now the code author can work through the code review comments one by one, removing them as they update the code, or in some cases, the regular comments.
Give all the reviewers an IDE macro which marks their comments as “code review” with name- and time-stamp, and you’ve got almost painless tracking of code reviews.
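As a sketch of what that tracking could look like: suppose the macro stamps each comment in a format like `// REVIEW(name date): text` (the marker format and helper below are hypothetical, not part of any particular IDE). A few lines of script can then list every review comment still sitting in the code, i.e. everything not yet resolved and deleted:

```python
import re

# Hypothetical marker format a reviewer's IDE macro might insert:
#   // REVIEW(alice 2006-09-14): overflow check?
REVIEW_RE = re.compile(r"//\s*REVIEW\((\w+) (\d{4}-\d{2}-\d{2})\):\s*(.*)")

def open_review_comments(source: str):
    """Return (line_no, reviewer, date, text) for each review comment
    still present in the source, i.e. not yet resolved and removed."""
    found = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        m = REVIEW_RE.search(line)
        if m:
            found.append((line_no, m.group(1), m.group(2), m.group(3)))
    return found

code = """\
int total = a + b;  // REVIEW(alice 2006-09-14): overflow check?
return total;
"""
for line_no, who, when, text in open_review_comments(code):
    print(f"line {line_no}: {who} ({when}): {text}")
```

Run over the checked-in tree, a report like this shows at a glance which review comments are still open, without any tool beyond source control.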
Starting to review your code? Want that 10-to-1 payback for finding problems early, while they’re easier to fix, but concerned about the time investment?
Whatever the method, here’s what effective code reviews have in common:
- Everyone knows which code to review
- Reviewers are familiar with requirements and design
- Purpose of review is clear
- Reviewers review all the code by themselves first
- Reviewers record comments in writing
- Code author hears and accepts comments in person
- Code author tracks comments through to fix
- Everyone saves time-stamped records to see how they’re doing
Yes, there’s more to it than that, but don’t start with less.