Archive for July 2006
Here’s Ron Porter commenting on that 30-to-1 ratio of productivity we hear about. He talks about the top performers bringing the others up to speed. That’s not just one way things work; it’s the only way.
In high school I was always on the “B” team in soccer. (Back then I was a 5 out of 30 at soccer.) My best days were when they mixed us with the “A” team — our defense with their forwards and goalie vs. their defense with our offense and goalie. Playing defense with an “A” team around me made me play better.
There are two reasons to try to make the “A” team, and together they offer an opportunity for everyone: either you are already really good, so you belong on the “A” team, or you want to become good, so you need a spot at the bottom of the “A” team, where you’ll improve.
Ron’s rules:
1. Stand up to superiors
2. Be willing to suffer ridicule
3. Keep records of everything
That’s my paraphrase — read how he said it: How to mentor a welder.
Ron, you’ve got it exactly right. I can assure you that it is “directly transferable to … other kinds of jobs.” Certainly not just programming. I sometimes think that programming (now called “software development”) is the only profession that thinks it’s so, so different. Yes software is different, but people in it are not.
When I’m not reading or working, I’m learning to roller blade. A pair of skates, a video here, a website there, and some practice. Gets me to about 1 out of 30. Apply Ron’s rule #2 though, and I go out to the park where all those kids skate circles around me. They’re all 10-out-of-30 going on 20-out-of-30 skill-wise. I watch them and even ask them questions. They’re fascinated for a moment that an adult might ask a kid how to do something. But they get over it quickly — acting like natural mentors. Which they are. And how about that — now I can actually skate around the neighborhood.
Mentoring is no mystery. It is kids’ stuff.
I’ve read any number of coding standards. They’re usually long, hard for any developer to memorize, and not automatically testable. So I was pleased to see Gerard Holzmann’s attempt at something much shorter, in the June 2006 issue of Computer Magazine: The Power of 10: Rules for Developing Safety-Critical Code.
Holzmann is the lead technologist at JPL’s Laboratory for Reliable Software, so his proposal is definitely worth reading.
But coding standards always bring up arguments about what should or shouldn’t be included, as well as doubts about how to implement them, and at what cost.
There’s even the question, in a project that has neither code reviews nor static analysis, of which to start first.
In that spirit, then, I offer this Very Short Coding Standard (VSCS) as a possible starting point for projects that need to “do something”. Just two items: one from the human side of things, which requires code review to verify, and one based on a static analysis tool that everyone already has installed: the compiler.
VSCS Rule #1: Use Meaningful Entity Names
VSCS Rule #2: Maintain Zero Compiler Warnings
My experience tells me that you get a big improvement in readability if you can come up with entity names that are obvious to reviewers as well as to yourself. For the power of compiler warnings, read Holzmann’s Rule 10.
Failure to meet either should break the build and be handled immediately by refactoring.
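As a sketch of what both rules look like in a build (the function, names, and compiler flags here are my own illustration, not from the post): compiling with warnings promoted to errors turns Rule #2 into an automatic build-breaker, while Rule #1 is something a reviewer checks by reading the code.

```c
/* Illustrative sketch only. Enforce Rule #2 by promoting warnings to
 * errors at build time, e.g.:  cc -Wall -Wextra -Werror vscs_example.c */
#include <stddef.h>

/* Rule #1: a reviewer can tell from the name and parameters what this
 * computes, without reading the body. Compare: double f(double *a, int n). */
static double average_response_ms(const double *samples_ms, size_t sample_count)
{
    double total_ms = 0.0;
    for (size_t i = 0; i < sample_count; i++) /* size_t index avoids a
                                                 sign-compare warning */
        total_ms += samples_ms[i];
    return sample_count ? total_ms / (double)sample_count : 0.0;
}
```

Under -Werror the first warning fails the compile, which is exactly the “break the build” handling the rule calls for.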
VSCS is short, but not as easy to deploy as it might seem.
If your organization thinks it should do much more, first get this working and then go further.
Software is all design, with construction being automated or at least automatically tested. So said Jack W. Reeves in his article What is Software Design back in 1992. James Shore followed up and defined good design as one which:
“… minimizes the time required to create, modify, and maintain the software while achieving acceptable run-time performance.”
Always Ready to Run (and Install)
Continuous Integration with Test-Driven Development (and other automated tests on the build) ensures that functionality is always ready. Add an automated build of the installation script, and the software is always ready to install and run.
The automated build system part of continuous integration knows how to return the essential “broke the build” or “build OK” message. The first follows the “fail early” principle. The second indicates perfect functional quality of the features that are in the build; all functional discussions can focus on which features are ready and whether they are as desired.
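The step names below are assumptions for illustration; the post doesn’t describe any particular build. The sketch just shows the shape of that contract: every step collapses into one broke-the-build / build-OK message, and the first failure reports immediately.

```c
/* Hypothetical sketch of a CI reporter; the step names are invented. */
#include <stdio.h>

struct build_step {
    const char *name;
    int exit_status;    /* 0 means success, as with system() or a CI runner */
};

/* Prints the one essential message; returns 0 only if every step passed. */
static int report_build(const struct build_step *steps, int step_count)
{
    for (int i = 0; i < step_count; i++) {
        if (steps[i].exit_status != 0) {
            /* "fail early": stop at the first broken step */
            printf("broke the build: %s\n", steps[i].name);
            return 1;
        }
    }
    printf("build OK\n");
    return 0;
}
```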
What about changes and new features?
What ensures, though, that the software is always ready to easily modify and extend?
Static analysis tools and complexity measures can help, but there’s a lot of important stuff that no machine can tell you.
For example, just one key attribute of easy-to-change code is:
Meaningful entity names
Write me a rule that finds the ones that aren’t.
A much easier way to find such people-oriented code attributes, and to recommend improvements, is code review.
On top of a continuous integration system with automated analysis and tests, and a developer practice of TDD and constant refactoring (at the lowest level, not turning the entire design upside down every week), all new code should be reviewed, along with the edges of the existing code it interfaces with.
Automated static analysis should find the language or construction-related problems, leaving human code reviewers to focus on things like readability, simplicity, and other design principles that only a person can find.
What kind of code review?
Pre-check-in review is the absolute minimum: review issues block check-in, and so prevent a build that runs but whose code (design) isn’t ready to work with.
Pair programming is a nice idea, but it needs good pairing, and people who are very comfortable with each other, to make a difference.
Weekly desk review (at the reviewer’s desk, by him or herself first) and discussion of all code written that week is best: it allows the reviewer to see, and advise on, the bigger picture.
It’s the only way to keep the design that is in your code, as well as its functionality, Always Ready.
What will lead developers (and managers) to start practicing the best of what is known about software development? I don’t think that just being told will do it. People have to learn for themselves. But how? Where to start?
One answer is doing, and learning by trial and error. I’m all for learning by doing, but adding reading to that cycle helps too. The Web makes it a lot easier, if you know what you’re looking for. So I thought I’d share my reading path from this evening. Not just what I read, but the choices I made while getting there.
Google: Because it’s there
Google > more >> : I never clicked “more” before!
Blog Search: Vanity — look for my blog
Software Quality: My topic and profession
What Quantifies Good Software Design: Looked interesting
Quality With A Name: Recommended by first article
Other links I read from there, though I kept coming back to James Shore’s page:
What is Software Design?: Shore called it a “famous essay”
(later I found Reeves’ series of 3 articles with update)
Using PDL …: Articles mentioned “PDL”
Clean Code: Args …: Referenced in one of the articles
Design by Contract: Appeared in one of the articles
Does this mean that reading James Shore is the path to software quality?
Well, it’s a path.
More important is to search the web purposefully, find what means something to you, read it, and then try it out.
I’m still thinking about the question “What is software like?” which I tried to deal with last week in Not Like Anything You’ve Ever Seen. Today’s thought: pool. Not the swimming kind, but the kind with the pool table, sticks, and balls. Watch a professional play and you’ll see what differentiates serious pool players from the rest of us:
Position play: This is the act of hitting the cue ball with the necessary speed (and spin, if needed) to get the cue ball to end up someplace on the table where you’ll be able to get the next ball in easily. In short, it’s the strategy of thinking ahead and setting up your next shot.
When writing code, it’s a whole new game if you write code not just to run, but to make it exactly ready for yourself (or someone else) to add the next feature or change.
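Here’s a hypothetical illustration (the discount rules are my invention, not from the post). The first version of the feature is written as a data table rather than an if-chain, so the next shot, adding another discount, is a one-line change:

```c
/* Hypothetical "position play": the structure is chosen so the next
 * feature (one more discount rule) is a single added line, not new logic. */
#include <string.h>

struct discount_rule {
    const char *customer_type;
    double rate;
};

static const struct discount_rule DISCOUNTS[] = {
    { "member", 0.10 },
    { "staff",  0.20 },   /* the next feature lands here, one line */
};

static double discount_rate(const char *customer_type)
{
    size_t rule_count = sizeof DISCOUNTS / sizeof DISCOUNTS[0];
    for (size_t i = 0; i < rule_count; i++)
        if (strcmp(DISCOUNTS[i].customer_type, customer_type) == 0)
            return DISCOUNTS[i].rate;
    return 0.0;   /* no discount for anyone else */
}
```

The if-chain version would have run just as well; the table version is the one that sets up the next shot.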
Maybe the answer for how to lead people — in order to get good products out on time — is simpler than we thought.
So much energy (some even in these pages) spent on figuring out how to fix defects, train people better, and improve quality.
How about this: focus all of your people’s energies on learning (as opposed to production) as the goal of their work, and high-quality production will follow by itself.
Just look around at your best people and ask if it isn’t so.
A magnifying glass brings us closer to the subject — allows us to examine the details. A prism, on the other hand, separates the rainbow of colors that are otherwise hidden in white light.
What if testing were separated from coding?
While there’s a popular trend these days toward combining them again in a new way, Test-Driven Development, many software organizations still have separate testing groups at the product or system level. We can ask whether that’s good or bad, and why, but that’s an ongoing argument.
I’d like to ask a less familiar “what if” question.
What if debugging were separated from coding?
Imagine a software development organization where developers write code, another group tests it (that’s QC), and then the developers fix what QC has found. Now imagine that a separate group does all the debugging: all the figuring out what’s wrong, setting breakpoints, reading logs, doing traces, and so on. In short, the troubleshooting. Developers are not allowed to do that. The debugging team members are experts in the codebase; they narrow down each problem, all the way to identifying where to fix the code and how, and then return their results to the original coders for use.
No, I’m not suggesting such a process. Rather, I’ve been surprised to see so many books about professional programming say that debugging is an essential skill for a programmer. If it’s such an important activity, maybe it should have its own professional team to do it.
On the other hand, what does the need for such an activity say about the code?
Just the other day I saw my own coding project get stuck in debugging. The irony is that I was working on error-handling, and I was debugging an assert statement that didn’t seem to be working. I felt several degrees off from forward motion in coding.
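The post doesn’t say what was wrong with that assert, but one classic way an assert “doesn’t seem to be working” is worth knowing: if the file is compiled with NDEBUG defined, as release builds often are, assert() expands to nothing at all. A minimal sketch, with invented names:

```c
/* Hypothetical example. Under  cc -DNDEBUG ...  the assert below expands
 * to ((void)0): the check silently vanishes and never fires. */
#include <assert.h>

static int clamp_percent(int value)
{
    /* Debug builds stop here on bad input; release builds (NDEBUG) don't. */
    assert(value >= 0 && value <= 100);

    /* Defensive clamping still runs either way. */
    if (value < 0)   return 0;
    if (value > 100) return 100;
    return value;
}
```

The related trap is putting a needed side effect inside assert(), which then disappears from release builds along with the check.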
Maybe we have to go back to history.
The First Bug
Many people are familiar with the story. The first computer “bug”, a moth found in a relay of the Harvard Mark II, actually walked into the machine and caused a failure. A hardware failure.
But do bugs walk into code? So that we should spend so much time, tools, and energy on searching for them, capturing them, and getting rid of them?