Archive for April 2009
Consider a function in software code, and how to write it properly so that it meets its requirements and doesn’t fail. The standard way of looking at it is that a function has an API — a signature — of input and output variables and types. The code for the function has a pretty standard form: checks for invalid input, then a function body that implements a mini-flowchart of conditional checks, and maybe a state machine if needed. All this to take every given input vector to the correct output.
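As a minimal sketch of that standard form (the function name and the 0–100 range are invented for illustration, not taken from any real system), it might look like:

```python
def clamp_volume(level: int) -> int:
    """Map a requested volume level to the valid output range 0..100.

    A toy example of the standard form: reject invalid input first,
    then walk a small flowchart of conditional checks to the output.
    """
    # Check for invalid input
    if not isinstance(level, int):
        raise TypeError("level must be an integer")
    # Conditional body: clamp to the supported range
    if level < 0:
        return 0
    if level > 100:
        return 100
    return level
```

Every input vector lands somewhere: below the range, above it, inside it, or rejected outright.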
Unit tests, whether they’re written before coding (test-driven development) or after, apply test input vectors to the function and check whether the expected output is produced. The same story applies to a module: a collection of functions or classes that implements a feature. There are still valid inputs to check for correct output, and invalid inputs that must be rejected. Best-practice test design methods, such as boundary-value checking and equivalence classes, help the developer choose the best test vectors from the sea of possibilities: the ones that will cause failures early, while the code is still in the hands of the developer.
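Here is a hypothetical illustration of those test design methods (the function, its name, and the 60-point pass threshold are all invented for the example): the inputs split into equivalence classes (invalid, failing, passing), and the test vectors sit right on and just past each boundary.

```python
def percent_to_grade(score: int) -> str:
    """Hypothetical function under test: map a 0-100 score to pass/fail."""
    if score < 0 or score > 100:          # invalid equivalence class
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Boundary-value vectors: on each edge and just past it
assert percent_to_grade(0) == "fail"      # lower valid edge
assert percent_to_grade(59) == "fail"     # just below the pass threshold
assert percent_to_grade(60) == "pass"     # on the threshold
assert percent_to_grade(100) == "pass"    # upper valid edge
for bad in (-1, 101):                     # just outside the valid range
    try:
        percent_to_grade(bad)
        assert False, "should have raised"
    except ValueError:
        pass
```

A handful of vectors like these cover the interesting territory far better than a pile of arbitrary mid-range values would.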
So why do we still have so many “surprise” failures? Unit-tested code that nevertheless comes back from system testing with failures? And the Steps to Reproduce — so simple — like, “I left the system running overnight and when I came back in the morning it had crashed.”
The answer may be in the flow of data. Functions and modules in embedded software don’t just deal with single inputs one by one. Generally they are expected to process streams of data in real time. For example, a modern television set-top box with a built-in disk has to process multiplexed video data and metadata, the same from the hard disk, as well as a much slower stream of user input via the TV remote control. If the code slips up, or leaks memory, then sooner or later you get wrong output, a hang, or a crash, which QC proudly reports.
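One way to make that concrete in a unit test is a soak-style check: feed the code a long stream and assert that its internal state stays bounded. This toy sketch (names and window size invented for illustration) uses a fixed-size buffer so state cannot grow with the stream:

```python
from collections import deque

def process_stream(events, window=16):
    """Toy stream handler: hold only a bounded window of recent events,
    so internal state cannot grow without limit over a long run."""
    recent = deque(maxlen=window)   # old entries are evicted automatically
    count = 0
    for ev in events:
        recent.append(ev)           # an unbounded list here would be a leak
        count += 1
    return count, len(recent)

# Soak-style check: a million events, yet held state stays at the window size
count, held = process_stream(range(1_000_000))
```

A test like this is a crude stand-in for “I left the system running overnight” — it won’t catch everything, but it catches the accumulation bugs that single-input vectors never exercise.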
Where have we seen this before? How about those robot car competitions that university engineering departments often hold? Contests that have expanded into worldwide challenges to design, for example, an auto-piloted car (see “The DARPA Urban Challenge”). These contests demand software that will keep a driverless car on track for long periods of time, avoiding stationary and moving obstacles (other cars).
Could it help, then, to see a lowly function in embedded software not as a flowchart of if-then-else statements, but as a little car, or perhaps a delivery truck, that must be kept on track, avoiding obstacles in the data stream while delivering packages to the right place? How would that view affect code layout, code review, and unit test design? See you at the track!
“Britain’s Ugly Duckling Breaks Out in Song” I slowly translated off the showbiz page of a foreign-language newspaper. I had to work at it to figure out that it said “Susan Boyle” and then look up her appearance on YouTube from a week ago. Anyone who wants to be moved by song, and doesn’t mind (or enjoys) the contrast with “beautiful people” celebrity judges putting feet in their mouths, should stop and listen.
I thought I would have nothing to add to the commentary on a musical appearance that saw over two million views (and that’s just on one upload, let alone the original TV broadcast). But after reading the 5-day-old Wikipedia entry, and some of the newspaper articles in the references, I wondered why nobody offered the obvious. Good singing comes from interpretive ability, soul, and poise. Anyone who has enjoyed opera would have no reason to be surprised — and every reason to be moved — by Boyle’s voice. Similarly by that of her “predecessor” Paul Potts. It is the furthest thing from coincidence that Potts sang opera, and Boyle sang from a musical, both genres that are formally performed live without a microphone.
Something that nags at me in the background is time-suckers. E-mail for example. A while back I made myself a list of rules for e-mail handling:
1. Delete without reading
2. Read and delete
3. Read and file
4. Read and reply
But it’s got some problems. First, it’s hard to keep to rules when my e-mail is always there, tempting me away from harder, though more productive, activities. Second, the hidden assumption is that e-mail is a satisfactory way to handle every task which is introduced in e-mail. (Wrong!)
I tried a little experiment, unplanned, on a day when I was off work. I told myself I would process my e-mail (so there wouldn’t be a mountain of it when I returned) but not reply to any of it. In other words, I would apply options #1–3 above, but not #4.
I was off for 2 days, and during that same period my local worksite was closed, so nobody local was sending me any e-mail. Still, when I logged on, I had almost 100 new e-mails! Some were automatically generated, and the rest originated at another worksite that was not closed. Most of the e-mails were simple text, but a few had attachments. Including reading those, I “processed” all but 5 e-mails, which seemed to merit a response or action upon my return to the office.
What’s interesting is that, on second look, none of the remaining e-mails really requires a response. My replies might help the sender a bit, or suggest something that could help me slightly in the future, but nothing big. Probably my responses, if they are still helpful by the time I do return and respond, will only generate more e-mail.
Meanwhile, I just noticed that the only e-mail that actually requested something of me alone (that is, if I don’t do it, nobody will) was not among the 5, but was an additional e-mail left behind from the first of the two days’ e-mail I reviewed. And the best way to deal with it will be to visit someone in person to discuss it.
So what’s the best way to save time on e-mail? Use it less. Receive less, by tailoring subscriptions carefully. Send less, by e-mailing only to request or provide an answer that must be in writing.
I’ll try that.