Talk About Quality

Tom Harris

Archive for the ‘Exception handling’ Category

Cancel that!

A medical radiation machine operator types the letter ‘x’, realizes it’s an error, backspaces, types ‘e’, and continues. Consequences of the error and related defects lead to the patient’s untimely death.

Two decades later, a Japanese stock broker mistakenly switches the share price and amount on the sell order of a new stock. He tries to cancel the order, but fails. His employers lose $225 million, and are involved in lawsuits for years.

A blogger clicks the “back arrow” by mistake, gets a warning message, concludes he meant to continue writing, and clicks “OK” to continue. He loses his work and has to rewrite it. Here the consequences are annoying, though trivial by comparison.

What do all these cases have in common? Canceling a request. As if it weren’t hard enough to program computers to do what we want them to do, who would have thought that telling them not to do it would be hard too?

The cancellation scenario — or “use case,” as it’s called in software design — is the silent partner of every positive request a piece of software supports. It has to cover giving the user clear options, executing the cancellation, and rolling back any partial results. Things get more complicated if authorization is required, or if the transaction has already gone through (both complications figured in the Japanese broker’s story). Cancellation and rollback are also part of automatic requests: if one software module (the “server”) cannot complete a request from another (the “client”), it has to put everything back the way it was and send the proper response code.
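
To make that concrete, here’s a minimal Python sketch of a cancellable request with rollback (the Order class and its steps are invented for illustration, not taken from any real trading system): each completed step records how to undo itself, and cancelling replays those undo steps in reverse before answering the caller.

# Hypothetical sketch: each completed step records its own undo action,
# and cancelling replays them in reverse to roll back partial results.

class Order:
    def __init__(self):
        self.undo_steps = []                           # how to put everything back

    def reserve_shares(self, amount):
        print(f"reserving {amount} shares")
        self.undo_steps.append(lambda: print(f"releasing {amount} shares"))

    def debit_account(self, total):
        print(f"debiting {total}")
        self.undo_steps.append(lambda: print(f"refunding {total}"))

    def cancel(self):
        while self.undo_steps:                         # roll back partial results,
            self.undo_steps.pop()()                    # most recent step first
        return "order cancelled and rolled back"       # the proper response

order = Order()
order.reserve_shares(1000)
order.debit_account(610000)
print(order.cancel())                                  # "Cancel that!"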

So the next time you’re designing a piece of software, no matter how simple, think what it’s supposed to do, but also what it will do if the user or client calls out, “Cancel that!”

Written by Tom Harris

July 11, 2009 at 11:02 pm

Posted in Exception handling

Errors are always cumulative

Nobody likes to write error-handling code, but at least it’s easy (if boring): check inputs and results with “if” statements, and reject or recover on failure. But is it really that simple?

A little thought shows that errors are cumulative, and that failure is always the gathering or intensification of some faulty condition. Let’s prove that by contradiction. One of the simplest and most common cases of error handling is input data checking in a user interface. For example, password checking for your bank account login. The simple error-handling code is:

if username.password <> password then login.reject(“wrong password”)

That should work fine, right? Nothing cumulative there. Every time I submit a wrong password, it rejects the attempt and prompts for a retry. But software developers (and many bank website users) will recognize the problems with that solution, among them:

  1. Unbounded retry loop — if I keep getting it wrong, I can’t escape login
  2. Denial of service — bring down the server by overwhelming it with bad logins
  3. Eliminating wrong passwords — if it tells me “wrong”, I cross that try off my list

All of these real outcomes have something cumulative in them:

  1. Time — user may run out of patience
  2. Load — too much for server to handle
  3. Learning — revealing more and more information about correct password
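
A version written with the cumulative view in mind might look roughly like this Python sketch (the function and all its names are made up for illustration): it counts consecutive failures per user, refuses further attempts past a limit, and keeps the rejection message vague so each wrong guess teaches an attacker as little as possible.

# Hypothetical sketch: a login check that treats failures as cumulative
# instead of judging each attempt in isolation.

MAX_ATTEMPTS = 3                            # invented limit
failed_attempts = {}                        # username -> consecutive failures so far

def check_login(username, password, stored_password):
    if failed_attempts.get(username, 0) >= MAX_ATTEMPTS:
        return "account locked"                        # bounds the retry loop (1, 2)
    if password != stored_password:
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        return "wrong username or password"            # reveals as little as possible (3)
    failed_attempts[username] = 0                      # success resets the count
    return "welcome"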

Take another common example: the elevator. Would simple limit-checking work for stopping at the right floor? Let’s try.

if floor.location <> floor.desired then elevator.descend

Bang! I wouldn’t want to be on that elevator. What’s cumulative? The momentum of the elevator, and the decreasing distance between current and desired location. (Even when nothing is wrong, that distance is commonly called “error”.) But a better example is the elevator’s door-closing protection. It started out with a simple:

if not (door.can_close) then door.re_open

Wasn’t it fun back then to keep waving your hand in the door and watch it open again, and keep everyone on the other floors waiting? Quickly, though, elevator software designers realized that while it’s totally unacceptable to ignore a deliberate foot in the door and start moving, it’s equally wrong to go on closing and re-opening forever. So they added that unpleasant buzzing sound, triggered when the retries exceed a certain count or length of time. Cumulative again.
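
In code, the improved protection might look something like this sketch (the door object, its methods, and the retry limit are stand-ins, not any real elevator API): the retry count accumulates, and crossing a threshold triggers the buzzer instead of letting the close-and-re-open cycle go on silently forever.

import time

MAX_CLOSE_RETRIES = 5                       # invented limit before complaining

def try_to_close(door):
    retries = 0
    while not door.can_close():             # hypothetical door interface
        door.re_open()
        retries += 1                        # the faulty condition accumulates
        if retries >= MAX_CLOSE_RETRIES:
            door.buzz()                     # that unpleasant buzzing sound
        time.sleep(1)                       # give the obstruction time to clear
    door.close()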

Real-life error-handling, then, has to do more than test for the limit and reject it. It has to recognize faults, count or measure them, and prevent them from growing and leading to failure. In fact, since a fault may be just the limit of an otherwise acceptable condition (e.g. buffer almost full — OK; buffer overflow — fault), error prevention requires identifying and tracking resources even before they reach their limits.
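
To sketch that last point (the capacity and threshold here are invented), watching a buffer against a warning level lets the code react to “almost full” before “overflow” ever happens:

# Hypothetical capacity and threshold, for illustration only.
BUFFER_CAPACITY = 1024
WARN_THRESHOLD = int(0.8 * BUFFER_CAPACITY)

def check_buffer(buffer):
    if len(buffer) >= BUFFER_CAPACITY:
        raise OverflowError("buffer overflow")         # fault: the limit was crossed
    if len(buffer) >= WARN_THRESHOLD:
        # Cumulative condition caught early: slow the producer, shed load,
        # or log a warning while recovery is still cheap.
        print(f"warning: buffer {len(buffer)}/{BUFFER_CAPACITY} nearly full")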

Written by Tom Harris

July 6, 2009 at 9:26 am

Posted in Exception handling

The Tip of the Iceberg

We all like to think that functional requirements are the main thing, and successfully designing and coding to them is enough. Who wants to worry about all the surprises from users, data, and even hardware?

But as Professor Behrooz Parhami shows, in a short (2-page!) article, Defect, Fault, Error,…, or Failure? (pdf), the “Ideal” state that we focus on is just one of 7 common possibilities. The other 6, descending into unpleasantness, are Defective, Faulty, Erroneous, Malfunctioning, Degraded, and Failed.

Our job is really twofold:

  1. Meet the functional requirements of the ideal state
  2. Keep the system in that ideal state, and avoid failure

Does failure avoidance have to take 86% (6/7) of the code? I don’t know. But it certainly sounds like an iceberg: a lot more than half is underwater.

Written by Tom Harris

May 10, 2009 at 8:10 pm

Posted in Exception handling

Don’t get stuck

Having a standalone consumer application get stuck or crash, requiring a reboot, is not the worst thing that can happen. (Worse is incorrect behavior that causes data loss or physical harm.) But it is the most annoying failure in non-safety-critical systems.

If there’s any good news, it’s that the list of fault modes is short:

  • System resources exhausted
  • Mistakenly idling
  • Waiting for acknowledgement that never comes
  • Deadlock

Did I miss any?

Only exception-safe code can avoid these undesired end states.

Design by Contract (DbC) is one way to achieve exception safety.
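
A contract can be approximated even in plain code, with no special library, as in this hypothetical sketch: preconditions and postconditions are checked explicitly, so a broken assumption fails loudly at the boundary instead of quietly leaving the system in one of the states above.

def withdraw(balance, amount):
    # Preconditions: what the caller must guarantee.
    assert amount > 0, "precondition violated: amount must be positive"
    assert amount <= balance, "precondition violated: insufficient balance"

    new_balance = balance - amount

    # Postcondition: what this function guarantees in return.
    assert new_balance >= 0, "postcondition violated: balance went negative"
    return new_balance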

Failure mode and effects analysis (FMEA) helps you plan a path to get there.

Written by Tom Harris

May 7, 2009 at 10:16 pm

Posted in Exception handling

Keeping Embedded Software on Track

Consider a function in software code, and how to write it properly so that it meets its requirements and doesn’t fail. The standard way of looking at it is that a function has an API — a signature — of input and output variables and types. The code for the function has a pretty standard form: checks for invalid input, then a function body that implements a mini-flowchart of conditional checks, and maybe a state machine if needed. All this to map every given input vector to the correct output.

Unit tests, whether they’re written before coding (test-driven development) or after, apply test input vectors to the function and check whether the expected output is produced. The same story applies to a module: a collection of functions or classes that implements a feature. There are still valid inputs to check for correct output, and invalid inputs that must be rejected. Best-practice test design methods, such as boundary-value checking and equivalence classes, help the developer choose the best test vectors from the sea of possibilities: the ones that produce good tests, the kind that cause failures early, while the code is still in the hands of the developer.
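
As a (hypothetical) illustration of choosing test vectors by boundary values and equivalence classes, here’s a trivial function and its unit test:

import unittest

def clamp_volume(level):
    """Hypothetical function under test: keep the volume in the range 0..100."""
    return max(0, min(100, level))

class TestClampVolume(unittest.TestCase):
    def test_boundary_values(self):
        # Test vectors just below, at, and just above each limit.
        self.assertEqual(clamp_volume(-1), 0)
        self.assertEqual(clamp_volume(0), 0)
        self.assertEqual(clamp_volume(100), 100)
        self.assertEqual(clamp_volume(101), 100)

    def test_equivalence_classes(self):
        # One representative per equivalence class is usually enough.
        self.assertEqual(clamp_volume(50), 50)         # normal range
        self.assertEqual(clamp_volume(-999), 0)        # far below range
        self.assertEqual(clamp_volume(999), 100)       # far above range

if __name__ == "__main__":
    unittest.main()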

So why do we still have so many “surprise” failures? Unit-tested code that nevertheless comes back from system testing with failures? And the Steps to Reproduce — so simple — like, “I left the system running overnight and when I came back in the morning it had crashed.”

The answer may be in the flow of data. Functions and modules in embedded software don’t just deal with single inputs, one by one. Generally they are expected to process streams of data in real time. For example, a modern television set-top box with a built-in disk has to process multiplexed video data and metadata from the incoming signal, the same from the hard disk, and a much slower stream of user input via the TV remote control. If the code slips up, or leaks memory, then sooner or later you get a wrong result, a hang, or a crash that QC proudly reports.
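
Here’s a sketch of that shift in perspective (names and limits invented): a handler written to survive a continuous stream, bounding its queue and counting what it drops, rather than assuming each input arrives into a fresh, empty world.

from collections import deque

MAX_PENDING = 10000                         # invented bound: don't grow forever
pending = deque(maxlen=MAX_PENDING)         # oldest items are discarded, never leaked
stats = {"received": 0, "dropped": 0}       # counters to watch over a long run

def on_packet(packet):
    if len(pending) == MAX_PENDING:         # full: the next append will evict one
        stats["dropped"] += 1               # a cumulative symptom worth counting
    pending.append(packet)
    stats["received"] += 1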

Where have we seen this before? How about those robot car competitions that university engineering departments often hold? Such contests have expanded into worldwide challenges to design, for example, an auto-piloted car (see “The DARPA Urban Challenge”). These contests demand software that will keep a driverless car on track for long periods of time, avoiding stationary and moving obstacles (other cars).

Could it help, then, to see a lowly function in embedded software not as a flowchart of if-then-else statements, but as a little car — or perhaps a delivery truck — that must be kept on track, avoiding obstacles in the data stream while delivering packages to the right place? How would that view affect code layout, code review, and unit test design? See you at the track!

Written by Tom Harris

April 23, 2009 at 10:42 am

Posted in Exception handling