Explain it to a Computer
For most people, “talking to a computer” conjures up scenes of being trapped in a low-budget retailer’s voice mail system. But today I was reminded of when it can be helpful to talk to a computer, or more specifically, to explain something to a computer.
I saw a great demonstration today of an automated test system for on-screen TV guide software. We got to try it out in a hands-on tutorial. I learned the most, though, from just one question and answer.
We were using a screen capture feature with text recognition to set up a test of whether the right words appeared at the top of a menu on screen. All visual, drag-and-drop design. You draw a rectangle on the screen around the area whose text you want to check.
How do I know how much blank space to include around the text for capture? It makes sense to just mark the text, but should I mark more? What if the right text appears, but aligned on the other side of the screen? Is that an error or not?
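Stripped of the drag-and-drop interface, the rectangle question boils down to something like the sketch below, written in Python with invented names and example data rather than the vendor’s tool (the real system does OCR on a captured video frame). The point it illustrates: the rectangle you draw *is* a requirement decision, because correctly rendered text outside the rectangle fails the check.

```python
# Hypothetical sketch of a region-based text check. TextBox, the
# coordinates, and the example data are all invented for illustration.
from dataclasses import dataclass

@dataclass
class TextBox:
    text: str   # recognized word or phrase
    x: int      # left edge, pixels
    y: int      # top edge, pixels
    w: int      # width
    h: int      # height

def text_in_region(boxes, expected, region):
    """Pass only if `expected` was recognized fully inside `region`.

    region = (x, y, w, h): the rectangle drawn around the menu title.
    A title rendered on the far side of the screen falls outside the
    rectangle, so the test fails -- whether that counts as a real
    error is a requirements question, not a tooling one.
    """
    rx, ry, rw, rh = region
    for b in boxes:
        inside = (b.x >= rx and b.y >= ry and
                  b.x + b.w <= rx + rw and b.y + b.h <= ry + rh)
        if inside and b.text == expected:
            return True
    return False

# Recognized text from one captured frame (invented example):
screen = [TextBox("Settings", x=40, y=10, w=120, h=24)]

# Rectangle drawn around the top-left of the menu: the check passes.
assert text_in_region(screen, "Settings", (0, 0, 300, 50))

# Same correct text, but the rectangle covers only the right side of
# the screen: the check fails, pushing the alignment question back to
# whoever wrote the requirement.
assert not text_in_region(screen, "Settings", (400, 0, 300, 50))
```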
Automated testing is very black and white. In the middle of the night, the test will either pass or fail. If you’re not sure what should count as an error and what shouldn’t, talk to the person who wrote the requirement. Maybe they didn’t think of the question either, but they will now.
I always thought of automated testing, even with its setup costs, as a way to save time, avoid boring, repetitious work, build in self-test, and run reliable regression testing.
But the real benefit may be in its power to clarify requirements. Put a computer on your test team and it will ask all sorts of detail questions that a human reviewer didn’t think of.