Saturday, September 8, 2012

Classic Principles for Test Design

A good test case should have an expected output or result. Otherwise, testing degenerates into what Beizer calls “kiddie pool”: you hit the ball, and whatever pocket it happens to end up in is declared the intended pocket.
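To make the contrast concrete, here is a minimal sketch in Python using pytest. The function parse_price is hypothetical, invented purely for illustration; the point is that the first test states its expected result up front, while the second merely exercises the code and can only fail if it crashes.

```python
# Hypothetical function under test: converts a price string like "$3.50" to cents.
def parse_price(text: str) -> int:
    return int(round(float(text.lstrip("$")) * 100))

def test_parse_price_expected_result():
    # The expected output (350) is declared before the shot is taken.
    assert parse_price("$3.50") == 350

def test_parse_price_kiddie_pool():
    # Anti-pattern: no assertion. Whatever value comes back is
    # implicitly "the intended pocket," so this test passes no
    # matter what the function returns.
    parse_price("$3.50")
```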
Your tests should cover both valid, expected conditions and invalid, unexpected conditions. This principle may seem too obvious for words, but it’s amazing how often people fail to cover the error conditions adequately. There are many reasons, including the fact that most systems have considerably more error conditions than non-error conditions.
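Continuing with the same hypothetical parse_price, a sketch of what that coverage might look like with pytest:

```python
import pytest

def parse_price(text: str) -> int:
    # Same hypothetical function as above.
    return int(round(float(text.lstrip("$")) * 100))

def test_valid_input():
    assert parse_price("$3.50") == 350

def test_invalid_inputs_are_rejected():
    # Error conditions deserve tests of their own: empty strings,
    # garbage, and malformed numbers should fail loudly rather than
    # yield a nonsense price.
    for bad in ["", "abc", "$", "$3.5.0"]:
        with pytest.raises(ValueError):
            parse_price(bad)
```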
You are most likely to find bugs when testing in those areas where you’ve already found the most bugs. This may surprise you, but the rule is borne out again and again. Bugs cluster for several reasons: the complexity of certain regions of the system, communication problems between programmers, uneven programmer skill, and the gradual accumulation of hasty patches on an already buggy area.
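One way to act on that clustering is to weight test effort by defect history. The sketch below assumes you can pull a per-module bug count from your tracker; the module names and counts here are made up.

```python
# Hypothetical per-module defect counts from a bug tracker.
bugs_per_module = {"billing": 42, "auth": 17, "reports": 3, "ui": 9}

def allocate_test_hours(total_hours: float) -> dict[str, float]:
    """Split a testing budget in proportion to historical defects,
    on the theory that past bugs predict future bugs."""
    total_bugs = sum(bugs_per_module.values())
    return {module: total_hours * count / total_bugs
            for module, count in bugs_per_module.items()}

print(allocate_test_hours(40.0))
# billing, the buggiest module, gets the largest share of the 40 hours.
```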
You should consider a test case good when it is likely to detect a bug you haven’t seen yet, and successful when it actually does. In other words, aim for tests that are likely to find a wide variety of bugs, and be happy when they do. I would generalize these statements: design each test case so that you can learn something you didn’t already know from it, and be happy when you do learn something. Furthermore, you should ideally learn the scariest, most dangerous facts first whenever possible. These outlooks shape how you design and create tests.
Myers wrote that the test set should be the subset of all possible test cases with the highest probability of detecting errors. Of course, the set of all possible tests is infinite for any real-world system, so there are infinitely many possible subsets as well. Myers’ advice amounts to choosing among them by the likelihood of exposing bugs.
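A rough sketch of that selection idea, with made-up probability estimates (in practice they might come from defect history, code churn, or plain reviewer intuition):

```python
# Candidate tests with rough, hypothetical estimates of the chance
# each one exposes a bug.
candidates = [
    ("overflow_on_max_quantity", 0.30),
    ("unicode_in_customer_name", 0.25),
    ("happy_path_checkout",      0.05),
    ("leap_day_billing",         0.20),
    ("concurrent_cart_updates",  0.15),
]

def pick_test_subset(candidates, budget):
    """Choose the `budget` tests most likely to detect errors,
    per Myers: run the highest-probability subset first."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [name for name, _ in ranked[:budget]]

print(pick_test_subset(candidates, budget=3))
# ['overflow_on_max_quantity', 'unicode_in_customer_name', 'leap_day_billing']
```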
