The pesticide paradox: how to keep your tests relevant

Almost 20 years ago Boris Beizer stated what became known as the Pesticide Paradox:
“Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”

In plain English this means that as you run the same tests over and over, they stop being effective at catching bugs. Moreover, some of the new defects introduced into the system will not be caught by your existing tests and will escape into the field.

This principle (or paradox) has come up in my conversations a couple of times lately.
Once when evaluating a company’s approach to automation, where they had created a large suite of tests and assumed it would keep catching all new bugs for eternity. Another time, while working with a different team that was trying to understand why their existing manual test suite was not detecting all the bugs before their product was released to the field.

The truth is that test suites require constant maintenance and updating, regardless of whether they are automated or manual.

There are a number of reasons a perfectly good suite of tests will stop being effective over time:

1. The Practical Impossibility of Testing all Possible Scenarios.
Even simple applications require an impractically large number of tests to verify all possible scenarios and data combinations. This is why we use methodological tools such as equivalence partitioning and model-based testing, but even these are not enough.
At the end of the day most teams will use a risk-based testing approach to select a subset of scenarios and data sets, and then use the escaping defects found in the field after the initial release to calibrate and patch any holes left in the suite.
In other words, we never test everything (a small equivalence-partitioning sketch follows below).
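
To make the equivalence-partitioning idea concrete, here is a minimal Python sketch. The discount rule and its age partitions are invented purely for illustration; the point is that a handful of representative and boundary values stand in for the whole input range.

    # Hypothetical discount rule used only to illustrate equivalence partitioning:
    # ages 0-17 get a 50% discount, 18-64 pay full price, 65 and above get 30% off.
    def discount(age):
        if age < 0:
            raise ValueError("age cannot be negative")
        if age <= 17:
            return 0.50
        if age <= 64:
            return 0.00
        return 0.30

    # Instead of testing every possible age, take one representative value from
    # each equivalence class plus the boundary values between classes.
    cases = [
        (0, 0.50), (10, 0.50), (17, 0.50),   # "child" class and its boundaries
        (18, 0.00), (40, 0.00), (64, 0.00),  # "adult" class and its boundaries
        (65, 0.30), (90, 0.30),              # "senior" class and its lower boundary
    ]

    for age, expected in cases:
        assert discount(age) == expected, "unexpected discount for age %d" % age

    # The invalid class (negative ages) gets its own representative check.
    try:
        discount(-1)
        raise AssertionError("negative age should have been rejected")
    except ValueError:
        pass

    print("all representative cases passed")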

2. The functionality of the application changes over time.
If we introduce new features to the product, it may seem trivial that we need to write tests for them. It is less trivial to remember that we also need to modify the tests for the existing features, even if they are only slightly affected by the new additions.

3. We (humans) tend to be especially careful only in places where we sense imminent danger.
What does this mean?
Simply that developers will be extra careful in places where testers have found bugs before, but on the other hand they might not be so careful in places they “feel comfortable” with.

So what do we do about this? How do we ensure we are working with an effective and efficient suite of tests?

The key rule is to be objective and to constantly keep reviewing the state of things. In practice I recommend the following:

1. Keep track of product changes and their indirect effects on your application.
The direct changes are trivial, but make the effort to map out all the structural and functional connections, and then think of the new scenarios you need to write to cover these changes (see the sketch below for one way to do this).
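
One lightweight way to make those connections explicit is to keep a simple dependency map of product areas and walk it whenever something changes. The areas and dependencies below are invented for illustration; the sketch only shows the idea.

    # Map each product area to the areas it depends on. When an area changes,
    # list every area that directly or indirectly depends on it, so its tests
    # can be revisited. All names here are invented for illustration.
    DEPENDS_ON = {
        "checkout":       ["cart", "payments", "discounts"],
        "order_history":  ["checkout"],
        "email_receipts": ["checkout"],
        "discounts":      ["pricing"],
    }

    def areas_to_revisit(changed_area):
        affected = set()
        frontier = [changed_area]
        while frontier:
            current = frontier.pop()
            for area, deps in DEPENDS_ON.items():
                if current in deps and area not in affected:
                    affected.add(area)
                    frontier.append(area)
        return sorted(affected)

    print(areas_to_revisit("pricing"))
    # -> ['checkout', 'discounts', 'email_receipts', 'order_history']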

2. Discontinue tests that are not effective.
Too many useless tests may add more overhead than they contribute to your process.
For example, if you have 10 tests that cover the same area and none of them has detected a bug in a significant number of cycles (the number is up to you!), review them and cut their number down.
My rule of thumb is that if a test has not reported a bug in its last 5 runs, I add it to my review list and start verifying its importance, weighing whether I should keep it or move it to my test archive (a small sketch of this rule follows below).
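
As an illustration of that rule of thumb, here is a small Python sketch that flags tests whose last five runs reported no bugs, so they can go onto the review list. The test names and run history are invented.

    # Flag tests whose last REVIEW_AFTER runs reported no bugs.
    REVIEW_AFTER = 5

    def build_review_list(history):
        review = []
        for test_name, runs in history.items():
            recent = runs[-REVIEW_AFTER:]          # True = that run reported a bug
            if len(recent) == REVIEW_AFTER and not any(recent):
                review.append(test_name)
        return review

    # Invented example data: each list holds run outcomes, oldest to newest.
    history = {
        "login_with_expired_password": [True, False, False, False, False, False],
        "checkout_with_empty_cart":    [False, True, False, False, True],
        "export_report_as_pdf":        [False, False, False, False, False],
    }

    print(build_review_list(history))
    # -> ['login_with_expired_password', 'export_report_as_pdf']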

3. Modify your test data.
This one is trivial but we tend to forget it. Tests that run with the same hard-coded data every cycle exercise the same paths every time; rotating or randomizing the data keeps them probing new combinations (see the sketch below).
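
Here is a minimal Python sketch of one way to rotate test data: draw fresh values each cycle from a seeded random generator and log the seed so a failing run can be reproduced. The username rule is invented for illustration.

    # Rotate test data between runs: generate fresh values each cycle from a
    # seeded random generator, and log the seed so failures can be reproduced.
    import random
    import time

    seed = int(time.time())        # record this seed in the test report
    rng = random.Random(seed)

    def is_valid_username(name):
        # Invented rule under test: 3-20 alphabetic characters.
        return 3 <= len(name) <= 20 and name.isalpha()

    def random_username():
        length = rng.randint(3, 20)            # stay inside the valid class
        return "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length))

    print("seed = %d" % seed)
    for _ in range(10):
        name = random_username()
        assert is_valid_username(name), "rejected %r (seed %d)" % (name, seed)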

