7 Principles of Testing. Part 3
Principle 5. Pesticide paradox
If the same tests are repeated over and over, eventually that set of test cases will stop finding new bugs, while the defect clusters mentioned in Principle 4 tend to move elsewhere. Why does this happen?
This analogy was suggested by Boris Beizer in 1983, using the example of applying pesticides: a pesticide will kill bugs, but spray the same field often enough with the same poison and the surviving insects build up resistance, so the pesticide no longer works. In the same way, repeatedly applying the same tests and the same methodologies eventually leaves the product with defects that those tests and methodologies cannot find.
To overcome this 'pesticide paradox', existing test cases need to be reviewed and revised regularly, and new and different tests need to be written to exercise different parts of the system. This will help find more defects.
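As a minimal, hypothetical sketch (assuming pytest and a toy discount_price function invented for this illustration), a parameterized test makes it easy to keep adding new and different cases over time instead of re-running the same single check:

```python
# Hypothetical sketch: a parameterized test whose data set is meant to be
# reviewed and extended regularly, so the suite does not go stale.
import pytest


def discount_price(price: float, percent: float) -> float:
    """Toy function under test, standing in for real application code."""
    return round(price * (1 - percent / 100), 2)


@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.00, 10, 90.00),   # the original "classic" case
        (100.00, 0, 100.00),   # boundary: no discount
        (100.00, 100, 0.00),   # boundary: full discount
        (19.99, 25, 14.99),    # rounding case added after a later review
    ],
)
def test_discount_price(price, percent, expected):
    assert discount_price(price, percent) == expected
```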
Principle 6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
This principle is closely related to the notion of risk. What exactly is risk? A risk is a potential problem: it has a likelihood of occurring somewhere between 0% and 100%, and it has an impact, i.e. the negative consequences we fear. When analyzing a risk, we always weigh these two aspects, likelihood and impact. For example, whenever we cross the road, there is some risk that we will be injured by a car. The likelihood depends on factors such as how much traffic is on the road, whether there is a safe crossing place, how well we can see, and how fast we can cross. The impact depends on how fast the car is going, whether we are wearing protective gear, our age and health, and so on. By weighing the two for a particular person, the risk can be worked out and the best road-crossing strategy chosen.
The same applies to software: different software systems carry different levels of risk, and the impact of problems may vary greatly. Some problems are quite trivial, while others are costly and damaging, causing loss of money, time or business reputation, and may even result in injury or death.
The level of risk can influence the selection of methodologies, techniques, and types of testing.
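To make this concrete, risk is often quantified as exposure = likelihood × impact, and the areas with the highest exposure get the most testing attention. Below is a minimal, purely illustrative Python sketch; the features and numbers are invented for the example:

```python
# Illustrative sketch: risk exposure as likelihood x impact, used to decide
# where to spend the most testing effort. All values are made up.
from dataclasses import dataclass


@dataclass
class RiskItem:
    feature: str
    likelihood: float  # probability of failure, 0.0 - 1.0
    impact: int        # cost of failure on an agreed scale, e.g. 1 (minor) to 10 (severe)

    @property
    def exposure(self) -> float:
        return self.likelihood * self.impact


# Hypothetical features of an e-commerce system
risks = [
    RiskItem("payment processing", likelihood=0.3, impact=10),
    RiskItem("product search", likelihood=0.6, impact=4),
    RiskItem("newsletter signup", likelihood=0.2, impact=1),
]

# Test the riskiest areas first and most thoroughly
for item in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{item.feature}: exposure {item.exposure:.1f}")
```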
Principle 7. Absence of errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
The customers for software, the people and organizations who buy and use it to aid in their day-to-day tasks, are not interested in defects or defect counts, except when the instability of the software affects them directly. Nor do they care how closely the software complies with its documented formal requirements. What users care about is whether the software supports them in completing their tasks efficiently and effectively; it must meet their needs, and that is the point of view from which they assess it.
Even if you have run every test and found no defects, that is no guarantee that the software will meet the users' needs and expectations.
In other words, this is where the distinction between verification and validation comes in.
Verification is concerned with evaluating a system to determine whether it meets the specified requirements. Validation is concerned with evaluating a system to determine whether it meets the users' needs and expectations, and whether it is fit for purpose.
Part of the testing effort should therefore focus on verification, and another part on validation.
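A small, hypothetical sketch of the difference: the first test verifies the product against a written requirement, while the second validates it against what the user actually needs from it (the export_orders_csv function and the requirement ID are invented for illustration):

```python
# Hypothetical example contrasting a verification check and a validation check.


def export_orders_csv(orders):
    """Toy implementation: exports orders as CSV text with a header row."""
    lines = ["order_id,total"]
    lines += [f"{o['id']},{o['total']:.2f}" for o in orders]
    return "\n".join(lines)


def test_verification_matches_requirement():
    # Verification: the (invented) requirement REQ-42 says the export
    # must start with the header line "order_id,total".
    csv_text = export_orders_csv([{"id": 1, "total": 9.5}])
    assert csv_text.splitlines()[0] == "order_id,total"


def test_validation_supports_user_task():
    # Validation: an accountant needs to reconcile totals, so the exported
    # figures must read as money values (two decimal places), not raw floats.
    csv_text = export_orders_csv([{"id": 1, "total": 9.5}])
    assert csv_text.splitlines()[1].endswith("9.50")
```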
In theory, if the requirements have been gathered and analyzed correctly, and nothing was distorted during architecture and code development, there should be no such inconsistencies. But, alas, real life is far from ideal.
Victoria Slinyavchuk
Consultant on Software Testing