
A Practitioner’s Guide to Software Test Design
Chapter 1: The Testing Process

Overview

The flock of geese flew overhead in a ‘V’ formation – not in an old-fashioned-looking Times New Roman kind of a ‘V’, branched out slightly at the two opposite arms at the top of the ‘V’, nor in a more modern-looking, straight and crisp, linear Arial sort of ‘V’ (although since they were flying, Arial might have been appropriate), but in a slightly asymmetric, tilting off-to-one-side sort of italicized Courier New-like ‘V’ – and LaFonte knew that he was just the type of man to know the difference. [1]

– John Dotson

[1] If you think this quotation has nothing to do with software testing, you are correct. For an explanation, please read “Some Final Comments” in the Preface.

Testing

What is testing? While many definitions have been written, at its core testing is the process of comparing “what is” with “what ought to be.” A more formal definition is given in IEEE Standard 610.12-1990, the “IEEE Standard Glossary of Software Engineering Terminology,” which defines “testing” as:

“The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.”

The “specified conditions” referred to in this definition are embodied in test cases, the subject of this book.

Key Point: At its core, testing is the process of comparing “what is” with “what ought to be.”
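
In code, this comparison becomes concrete: a test case fixes the inputs (the “specified conditions”), states the expected result (“what ought to be”), and checks it against the observed result (“what is”). A minimal sketch in Python; the compute_discount() function is a hypothetical component under test, invented here for illustration:

    import unittest

    def compute_discount(order_total):
        # Hypothetical component under test: 10% discount on orders of $100 or more.
        return order_total * 0.10 if order_total >= 100 else 0.0

    class DiscountTest(unittest.TestCase):
        def test_discount_applies_at_threshold(self):
            actual = compute_discount(100)      # "what is"
            expected = 10.0                     # "what ought to be"
            self.assertEqual(expected, actual)  # the evaluation

    if __name__ == "__main__":
        unittest.main()

Running the module executes the test; a pass means “is” and “ought to be” agreed, and a failure is the tester’s signal that one of them is wrong.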

Rick Craig and Stefan Jaskiel propose an expanded definition of software testing in their book, Systematic Software Testing.

“Testing is a concurrent lifecycle process of engineering, using and maintaining testware in order to measure and improve the quality of the software being tested.”

This view includes the planning, analysis, and design that lead to the creation of test cases, in addition to the IEEE’s focus on test execution.

Different organizations and different individuals have varied views of the purpose of software testing. Boris Beizer describes five levels of testing maturity. (He called them phases, but today we know the politically correct term is “levels,” and there are always five of them.)

Level 0 – “There’s no difference between testing and debugging. Other than in support of debugging, testing has no purpose.” Defects may be stumbled upon but there is no formalized effort to find them.

Level 1 – “The purpose of testing is to show that software works.” This approach, which starts with the premise that the software is (basically) correct, may blind us to discovering defects. Glenford Myers wrote that those performing the testing may subconsciously select test cases that should not fail. They will not create the “diabolical” tests needed to find deeply hidden defects.

Level 2 – “The purpose of testing is to show that the software doesn’t work.” This is a very different mindset. It assumes the software doesn’t work and challenges the tester to find its defects. With this approach, we will consciously select test cases that evaluate the system in its nooks and crannies, at its boundaries, and near its edges, using diabolically constructed test cases (a small sketch of such tests follows this list).

Level 3 – “The purpose of testing is not to prove anything, but to reduce the perceived risk of not working to an acceptable value.” While we can prove a system incorrect with only one test case, it is impossible to ever prove it correct. To do so would require us to test every possible valid combination of input data and every possible invalid combination of input data (a back-of-the-envelope calculation after this list shows how hopeless that is).

Level 4 – “Testing is not an act. It is a mental discipline that results in low-risk software without much testing effort.” At this level the emphasis shifts from detecting defects to preventing them: testers act as mentors, helping developers build more testable, higher-quality software from the start.
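
To make the level 2 mindset concrete, here is a small sketch in Python of boundary-hunting tests; the is_leap_year() function is a stand-in component invented for illustration. The interesting cases sit at the edges of the rule, exactly where naive implementations break:

    def is_leap_year(year):
        # Component under test: the Gregorian leap-year rule.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Level 2 tests probe the nooks and crannies, not just the comfortable middle.
    diabolical_cases = [
        (2024, True),   # ordinary leap year: the easy case
        (2023, False),  # ordinary common year
        (1900, False),  # divisible by 100 but not 400: where naive code fails
        (2000, True),   # divisible by 400: the exception to the exception
    ]

    for year, expected in diabolical_cases:
        actual = is_leap_year(year)
        assert actual == expected, f"{year}: expected {expected}, got {actual}"
    print("All boundary cases passed.")

An implementation that only checks year % 4 passes the first two cases and fails on 1900, which is precisely the kind of defect a level 1 tester, selecting only tests that “should not fail,” would never uncover.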
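
The arithmetic behind the level 3 claim is sobering. Even a trivial function taking two 32-bit integer parameters has 2^64 possible input pairs. A back-of-the-envelope calculation in Python, assuming a generous one billion test executions per second:

    # Why exhaustive testing is infeasible, even for a tiny interface.
    input_pairs = 2 ** 64             # every pair of two 32-bit integer values
    tests_per_second = 1_000_000_000  # an optimistic execution rate

    seconds = input_pairs / tests_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{input_pairs:.3e} combinations -> about {years:,.0f} years")
    # Prints roughly 585 years, and that ignores invalid inputs, state, and timing.

Since exhaustive testing is out of reach, the practical goal is to choose a small set of test cases that drives the perceived risk down to an acceptable level, which is what the test design techniques in this book are for.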

