Module 5_1
VALIDATION AND VERIFICATION
Dr.Mehfooza. M, AOP/SCOPE, VIT
A Glimpse
• Strategic Approach to Software Testing
• Strategic Issues
• Fundamentals
1. A STRATEGIC APPROACH TO SOFTWARE TESTING
• A number of software testing strategies have been proposed in the literature. All provide you with a template for testing, and all have the following generic characteristics:
• To perform effective testing, you should conduct effective technical reviews. By doing this, many errors will be eliminated before testing commences.
• Testing begins at the unit level and works "outward" toward the integration of the entire computer-based system.
• Testing is conducted by the developer of the software and (for large projects) an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
• Verification and validation (V&V) encompasses a wide array of SQA activities: technical reviews,
quality and configuration audits, performance monitoring, simulation, feasibility study,
documentation review, database review, algorithm analysis, development testing,
usability testing, qualification testing, acceptance testing, and installation testing.
1.2 Organizing for Software Testing
• For every software project, there is an inherent conflict of interest that occurs as testing
begins. The people who have built the software are now asked to test the software.
• The software developer is always responsible for testing the individual units
(components) of the program, ensuring that each performs the function or exhibits the
behavior for which it was designed. In many cases, the developer also conducts
integration testing—a testing step that leads to the construction (and test) of the
complete software architecture. Only after the software architecture is complete does
an independent test group become involved.
• The role of an independent test group (ITG) is to remove the inherent problems
associated with letting the builder test the thing that has been built. Independent testing
removes the conflict of interest that may otherwise be present. The developer and the ITG
work closely throughout a software project to ensure that thorough tests will be conducted.
While testing is conducted, the developer must be available to correct errors that are
uncovered.
• Considering the process from a procedural point of view, testing within the context of software engineering is
actually a series of four steps (unit testing, integration testing, validation testing, and system testing) that are implemented sequentially.
Initially, tests focus on each component individually, ensuring that it functions properly as a unit. Hence, the
name unit testing. Unit testing makes heavy use of testing techniques that exercise specific paths in a
component’s control structure to ensure complete coverage and maximum error detection.
• Next, components must be assembled or integrated to form the complete software package. Integration
testing addresses the issues associated with the dual problems of verification and program construction. Test
case design techniques that focus on inputs and outputs are more prevalent during integration, although
techniques that exercise specific program paths may be used to ensure coverage of major control paths. After
the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria must be
evaluated. Validation testing provides final assurance that software meets all informational, functional,
behavioral, and performance requirements.
• The last high-order testing step falls outside the boundary of software engineering and into the broader context
of computer system engineering. Software, once validated, must be combined with other system elements (e.g.,
hardware, people, databases). System testing verifies that all elements mesh properly and that overall system function and performance are achieved.
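To make the unit-versus-integration distinction concrete, here is a minimal sketch using Python's built-in unittest module. The TaxCalculator and Invoice classes are hypothetical examples invented for this illustration, not part of any particular system.

    import unittest

    # Hypothetical components under test (invented for illustration).
    class TaxCalculator:
        def tax_for(self, amount):
            if amount < 0:
                raise ValueError("amount must be non-negative")
            return round(amount * 0.18, 2)

    class Invoice:
        def __init__(self, calculator):
            self.calculator = calculator
            self.lines = []

        def add_line(self, amount):
            self.lines.append(amount)

        def total(self):
            subtotal = sum(self.lines)
            return subtotal + self.calculator.tax_for(subtotal)

    class TaxCalculatorUnitTest(unittest.TestCase):
        # Unit testing: exercises a single component in isolation,
        # including the error path in its control structure.
        def test_tax_on_positive_amount(self):
            self.assertEqual(TaxCalculator().tax_for(100.0), 18.0)

        def test_negative_amount_is_rejected(self):
            with self.assertRaises(ValueError):
                TaxCalculator().tax_for(-1.0)

    class InvoiceIntegrationTest(unittest.TestCase):
        # Integration testing: verifies that assembled components work
        # together, focusing on inputs and outputs of the combination.
        def test_total_includes_tax(self):
            invoice = Invoice(TaxCalculator())
            invoice.add_line(50.0)
            invoice.add_line(50.0)
            self.assertEqual(invoice.total(), 118.0)

    if __name__ == "__main__":
        unittest.main()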
• One response to the question "When are we done testing?" is: "You're never done testing; the burden simply shifts from
you (the software engineer) to the end user.” Every time the user executes a computer
program, the program is being tested.
• Although few practitioners would argue with this response, you need more rigorous criteria for determining when sufficient testing has been conducted. The cleanroom software engineering approach suggests statistical use techniques that execute a series of tests derived from a statistical sample of all possible program executions by all users from a targeted population (a small sketch of this sampling idea appears after this list).
• By collecting metrics during software testing and making use of existing software reliability
models, it is possible to develop meaningful guidelines for answering the question: “When
are we done testing?”
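The statistical use idea can be sketched as sampling test operations in proportion to how often users actually perform them. In the sketch below, the operation names and profile weights are assumptions invented for illustration; a real operational profile would come from field data for the targeted user population.

    import random

    # Hypothetical operational profile: relative frequency of each class of
    # user operation in the field (weights invented for illustration).
    OPERATIONAL_PROFILE = {
        "search_catalog": 0.60,
        "add_to_cart": 0.25,
        "checkout": 0.10,
        "update_profile": 0.05,
    }

    def sample_test_series(profile, n, seed=42):
        """Draw n test operations with probability proportional to field usage."""
        rng = random.Random(seed)
        operations = list(profile.keys())
        weights = list(profile.values())
        return rng.choices(operations, weights=weights, k=n)

    if __name__ == "__main__":
        # Because the sample mirrors real usage, the failures this series
        # uncovers are the ones users are most likely to encounter.
        for operation in sample_test_series(OPERATIONAL_PROFILE, n=10):
            print("execute test case for:", operation)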
2. Strategic Issues
Tom Gilb argues that a software testing strategy will succeed when software testers:
• Specify product requirements in a quantifiable manner long before testing commences. Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability. These should be specified in a way that is measurable so that testing results are unambiguous.
• State testing objectives explicitly. The specific objectives of testing should be stated in measurable terms.
• Understand the users of the software and develop a profile for each user category. Use cases that describe the interaction scenario for each class of user can reduce overall testing effort by focusing testing on actual use of the product.
• Develop a testing plan that emphasizes "rapid cycle testing." Gilb recommends that a software team "learn to test in rapid cycles." The feedback generated from these rapid cycle tests can be used to control quality levels and the corresponding test strategies.
• Build "robust" software that is designed to test itself. Software should be designed in a manner that uses antibugging techniques. That is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing (a small sketch of these ideas appears after this list).
• Use effective technical reviews as a filter prior to testing. Technical reviews can be as effective as testing in uncovering errors.
• Conduct technical reviews to assess the test strategy and test cases themselves. Technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and also improves product quality.
• Develop a continuous improvement approach for the testing process. The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.
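The "robust software that tests itself" point above can be sketched briefly. The function below uses antibugging-style checks to diagnose bad inputs and internal inconsistencies at the point of failure, and the accompanying test case is the kind of automated regression test that is rerun after every change. The names and numbers are invented for illustration.

    import unittest

    def compute_discount(price, rate):
        # Antibugging: the function checks its own inputs and result so that
        # certain classes of errors are diagnosed where they occur.
        assert price >= 0, "price must be non-negative"
        assert 0.0 <= rate <= 1.0, "rate must be a fraction between 0 and 1"
        discounted = price * (1.0 - rate)
        assert 0.0 <= discounted <= price, "internal error: discount out of range"
        return discounted

    class DiscountRegressionTest(unittest.TestCase):
        # Automated regression test: rerun after each change to confirm that
        # previously working behavior has not been broken.
        def test_half_price(self):
            self.assertAlmostEqual(compute_discount(80.0, 0.5), 40.0)

        def test_invalid_rate_is_diagnosed(self):
            with self.assertRaises(AssertionError):
                compute_discount(80.0, 1.5)

    if __name__ == "__main__":
        unittest.main()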
3. SOFTWARE TESTING FUNDAMENTALS
• The goal of testing is to find errors, and a good test is one that has a high probability of finding an error. Therefore,
you should design and implement a computer-based system or a product with "testability" in mind. At the same
time, the tests themselves must exhibit a set of characteristics that achieve the goal of finding the most errors
with a minimum of effort.
• Testability. James Bach provides the following definition for testability: "Software testability is simply how easily [a computer program] can be tested." (A small design-for-testability sketch appears after this list.)
• Controllability. “The better we can control the software, the more the testing can be automated and optimized.”
• Decomposability. “By controlling the scope of testing, we can more quickly isolate problems and perform smarter
retesting.”
• Simplicity. "The less there is to test, the more quickly we can test it." The program should exhibit functional simplicity, structural simplicity, and code simplicity.
• Stability. “The fewer the changes, the fewer the disruptions to testing.”
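As a rough sketch of controllability and decomposability, the class below takes its time source as a parameter, so a test can control the clock and check both outcomes deterministically and automatically. The SessionManager name and timeout logic are hypothetical, introduced only for this example.

    from datetime import datetime, timedelta, timezone

    class SessionManager:
        # Controllability: the clock is injected, so tests can substitute a
        # fixed time source instead of relying on the real current time.
        def __init__(self, timeout_seconds, clock=lambda: datetime.now(timezone.utc)):
            self.timeout_seconds = timeout_seconds
            self.clock = clock

        def is_expired(self, started_at):
            # Decomposability and simplicity: one small decision that is easy
            # to isolate, test, and retest after a change.
            elapsed = (self.clock() - started_at).total_seconds()
            return elapsed > self.timeout_seconds

    if __name__ == "__main__":
        start = datetime(2024, 1, 1, tzinfo=timezone.utc)
        expired = SessionManager(60, clock=lambda: start + timedelta(seconds=61))
        active = SessionManager(60, clock=lambda: start + timedelta(seconds=30))
        assert expired.is_expired(start) is True
        assert active.is_expired(start) is False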
• Test Characteristics. Kaner, Falk, and Nguyen suggest the following attributes of a
“good” test:
• A good test has a high probability of finding an error. To achieve this goal, the tester
must understand the software and attempt to develop a mental picture of how the
software might fail. Ideally, the classes of failure are probed.
• A good test is not redundant. Testing time and resources are limited. There is no point
in conducting a test that has the same purpose as another test. Every test should have a
different purpose.
• A good test should be "best of breed." In a group of tests that have a similar intent, time and resource limitations may permit the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.
• A good test should be neither too simple nor too complex. Although it is sometimes
possible to combine a series of tests into one test case, the possible side effects associated
with this approach may mask errors. In general, each test should be executed separately.
INTERNAL AND EXTERNAL VIEWS OF TESTING
• Any engineered product can be tested in one of two ways: (1) Knowing the specified
function that a product has been designed to perform, tests can be conducted that
demonstrate each function is fully operational while at the same time searching for
errors in each function. (2) Knowing the internal workings of a product, tests can be conducted to ensure that internal operations are performed according to specification and that all internal components have been adequately exercised.
• The first test approach takes an external view and is called black-box testing. The
second requires an internal view and is termed white-box testing.
• Black-box testing alludes to tests that are conducted at the software interface. A black-
box test examines some fundamental aspect of a system with little regard for the internal
logical structure of the software.
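The two views can be contrasted in code. The black-box test below checks only the specified input/output behavior of a hypothetical is_leap_year function, while the white-box test is written with knowledge of the function's internal branches so that each path through the control structure is exercised. The function and test names are assumptions made for this illustration.

    import unittest

    def is_leap_year(year):
        # Internal logic with three branches: divisible by 400, by 100, by 4.
        if year % 400 == 0:
            return True
        if year % 100 == 0:
            return False
        return year % 4 == 0

    class BlackBoxTest(unittest.TestCase):
        # External view: derived from the specification alone
        # ("report whether a calendar year is a leap year").
        def test_known_leap_and_common_years(self):
            self.assertTrue(is_leap_year(2024))
            self.assertFalse(is_leap_year(2023))

    class WhiteBoxTest(unittest.TestCase):
        # Internal view: one case per branch of the control structure.
        def test_divisible_by_400_branch(self):
            self.assertTrue(is_leap_year(2000))

        def test_divisible_by_100_branch(self):
            self.assertFalse(is_leap_year(1900))

        def test_divisible_by_4_branch(self):
            self.assertTrue(is_leap_year(2012))

    if __name__ == "__main__":
        unittest.main()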