
Dr. Mehfooza M, AOP/SCOPE, VIT

Module 5
VALIDATION AND VERIFICATION

A Glimpse
• Strategic Approach to Software Testing

• Strategic Issues

• Fundamentals

1. A STRATEGIC APPROACH TO SOFTWARE TESTING
• A number of software testing strategies have been proposed in the literature. All provide you with a template for testing, and all share the following generic characteristics:

• To perform effective testing, you should conduct effective technical reviews. By doing this, many errors will be eliminated before testing commences.

• Testing begins at the unit level and works “outward” toward the integration of the entire computer-based system.

• Different testing techniques are appropriate for different software engineering approaches and at different points in time.

• Testing is conducted by the developer of the software and (for large projects) an independent test group.

• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

1.1 Verification and Validation


• Software testing is one element of a broader topic that is often referred to as
verification and validation (V&V). Verification refers to the set of tasks that
ensure that software correctly implements a specific function. Validation refers to a
different set of tasks that ensure that the software that has been built is traceable to
customer requirements.

• Boehm states this another way:

• Verification: “Are we building the product right?”

• Validation: “Are we building the right product?”

• Verification and validation include a wide array of SQA activities: technical reviews,
quality and configuration audits, performance monitoring, simulation, feasibility study,
documentation review, database review, algorithm analysis, development testing,
usability testing, qualification testing, acceptance testing, and installation testing.
1.2 Organizing for Software Testing
• For every software project, there is an inherent conflict of interest that occurs as testing
begins. The people who have built the software are now asked to test the software.

• The software developer is always responsible for testing the individual units
(components) of the program, ensuring that each performs the function or exhibits the
behavior for which it was designed. In many cases, the developer also conducts
integration testing—a testing step that leads to the construction (and test) of the
complete software architecture. Only after the software architecture is complete does
an independent test group become involved.

• The role of an independent test group (ITG) is to remove the inherent problems
associated with letting the builder test the thing that has been built. Independent testing
removes the conflict of interest that may otherwise be present. The developer and the ITG
work closely throughout a software project to ensure that thorough tests will be conducted.
While testing is conducted, the developer must be available to correct errors that are
uncovered.

1.3 Software Testing Strategy—The Big Picture
• The software process may be viewed as the spiral illustrated in
following figure. Initially, system engineering defines the role of
software and leads to software requirements analysis, where the
information domain, function, behavior, performance, constraints, and
validation criteria for software are established. Moving inward along the
spiral, you come to design and finally to coding. To develop computer
software, you spiral inward (counterclockwise) along streamlines that
decrease the level of abstraction on each turn.
• A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vortex
of the spiral and concentrates on each unit of the software as implemented in source code. Testing progresses by
moving outward along the spiral to integration testing, where the focus is on design and the construction of
the software architecture. Taking another turn outward on the spiral, you encounter validation testing, where
requirements established as part of requirements modeling are validated against the software that has been
constructed. Finally, you arrive at system testing, where the software and other system elements are tested as
a whole.

• Considering the process from a procedural point of view, testing within the context of software engineering is
actually a series of four steps that are implemented sequentially. The steps are shown in following figure.
Initially, tests focus on each component individually, ensuring that it functions properly as a unit. Hence, the
name unit testing. Unit testing makes heavy use of testing techniques that exercise specific paths in a
component’s control structure to ensure complete coverage and maximum error detection.

• Next, components must be assembled or integrated to form the complete software package. Integration
testing addresses the issues associated with the dual problems of verification and program construction. Test
case design techniques that focus on inputs and outputs are more prevalent during integration, although
techniques that exercise specific program paths may be used to ensure coverage of major control paths. After
the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria must be
evaluated. Validation testing provides final assurance that software meets all informational, functional,
behavioral, and performance requirements.

• The last high-order testing step falls outside the boundary of software engineering and into the broader context
of computer system engineering. Software, once validated, must be combined with other system elements (e.g.,
hardware, people, databases). System testing verifies that all elements mesh properly and that overall system
function and performance are achieved.
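The unit-testing step described above can be sketched concretely. A minimal, hypothetical Python example (the `average` function and its tests are illustrative, not from the document); in the pytest style, each test exercises a specific path in the component's control structure, including the error path:

```python
# A small unit under test: compute the average of a list of numbers.
def average(values):
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

# Unit tests exercise the component in isolation.
def test_average_typical():
    assert average([2, 4, 6]) == 4

def test_average_single():
    assert average([5]) == 5

def test_average_empty_raises():
    # Exercise the error-handling path explicitly.
    try:
        average([])
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")
```

Integration testing would then combine `average` with the components that feed it data, shifting the focus from internal paths to inputs and outputs at the interfaces.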

1.4 Criteria for Completion of Testing
• “When are we done testing—how do we know that we’ve tested enough?” Sadly,
there is no definitive answer to this question, but there are a few pragmatic responses and
early attempts at empirical guidance.

• One response to the question is: “You’re never done testing; the burden simply shifts from
you (the software engineer) to the end user.” Every time the user executes a computer
program, the program is being tested.

• Although few practitioners would argue with these responses, you need more rigorous criteria
for determining when sufficient testing has been conducted. The clean room software
engineering approach suggests statistical use techniques that execute a series of
tests derived from a statistical sample of all possible program executions by all
users from a targeted population.

• By collecting metrics during software testing and making use of existing software reliability
models, it is possible to develop meaningful guidelines for answering the question: “When
are we done testing?”
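One family of such reliability models is the logarithmic Poisson execution-time model; the sketch below shows how failure data collected during testing could feed it. The formula and parameter values here are illustrative assumptions, not taken from this document:

```python
import math

def cumulative_failures(t, l0, p):
    """Expected number of failures after t units of execution time,
    under the logarithmic Poisson execution-time model.
    l0: initial failure intensity (failures per unit of CPU time)
    p:  exponential reduction in intensity per failure experienced
    """
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(t, l0, p):
    """Current failure intensity (the derivative of cumulative_failures).
    As it falls below a target threshold, testing can be judged
    'done enough' in a quantitative, rather than anecdotal, way."""
    return l0 / (l0 * p * t + 1.0)
```

With `l0` and `p` fitted from the failures logged during testing, the predicted intensity curve gives a pragmatic stopping criterion for the question posed above.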

2. Strategic Issues
Tom Gilb argues that a software testing strategy will succeed when software testers:

• Specify product requirements in a quantifiable manner long before testing commences. Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability. These should be specified in a way that is measurable so that testing results are unambiguous.

• State testing objectives explicitly. The specific objectives of testing should be stated in measurable terms.

• Understand the users of the software and develop a profile for each user category . Use cases that describe the interaction
scenario for each class of user can reduce overall testing effort by focusing testing on actual use of the product.

• Develop a testing plan that emphasizes “rapid cycle testing.” Gilb recommends that a software team “learn to test in rapid cycles.” The feedback generated from these rapid cycle tests can be used to control quality levels and the corresponding test strategies.

• Build “robust” software that is designed to test itself. Software should be designed in a manner that uses antibugging techniques. That is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.

• Use effective technical reviews as a filter prior to testing. Technical reviews can be as effective as testing in uncovering errors.

• Conduct technical reviews to assess the test strategy and test cases themselves . Technical reviews can uncover
inconsistencies, omissions, and outright errors in the testing approach. This saves time and also improves product quality.

• Develop a continuous improvement approach for the testing process. The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.
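The “design software to test itself” point above can be illustrated with a small Python sketch (the function and its checks are hypothetical, not from the document). Using an antibugging technique, the component diagnoses a class of caller error itself instead of returning a silently wrong answer:

```python
def binary_search(items, target):
    """Search a sorted list for target; return its index, or -1 if absent.

    Antibugging: binary search silently returns wrong answers on
    unsorted input, so the function diagnoses that class of misuse
    itself instead of failing quietly."""
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "binary_search requires a sorted list"
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

The explicit precondition check also makes automated regression testing easier: a test can deliberately pass bad input and assert that the diagnosis fires.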
3. SOFTWARE TESTING FUNDAMENTALS
• The goal of testing is to find errors, and a good test is one that has a high probability of finding an error. Therefore,
you should design and implement a computer based system or a product with “testability” in mind. At the same
time, the tests themselves must exhibit a set of characteristics that achieve the goal of finding the most errors
with a minimum of effort.

• The following characteristics lead to testable software.

• Testability. James Bach provides the following definition for testability: “Software testability is simply how easily a computer program can be tested.”

• Operability. “The better it works, the more efficiently it can be tested.”

• Observability. “What you see is what you test.”

• Controllability. “The better we can control the software, the more the testing can be automated and optimized.”

• Decomposability. “By controlling the scope of testing, we can more quickly isolate problems and perform smarter
retesting.”

• Simplicity. “The less there is to test, the more quickly we can test it.” The program should exhibit functional simplicity, structural simplicity, and code simplicity.

• Stability. “The fewer the changes, the fewer the disruptions to testing.”

• Understandability. “The more information we have, the smarter we will test.”
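Controllability and observability in particular are often improved by small design choices. A hypothetical Python sketch (the names are illustrative): injecting the time source makes the function's state controllable from a test, and returning the message rather than printing it makes the outcome directly observable:

```python
import datetime

def greeting(now=None):
    """Return a greeting appropriate to the time of day.

    Controllability: a test can inject any 'now' it needs instead of
    depending on the real clock.
    Observability: the result is returned, not printed, so a test can
    inspect it directly."""
    now = now or datetime.datetime.now()
    return "Good morning" if now.hour < 12 else "Good afternoon"
```

A version that called `datetime.datetime.now()` internally and printed its output would compute the same thing, yet be far harder to test automatically.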



• Test Characteristics. Kaner, Falk, and Nguyen suggest the following attributes of a
“good” test:

• A good test has a high probability of finding an error. To achieve this goal, the tester
must understand the software and attempt to develop a mental picture of how the
software might fail. Ideally, the classes of failure are probed.

• A good test is not redundant. Testing time and resources are limited. There is no point
in conducting a test that has the same purpose as another test. Every test should have a
different purpose.

• A good test should be “best of breed.” In a group of tests that have a similar intent, time and resource limitations may militate toward the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.

• A good test should be neither too simple nor too complex. Although it is sometimes
possible to combine a series of tests into one test case, the possible side effects associated
with this approach may mask errors. In general, each test should be executed separately.
INTERNAL AND EXTERNAL VIEWS OF TESTING
• Any engineered product can be tested in one of two ways: (1) Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function. (2) Knowing the internal workings of a product, tests can be conducted to ensure that internal operations are performed according to specification and all internal components have been adequately exercised.

• The first test approach takes an external view and is called black-box testing. The
second requires an internal view and is termed white-box testing.

• Black-box testing alludes to tests that are conducted at the software interface. A black-
box test examines some fundamental aspect of a system with little regard for the internal
logical structure of the software.

• White-box testing of software is predicated on close examination of procedural detail.


Logical paths through the software and collaborations between components are tested by
exercising specific sets of conditions and/or loops.
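The two views can be contrasted on a single hypothetical function (the classic triangle-classification example; the code and tests are illustrative, not from the document):

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its three side lengths."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box view: tests derived from the specification alone, with
# no regard for the internal logical structure.
def test_black_box():
    assert classify_triangle(3, 3, 3) == "equilateral"
    assert classify_triangle(3, 4, 5) == "scalene"

# White-box view: tests derived from the procedural detail, e.g.
# exercising each branch of the triangle-inequality guard.
def test_white_box():
    assert classify_triangle(1, 2, 3) == "not a triangle"  # a + b <= c
    assert classify_triangle(5, 1, 2) == "not a triangle"  # b + c <= a
    assert classify_triangle(3, 3, 5) == "isosceles"
```

The black-box tests would survive a complete rewrite of the function body; the white-box tests are tied to its internal branches, which is exactly the trade-off between the two views.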
