5 Testing Applications
These slides are designed to accompany Software Engineering: A Practitioner’s Approach, 7/e (McGraw-Hill 2009). Slides copyright 2009 by Roger Pressman.
What is a “Good” Test?
A good test has a high probability of finding an error.
A good test is not redundant.
A good test should be “best of breed”.
A good test should be neither too simple nor too complex.
Internal and External Views
Any engineered product (and most other things) can be tested in one of two ways:
(1) Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.
(2) Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, internal operations are performed according to specifications and all internal components have been adequately exercised.
Software Testing
[Figure: software testing comprises methods (white-box and black-box) and strategies]
White-Box Testing
CC computation formulae
CC = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes
CC = P + 1, where P is the number of predicate nodes
CC = number of regions of the flow graph
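As a quick check that the formulas agree, here is a minimal sketch against a hypothetical flow graph with 9 nodes, 11 edges, and 3 predicate nodes (the region count needs the drawn graph, so only the first two formulas are computed):

```python
# The first two cyclomatic complexity formulas, computed directly.
def cc_from_edges_nodes(e, n):
    return e - n + 2      # CC = E - N + 2

def cc_from_predicates(p):
    return p + 1          # CC = P + 1

# Hypothetical example graph: 9 nodes, 11 edges, 3 predicate nodes.
assert cc_from_edges_nodes(11, 9) == cc_from_predicates(3) == 4
```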
Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors.
[Figure: plot of errors found versus V(G) across modules; flow graph with numbered nodes]
Since V(G) = 4, there are four linearly independent paths:
Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...,7,8
Finally, we derive test cases to exercise these paths.
Basis Path Testing Notes
you don't need a flow chart,
but the picture will help when
you trace program paths
Deriving Test Cases
Summarizing:
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
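The steps above can be sketched in code. The flow graph below is a hypothetical reconstruction of a V(G) = 4 example (nodes 1-8 with a back edge from 7 to 2, an assumption rather than something taken from the slides); a depth-first search finds the three loop-free paths, and the fourth basis path adds one traversal of the loop:

```python
# Hypothetical flow graph: adjacency lists, back edge 7 -> 2 forms the loop.
GRAPH = {1: [2], 2: [3, 4], 3: [5, 6], 4: [7], 5: [7], 6: [7], 7: [8, 2], 8: []}

def simple_paths(graph, start, goal, path=None):
    """Yield every loop-free path from start to goal, depth-first."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph[start]:
        if nxt not in path:        # skipping revisits keeps paths loop-free
            yield from simple_paths(graph, nxt, goal, path)

paths = sorted(simple_paths(GRAPH, 1, 8))
cc = sum(len(v) for v in GRAPH.values()) - len(GRAPH) + 2   # CC = E - N + 2
assert cc == len(paths) + 1   # the extra basis path traverses the loop 7 -> 2 once
```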
Why Cover?
logic errors and incorrect assumptions
are inversely proportional to a path's
execution probability
Control Structure Testing
Condition testing — a test case design method
that exercises the logical conditions contained
in a program module
Data flow testing — selects test paths of a
program according to the locations of
definitions and uses of variables in the program
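Condition testing can be illustrated with a small sketch. The eligible() function and its conditions are invented for illustration; the point is that the chosen cases force each logical condition to take both its True and False outcomes:

```python
# Condition testing sketch: exercise each logical condition's outcomes.
def eligible(age, member):
    return age >= 18 and member

# One case per condition outcome:
cases = [
    ((20, True), True),    # age >= 18: True,  member: True
    ((15, True), False),   # age >= 18: False
    ((20, False), False),  # member: False
]
for args, expected in cases:
    assert eligible(*args) == expected
```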
Data Flow Testing
The data flow testing method [Fra93] selects test
paths of a program according to the locations of
definitions and uses of variables in the program.
Assume that each statement in a program is assigned a unique statement number and that each function does not modify its parameters or global variables. For a statement with S as its statement number:
• DEF(S) = {X | statement S contains a definition of X}
• USE(S) = {X | statement S contains a use of X}
A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X in statement S is live at statement S'.
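A minimal sketch of DEF/USE sets and DU-chain derivation for a straight-line program; the four-statement program and its simplified (number, defs, uses) representation are invented for illustration:

```python
# DEF/USE and DU-chain sketch. Each statement is (number, defined, used),
# a simplified, invented intermediate form rather than parsed source code.
PROGRAM = [
    (1, {"x"}, set()),       # 1: x = input()
    (2, {"y"}, {"x"}),       # 2: y = x + 1
    (3, {"x"}, {"x", "y"}),  # 3: x = x * y
    (4, set(), {"x"}),       # 4: print(x)
]

def du_chains(program):
    """Return (X, S, S') triples: X defined at S, definition live and used at S'."""
    chains = []
    for i, (s, defs, _uses) in enumerate(program):
        for x in defs:
            for s2, defs2, uses2 in program[i + 1:]:
                if x in uses2:
                    chains.append((x, s, s2))
                if x in defs2:   # a redefinition kills the chain (straight-line code)
                    break
    return chains
```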
Loop Testing
[Figure: the four loop classes tested by loop testing: simple loops, nested loops, concatenated loops, and unstructured loops]
Loop Testing: Simple Loops
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
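The list above can be generated mechanically. A minimal sketch, where n is the maximum allowable passes and m an optional typical count:

```python
# Pass counts for simple-loop testing, given n = maximum allowable passes
# and m = an optional typical intermediate count with m < n.
def loop_test_counts(n, m=None):
    counts = [0, 1, 2]                     # skip, one pass, two passes
    if m is not None and 2 < m < n - 1:    # avoid duplicating the fixed counts
        counts.append(m)
    counts += [n - 1, n, n + 1]
    return counts

# loop_test_counts(10, 5) -> [0, 1, 2, 5, 9, 10, 11]
```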
Loop Testing: Nested Loops
Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1, and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops (for example, when the final loop counter value of loop 1 is used to initialize loop 2).
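The nested-loop procedure above can be sketched as a case generator. Each loop is described by its (minimum, typical, maximum) iteration counts, outermost first; the example values are invented for illustration:

```python
# Nested-loop test cases: test the innermost loop with outer loops at minimum,
# then move outward one loop at a time with all other loops held at typical.
def nested_loop_cases(loops):
    """loops: list of (minimum, typical, maximum) tuples, outermost first."""
    innermost = len(loops) - 1
    cases = []
    for i in range(innermost, -1, -1):            # start at the innermost loop
        lo, typ, hi = loops[i]
        for value in (lo, lo + 1, typ, hi - 1, hi):
            setting = tuple(
                value if j == i
                else (loops[j][0] if i == innermost else loops[j][1])
                for j in range(len(loops))
            )
            cases.append(setting)
    return cases
```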
Black-Box Testing
[Figure: black-box testing exercises requirements through input events and observed output]
Black-Box Testing
How is functional validity tested?
How is system behavior and performance tested?
What classes of input will make good test cases?
Is the system particularly sensitive to certain input
values?
How are the boundaries of a data class isolated?
What data rates and data volume can the system
tolerate?
What effect will specific combinations of data have on
system operation?
Equivalence Partitioning
In this method, the input domain is divided into different equivalence classes of data.
The method is typically used to reduce the total number of test cases to a finite, testable set while still covering the requirements as fully as possible.
In short, it is the process of partitioning all possible test cases into classes; one test value is picked from each class during testing.
Equivalence Partitioning
For example, if you are testing an input box that accepts numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus further test cases for invalid data.
Using the equivalence partitioning method, the test cases above can be divided into sets of input data called classes; each test case is a representative of its class.
So in the example above we can divide our test cases into three equivalence classes covering valid and invalid inputs.
Equivalence Partitioning
Test cases for an input box accepting numbers between 1 and 1000, using equivalence partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. Selecting any other value between 1 and 1000 should give the same result, so one test case for valid input data is sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.
3) An input data class with any value greater than 1000, representing the third (invalid) input class.
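The three classes for this example can be sketched directly. One representative value per class stands in for every value in that class:

```python
# Equivalence classes for an input box accepting 1..1000.
def classify(value):
    if value < 1:
        return "invalid-below"
    if value > 1000:
        return "invalid-above"
    return "valid"

# One representative per equivalence class:
for value, expected in [(-5, "invalid-below"), (500, "valid"), (1500, "invalid-above")]:
    assert classify(value) == expected
```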
Equivalence Partitioning
So, using equivalence partitioning, you have categorized all possible test cases into three classes; test cases with other values from any class should give you the same result.
We have selected one representative from each input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of each equivalence class can be exercised.
Equivalence partitioning uses the fewest test cases to cover the maximum of the requirements.
Equivalence Partitioning
[Figure: program inputs and outputs: user queries, mouse picks, prompts, FK input, data, and output formats]
Sample Equivalence Classes
Valid data
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)
Invalid data
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
Boundary Value Analysis
[Figure: the same input classes (user queries, mouse picks, prompts, FK input, data, output formats), with the boundaries of the input domain and output domain highlighted]
Boundary Value Analysis
It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system: more application errors occur at the boundaries of the input domain.
The boundary value analysis testing technique is used to identify errors at these boundaries, rather than errors that exist in the center of the input domain.
Boundary value analysis extends equivalence partitioning for designing test cases: test cases are selected at the edges of the equivalence classes.
Boundary Value Analysis
Test cases for an input box accepting numbers between 1 and 1000, using boundary value analysis:
1) Test cases with test data exactly on the boundaries of the input domain, i.e. the values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of the input domain, i.e. the values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. the values 2 and 1001.
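These boundary values can be generated mechanically for any closed range. A minimal sketch:

```python
# Boundary value analysis: values on, just below, and just above each edge
# of the valid range [lo, hi].
def boundary_values(lo, hi):
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# boundary_values(1, 1000) -> [0, 1, 2, 999, 1000, 1001]
```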
Graph-Based Methods
To understand the objects that are modeled in software and the relationships that connect those objects. In this context, the term "objects" encompasses data objects, traditional components (modules), and object-oriented elements.
[Figure: graph notation showing objects as nodes with node weights (values), connected by directed links (with link weights), undirected links, and parallel links; example (b): a "document" object linked to "text" with attributes such as background color: white and text color: default color or preferences]
Comparison Testing
Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)
Separate software engineering teams develop
independent versions of an application using the
same specification
Each version can be tested with the same test data
to ensure that all provide identical output
Then all versions are executed in parallel with real-
time comparison of results to ensure consistency
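The idea can be sketched in miniature: two independently written implementations of the same specification are run on identical test data, and their outputs must agree. Here a hand-rolled Newton's method square root (a hypothetical "second version", not from the slides) is compared against the standard library:

```python
import math

# Comparison (back-to-back) testing sketch: run two independent
# implementations on the same test data and require matching output.
def sqrt_newton(x, iters=40):
    """Hypothetical independently developed version of sqrt (Newton's method)."""
    guess = x if x > 0 else 1.0
    for _ in range(iters):
        guess = 0.5 * (guess + x / guess)
    return guess

for x in [1.0, 2.0, 100.0, 12345.0]:
    assert abs(sqrt_newton(x) - math.sqrt(x)) < 1e-9   # versions must agree
```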
Model-Based Testing
1. Analyze an existing behavioral model for the software, or create one. Recall that a behavioral model indicates how software will respond to external events or stimuli.
2. Traverse the behavioral model and specify the inputs that will force the software to make the transition from state to state. The inputs will trigger events that will cause the transition to occur.
3. Review the behavioral model and note the expected outputs as the software makes the transition from state to state.
4. Execute the test cases.
5. Compare actual and expected results and take corrective action as required.
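The steps above can be sketched against a tiny behavioral model. The states and events below are invented for illustration; traversing the transition table yields an input sequence, and replaying it checks the actual state trace against the expected one:

```python
# Hypothetical behavioral model: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "idle",
}

def run(events, state="idle"):
    """Replay an event sequence against the model, recording visited states."""
    trace = [state]
    for event in events:
        state = TRANSITIONS[(state, event)]   # unexpected event -> KeyError (test fails)
        trace.append(state)
    return trace

# One test case covering every transition in the model:
expected = ["idle", "running", "paused", "running", "idle"]
assert run(["start", "pause", "resume", "stop"]) == expected
```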
Test Case Design
"Bugs lurk in corners
and congregate at
boundaries ..."
Boris Beizer