Functional Testing
1 In this chapter we use the term “program” generically for the artifact under test, whether
that artifact is a complete application or an individual unit together with a test harness. This is
consistent with usage in the testing research literature.
Required Background
Chapters 14 and 15:
The material on control and data flow graphs is required to understand
section 13.7, but it is not necessary to comprehend the rest of the chap-
ter.
Chapter 27:
The definition of pre- and post-conditions can be helpful in understand-
ing section 13.8, but it is not necessary to comprehend the rest of the
chapter.
13.1 Overview
In testing and analysis aimed at verification2 — that is, at finding any dis-
crepancies between what a program does and what it is intended to do —
one must obviously refer to requirements as expressed by users and specified
by software engineers. A functional specification, i.e., a description of the ex-
pected behavior of the program, is the primary source of information for test
case specification.
Functional testing, also known as black-box or specification-based test-
ing, denotes techniques that derive test cases from functional specifications.
Usually functional testing techniques produce test case specifications that
identify classes of test cases and that can be instantiated to produce individual
test cases.
A particular functional testing technique may be effective only for some
kinds of software or may require a given specification style. For example,
a combinatorial approach may work well for functional units characterized
by a large number of relatively independent inputs, but may be less effec-
tive for functional units characterized by complex interrelations among in-
puts. Functional testing techniques designed for a given specification nota-
tion, e.g., finite state machines or grammars, are not easily applicable to other
specification styles.
The core of functional test case design is partitioning the possible behaviors
of the program into a finite number of classes that can reasonably be expected
to be consistently correct or incorrect. In practice, the test case designer
often must also complete the job of formalizing the specification far enough
to serve as the basis for identifying classes of behaviors. An important side
effect of test design is highlighting weaknesses and incompleteness of program
specifications.
Deriving functional test cases is an analytical process which decomposes
specifications into test cases. The myriad of aspects that must be taken into
2 Here we focus on software verification as opposed to validation (see Chapter 2). The problems
of validating the software and its specifications, i.e., checking the program behavior and its
specifications with respect to the users’ expectations, are treated in Chapter 12.
Test cases and test suites can be derived from several sources of information, includ-
ing specifications (functional testing), detailed design and source code (structural test-
ing), and hypothesized defects (fault-based testing). Functional test case design is an
indispensable base of a good test suite, complemented but never replaced by structural
and fault-based testing, because there are classes of faults that only functional testing
effectively detects. Omission of a feature, for example, is unlikely to be revealed by
techniques that refer only to the code structure.
Consider a program that is supposed to accept files in either plain ASCII text, or
HTML, or PDF formats and generate standard PostScript. Suppose the programmer over-
looks the PDF functionality, so the program accepts only plain text and HTML files. Intu-
itively, a functional testing criterion would require at least one test case for each item in
the specification, regardless of the implementation, i.e., it would require the program to
be exercised with at least one ASCII, one HTML, and one PDF file, thus easily revealing
the failure due to the missing code. In contrast, a criterion based solely on the code would
not require the program to be exercised with a PDF file, since all of the code can be exer-
cised without attempting to use that feature. Similarly, fault-based techniques, based on
potential faults in design or coding, would not have any reason to indicate a PDF file as a
potential input even if “missing case” were included in the catalog of potential faults.
A functional specification often addresses semantically rich domains, and we can use
domain information in addition to the cases explicitly enumerated in the program spec-
ification. For example, while a program may manipulate a string of up to nine alphanu-
meric characters, the program specification may reveal that these characters represent a
postal code, which immediately suggests test cases based on postal codes of various lo-
calities. Suppose the program logic distinguishes only two cases, depending on whether
they are found in a table of U.S. zip codes. A structural testing criterion would require
testing of valid and invalid U.S. zip codes, but only consideration of the specification and
richer knowledge of the domain would suggest test cases that reveal missing logic for
distinguishing between U.S.-bound mail with invalid U.S. zip codes and mail bound to
other countries.
Functional testing can be applied at any level of granularity where some form of spec-
ification is available, from overall system testing to individual units, although the level of
granularity and the type of software influence the choice of the specification styles and
notations, and consequently the functional testing techniques that can be used.
In contrast, structural and fault-based testing techniques are invariably tied to pro-
gram structures at some particular level of granularity, and do not scale much beyond
that level. The most common structural testing techniques are tied to fine-grain pro-
gram structures (statements, classes, etc.) and are applicable only at the level of modules
or small collections of modules (small subsystems, components, or libraries).
account during functional test case specification makes the process error prone.
Even expert test designers can miss important test cases. A methodology for
functional test design systematically helps by decomposing the functional
test design activity into elementary steps, each of which copes with a single aspect of the
process. In this way, it is possible to master the complexity of the process and
separate human intensive activities from activities that can be automated.
Systematic processes amplify but do not substitute for skills and experience
of the test designers.
In a few cases, functional testing can be fully automated. This is possible
for example when specifications are given in terms of some formal model,
e.g., a grammar or an extended state machine specification. In these (excep-
tional) cases, the creative work is performed during specification and design
of the software. The test designer’s job is then limited to the choice of the test
selection criteria, which defines the strategy for generating test case specifi-
cations. In most cases, however, functional test design is a human intensive
activity. For example, when test designers must work from informal speci-
fications written in natural language, much of the work is in structuring the
specification adequately for identifying test cases.
Statistical testing, in contrast, aims to measure dependability rather than to find faults so that they can be repaired.
While the informal meanings of words like “test” may be adequate for everyday con-
versation, in this context we must try to use terms in a more precise and consistent man-
ner. Unfortunately, the terms we will need are not always used consistently in the liter-
ature, despite the existence of an IEEE standard that defines several of them. The terms
we will use are defined below.
Independently testable feature (ITF): An ITF is a functionality that can be tested inde-
pendently of other functionalities of the software under test. It need not correspond
to a unit or subsystem of the software. For example, a file sorting utility may be ca-
pable of merging two sorted files, and it may be possible to test the sorting and
merging functionalities separately, even though both features are implemented by
much of the same source code. (The nearest IEEE standard term is “test item.”)
As functional testing can be applied at many different granularities, from unit test-
ing through integration and system testing, so ITFs may range from the function-
ality of an individual Java class or C function up to features of an integrated system
composed of many complete programs. The granularity of an ITF depends on the
exposed interface at whichever granularity is being tested. For example, individual
methods of a class are part of the interface of the class, and a set of related methods
(or even a single method) might be an ITF for unit testing, but for system testing the
ITFs would be features visible through a user interface or application programming
interface.
Test case: A test case is a set of inputs, execution conditions, and expected results. The
term “input” is used in a very broad sense, which may include all kinds of stimuli
that contribute to determining program behavior. For example, an interrupt is as
much an input as is a file. (This usage follows the IEEE standard.)
Test case specification: The distinction between a test case specification and a test case
is similar to the distinction between a program and a program specification. Many
different test cases may satisfy a single test case specification. A simple test spec-
ification for a sorting method might require an input sequence that is already in
sorted order. A test case satisfying that specification might be sorting the particular
vector (“alpha,” “beta,” “delta”). (This usage follows the IEEE standard.)
Test suite: A test suite is a set of test cases. Typically, a method for functional testing
is concerned with creating a test suite. A test suite for a program, a system, or an
individual unit may be made up of several test suites for individual ITFs. (This usage
follows the IEEE standard.)
Test: We use the term test to refer to the activity of executing test cases and evaluating
their result. When we refer to “a test,” we mean execution of a single test case, ex-
cept where context makes it clear that the reference is to execution of a whole test
suite. (The IEEE standard allows this and other definitions.)
Accidental bias may be avoided by choosing test cases from a random dis-
tribution. Random sampling is often an inexpensive way to produce a large
number of test cases. If we assume absolutely no knowledge on which to
place a higher value on one test case than another, then random sampling
maximizes value by maximizing the number of test cases that can be created
(without bias) for a given budget. Even if we do possess some knowledge sug-
gesting that some cases are more valuable than others, the efficiency of ran-
dom sampling may in some cases outweigh its inability to use any knowledge
we may have.
Consider again the line-break program, and suppose that our budget is
one day of testing effort rather than some arbitrary number of test cases. If the
cost of random selection and actual execution of test cases is small enough,
then we may prefer to run a large number of random test cases rather than
expending more effort on each of a smaller number of test cases. We may in
a few hours construct programs that generate buffers with various contents
and lengths up to a few thousand characters, as well as an automated proce-
dure for checking the program output. Letting it run unattended overnight,
we may execute a few million test cases. If the program does not correctly
handle a buffer containing a sequence of more than 60 non-blank characters
(a single “word” that does not fit on a line), we are likely to encounter this
case by sheer luck if we execute enough random tests, even without having
explicitly considered this case.
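As a concrete illustration, here is a minimal sketch of such an overnight random-testing driver. It assumes a hypothetical class LineBreaker with a method breakLines(String input, int width) as the program under test, and its checker verifies only one property of the output (no line longer than the width), one of several a real oracle would check:

    import java.util.Random;

    public class RandomLineBreakTest {
        static final int WIDTH = 60;                       // maximum line width
        static final Random rng = new Random();

        // Generate a buffer with random contents and length up to a few
        // thousand characters; long "words" arise occasionally by chance.
        static String randomBuffer() {
            int len = rng.nextInt(3000);
            StringBuilder sb = new StringBuilder(len);
            for (int i = 0; i < len; i++)
                sb.append(rng.nextInt(8) == 0 ? ' ' : (char) ('a' + rng.nextInt(26)));
            return sb.toString();
        }

        public static void main(String[] args) {
            for (long i = 0; i < 1_000_000L; i++) {
                String input = randomBuffer();
                String output = LineBreaker.breakLines(input, WIDTH); // hypothetical unit under test
                for (String line : output.split("\n", -1))
                    if (line.length() > WIDTH) {
                        System.err.println("FAILURE on input: " + input);
                        return;
                    }
            }
            System.out.println("all random test cases passed");
        }
    }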
Even a few million test cases is an infinitesimal fraction of the complete
input space of most programs. Large numbers of random tests are unlikely
to find failures at single points (singularities) in the input space. Consider,
for example, a simple procedure for returning the two roots of a quadratic
equation ax² + bx + c = 0, and suppose we choose test inputs (values of the
coefficients a, b, and c) from a uniform distribution over a large range (say,
−10.0 to 10.0). While uniform random sampling would certainly cover cases
in which b² − 4ac < 0 (where the equation has no real roots), it would be very
unlikely to test the case in which a = 0 and b = 0, in which case a naive
implementation of the quadratic formula x = (−b ± √(b² − 4ac)) / (2a) will
divide by zero (see Figure 13.1).
Of course, it is unlikely that anyone would test only with random values.
Regardless of the overall testing strategy, most test designers will also try some
“special” values. The test designer’s intuition comports with the observation
that random sampling is an ineffective way to find singularities in a large
input space. The observation about singularities can be generalized to any
characteristic of input data that defines an infinitesimally small portion of
the complete input data space. If again we have just three real-valued inputs
a, b, and c, there is an infinite number of choices for which b² = 4ac, but random
sampling is unlikely to generate any of them because they are an infinitesimal
part of the complete input data space.
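A sketch of such a class, exhibiting the incomplete case analysis described in the caption below (identifiers and code details are illustrative and may differ from the original listing):

    /** Find the roots of the quadratic equation ax^2 + bx + c = 0. */
    class roots {
        double root_one, root_two;
        int num_roots;

        public roots(double a, double b, double c) {
            double q = b * b - 4 * a * c;
            if (q > 0 && a != 0) {
                // Two distinct real roots when b^2 > 4ac.
                num_roots = 2;
                double r = Math.sqrt(q);
                root_one = ((0 - b) + r) / (2 * a);
                root_two = ((0 - b) - r) / (2 * a);
            } else if (q == 0) {
                // Intended: exactly one root. The case analysis is incomplete:
                // when a == 0 (and hence b == 0 whenever q == 0), the division
                // below is by zero.
                num_roots = 1;
                root_one = (0 - b) / (2 * a);
                root_two = root_one;
            } else {
                // No real roots when b^2 < 4ac.
                num_roots = 0;
                root_one = -1;
                root_two = -1;
            }
        }

        public int numRoots() { return num_roots; }
        public double firstRoot() { return root_one; }
        public double secondRoot() { return root_two; }
    }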
Figure 13.1: The Java class “roots,” which finds roots of a quadratic equation.
The case analysis in the implementation is incomplete: it does not properly
handle the case in which a = 0 and b² − 4ac = 0.
We cannot anticipate all such
faults, but experience teaches that boundary values identifiable in a specifi-
cation are disproportionately valuable. Uniform random generation of even
large numbers of test cases is ineffective at finding the fault in this program,
but selection of a few “special values” based on the specification quickly un-
covers it.
A true partition would separate the input space into disjoint classes, the union of which is the entire space. Partition
testing separates the input space into classes whose union is the entire space, but the classes
may not be disjoint.
Although most techniques are presented and applied as stand-alone
methods, it is also possible to mix and match steps from different techniques,
or to apply different methods for different parts of the system to be tested.
The artifact under test may be the whole system, when dealing with specifications at system level; but it can be a single
module of the system, when dealing with specifications at unit level.
[Figure: the main steps of a systematic approach to functional program testing.
From the functional specifications, identify independently testable features;
for each feature, either derive a model (finite state machine, grammar, algebraic
specification, logic specification, control/data flow graph) or identify
representative values; generate test case specifications (by manual mapping,
symbolic execution, or a-posteriori satisfaction); finally, generate test cases
and instantiate tests with suitable scaffolding.]
Test case specifications are often expressed in terms of characteristics of
values (e.g., any list with a single element) rather than actual values.
Implicit enumeration requires the construction of a (partial) model of the
specifications. Such a model may be already available as part of a specifi-
cation or design model, but more often it must be constructed by the test
designer, in consultation with other designers. For example, a specification
given as a finite state machine implicitly identifies different values for the in-
puts by means of the transitions triggered by the different values. In some
cases, we can construct a partial model as a means of identifying different
values for the inputs. For example, we may derive a grammar from a specifi-
cation and thus identify different values according to the legal sequences of
productions of the given grammar.
Directly enumerating representative values may appear simpler and less
expensive than producing a suitable model from which values may be de-
rived. However, a formal model may also be valuable in subsequent steps of
test case design, including selection of combinations of values. Also, a for-
mal model may make it easier to select a larger or smaller number of test
cases, balancing cost and thoroughness, and may be less costly to modify and
reuse as the system under test evolves. Whether to invest effort in producing a
model is ultimately a management decision that depends on the application
domain, the skills of test designers, and the availability of suitable tools.
Generate Test Cases and Instantiate Tests The test generation process is
completed by turning test case specifications into test cases and instantiating
them. Test case specifications can be turned into test cases by selecting one
or more test cases for each item of the test case specification.
As with many design decisions, the way in which collections and complex data
are broken into parameter characteristics requires judgment based on a
combination of analysis and experience.
Table 13.1: An example category-partition test specification for the configuration-checking
feature of the web site of a computer vendor. [The value classes were garbled in extraction.
The parameter characteristics include, for parameter Model: model number, number of
required slots for selected model (#SMRS), and number of optional slots for selected
model (#SMOS); for parameter Components: correspondence of selection with model slots,
number of required components with selection empty, and number of optional components
with selection empty.]
example, in Table 13.1 we find 7 categories with 3 value classes, 2 categories
with 6 value classes, and one with 4 value classes, potentially resulting in
3⁷ × 6² × 4 = 314,928 test cases, which would be acceptable only if
the cost of executing and checking each individual test case were very small.
However, not all combinations of value classes correspond to reasonable test
case specifications. For example, it is not possible to create a test case from
a test case specification requiring a valid model (a model appearing in the
database) where the database contains zero models.
The category-partition method allows one to omit some combinations by
indicating value classes that need not be combined with all other values. The
label [error] indicates a value class that need be tried only once, in combination
with non-error values of other parameters. When [error] constraints are
considered in the category-partition specification of Table 13.1, the number
of combinations to be considered is reduced to 2,711. Note that we have treated
“component not in database” as an error case, but have treated “incompatible
with slot” as a normal case of an invalid configuration; once again, some
judgment is required.
Although the reduction from 314,928 to 2,711 is impressive, the number
of derived test cases may still exceed the budget for testing such a simple fea-
ture. Moreover, some values are not erroneous per se, but may only be useful
or even valid in particular combinations. For example, the number of op-
tional components with non-empty selection is relevant to choosing useful
test cases only when the number of optional slots is greater than 1. A num-
ber of non-empty choices of required component greater than zero does not
make sense if the number of required components is zero.
Erroneous combinations of valid values can be ruled out with the property
and if-property constraints. The property constraint groups values of a single
parameter characteristic to identify subsets of values with common proper-
ties. The property constraint is indicated with label property PropertyName,
where PropertyName identifies the property for later reference. For exam-
ple, property RSNE (required slots non-empty) in Table 13.1 groups values
that correspond to non-empty sets of required slots for the parameter char-
acteristic Number of Required Slots for Selected Model (#SMRS), i.e., values 1
and many. Similarly, property OSNE (optional slots non-empty) groups non-
empty values for the parameter characteristic Number of Optional Slots for
Selected Model (#SMOS).
The if-property constraint bounds the choices of values for a parameter
characteristic once a specific value for a different parameter characteristic
has been chosen. The if-property constraint is indicated with label if Proper-
tyName, where PropertyName identifies a property defined with the property
constraint. For example, the constraint if RSNE attached to values 0 and
number of required slots of parameter characteristic Number of required com-
ponents with selection empty limits the combination of these values with
the values of the parameter characteristics Number of Required Slots for Se-
lected Model (#SMRS), i.e., values 1 and many, thus ruling out the illegal com-
bination of values 0 or number of required slots for Number of required com-
ponents with selection empty with value 0 for Number of Required Slots for
Selected Model (#SMRS). Similarly, the if OSNE constraint limits the combina-
tions of values of the parameter characteristics Number of optional compo-
nents with selection empty and Number of Optional Slots for Selected Model
(#SMOS).
The property and if-property constraints introduced in Table 13.1 further
reduce the number of combinations to be considered. (Exercise Ex13.4 discusses
derivation of the exact number.)
The number of combinations can be further reduced by iteratively adding
property and if-property constraints and by introducing the new single con-
straint, which is indicated with label single and acts like the error constraint,
i.e., it limits the number of occurrences of a given value in the selected com-
binations to 1.
Introducing new property, if-property, and single constraints does not rule out
erroneous combinations, but reflects the judgment of the test designer, who
decides how to restrict the number of combinations to be considered by
identifying single values (single constraint) or combinations (property and
if-property constraints) that are less likely to need thorough testing.
The single constraints introduced in Table 13.1 reduce the number of
combinations to be considered still further, which may be a reasonable tradeoff
between cost and quality for the considered functionality. The number of
combinations can also be reduced by
applying combinatorial techniques, as explained in the next section.
The set of combinations of values for the parameter characteristics can
be turned into test case specifications by simply instantiating the identified
combinations. Table 13.2 shows an excerpt of test case specifications. The
error tag in the last column indicates test cases specifications corresponding
to the error constraint. Corresponding test cases should produce an error
indication. A dash indicates no constraints on the choice of values for the
parameter or environment element.
Choosing meaningful names for parameter characteristics and value classes
allows (semi)automatic generation of test case specifications.
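A sketch of that generation step, using a deliberately simplified subset of the Table 13.1 characteristics and a hand-coded stand-in for the [error] constraint (all names and value classes below are illustrative):

    import java.util.*;

    public class CategoryPartitionGen {
        public static void main(String[] args) {
            // Parameter characteristics and their value classes (simplified).
            Map<String, List<String>> characteristics = new LinkedHashMap<>();
            characteristics.put("Model number", List.of("malformed", "not in database", "valid"));
            characteristics.put("#SMRS", List.of("0", "1", "many"));
            characteristics.put("#SMOS", List.of("0", "1", "many"));

            // Cross product of all value classes.
            List<Map<String, String>> combos = new ArrayList<>();
            combos.add(new LinkedHashMap<>());
            for (var e : characteristics.entrySet()) {
                List<Map<String, String>> next = new ArrayList<>();
                for (var partial : combos)
                    for (String v : e.getValue()) {
                        Map<String, String> c = new LinkedHashMap<>(partial);
                        c.put(e.getKey(), v);
                        next.add(c);
                    }
                combos = next;
            }

            // Stand-in for the [error] constraint: an erroneous model number is
            // tried only once, with fixed non-error values for the other slots.
            combos.removeIf(c -> !c.get("Model number").equals("valid")
                    && !(c.get("#SMRS").equals("1") && c.get("#SMOS").equals("1")));

            combos.forEach(System.out::println);   // one test case specification each
        }
    }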
Table 13.2: An excerpt of test case specifications derived from the value
classes given in Table 13.1. [The table entries were garbled in extraction;
each row assigns a value class to each parameter characteristic, with an error
tag in the last column for specifications corresponding to the error constraint.]
Table 13.4: Covering all pairs of value classes for three parameters by extending
the cross-product of two parameters. [The entries were garbled in extraction;
the parameter names visible in the residue include Language, Display Mode,
Fonts, Color, and Screen size.]
Finding a smallest set of tuples that covers all pairs of value classes is
computationally difficult in general. Fortunately, efficient heuristic
algorithms exist for this task, and they are simple enough to incorporate in tools.7
The tuples in Table 13.5 cover all pairwise combinations of value choices
for parameters. In many cases not all choices may be allowed. For example,
the specification of the Chipmunk web-site display may indicate that
monochrome displays are limited to hand-held devices. In this case, the tuples
covering the pairs ⟨Monochrome, Laptop⟩ and ⟨Monochrome, Full-size⟩,
i.e., the fifth and ninth tuples of Table 13.5, would not correspond to legal
inputs. We can restrict the set of legal combinations of value classes by adding
suitable constraints. Constraints can be expressed as tuples with wild-card
characters to indicate any possible value class. For example, constraints
excluding

⟨Monochrome, Laptop⟩
⟨Monochrome, Full-size⟩

as values for the fourth and fifth parameters indicate that tuples containing
either pair are not allowed in the relation of Table 13.3. Tuples that cover all
pairwise combinations of value classes without violating the constraints can be
generated by simply removing the illegal tuples and adding legal tuples that
cover the removed pairwise combinations. Open choices must be bound
consistently in the remaining tuples.
Table 13.5: Covering all pairs of value classes for the five parameters.
[The tuples were garbled in extraction.]
The first table of the pair shown in Table 13.6 indicates that the value class
Hand-held for parameter Screen size can be combined with any value class of
parameter Color, including Monochrome, while the second table indicates that
the value classes Laptop and Full-size for parameter Screen size can be
combined with all value classes but Monochrome for parameter Color.
If constraints are expressed as a set of tables that give only legal combi-
nations, tuples can be generated without changing the heuristic. Although
the two approaches express the same constraints, the number of generated
tuples can be different, since different tables may indicate overlapping pairs
and thus result in a larger set of tuples. Other ways of expressing constraints
may be chosen according to the characteristics of the specifications and the
preferences of the test designer.
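The following sketch shows one simple greedy heuristic of this family: among all legal tuples, repeatedly pick one that covers the most not-yet-covered pairs. It enumerates every candidate tuple, so it is workable only for small parameter spaces; the parameter sizes and the commented-out constraint are illustrative, and this is not the patented algorithm mentioned in Further Reading:

    import java.util.*;

    public class PairwiseGreedy {
        // Enumerate every tuple of value-class indices for the given sizes.
        static List<int[]> allTuples(int[] sizes) {
            List<int[]> out = new ArrayList<>();
            out.add(new int[0]);
            for (int size : sizes) {
                List<int[]> next = new ArrayList<>();
                for (int[] t : out)
                    for (int v = 0; v < size; v++) {
                        int[] u = Arrays.copyOf(t, t.length + 1);
                        u[t.length] = v;
                        next.add(u);
                    }
                out = next;
            }
            return out;
        }

        // Encode each pair of positions/values of a tuple as a string key.
        static Set<String> pairsOf(int[] t) {
            Set<String> pairs = new HashSet<>();
            for (int i = 0; i < t.length; i++)
                for (int j = i + 1; j < t.length; j++)
                    pairs.add(i + ":" + t[i] + "|" + j + ":" + t[j]);
            return pairs;
        }

        public static void main(String[] args) {
            int[] sizes = {3, 4, 3, 2};             // value classes per parameter
            List<int[]> candidates = allTuples(sizes);
            // A constraint predicate could filter illegal tuples here, e.g.:
            // candidates.removeIf(t -> t[1] == MONOCHROME && t[3] != HAND_HELD);
            Set<String> uncovered = new HashSet<>();
            for (int[] t : candidates) uncovered.addAll(pairsOf(t));

            List<int[]> suite = new ArrayList<>();
            while (!uncovered.isEmpty()) {
                int[] best = null; int bestGain = -1;
                for (int[] t : candidates) {
                    Set<String> gain = pairsOf(t);
                    gain.retainAll(uncovered);
                    if (gain.size() > bestGain) { bestGain = gain.size(); best = t; }
                }
                uncovered.removeAll(pairsOf(best));
                suite.add(best);
            }
            suite.forEach(t -> System.out.println(Arrays.toString(t)));
        }
    }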
So far we have illustrated the combinatorial approach with pairwise cov-
erage. As previously mentioned, the same approach can be applied for triples
or larger combinations. Pairwise combinations may be sufficient for some
subset of the parameters, but not enough to uncover potential interactions
among other parameters. For example, in the Chipmunk display example,
the fit of text fields to screen areas depends on the combination of language,
fonts, and screen size. Thus, we may prefer exhaustive coverage of combi-
nations of these three parameters, but be satisfied with pairwise coverage of
other parameters. In this case, we first generate tuples of classes from the
parameters to be most thoroughly covered, and then extend these with the
value classes of the remaining parameters.
Table 13.6: Pairs of tables that indicate valid value classes for the Chipmunk
web-site display controller. [The entries were garbled in extraction; one table
gives the valid combinations of Color and Screen size for hand-held devices,
the other for laptop and full-size devices.]
[Garbled list of condition abbreviations for the pricing decision tables: for
individual accounts, current purchase vs. tier 1 individual threshold and
special offer price vs. individual scheduled price; for business accounts,
current purchase vs. tier 1 business threshold, cumulative purchases vs. tier 1
business yearly threshold, and special offer price vs. business scheduled price.]
Pricing: The pricing function determines the adjusted price of a configuration for a
particular customer. The scheduled price of a configuration is the sum of the
scheduled price of the model and the scheduled price of each component in the
configuration. The adjusted price is either the scheduled price, if no discounts
are applicable, or the scheduled price less any applicable discounts.
There are three price schedules and three corresponding discount schedules,
Business, Educational, and Individual. The Business price and discount sched-
ules apply only if the order is to be charged to a business account in good stand-
ing. The Educational price and discount schedules apply to educational institu-
tions. The Individual price and discount schedules apply to all other customers.
Account classes and rules for establishing business and educational accounts
are described further in [. . . ].
A discount schedule includes up to three discount levels, in addition to the pos-
sibility of “no discount.” Each discount level is characterized by two threshold
values, a value for the current purchase (configuration schedule price) and a
cumulative value for purchases over the preceding 12 months (sum of adjusted
price).
Educational prices The adjusted price for a purchase charged to an educational ac-
count in good standing is the scheduled price from the educational price sched-
ule. No further discounts apply.
Business account discounts Business discounts depend on the size of the current
purchase as well as business in the preceding 12 months. A tier 1 discount is
applicable if the scheduled price of the current order exceeds the tier 1 current
order threshold, or if total paid invoices to the account over the preceding 12
months exceeds the tier 1 year cumulative value threshold. A tier 2 discount
is applicable if the current order exceeds the tier 2 current order threshold, or
if total paid invoices to the account over the preceding 12 months exceeds the
tier 2 cumulative value threshold. A tier 2 discount is also applicable if both the
current order and 12 month cumulative payments exceed the tier 1 thresholds.
Individual discounts Purchases by individuals and by others without an established
account in good standing are based on current value alone (not on cumulative
purchases). A tier 1 individual discount is applicable if the scheduled price
of the configuration in the current order exceeds the tier 1 current order
threshold. A tier 2 individual discount is applicable if the scheduled price of the
configuration exceeds the tier 2 current order threshold.
Special-price non-discountable offers Sometimes a complete configuration is of-
fered at a special, non-discountable price. When a special, non-discountable
price is available for a configuration, the adjusted price is the non-discountable
price or the regular price after any applicable discounts, whichever is less.
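To make the decision structure concrete before turning to decision tables, here is a sketch of the pricing rules as straight-line code. The schedules, thresholds, and discount rates are passed in as hypothetical parameters rather than drawn from the real account database:

    public class Pricing {
        enum Tier { NONE, T1, T2 }

        // scheduled: the scheduled price of the configuration from the price
        // schedule applicable to this account class; yearly: sum of adjusted
        // prices paid over the preceding 12 months; specialPrice: non-null when
        // a special non-discountable offer applies. Thresholds (ct1, ct2, yt1,
        // yt2) and discount rates are hypothetical parameters.
        static double adjustedPrice(String accountClass, double scheduled,
                                    double yearly, Double specialPrice,
                                    double ct1, double ct2, double yt1, double yt2,
                                    double t1Rate, double t2Rate) {
            double price;
            if (accountClass.equals("educational")) {
                price = scheduled;                       // no further discounts
            } else {
                Tier tier = Tier.NONE;
                if (accountClass.equals("business")) {
                    boolean overCt1 = scheduled > ct1, overYt1 = yearly > yt1;
                    if (scheduled > ct2 || yearly > yt2 || (overCt1 && overYt1))
                        tier = Tier.T2;                  // tier 2 business discount
                    else if (overCt1 || overYt1)
                        tier = Tier.T1;                  // tier 1 business discount
                } else {                                 // individual: current value only
                    if (scheduled > ct2) tier = Tier.T2;
                    else if (scheduled > ct1) tier = Tier.T1;
                }
                double rate = tier == Tier.T2 ? t2Rate : tier == Tier.T1 ? t1Rate : 0.0;
                price = scheduled * (1.0 - rate);
            }
            // Special non-discountable offer: charge whichever price is less.
            return specialPrice != null ? Math.min(specialPrice, price) : price;
        }
    }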
A predicate is a function with a boolean (True or False) value. When the input argu-
ment of the predicate is clear, particularly when it describes some property of the input of
a program, we often leave it implicit. For example, the actual representation of account
types in an information system might be as three-letter codes, but in a specification we
may not be concerned with that representation — we know only that there is some pred-
icate educational-account which is either True or False.
An elementary condition is a single predicate that cannot be decomposed further. A
complex condition is made up of elementary conditions, combined with boolean con-
nectives.
The boolean connectives include “and” (∧), “or” (∨), and “not” (¬), and several less
common derived connectives such as “implies” and “exclusive or.”
STEP 2: derive test case specifications from a model of the decision struc-
ture Different criteria can be used to generate test suites of differing com-
plexity from decision tables.
The basic condition adequacy criterion requires generation of a test case
specification for each column in the table, and corresponds to the intuitive
principle of generating a test case to produce each possible result. Don’t-care
entries of the table can be filled out arbitrarily, as long as constraints are not
violated.
The compound condition adequacy criterion requires a test case specification
for each combination of truth values of elementary conditions; the number of
such combinations grows exponentially with the number of elementary conditions,
so this criterion is practical only when the number of conditions is small.
Figure 13.5: The decision table for the functional specification of feature
Pricing of the Chipmunk web site of Figure 13.4. [The table body, together
with its constraints and abbreviations, was garbled in extraction.]
When two columns are compatible (matching in all places where neither is don’t care),
the two test cases are represented by one merged column, provided they can be
merged without violating constraints.
The MC/DC criterion formalizes the intuitive idea that a thorough test
suite would not only test positive combinations of values, i.e., combinations
that lead to specified outputs, but also negative combinations of values, i.e.,
combinations that differ from the specified ones and thus should produce
different outputs, in some cases among the specified ones, in some other
cases leading to error conditions.
Applying MC/DC to column 1 of the table in Figure 13.5 generates two additional columns:
one for Educational Account = false and Special Price better than scheduled
price = false, and the other for Educational Account = true and Special Price
better than scheduled price = true. Both columns are already in the table
(columns 3 and 2, respectively) and thus need not be added.
Similarly, from column 2, we generate two additional columns correspond-
ing to Educational Account = false and Special Price better than scheduled
price = true, and Educational Account = true and Special Price better than
scheduled price = false, also already in the table.
The generation of a new column for each possible variation of the boolean
values in the columns, varying exactly one value for each new column, pro-
duces 78 new columns, 21 of which can be merged with columns already in
the table. Figure 13.6 shows a table obtained by suitably joining the generated
columns with the existing ones. Many don’t care cells from the original table
are assigned either true or false values, to allow merging of different columns
or to obey constraints. The few don’t-care entries left can be set randomly to
obtain a complete test case specification.
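A sketch of the column-generation step just described: from a column (True, False, or don't-care per elementary condition), derive one variant per non-don't-care entry by flipping exactly that value. Merging with existing columns and constraint checking are omitted:

    import java.util.*;

    public class McdcColumns {
        // For each non-don't-care entry (don't care is represented as null),
        // produce a copy of the column with exactly that value flipped.
        static List<Boolean[]> variants(Boolean[] column) {
            List<Boolean[]> out = new ArrayList<>();
            for (int i = 0; i < column.length; i++) {
                if (column[i] == null) continue;      // don't care: nothing to flip
                Boolean[] v = column.clone();
                v[i] = !v[i];
                out.add(v);
            }
            return out;
        }

        public static void main(String[] args) {
            // Column 1 of the pricing table: EduAc = T, "special price better
            // than scheduled price" = F, remaining conditions don't care.
            Boolean[] col1 = {true, false, null, null};
            for (Boolean[] v : variants(col1))
                System.out.println(Arrays.toString(v));
        }
    }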
There are many ways of merging columns that generate different tables.
The table in Figure 13.6 may not be the optimal one, i.e., the one with the
fewest columns. The objective in test design is not to find an optimal test
suite, but rather to produce a cost effective test suite with an acceptable trade-
off between the cost of generating and executing test cases and the effective-
ness of the tests.
The table in Figure 13.6 fixes the entries as required by the constraints,
while the initial table in Figure 13.5 does not. Keeping constraints separate
from the table corresponding to the initial specification increases the num-
ber of don’t care entries in the original table, which in turn increases the op-
portunity for merging columns when generating new cases with the MC/DC
criterion. For example, if business account = false, the constraint at-most-
one(Edu, Bus) can be satisfied by assigning either true or false to entry edu-
cational account. Fixing either choice prematurely may later make merging
with a newly generated column impossible.
Edu. T T F F F F F F F F F F F F F F F F F F T T T T F -
Bus. F F F F F F F F T T T T T T T T T T T T F F F F F F
CP CT1 T T F F T T T T F F T T F F T T T T F F F F T - - F
YP YT1 F - F - - F T T F F F F T T T T F F T T T - - - T T
CP CT2 F F F F F F T T F F F F F F F F T T F F F F T T F F
YP YT2 - - - - - - - - - - - - F F F F - - T T F - - - T F
SP Sc F T F T F T - - F T F - F T - T - T - T F T - - - -
SP T1 F T F T F T F T F T F T F T F T F T F T F - - T T T
SP T2 F - F - F - F T F - F - F - F T F T F T F F F T T -
Out Edu SP ND SP T1 SP T2 SP ND SP T1 SP T1 SP T2 SP T2 SP T2 SP Edu SP Edu SP SP SP
Figure 13.6: The set of test cases generated for feature Pricing of the Chipmunk
web site, applying the modified condition/decision adequacy criterion (MC/DC).
Process shipping order: The Process shipping order function checks the validity of or-
ders and prepares the receipt.
Credit card information: if the method of payment is credit card, the fields
credit card number, name on card, expiration date, and billing address (if
different from the shipping address) must be provided. If the credit card
information is not valid, the user can either provide new data or abort the order.
[Figure: control flow graph of the Process shipping order function. Recoverable
node and branch labels include international, individual customer, method of
payment, credit card, payment status = valid, abort order?, enter order,
prepare receipt, and invalid order.]
The catalog would in this way cover the intuitive cases of erroneous con-
ditions (cases 1 and 5), boundary conditions (cases 2 and 4), and normal con-
ditions (case 3).
The catalog-based approach consists in unfolding the specification, i.e.,
decomposing the specification into elementary items, deriving an initial set
of test case specifications from preconditions, postconditions, and definitions,
and then completing the set of test case specifications using catalogs.
DEF 1 hexadecimal digits are: ’0’, ’1’, ’2’, ’3’, ’4’, ’5’, ’6’, ’7’, ’8’, ’9’, ’A’, ’B’, ’C’, ’D’,
’E’, ’F’, ’a’, ’b’, ’c’, ’d’, ’e’, ’f’
DEF 2 a CGI-hexadecimal is a sequence of three characters: ’%xy’, where
x and y are hexadecimal digits
DEF 3 a CGI item is either an alphanumeric character, or the character ’+’, or a
CGI-hexadecimal
[The variable and operation definitions were garbled in extraction; from later
references, VAR 1 is the input string encoded, VAR 2 is the output string
decoded, and OP 1 is the scan of the input string encoded.]
Note the distinction between a variable and a definition. Encoded and de-
coded are actually used or computed, while hexadecimal digits, CGI-hexadecimal,
and CGI item are used to describe the elements but are not objects in their
own right. Although not strictly necessary for the problem specification, ex-
plicit identification of definitions can help in deriving a richer set of test cases.
The description of cgi decode indicates some conditions that must be satisfied
upon invocation, represented by the following preconditions:

PRE 1 (assumed) the input string encoded is a null-terminated string of characters.
PRE 2 (validated) the input string encoded is a sequence of CGI items.
STEP 2 Derive a first set of test case specifications from preconditions, post-
conditions and definitions The aim of this step is to explicitly describe the
partition of the input domain:
TC-PRE2-1: encoded is a sequence of CGI items
TC-PRE2-2: encoded is not a sequence of CGI items

postconditions: all postconditions in the cgi decode specification are given
in a conditional form with a simple condition. Thus, we generate two
test case specifications for each of them. The generated test case specifications
correspond to a case that satisfies the condition and a case that violates it.

POST 1:
TC-POST1-1: encoded contains one or more alphanumeric characters
TC-POST1-2: encoded does not contain any alphanumeric characters
POST 2:
TC-POST2-1: encoded contains one or more occurrences of character ’+’
TC-POST2-2: encoded does not contain any occurrence of character ’+’
POST 3:
TC-POST3-1: encoded contains one or more CGI-hexadecimals
TC-POST3-2: encoded does not contain any CGI-hexadecimal
POST 4: we do not generate any new useful test case specifications, because
the two specifications are already covered by the specifications generated
from POST 2.
POST 5: we generate only the test case specification that satisfies the
condition; the test case specification that violates the condition is
redundant with respect to the test case specifications generated from POST 3.
TC-POST5-1: encoded contains one or more malformed CGI-hexadecimals
POST 6: as for POST 5, we generate only the test case specification that
satisfies the condition; the test case specification that violates it is
redundant with respect to most of the test case specifications generated so far.
TC-POST6-1: encoded contains one or more illegal characters

definitions: none of the definitions in the specification of cgi decode is given
in conditional terms, and thus no test case specifications are generated
at this step.
The test case specifications generated from postconditions refine test case
specification TC-PRE2-1, which can thus be eliminated from the checklist.
The result of step 2 for cgi decode is summarized in Table 13.9.
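Turning some of these specifications into concrete test cases might look as follows, assuming a hypothetical Java port CgiDecoder.decode of the cgi decode function that returns the decoded string and signals malformed input with an exception:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    public class CgiDecodeStep2Test {
        @Test void tcPost1_1_alphanumericsCopied() {          // TC-POST1-1
            assertEquals("abc123", CgiDecoder.decode("abc123"));
        }
        @Test void tcPost2_1_plusBecomesSpace() {             // TC-POST2-1
            assertEquals("a b", CgiDecoder.decode("a+b"));
        }
        @Test void tcPost3_1_cgiHexadecimalTranslated() {     // TC-POST3-1
            assertEquals("a b", CgiDecoder.decode("a%20b"));  // %20 encodes a space
        }
        @Test void tcPost5_1_malformedHexSignalsError() {     // TC-POST5-1
            assertThrows(IllegalArgumentException.class,
                         () -> CgiDecoder.decode("a%2x"));    // malformed CGI-hexadecimal
        }
    }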
STEP 3 Complete the test case specifications using catalogs The aim of this
step is to generate additional test case specifications from variables and op-
erations used or defined in the computation. The catalog is scanned sequen-
tially. For each entry of the catalog we examine the elementary components
of the specification and we add test case specifications as required by the cat-
alog. As when scanning the test case specifications during step 2, redundant
test case specifications are eliminated.
Table 13.10 shows a simple catalog that we will use for the cgi decoder ex-
ample. A catalog is structured as a list of kinds of elements that can occur in
a specification. Each catalog entry is associated with a list of generic test case
specifications appropriate for that kind of element. We scan the specification
for elements whose type is compatible with the catalog entry, then generate
the test cases defined in the catalog for that entry. For example, the catalog of
Table 13.10 contains an entry for boolean variables. When we find a boolean
variable in the specification, we instantiate the catalog entry by generating
two test case specifications, one that requires a True value and one that re-
quires a False value.
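A test catalog lends itself to a machine-readable form. The following sketch (types and names are illustrative) encodes two entries of Table 13.10 and instantiates the generic cases applicable to a given specification element:

    import java.util.*;

    public class Catalog {
        enum Use { IN, OUT, IN_OUT }
        record GenericCase(Use use, String description) {}

        // Two catalog entries, mirroring Table 13.10.
        static final Map<String, List<GenericCase>> CATALOG = Map.of(
            "boolean", List.of(
                new GenericCase(Use.IN_OUT, "True"),
                new GenericCase(Use.IN_OUT, "False")),
            "enumeration", List.of(
                new GenericCase(Use.IN_OUT, "each enumerated value"),
                new GenericCase(Use.IN, "some value outside the enumerated set")));

        // Instantiate the generic cases that apply to a given element, keeping
        // input-only cases for input elements and output-only for outputs.
        static List<String> instantiate(String kind, String element, boolean isInput) {
            List<String> specs = new ArrayList<>();
            for (GenericCase g : CATALOG.getOrDefault(kind, List.of()))
                if (g.use() == Use.IN_OUT || (g.use() == Use.IN) == isInput)
                    specs.add(element + ": " + g.description());
            return specs;
        }

        public static void main(String[] args) {
            instantiate("enumeration", "CGI item", true).forEach(System.out::println);
        }
    }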
Each generic test case in the catalog is labeled in, out, or in/out, meaning
that a test case specification is appropriate if applied to either an input vari-
able, or to an output variable, or in both cases. In general, erroneous values
should be used when testing the behavior of the system with respect to input
variables, but are usually impossible to produce when testing the behavior of
the system with respect to output variables. For example, when the value of
an input variable can be chosen from a set of values, it is important to test
the behavior of the system for all enumerated values and some values out-
side the enumerated set, as required by entry ENUMERATION of the catalog.
Table 13.9: Test case specifications for cgi decode generated after step 2.
[The table body, which lists the specifications above together with the
relevant definitions (DEF 2, DEF 3) and the scan operation (OP 1), was
garbled in extraction.]
Table 13.10: A simple test catalog.

Boolean
[in/out] True
[in/out] False

Enumeration
[in/out] Each enumerated value
[in] Some value outside the enumerated set

Range L ... U
[in] L − 1 (the element immediately preceding the lower bound)
[in/out] L (the lower bound)
[in/out] A value between L and U
[in/out] U (the upper bound)
[in] U + 1 (the element immediately following the upper bound)

Numeric Constant C
[in/out] C (the constant value)
[in] C − 1 (the element immediately preceding the constant value)
[in] C + 1 (the element immediately following the constant value)
[in] Any other constant compatible with C

Non-Numeric Constant C
[in/out] C (the constant value)
[in] Any other constant compatible with C
[in] Some other compatible value

Sequence
[in/out] Empty
[in/out] A single element
[in/out] More than one element
[in/out] Maximum length (if bounded) or very long
[in] Longer than maximum length (if bounded)
[in] Incorrectly terminated

Scan with action on elements P
[in] P occurs at beginning of sequence
[in] P occurs in interior of sequence
[in] P occurs at end of sequence
[in] PP occurs contiguously
[in] P does not occur in sequence
[in] pP, where p is a proper prefix of P
[in] Proper prefix p occurs at end of sequence
However, when the value of an output variable belongs to a finite set of values,
we should derive a test case for each possible outcome, but we cannot derive
a test case for an impossible outcome, so entry ENUMERATION of the cata-
log specifies that the choice of values outside the enumerated set is limited
to input variables. Intermediate variables, if present, are treated like output
variables.
Entry Boolean of the catalog applies to the boolean element of the specification
(VAR 3). The catalog requires a test case that produces the value True and one
that produces the value False. Both cases are already covered by test cases
TC-PRE2-1 and TC-PRE2-2 generated for precondition PRE 2, so no test case
specification is actually added.
Entry Enumeration of the catalog applies to any variable whose values are
chosen from an explicitly enumerated set of values. In the example, the values
of CGI item (DEF 3) and of malformed CGI-hexadecimal (in POST 5) are defined by
enumeration. Thus, we can derive new test case specifications by applying
entry Enumeration to POST 5 and to any variable that can contain CGI items.
The catalog requires creation of a test case specification for each enumerated
value and for some excluded values. For encoded, which uses DEF 3, we
generate a test case specification where a CGI item is an alphanumeric character,
one where it is the character ’+’, one where it is a CGI-hexadecimal,
and some where it is an illegal value. We can easily ascertain that all the
required cases are already covered by test case specifications TC-POST1-1,
TC-POST1-2, TC-POST2-1, TC-POST2-2, TC-POST3-1, and TC-POST3-2, so
any additional test case specifications would be redundant.
From the enumeration of malformed CGI-hexadecimals in POST 5, we de-
rive the following test cases: %y, %x, %ky, %xk, %xy (where x and y are hex-
adecimal digits and k is not). Note that the first two cases, %x (the second
hexadecimal digit is missing) and %y (the first hexadecimal digit is missing)
are identical, and %x is distinct from %xk only if %x are the last two characters
in the string. A test case specification requiring a correct pair of hexadecimal
digits (%xy) is a value out of the range of the enumerated set, as required by
the catalog.
The added test case specifications are:
TC-POST5-2: encoded terminated with %x, where x is a hexadecimal digit
TC-POST5-3: encoded contains %ky, where k is not a hexadecimal digit and y
is a hexadecimal digit
TC-POST5-4: encoded contains %xk, where x is a hexadecimal digit and k is not
Entry Range applies to any variable whose values are chosen from a finite
range. In the example, ranges appear three times in the definition of hexadec-
imal digit. Ranges also appear implicitly in the reference to alphanumeric
characters (the alphabetic and numeric ranges from the ASCII character set)
in DEF 3. For hexadecimal digits we will try the special values ’/’ and ’:’ (the
characters that appear before ’0’ and after ’9’ in the ASCII encoding), the
values ’0’ and ’9’ (the lower and upper bounds of the first interval), some value
between ’0’ and ’9’; similarly ’@’, ’G’, ’A’, ’F’, and some value between ’A’ and
’F’ for the second interval; and ’`’, ’g’, ’a’, ’f’, and some value between ’a’ and ’f’
for the third interval.
These values will be instantiated for variable encoded, and result in 30
additional test case specifications (5 values for each subrange, giving 15 values
for each hexadecimal digit and thus 30 for the two digits of a CGI-hexadecimal).
The full set of test case specifications is shown in Table 13.11. These test case
specifications are more specific than (and therefore replace) test case
specifications TC-POST3-1, TC-POST5-3, and TC-POST5-4.
For alphanumeric characters we will similarly derive boundary, interior,
and excluded values, which result in 15 additional test case specifications,
also given in Table 13.11. These test cases are more specific than (and therefore
replace) TC-POST1-1, TC-POST1-2, and TC-POST6-1.
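The boundary values for the three intervals can be generated mechanically; a sketch:

    public class RangeBoundaries {
        // For an interval [lo, hi], the catalog's Range entry yields five
        // values: just below the lower bound, the lower bound, an interior
        // value, the upper bound, and just above the upper bound.
        static char[] boundaryValues(char lo, char hi) {
            return new char[] { (char) (lo - 1), lo,
                                (char) ((lo + hi) / 2), hi, (char) (hi + 1) };
        }

        public static void main(String[] args) {
            char[][] intervals = { {'0', '9'}, {'A', 'F'}, {'a', 'f'} };
            for (char[] iv : intervals)            // 5 values per interval,
                System.out.println(new String(boundaryValues(iv[0], iv[1])));
        }                                          // 15 per digit, 30 for two digits
    }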
Entry Numeric Constant does not apply to any element of this specifica-
tion.
Entry Non-Numeric Constant applies to ’+’ and ’%’, occurring in DEF 3 and
DEF 2 respectively. Six test case specifications result, but all are redundant.
Entry Sequence applies to encoded (VAR 1), decoded (VAR 2), and CGI-hexadecimal
(DEF 2). Six test case specifications result for each, of which only five are
mutually non-redundant and not already in the list. From VAR 1 (encoded) we
generate test case specifications requiring an empty sequence, a sequence
containing a single element, and a very long sequence. The catalog entry re-
quiring more than one element generates a redundant test case specification,
which is discarded. We cannot produce reasonable test cases for incorrectly
terminated strings (the behavior would vary depending on the contents of
memory outside the string), so we omit that test case specification.
All test case specifications that would be derived for decoded (VAR 2) would
be redundant with respect to test case specifications derived for encoded (VAR 1).
From CGI-hexadecimal (DEF 2) we generate two additional test case
specifications for variable encoded: a sequence that terminates with ’%’ (the only
way to produce a one-character subsequence beginning with ’%’) and a
sequence containing ’%xyz’, where x, y, and z are hexadecimal digits.
Entry Scan applies to the scan of encoded (OP 1) and generates 17 test case
specifications. Three test case specifications (alphanumeric, ’+’, and
CGI-hexadecimal) are
generated for each of the first 5 items of the catalog entry. One test case spec-
ification is generated for each of the last two items of the catalog entry when
Scan is applied to CGI item. The last two items of the catalog entry do not
apply to alphanumeric characters and ’+’, since they have no non-trivial pre-
fixes. Seven of the 17 are redundant. The ten generated test case specifica-
tions are summarized in Table 13.11.
Test catalogs, like other check-lists used in test and analysis (e.g., inspec-
tion check-lists), are an organizational asset that can be maintained and en-
hanced over time. A good test catalog will be written precisely and suitably
annotated to resolve ambiguity (unlike the sample catalog used in this chap-
ter). Catalogs should also be specialized to an organization and application
domain, typically using a process such as defect causal analysis or root cause
analysis. Entries are added to detect particular classes of faults that have been
encountered frequently or have been particularly costly to remedy in previ-
ous projects. Refining check-lists is a typical activity carried out as part of
process improvement. When a test reveals a program fault, it is useful to
make a note of which catalog entries the test case originated from, as an aid
to measuring the effectiveness of catalog entries. Catalog entries that are not
effective should be removed.
where x and y are hexadecimal digits, a is an alphanumeric character, ^ represents
the beginning of the string, and $ represents the end of the string.
Table 13.11: Summary table: test case specifications for cgi decode generated
with a catalog. [The table body was garbled in extraction; only the legend
above survives.]
[Figure 13.10: the finite state machine describing the maintenance procedure.
Recoverable states: 0 No Maintenance; 1 Wait for returning; 2 Maintenance
(no warranty); 3 Wait for pick up; 4 Wait for acceptance; 5 Repair
(maintenance station); 6 Repaired; 7 Wait for component; 8 Repair (regional
headquarters); 9 Repair (main headquarters). Recoverable transition labels
include: request at maintenance station (contract number / no warranty),
estimate costs, accept or reject estimate, pick up, repair completed,
successfully repaired, unable to repair, (US or UE resident), lack of
component (a/b/c), and component arrives (a/b/c).]
Table 13.12: T-Cover, a set of paths covering the transitions of the finite
state machine of Figure 13.10.
TC-1: 0–2–4–1–0
TC-2: 0–5–2–4–5–6–0
TC-3: 0–3–5–9–6–0
TC-4: 0–3–5–7–5–8–7–8–9–7–9–6–0
Some state machine specifications have message contents or data attributes that make them not truly finite-state. A state machine that simply receives a message
on one port and then sends the same message on another port is not really
finite-state unless the set of possible messages is finite, but is often rendered
as a finite state machine, ignoring the contents of the exchanged messages.
State-machine specifications can be used both to guide test selection and
in construction of an oracle that judges whether each observed behavior is
correct. There are many approaches for generating test cases from finite state
machines, but most are variations on a basic strategy of checking each state
transition. One way to understand this basic strategy is to consider that each
transition is essentially a specification of a precondition and postcondition,
e.g., a transition from state S to state T on stimulus i means “if the system
is in state S and receives stimulus i, then after reacting it will be in state T.”
For instance, the transition labeled accept estimate from state Wait for accep-
tance to state Repair (maintenance station) of Figure 13.10 indicates that if an
item is on hold waiting for the customer to accept an estimate of repair costs,
and the customer accepts the estimate, then the maintenance station begins
repairing the item.
A faulty system could violate any of these precondition-postcondition
pairs, so each should be tested. For instance, the state Repair (maintenance
station) can be reached through three different transitions, and each should
be checked.
Details of the approach taken depend on several factors, including whether
system states are directly observable or must be inferred from stimulus/response
sequences, whether the state machine specification is complete as given or
includes additional, implicit transitions, and whether the size of the (possibly
augmented) state machine is modest or very large.
A basic criterion for generating test cases from finite state machines is
transition coverage, which requires each transition to be traversed at least
once. Test case specifications for transition coverage are often given as sets of
state sequences or transition sequences. For example, T-Cover in Table 13.12
is a set of four paths, each beginning at the initial state, which together cover
all transitions of the finite state machine of Figure 13.10. T-Cover thus satisfies
the transition coverage criterion.
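Checking that a set of paths such as T-Cover satisfies transition coverage is mechanical. A sketch, with the machine reduced to unlabeled (from, to) edges and a small hypothetical machine standing in for Figure 13.10:

    import java.util.*;

    public class TransitionCoverage {
        record Edge(int from, int to) {}

        // True iff the paths traverse every transition of the machine at
        // least once; rejects paths that use nonexistent transitions.
        static boolean covers(Set<Edge> machine, List<int[]> paths) {
            Set<Edge> covered = new HashSet<>();
            for (int[] path : paths)
                for (int i = 0; i + 1 < path.length; i++) {
                    Edge e = new Edge(path[i], path[i + 1]);
                    if (!machine.contains(e))
                        throw new IllegalArgumentException("not a transition: " + e);
                    covered.add(e);
                }
            return covered.equals(machine);
        }

        public static void main(String[] args) {
            // Hypothetical three-state machine; a real check would list all
            // transitions of Figure 13.10 and the four T-Cover paths.
            Set<Edge> fsm = Set.of(new Edge(0, 1), new Edge(1, 2),
                                   new Edge(2, 0), new Edge(1, 0));
            List<int[]> paths = List.of(new int[] {0, 1, 2, 0}, new int[] {0, 1, 0});
            System.out.println(covers(fsm, paths));   // true: every edge used
        }
    }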
The transition coverage criterion depends on the assumption that the finite
state machine specification is a sufficiently complete model of the behavior to
be tested. Stronger criteria exercise sequences of transitions: for example, a
simple path coverage criterion requires each path that traverses transitions at
most once to be exercised, and the boundary interior loop coverage criterion
requires each distinct loop to be exercised zero times, once, and more than
once.11 Such criteria may be practical for
very small and simple finite-state machine specifications, but since the number
of even simple paths (without repeating states) can grow exponentially
with the number of states, they are often impractical.
Specifications given as finite-state machines are typically incomplete, i.e.,
they do not include a transition for every possible (state, stimulus) pair. Often
the missing transitions are implicitly error cases. Depending on the system,
the appropriate interpretation may be that these are don’t care transitions
(since no transition is specified, the system may do anything or nothing), self
transitions (since no transition is specified, the system should remain in the
same state), or (most commonly) error transitions that enter a distinguished
state and possibly trigger some error handling procedure. In at least the latter
two cases, thorough testing includes the implicit as well as the explicit state
transitions. No special techniques are required; the implicit transitions are
simply added to the representation before test cases are selected.
The presence of implicit transitions with a don’t care interpretation is typ-
ically an implicit or explicit statement that those transitions are impossible,
e.g., because of physical constraints. For example, in the specification of the
maintenance procedure of Figure 13.10, the effect of event lack of compo-
nent is specified only for the states that represent repairs in progress. Some-
times it is possible to test such sequences anyway, because the system does
not prevent such events from occurring. Where possible, it may be best to
treat don’t care transitions as self transitions (allowing for the possibility of
imperfect translation from physical to logical events, or of future physical
layers that do not enforce the same constraints).
11 The boundary interior path coverage criterion was originally proposed for structural coverage of program control flow.
Advanced search: The Advanced search function allows for searching elements in the
website database.
Figure 13.13: The XML Schema that describes a Product configuration of the
Chipmunk website. [The schema text was garbled in extraction.]
Figure 13.15: The derivation tree of a test case for functionality Advanced
Search derived from the BNF specification of Figure 13.12. [The tree, expanding
<search> through <choices> and <regexp> to terminal characters, was garbled
in extraction.]
fifteen required components (production compSeq1 applied repeatedly)
fifteen optional components (production optCompSeq1 applied repeatedly)

weight Model 1
weight compSeq1 10
weight compSeq2 0
weight optCompSeq1 10
weight optCompSeq2 0
weight Comp 1
weight OptComp 1
weight modNum 1
weight CompTyp 1
weight CompVal 1
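One plausible reading of such weights, in which a production's weight bounds how many times it may be applied during a derivation, can be sketched as follows (grammar and semantics are simplified; the real generator's rules may differ):

    public class WeightedDerivation {
        // Toy grammar:  Model ::= modNum compSeq
        //               compSeq ::= Comp compSeq   (compSeq1)
        //                         | <empty>        (compSeq2)
        public static void main(String[] args) {
            int compSeq1Weight = 10;                 // from the weight list above
            StringBuilder derivation = new StringBuilder("modNum");
            // Apply compSeq1 while its weight allows, then close the sequence
            // with compSeq2 (weight 0: used only when nothing else applies).
            while (compSeq1Weight-- > 0)
                derivation.append(" Comp");
            System.out.println(derivation);          // modNum Comp Comp ... Comp
        }
    }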
Tools Some techniques may require the use of tools, whose availability and
cost should be taken into account when choosing a specific testing technique.
For example, several tools are available for deriving test cases from SDL spec-
ifications. The availability of one of these tools may suggest the use of SDL for
capturing a subset of the requirements expressed in the specification.
Costs must also be taken into account: the test designer should evaluate all
cost-related aspects. For example, the generation of a large number of random
tests may require the design of sophisticated oracles, which may raise the costs
of testing over an acceptable threshold; the cost of a specific tool and the
related training may outweigh the advantages of adopting a specific approach,
even if the nature and the form of the specification suggest the suitability of
that approach.
Many engineering activities require carefully trading off different aspects.
Functional testing is not an exception: successfully balancing the many as-
pects is a difficult and often underestimated problem that requires highly
skilled designers. Functional testing is not an exercise of choosing the opti-
mal approach, but a complex set of activities for finding a suitable combina-
tion of models and techniques that can lead to a set of test cases that satisfy
cost and quality constraints. This balancing extends beyond test design to
software design for test. Appropriate design not only improves the software
development process, but can greatly facilitate the job of test designers, and
thus lead to substantial savings.
Too often test designers make the same mistake as non-expert program-
mers: they start generating test cases, as the programmer starts generating
code, without prior analysis of the problem domain. Expert test designers
carefully examine the available specifications, their form, and the domain
and company constraints, identifying a suitable framework for designing test
case specifications before even starting to consider the problem of test case
generation.
Another hot research area is fed by the increasing interest in different spec-
ification and design paradigms. New software development paradigms, such
as object-oriented development, as well as techniques for addressing increas-
ingly important topics, such as software architectures and design patterns,
are often based on new notations. Semi-formal and diagrammatic notations
offer several opportunities for systematically generating test cases. Research
is active in investigating possibilities for (semi-)automatically deriving test
cases from these new forms of specification and in studying the effectiveness
of existing test case generation techniques12 .
Most functional testing techniques do not satisfactorily address the prob-
lem of testing increasingly large artifacts. Existing functional testing tech-
niques do not take advantage of test cases available for parts of the artifact
under test. Deriving test cases for a system compositionally, taking advantage
of test cases available for its subsystems, is an important open research
problem.
Further Reading
Functional testing techniques, sometimes called “black-box testing” or “specification-
based testing,” are presented and discussed by several authors. Ntafos [DN81]
makes the case for random rather than systematic testing; Frankl, Hamlet,
Littlewood, and Strigini [FHLS98] is a good starting point into the more recent
literature on the relative merits of systematic and statistical approaches.
12 Problems and state-of-the-art techniques for testing object-oriented software and software
architectures are addressed in later chapters.
Category partition testing is described by Ostrand and Balcer [OB88]. The
combinatorial approach described in this chapter is due to Cohen, Dalal,
Fredman, and Patton [CDFP97]; the algorithm described by Cohen et al. is
patented by Bellcore. Myers’ classic text [Mye79] describes a number of tech-
niques for testing decision structures. Richardson, O’Malley, and Tittle [ROT89]
and Stocks and Carrington [SC96] are among more recent attempts to gener-
ate test cases based on the structure of (formal) specifications. Beizer’s Black
Box Testing [Bei95] is a popular presentation of techniques for testing based
on control and data flow structure of (informal) specifications.
Catalog-based testing of subsystems is described in depth by Marick’s The
Craft of Software Testing [Mar97].
Test design based on finite state machines has been important in the do-
main of communication protocol development and conformance testing; Fu-
jiwara, von Bochmann, Amalou, and Ghedamsi [FvBK 91] is a good introduc-
tion. Gargantini and Heitmeyer [GH99] describe a related approach applica-
ble to software systems in which the finite-state machine is not explicit but
can be derived from a requirements specification.
Test generation from context-free grammars is described by Celentano et
al. [CCD 80] and apparently goes back at least to Hanford’s test generator
for an IBM PL/I compiler [Han70]. The probabilistic approach to grammar-
based testing is described by Sirer and Bershad [SB99], who use annotated
grammars to systematically generate tests for Java virtual machine imple-
mentations.
Related Topics
Readers interested in the complementarities between functional and struc-
tural testing, as well as readers interested in testing decision structures and
control and data flow graphs, may continue with the next chapters, which de-
scribe structural and data flow testing. Readers interested in finite state ma-
chine based testing may go to Chapters 17 and ??, which discuss testing of
object-oriented and distributed systems, respectively. Readers interested in
the quality of specifications may go to Chapters 25 and ??, which describe in-
spection techniques and methods for testing and analysis of specifications,
respectively. Readers interested in other aspects of functional testing may
move to Chapters 16 and ??, which discuss techniques for testing complex
data structures and GUIs, respectively.
Exercises
Ex13.1. In the “Extreme Programming” (XP) methodology [?], a written descrip-
tion of a desired feature may be a single sentence, and the first step in design-
ing the implementation of that feature is designing and implementing a set
of test cases. Does this aspect of the XP methodology contradict our assertion
that test cases are a formalization of specifications?
Ex13.2. Compute the probability of selecting a test case that reveals the fault in-
serted in line 25 of program Root of Figure 13.1 by randomly sampling the
input domain, assuming that type double has range . . . . Compute the prob-
ability of selecting a test case that reveals a fault, assuming that both lines 18
and 25 of program Root contain the same fault, i.e., the missing condition
. . . . Compare the two probabilities.
a key to clear the display;
memory keys: a key pressed before a digit indicates the target memory,
0 . . . 9; keys pressed after that key and a digit indicate the operation to
be performed on the target memory: add display to memory, store display
in memory, restore memory, i.e., move the value in memory to the display,
and clear memory.
Example: a key sequence that stores the value 15 in memory cell 3 and
then retrieves it to compute 80 - 15 prints 65.
Ex13.5. Given a set of parameter characteristics (categories) and value classes (choices)
obtained by applying the category partition method to an informal specifi-
cation, explain, either by deduction or with examples, why unrestricted use
of the property and if-property constraints makes it difficult to compute the
number of derivable combinations of value classes.
Write heuristics to compute a reasonable upper bound on the number of
derivable combinations of value classes when such constraints can be used
without limits.
Ex13.8. Derive test specifications using the category partition method for the fol-
lowing Airport connection check function:
Airport Database
The Valid Connection function uses an internal, in-memory table
of airports which is read from a configuration file at system initial-
ization. Each record in the table contains the following informa-
tion:
Three-letter airport code. This is the key of the table and can be
used for lookups.
Airport zone. In most cases the airport zone is a two-letter coun-
try code, e.g., “us” for the United States. However, where passage
from one country to another is possible without a passport, the
airport zone represents the complete zone in which passport-
free travel is allowed. For example, the code “eu” represents the
European countries which are treated as if they were a single
country for purposes of travel.
Domestic connect time. This is an integer representing the min-
imum number of minutes that must be allowed for a domestic
connection at the airport. A connection is “domestic” if the orig-
inating and destination airports of both flights are in the same
airport zone.
International connect time. This is an integer representing the
minimum number of minutes that must be allowed for an in-
ternational connection at the airport. The number -1 indicates
that international connections are not permitted at the airport.
A connection is “international” if any of the originating or des-
tination airports are in different zones.
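To make the table structure concrete before deriving categories, the following is a hedged Java sketch of one record and of the domestic/international distinction the specification implies; field and method names are assumptions for illustration, not part of the exercise.

    /** Illustrative model of one record in the airport table of Exercise
     *  Ex13.8. All names are assumptions, chosen only for this sketch. */
    public class Airport {
        final String code;                  // three-letter airport code, table key
        final String zone;                  // e.g., "us", or "eu" for a passport-free zone
        final int domesticConnectTime;      // minimum minutes for a domestic connection
        final int internationalConnectTime; // minimum minutes; -1 = not permitted

        Airport(String code, String zone, int domestic, int international) {
            this.code = code;
            this.zone = zone;
            this.domesticConnectTime = domestic;
            this.internationalConnectTime = international;
        }

        /** A connection is domestic iff the originating and destination
         *  airports of both flights share one zone; otherwise international.
         *  Usage: isDomestic(orig1, dest1, orig2, dest2). */
        static boolean isDomestic(Airport... endpoints) {
            String zone = endpoints[0].zone;
            for (Airport a : endpoints)
                if (!a.zone.equals(zone)) return false;
            return true;
        }
    }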
Ex13.9. Derive test specifications using the category partition method for the func-
tion SUM of Excel from the following description taken from the Excel
manual:
Examples
SUM(3, 2) equals 5
SUM(“3”, 2, TRUE) equals 6 because the text values are translated
into numbers, and the logical value TRUE is translated into the
number 1.
Unlike the previous example, if A1 contains “3” and B1 contains
TRUE, then:
SUM(A1, B1, 2) equals 2 because references to nonnumeric values
in references are not translated.
If cells A2:E2 contain 5, 15, 30, 40, and 50:
SUM(A2:C2) equals 50
SUM(B2:E2, 15) equals 150
Ex13.10. Eliminate from the test specifications of the feature check configuration
given in Table 13.1 all constraints that do not correspond to infeasible tuples,
but have been added for the sake of reducing the number of test cases.
Compute the number of test cases corresponding to the new specifications.
Apply the combinatorial approach to derive test cases covering all pairwise
combinations.
Compute the number of derived test cases.
Ex13.11. Consider the value classes obtained by applying the category partition
approach to the Airport Connection Check example of Exercise Ex13.8. Elim-
inate from the test specifications all constraints that do not correspond to
infeasible tuples and compute the number of derivable test cases. Apply the
combinatorial approach to derive test cases covering all pairwise combina-
tions, and compare the number of derived test cases.
Ex13.12. Given a set of parameter characteristics and value classes, write a heuris-
tic algorithm that selects a small set of tuples that cover all possible pairs of
the value classes using the combinatorial approach. Assume that parameter
characteristics and value classes are given without constraints.
Ex13.14. Generate a set of tuples that cover all triples of language, screen-size, and
font and all pairs of other parameters for the specification given in Table
13.3.
Ex13.15. Consider the following columns, which correspond to the educational
and individual accounts of the feature pricing of Figure 13.4:

              Education      Individual
  Edu.        T   T    F   F   F   F   F   F
  CP > CT1    -   -    F   F   T   T   -   -
  CP > CT2    -   -    -   -   F   F   T   T
  SP > Sc     F   T    F   T   -   -   -   -
  SP > T1     -   -    -   -   F   T   -   -
  SP > T2     -   -    -   -   -   -   F   T
  Out         Edu SP   ND  SP  T1  SP  T2  SP

Write a set of boolean expressions for the outputs and apply the modified
condition/decision adequacy criterion (MC/DC) presented in Chapter 14
to derive a set of test cases for the derived boolean expressions. Compare the
result with the test case specifications given in Figure 13.6.
Ex13.16. Derive a set of test cases for the Airport Connection Check example of
Exercise Ex13.8 using the catalog-based approach.
Extend the catalog of Table 13.10 as needed to deal with the constructs used
in the specification.
Ex13.17. Derive sets of test cases for functionality Maintenance by applying Transi-
tion Coverage, Single State Path Coverage, Single Transition Path Coverage,
and Boundary Interior Loop Coverage to the FSM specification of Figure
13.9.
Ex13.18. Derive test cases for functionality Maintenance by applying Transition Cov-
erage to the FSM specification of Figure 13.9, assuming that implicit transi-
tions are (1) error conditions or (2) self transitions.