
Testing

Introduction
• The aim of program testing is to help in identifying all defects in a
program.
• However, in practice, even after satisfactory completion of the
testing phase, it is not possible to guarantee that a program is
error free.
Basic Concepts and Terminologies
• How to test a program?
• Testing a program involves executing the program with a set of test inputs
and observing if the program behaves as expected.
• If the program fails to behave as expected, then the input data and the
conditions under which it fails are noted for later debugging and error
correction.
• However, unless the conditions under which the software fails are
noted down, it becomes difficult for the developers to reproduce a
failure observed by the testers.
• For example, a software might fail for a test case only when a network
connection is enabled. Unless this condition is documented in the failure
report, it becomes difficult to reproduce the failure.
Terminologies

• A mistake is essentially any programmer action that later shows up
as an incorrect result during program execution.
• A programmer may commit a mistake in almost any of the
development activities.
• For example, during coding a programmer might commit the
mistake of not initializing a certain variable.
• An error is the result of a mistake committed by a developer in any
of the development activities. Mistakes can give rise to an
extremely large variety of errors.
• One example error is a call made to a wrong function.
• The terms error, fault, bug, and defect are used interchangeably by
the program testing community.
• A failure of a program essentially denotes an incorrect behaviour
exhibited by the program during its execution.
• An incorrect behaviour is observed either as production of an
incorrect result or as an inappropriate activity carried out by the
program.
• Every failure is caused by one or more bugs present in the
program.
• For example, a failure occurs when the result computed by a program
is 0, whereas the correct result is 10.
A test case is a triplet [I, S, R], where I is the data input to the
program under test, S is the state of the program at which the data
is to be input, and R is the result expected to be produced by the
program.
The state of a program is also called its execution mode.
As an example, consider the different execution modes of a certain
text editor software. The text editor can at any time during its
execution assume any of the following execution modes—edit,
view, create, and display.
A test scenario is an abstract test case in the sense that it only
identifies the aspects of the program that are to be tested without
identifying the input, state, or output.
A test case can be said to be an implementation of a test scenario.
For example, a test scenario can be the traversal of a path in the
control flow graph of the program.
In the test case, the input, the output, and the state at which the
input would be applied are designed such that the scenario can be
executed.
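
• As a small illustration, a test case may be recorded as a simple
data structure. The sketch below is hypothetical (the field values are
invented for the text editor example) and only makes the [I, S, R]
triplet concrete:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A test case as the triplet [I, S, R]."""
    input_data: str       # I: the data input to the program under test
    state: str            # S: the execution mode at which the input is applied
    expected_result: str  # R: the result expected from the program

# Hypothetical test case: typing "abc" while the text editor is in
# 'edit' mode should leave "abc" in the buffer.
tc1 = TestCase(input_data="abc", state="edit", expected_result="abc")
```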
• A test script is an encoding of a test case as a short program.
• Test scripts are developed for automated execution of the test
cases.
• A test case is said to be a positive test case if it is designed to
test whether the software correctly performs a required
functionality. A test case is said to be a negative test case if it is
designed to test whether the software carries out something that
is not required of the system.
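
• A minimal sketch of a test script using Python's unittest framework
is shown below. The function under test, compute_square_root, is
hypothetical; the first test is a positive test case and the second is
a negative test case:

```python
import math
import unittest

def compute_square_root(x: float) -> float:
    """Hypothetical unit under test."""
    if x < 0:
        raise ValueError("input must be non-negative")
    return math.sqrt(x)

class SquareRootTests(unittest.TestCase):
    def test_positive_case(self):
        # Positive test case: the software correctly performs a
        # required functionality.
        self.assertAlmostEqual(compute_square_root(25.0), 5.0)

    def test_negative_case(self):
        # Negative test case: the software must not do something that
        # is not required (here, accept a negative input).
        with self.assertRaises(ValueError):
            compute_square_root(-1.0)

if __name__ == "__main__":
    unittest.main()
```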
A test suite is the set of all test cases that have been designed by a
tester to test a given program.
Testability of a program indicates the effort needed to validate the
program.
In other words, the testability of a requirement is the degree of
difficulty to adequately test an implementation to determine its
conformance to its requirements.
A failure mode of a software denotes an observable way in which it
can fail.
In other words, all failures that have similar observable symptoms,
constitute a failure mode.
As an example of the failure modes of a software, consider a railway
ticket booking software that has three failure modes—failing to
book an available seat, incorrect seat booking (e.g., booking an
already booked seat), and system crash.
Verification versus validation
• Verification is the process of determining whether the output of
one phase of software development conforms to that of its
previous phase; whereas validation is the process of determining
whether a fully developed software conforms to its requirements
specification.
• The primary techniques used for verification include review,
simulation, formal verification, and testing. On the other hand,
validation techniques are primarily based on testing.
• Unit and integration testing can be considered as verification
steps where it is verified whether the code is as per the module
and module interface specifications. System testing can be
considered as a validation step where it is determined whether the
fully developed code is as per its requirements specification.
• Verification is carried out during the development process to
check if the development activities are proceeding alright,
whereas validation is carried out to check whether the right
software, as required by the customer, has been developed.
• Verification techniques can be viewed as an attempt to achieve
phase containment of errors, whereas the aim of validation is to
check whether the deliverable software is error free.
Testing Activities

• Testing involves performing the following major activities:
• Test suite design: The test suite is designed possibly using
several test case design techniques.
• Running test cases and checking the results to detect failures:
Each test case is run and the results are compared with the
expected results.
• A mismatch between the actual result and expected results
indicates a failure. The test cases for which the system fails are
noted down for later debugging.
• Locate error: In this activity, the failure symptoms are analysed to
locate the errors. For each failure observed during the previous
activity, the statements that are in error are identified.
• Error correction: After the error is located during debugging, the
code is appropriately changed to correct the error.
• A typical testing process in terms of the activities that are carried
out has been shown schematically in Figure 10.2.
• As can be seen, the test cases are first designed. Subsequently,
the test cases are run to detect failures. The bugs causing the
failure are identified through debugging, and the identified error is
corrected. Of all the above-mentioned testing activities, debugging
often turns out to be the most time-consuming activity.
UNIT TESTING

• Unit testing is undertaken after coding of a module is complete,
all syntax errors have been removed, and the code has been
reviewed.
• This activity is typically undertaken by the coder of the module
himself in the coding phase. Before carrying out unit testing, the
unit test cases have to be designed and the test environment for
the unit under test has to be developed.
Driver and stub modules

• In order to test a single module, we need a complete environment
to provide all relevant code that is necessary for execution of the
module.
• That is, besides the module under test, the following are needed
to test the module:
• The procedures belonging to other modules that the module
under test calls.
• Non-local data structures that the module accesses.
• A procedure to call the functions of the module under test with
appropriate parameters.
• The modules required to provide the necessary environment (which
either call the module under test, are called by it, or provide the
global data it accesses) are usually not available until they too
have been unit tested. In this context, stubs and drivers are
designed to provide the complete environment for a module so that
testing can be carried out.
• The role of stub and driver modules is pictorially shown in Figure
10.3.
• Stub: A stub module consists of several stub procedures that are
called by the module under test.
• A stub procedure is a dummy procedure that takes the same
parameters as the function called by the unit under test but has a
highly simplified behaviour.
• For example, a stub procedure may produce the expected
behaviour using a simple table look up mechanism, rather than
performing actual computations.
• Driver: A driver module contains the non-local data structures
that are accessed by the module under test. Additionally, it should
also have the code to call the different functions of the unit under
test with appropriate parameter values for testing.
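
• The sketch below, with entirely hypothetical function names, shows
how a stub may replace a called procedure with a simple table look-up,
while a driver supplies the calls with appropriate parameter values:

```python
# Unit under test: depends on get_tax_rate(), a procedure belonging
# to another module that has not yet been unit tested.
def compute_net_salary(gross: float, get_tax_rate) -> float:
    return gross * (1.0 - get_tax_rate(gross))

# Stub: a dummy procedure taking the same parameters as the real one,
# producing the expected behaviour by a table look-up rather than by
# performing the actual computation.
def stub_get_tax_rate(gross: float) -> float:
    table = {10000.0: 0.25, 50000.0: 0.5}
    return table.get(gross, 0.25)

# Driver: calls the different functions of the unit under test with
# appropriate parameter values for testing.
def driver():
    assert compute_net_salary(10000.0, stub_get_tax_rate) == 7500.0
    assert compute_net_salary(50000.0, stub_get_tax_rate) == 25000.0
    print("unit test of compute_net_salary passed")

if __name__ == "__main__":
    driver()
```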
BLACK-BOX TESTING

• In black-box testing, test cases are designed from an examination
of the input/output values only, and no knowledge of design or
code is required.
• The following are the two main approaches available to design
black box test cases:
1. Equivalence class partitioning
2. Boundary value analysis
Equivalence Class Partitioning

• In the equivalence class partitioning approach, the domain of
input values to the unit under test is partitioned into a set of
equivalence classes.
• The partitioning is done such that for every input data belonging to
the same equivalence class, the program behaves similarly.
• Equivalence classes for a unit under test can be designed by
examining the input data and output data.
• The following are two general guidelines for designing the
equivalence classes:
• If the input data values to a system can be specified by a range of
values, then one valid and two invalid equivalence classes can be
defined. For example, if the valid equivalence class is the set of
integers in the range 1 to 10 (i.e., [1,10]), then the two invalid
equivalence classes are (−∞,0] and [11,+∞), and the valid
equivalence class is [1,10].
• If the input data assumes values from a set of discrete members
of some domain, then one equivalence class for the valid input
values and another equivalence class for the invalid input values
should be defined. For example, if the valid equivalence class is
{A,B,C}, then the invalid equivalence class is U − {A,B,C}, where
U is the universe of all possible input values.
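
• As a hypothetical illustration of the first (range) guideline, a
unit accepting an integer in the range [1,10] yields one valid and two
invalid equivalence classes, and one representative test value can be
drawn from each:

```python
def equivalence_class(x: int) -> str:
    """Equivalence class of input x for the valid range [1, 10]."""
    if x < 1:
        return "invalid-low"    # class (-inf, 0]
    if x > 10:
        return "invalid-high"   # class [11, +inf)
    return "valid"              # class [1, 10]

# One representative input per equivalence class.
representatives = {-5: "invalid-low", 7: "valid", 15: "invalid-high"}
for value, expected in representatives.items():
    assert equivalence_class(value) == expected
```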
Boundary Value Analysis
• A type of programming error that is frequently committed by
programmers is missing out on the special consideration that
should be given to the values at the boundaries of different
equivalence classes of inputs.
• The reason behind programmers committing such errors might
purely be due to psychological factors.
• Programmers often fail to properly address the special processing
required by the input values that lie at the boundary of the
different equivalence classes.
• For example, a programmer may improperly use < instead of <=, or
conversely <= for <, etc.
• To design boundary value test cases, it is required to examine the
equivalence classes to check if any of the equivalence classes
contains a range of values.
• For those equivalence classes that are not a range of values (i.e.,
consist of a discrete collection of values) no boundary value test
cases can be defined.
• For an equivalence class that is a range of values, the boundary
values need to be included in the test suite.
• For example, if an equivalence class contains the integers in the
range 1 to 10, then the boundary value test suite is {0,1,10,11}.
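
• A small helper that derives the boundary value test suite from a
range-type equivalence class is sketched below; for the class [1,10]
it yields exactly {0, 1, 10, 11}:

```python
def boundary_values(low: int, high: int) -> set:
    """Boundary value test inputs for the integer range [low, high]:
    the two boundaries plus the values just outside them."""
    return {low - 1, low, high, high + 1}

assert boundary_values(1, 10) == {0, 1, 10, 11}
```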
WHITE-BOX TESTING

• White-box testing is an important type of unit testing. A large
number of white-box testing strategies exist.
• Each testing strategy essentially designs test cases based on
analysis of some aspect of source code and is based on some
heuristic.
Basic Concepts
• Fault-based testing
• A fault-based testing strategy targets to detect certain types of
faults. An example of a fault-based strategy is mutation testing.
• Coverage-based testing
• A coverage-based testing strategy attempts to execute (or cover)
certain elements of a program.
• Popular examples of coverage-based testing strategies are
statement coverage, branch coverage, multiple condition
coverage, and path coverage-based testing.
Testing criterion for coverage-based testing

• A coverage-based testing strategy typically targets to execute
(i.e., cover) certain program elements for discovering failures.
• For example, if a testing strategy requires all the statements of a
program to be executed at least once, then we say that the testing
criterion of the strategy is statement coverage.
• We say that a test suite is adequate with respect to a criterion, if
it covers all program elements of the domain defined by that
criterion.
Stronger versus weaker testing

• We can compare two testing strategies by determining whether one
is stronger, weaker, or complementary to the other.
• A white-box testing strategy is said to be stronger than another
strategy, if the stronger testing strategy covers all program
elements covered by the weaker testing strategy, and the stronger
strategy additionally covers at least one program element that is
not covered by the weaker strategy.
• If stronger testing has been performed, then weaker testing need
not be carried out.
Statement Coverage
• Statement coverage is a metric that measures the percentage of
statements of a program that are executed at least once by a test
suite.
• It is obvious that without executing a statement, it is difficult to
determine whether it causes a failure due to illegal memory
access, wrong result computation due to improper arithmetic
operation, etc.
• It must however be pointed out that an important weakness of the
statement coverage strategy is that executing a statement once
and observing that it behaves properly for one input value is no
guarantee that it will behave correctly for all input values.
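
• A minimal hypothetical illustration of this weakness is given
below: the single assertion executes every statement of double() and
passes, yet the function is defective for almost all other inputs:

```python
def double(x: int) -> int:
    # Defect: should be x + x; for the input x == 2 the wrong
    # expression happens to produce the correct result.
    return x * x

# This one-element test suite achieves 100 per cent statement
# coverage and passes, but the defect remains undetected:
assert double(2) == 4
# double(3) returns 9, although the correct result is 6.
```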
Branch Coverage
• Branch coverage is also called decision coverage (DC). It is also
sometimes referred to as all edge coverage.
• A test suite achieves branch coverage, if it makes the decision
expression at each branch in the program assume both true and
false values.
• In other words, for branch coverage each branch in the CFG
representation of the program must be taken at least once, when
the test suite is executed.
• Branch testing is also known as all edge testing, since in this
testing scheme, each edge of a program’s control flow graph is
required to be traversed at least once.
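
• For example, for the hypothetical function below, a two-element
test suite makes the decision expression assume both true and false
values, so each edge out of the decision node is traversed at least
once:

```python
def sign_label(x: int) -> str:
    if x >= 0:                  # the decision must evaluate both ways
        return "non-negative"   # true branch
    else:
        return "negative"       # false branch

# Branch-coverage-adequate test suite: one input per decision outcome.
assert sign_label(4) == "non-negative"   # decision is True
assert sign_label(-4) == "negative"      # decision is False
```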
Condition Coverage
• Condition coverage testing is also known as basic condition
coverage (BCC) testing.
• A test suite is said to achieve basic condition coverage (BCC), if
each basic condition in every conditional expression assumes
both true and false values during testing.
• For example, for the following decision statement: if(A||B && C) …;
condition coverage requires that each of the basic conditions A, B,
and C assumes both true and false values during testing.
• For the given expression, just two test cases can achieve
condition coverage: one test case may assign A = True, B = True,
and C = True, and another may assign A = False, B = False, and
C = False.
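
• This can be checked mechanically. The sketch below records the
truth values that each basic condition takes across the two test cases
and confirms that every condition assumes both True and False:

```python
test_cases = [
    {"A": True,  "B": True,  "C": True},
    {"A": False, "B": False, "C": False},
]

# Basic condition coverage: each basic condition must assume both
# truth values at some point during testing.
for cond in ("A", "B", "C"):
    assert {tc[cond] for tc in test_cases} == {True, False}
print("basic condition coverage achieved with two test cases")
```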
Condition and Decision Coverage

• A test suite is said to achieve condition and decision coverage, if
it achieves condition coverage as well as decision (that is, branch)
coverage.
Multiple Condition Coverage

• Multiple condition coverage (MCC) is achieved, if the test cases
make the component conditions of a composite conditional
expression assume all possible combinations of true and false
values. For example, consider the composite conditional
expression [(c1 and c2) or c3].
• A test suite would achieve MCC, if the component conditions c1,
c2, and c3 are together made to assume all combinations of true
and false values.
• Since three conditions have 2³ = 8 possible combinations of truth
values, at least eight test cases would be required in this case
to achieve MCC.
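
• The eight required combinations can be enumerated directly, as the
sketch below does for the decision [(c1 and c2) or c3]:

```python
from itertools import product

def decision(c1: bool, c2: bool, c3: bool) -> bool:
    return (c1 and c2) or c3

# MCC: all 2**3 = 8 combinations of truth values of c1, c2, c3.
mcc_suite = list(product([True, False], repeat=3))
assert len(mcc_suite) == 8

for c1, c2, c3 in mcc_suite:
    print(c1, c2, c3, "->", decision(c1, c2, c3))
```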
Modified Condition/Decision Coverage (MC/DC)

• The name MC/DC implies that it ensures decision coverage and
modifies (that is, relaxes) the MCC requirement.
• The requirement for MC/DC is usually expressed as the following:
A test suite would achieve MC/DC if during execution of the test
suite each condition in a decision expression independently
affects the outcome of the decision.
• That is, an atomic condition independently affects the outcome of
the decision, if the decision outcome changes as a result of
changing the truth value of that single atomic condition, while the
other conditions maintain their truth values.
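
• Independent effect can be demonstrated by pairs of test cases that
differ in exactly one condition while the decision outcome changes.
The sketch below searches for such pairs for the decision
[(c1 and c2) or c3]; a suite containing one such pair per condition
achieves MC/DC with fewer test cases than the eight MCC demands:

```python
from itertools import product

def decision(c1: bool, c2: bool, c3: bool) -> bool:
    return (c1 and c2) or c3

names = ("c1", "c2", "c3")
combos = list(product([True, False], repeat=3))

# For each condition, find a pair of test cases in which only that
# condition is flipped and the decision outcome changes.
for i, name in enumerate(names):
    for a in combos:
        b = tuple(not v if j == i else v for j, v in enumerate(a))
        if decision(*a) != decision(*b):
            print(f"{name} independently affects the outcome: {a} vs {b}")
            break
```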
Path Coverage

• A test suite achieves path coverage if it executes each linearly
independent path (or basis path) at least once.
• A linearly independent path can be defined in terms of the control
flow graph (CFG) of a program.
Control flow graph (CFG)
• A control flow graph describes how the control flows through the
program.
• In order to draw the control flow graph of a program, we need to first
number all the statements of a program.
• The different numbered statements serve as nodes of the control flow
graph (see Figure 10.5).
• There exists an edge from one node to another, if the execution of the
statement representing the first node can result in the transfer of
control to the other node.
• More formally, we can define a CFG as follows. A CFG is a directed
graph consisting of a set of nodes and edges (N, E), such that each
node n ∈ N corresponds to a unique program statement and an edge
exists between two nodes if control can transfer from one node to the
other.
• A path through a program is any node and edge sequence from the
start node to a terminal node of the control flow graph of a
program.
• A set of paths for a given program is called linearly independent
set of paths (or the set of basis paths or simply the basis set), if
each path in the set introduces at least one new edge that is not
included in any other path in the set.
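
• A CFG can be represented concretely as an adjacency structure. The
sketch below encodes a hypothetical five-node CFG (nodes numbered as
the program's statements) and enumerates its paths from the start node
to the terminal node:

```python
# Hypothetical CFG: node 1 is a decision; control can flow
# 1 -> 2 -> 4 or 1 -> 3 -> 4, and then 4 -> 5 (terminal node).
cfg = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}

def paths(node, current=()):
    """Enumerate all node sequences from 'node' to a terminal node."""
    current = current + (node,)
    if not cfg[node]:
        yield current
    for succ in cfg[node]:
        yield from paths(succ, current)

print(list(paths(1)))  # [(1, 2, 4, 5), (1, 3, 4, 5)]
```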
McCabe’s Cyclomatic Complexity Metric

• McCabe’s cyclomatic complexity metric is an important result that
lets us compute the number of linearly independent paths for any
arbitrary program.
• McCabe’s cyclomatic complexity defines an upper bound for the
number of linearly independent paths through a program.
• Given a control flow graph G of a program, the cyclomatic
complexity V(G) can be computed as:
• V(G) = E − N + 2
• where N is the number of nodes and E is the number of edges of
the control flow graph.
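
• As a worked example, for a hypothetical control flow graph with 7
nodes and 8 edges, V(G) = 8 − 7 + 2 = 3, so at most 3 linearly
independent paths need to be exercised:

```python
def cyclomatic_complexity(edges: int, nodes: int) -> int:
    """McCabe's cyclomatic complexity: V(G) = E - N + 2."""
    return edges - nodes + 2

# Hypothetical CFG: 7 statements (nodes) connected by 8 edges.
assert cyclomatic_complexity(edges=8, nodes=7) == 3
```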
INTEGRATION TESTING
• Integration testing is carried out after all (or at least some of) the
modules have been unit tested.
• Successful completion of unit testing, to a large extent, ensures
that the unit (or module) as a whole works satisfactorily.
• In this context, the objective of integration testing is to detect the
errors at the module interfaces (call parameters).
• For example, it is checked that no parameter mismatch occurs
when one module invokes the functionality of another module.
• Thus, the primary objective of integration testing is to test the
module interfaces, i.e., to check that there are no errors in
parameter passing when one module invokes the functionality of
another module.
• During integration testing, different modules of a system are
integrated in a planned manner using an integration plan.
• The integration plan specifies the steps and the order in which
modules are combined to realise the full system.
• After each integration step, the partially integrated system is
tested.
• An important factor that guides the integration plan is the module
dependency graph.
• A structure chart (or module dependency graph) specifies the
order in which different modules call each other.
• Thus, by examining the structure chart, the integration plan can be
developed. Any one (or a mixture) of the following approaches can
be used to develop the test plan:
• Big-bang approach to integration testing
• Top-down approach to integration testing
• Bottom-up approach to integration testing
• Mixed (also called sandwiched) approach to integration testing
Approaches To Integration Testing
• Big-bang approach to integration testing
Big-bang testing is the most obvious approach to integration testing. In
this approach, all the modules making up a system are integrated in a
single step. In simple words, all the unit tested modules of the system
are simply linked together and tested.
However, this technique can meaningfully be used only for very small
systems.
The main problem with this approach is that once a failure has been
detected during integration testing, it is very difficult to localise the error
as the error may potentially exist in any of the modules.
Therefore, debugging errors reported during big-bang integration testing
are very expensive to fix. As a result, big-bang integration testing is
almost never used for large programs.
• Bottom-up approach to integration testing
• Large software products are often made up of several
subsystems. A subsystem might consist of many modules which
communicate among each other through well-defined interfaces.
• In bottom-up integration testing, first the modules of each
subsystem are integrated.
• Thus, the subsystems can be integrated separately and
independently.
• The primary purpose of carrying out the integration testing of a
subsystem is to test whether the interfaces among the various
modules making up the subsystem work satisfactorily.
• The test cases must be carefully chosen to exercise the interfaces
in all possible manners.
• Top-down approach to integration testing
• Top-down integration testing starts with the root module in the
structure chart and one or two subordinate modules of the root
module.
• After the top-level ‘skeleton’ has been tested, the modules that
are at the immediately lower layer of the ‘skeleton’ are combined
with it and tested.
• Top-down integration testing approach requires the use of
program stubs to simulate the effect of lower-level routines that
are called by the routines under test.
• A pure top-down integration does not require any driver routines.
Mixed approach to integration testing
• The mixed (also called sandwiched) integration testing follows a combination
of top-down and bottom-up testing approaches.
• In the top-down approach, testing can start only after the
top-level modules have been coded and unit tested.
• Similarly, bottom-up testing can start only after the bottom level modules are
ready.
• The mixed approach overcomes this shortcoming of the top-down and
bottom-up approaches.
• In the mixed testing approach, testing can start as and when modules
become available after unit testing.
• Therefore, this is one of the most commonly used integration testing
approaches.
• In this approach, both stubs and drivers are required to be designed.
SYSTEM TESTING

• After all the units of a program have been integrated together and
tested, system testing is taken up.
• The system testing procedures are the same for both object-
oriented and procedural programs, since system test cases are
designed solely based on the SRS document and the actual
implementation (procedural or object-oriented) is immaterial.
• There are three main kinds of system testing. These are essentially
similar tests, but differ in who carries out the testing:
1. Alpha Testing: Alpha testing refers to the system testing carried
out by the test team within the developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a
select group of friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing
performed by the customer to determine whether to accept the
delivery of the system.
• As can be observed from the above discussions, in the different
types of system tests, the test cases can be the same, but the
difference is with respect to who designs test cases and carries
out testing.
• Before a fully integrated system is accepted for system testing,
smoke testing is performed. Smoke testing is done to check
whether at least the main functionalities of the software are
working properly.
• Unless the software is stable and at least the main functionalities
are working satisfactorily, system testing is not undertaken.
• The functionality tests are designed to check whether the
software satisfies the functional requirements as documented in
the SRS document.
• The performance tests, on the other hand, test the conformance
of the system with the non-functional requirements of the system.
Smoke Testing
• Smoke testing is carried out before initiating system testing, to
determine whether system testing would be meaningful or whether
many parts of the software would fail.
• The idea behind smoke testing is that if the integrated program
cannot pass even the basic tests, it is not ready for vigorous
testing.
• For smoke testing, a few test cases are designed to check whether
the basic functionalities are working.
• For example, for a library automation system, the smoke tests may
check whether books can be created and deleted, whether
member records can be created and deleted, and whether books
can be loaned and returned.
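
• A sketch of such smoke tests is given below; the LibrarySystem
class and its methods are hypothetical stand-ins for the real
interface of the library automation system:

```python
import unittest

class LibrarySystem:
    """Hypothetical stand-in for the integrated system under test."""
    def __init__(self):
        self.books, self.members, self.loans = set(), set(), set()
    def create_book(self, isbn): self.books.add(isbn)
    def delete_book(self, isbn): self.books.discard(isbn)
    def create_member(self, mid): self.members.add(mid)
    def loan_book(self, isbn, mid): self.loans.add((isbn, mid))
    def return_book(self, isbn, mid): self.loans.discard((isbn, mid))

class SmokeTests(unittest.TestCase):
    """A few test cases that only check the main functionalities."""
    def test_book_lifecycle(self):
        lib = LibrarySystem()
        lib.create_book("isbn-1")
        self.assertIn("isbn-1", lib.books)
        lib.delete_book("isbn-1")
        self.assertNotIn("isbn-1", lib.books)

    def test_loan_and_return(self):
        lib = LibrarySystem()
        lib.create_book("isbn-1")
        lib.create_member("m1")
        lib.loan_book("isbn-1", "m1")
        self.assertIn(("isbn-1", "m1"), lib.loans)
        lib.return_book("isbn-1", "m1")
        self.assertNotIn(("isbn-1", "m1"), lib.loans)

if __name__ == "__main__":
    unittest.main()
```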
Performance Testing

• Performance testing is an important type of system testing.
• There are several types of performance testing corresponding to
various types of non-functional requirements.
• The types of performance testing to be carried out on a system
depend on the different non-functional requirements of the system
documented in its SRS document.
• All performance tests can be considered as black-box tests.
