VTU B.E CSE Sem 8 Software Testing Notes

This document contains notes for a software testing course. It discusses basics of software testing including errors, faults, and failures. It defines test automation and discusses tools used for test automation. It also discusses developers and testers as separate but complementary roles. Finally, it covers software quality attributes like reliability, correctness, completeness, consistency, usability, and performance.


R N S INSTITUTE OF TECHNOLOGY

CHANNASANDRA, BANGALORE - 98

SOFTWARE TESTING
NOTES FOR 8TH SEMESTER INFORMATION SCIENCE

SUBJECT CODE: 06IS81

PREPARED BY

DIVYA K NAMRATHA R
1RN09IS016 1RN09IS028
8th Semester 8th Semester
Information Science Information Science
[email protected] [email protected]

SPECIAL THANKS TO
ANANG A – BNMIT & CHETAK M - EWIT

TEXT BOOKS:
FOUNDATIONS OF SOFTWARE TESTING – Aditya P Mathur, Pearson Education, 2008
SOFTWARE TESTING AND ANALYSIS: PROCESS, PRINCIPLES AND TECHNIQUES – Mauro Pezze, Michal Young, John
Wiley and Sons, 2008
These notes are circulated at the reader's own risk; the authors cannot be held responsible for any incorrect, improper, or insufficient information in them.

CONTENTS:

UNIT 1, UNIT 2, UNIT 3, UNIT 5, UNIT 7

Visit: www.vtuplanet.com for my notes as well as Previous VTU papers



UNIT 1
BASICS OF SOFTWARE TESTING - 1
ERRORS AND TESTING
 Humans make errors in their thoughts, in their actions, and in the products that might result from
their actions.
 Humans can make errors in any field.
Ex: in observation, in speech, in medical prescription, in surgery, in driving, in sports, in love and
similarly even in software development.
 Example:
o An instructor administers a test to determine how well the students have understood what
the instructor wanted to convey
o A tennis coach administers a test to determine how well the understudy makes a serve

Errors, Faults and Failures


Error: An error occurs in the process of writing a program.
Fault: A fault is a manifestation of one or more errors.
Failure: A failure occurs when a faulty piece of code is executed, leading to an incorrect state that propagates to
the program's output.

The programmer might misinterpret the requirements and consequently write incorrect code. Upon execution,
the program might display behaviour that does not match with the expected behaviour, implying thereby that a
failure has occurred.
A fault in the program is also commonly referred to as a bug or a defect. The terms error and bug are by far the
most common ways of referring to something wrong in the program text that might lead to a failure. Faults are
sometimes referred to as defects.
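A tiny illustration of this chain (the requirement, function name, and code below are assumptions made purely for illustration): suppose the requirement is to output the larger of two integers.

# Error: the programmer's mistaken thinking while translating the requirement into code.
# Fault: the faulty return expression below is the manifestation of that error in the program text.
def larger(x, y):
    return x if x > y else x    # fault: the second x should be y

# Failure: only observed when the faulty code is executed on an input that exposes it.
print(larger(19, 13))   # prints 19 -- correct, the fault is not revealed by this input
print(larger(13, 19))   # prints 13 instead of 19 -- a failure is observed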

In the above diagram, notice the separation of observable from observed behaviour. This separation is
important because it is the observed behaviour that might lead one to conclude that a program has failed.
Sometimes this conclusion might be incorrect due to one or more reasons.

Test Automation:
 Testing of complex systems, embedded and otherwise, can be a human intensive task.
 Execution of many tests can be tiring as well as error-prone. Hence, there is a tremendous need for
automating testing tasks.
 Most software development organizations automate test-related tasks such as regression testing,
graphical user interface testing, and I/O device driver testing.
 The process of test automation cannot be generalized.

General purpose tools for test automation might not be applicable in all test environments.
Ex:
 GUI testing tools: Eggplant, Marathon, Pounder
 Load and performance testing tools: eLoadExpert, DBMonster, JMeter, Dieseltest, WAPT, LoadRunner, Grinder

Regression testing tools:


 Echelon
 Test Tube
 WinRunner
 X test

AETG is an automated test generator that can be used in a variety of applications.


Random Testing is often used for the estimation of reliability of products with respect to specific events.
Tools: DART
Large development organizations develop their own test automation tools due primarily to the unique nature
of their test requirements.

Developers and Testers as two Roles:


 A developer is one who writes code and a tester is one who tests code. The developer and tester roles are
distinct but complementary; the same individual could be a developer and a tester. It is hard to
imagine an individual who assumes the role of a developer but never that of a tester, and vice versa.
 Certainly, within a software development organization, the primary role of an individual might be to test,
and hence this individual assumes the role of a tester. Similarly, the primary role of an individual who
designs applications and writes code is that of a developer.

SOFTWARE QUALITY
 Software quality is a multidimensional quantity and is measurable.

Quality Attributes
 These can be divided into static and dynamic quality attributes.

Static quality attributes
 It refers to the actual code and related documents.

Example: A poorly documented piece of code will be harder to understand and hence difficult to modify.
A poorly structured code might be harder to modify and difficult to test.

Dynamic quality Attributes:


 Reliability
 Correctness
 Completeness
 Consistency
 Usability
 Performance

Reliability:
 It refers to the probability of failure free operation.

Correctness:
 Refers to the correct operation and is always with reference to some artefact.
 For a tester, correctness is w.r.t. the requirements.
 For a user, correctness is w.r.t. the user manual.

Completeness:
 Refers to the availability of all the features listed in the requirements or in the user manual.
 An incomplete software is one that does not fully implement all features required.

Consistency:
 Refers to adherence to a common set of conventions and assumptions.
 Ex: All buttons in the user interface might follow a common colour-coding convention.

Usability:
 Refers to the ease with which an application can be used. This is an area in itself and there exist
techniques for usability testing.
 Psychology plays an important role in the design of techniques for usability testing.
 Usability testing is testing done by an application's potential users.
 The development organization invites a selected set of potential users and asks them to test the
product.
 Users in turn test for ease of use, functionality as expected, performance, safety and security.
 Users thus serve as an important source of tests that developers or testers within the organization
might not have conceived.
 Usability testing is sometimes referred to as user-centric testing.

Performance:
 Refers to the time the application takes to perform a requested task. Performance is considered as a
non-functional requirement.

Reliability:


 (Software reliability is the probability of failure free operation of software over a given time interval
& under given conditions.)
 Software reliability can vary from one operational profile to another. An implication is that one user
might say "this program is lousy" while another might sing praises for the same program.
 Software reliability is the probability of failure free operation of software in its intended
environments.
 The term environment refers to the software and hardware elements needed to execute the
application. These elements include the operating system (OS), the hardware, and any other
applications needed for communication.

Requirements, Behaviour and Correctness:


 Products (or software) are designed in response to requirements. (Requirements specify the
functions that a product is expected to perform.) During the development of the product, the
requirement might have changed from what was stated originally. Regardless of any change, the
expected behaviour of the product is determined by the tester’s understanding of the requirements
during testing.
 Example:
Requirement 1: It is required to write a program that inputs two integers and outputs the maximum of these.
Requirement 2: It is required to write a program that inputs a sequence of integers and outputs the
sorted version of this sequence.
 Suppose that the program max is developed to satisfy requirement 1 above. The expected output of
max when the input integers are 13 and 19 can be easily determined to be 19.
 Suppose now that the tester wants to know if the two integers are to be input to the program on one
line followed by a carriage return, or on two separate lines with a carriage return typed in after each
number. The requirement as stated above fails to provide an answer to this question. This example
illustrates the incompleteness of requirement 1.
 The second requirement in the above example is ambiguous. It is not clear from this requirement
whether the input sequence is to be sorted in ascending or descending order. The behaviour of the sort
program, written to satisfy this requirement, will depend on the decision taken by the programmer
while writing sort. Testers are often faced with incomplete/ambiguous requirements. In such
situations a tester may resort to a variety of ways to determine what behaviour to expect from the
program under test.
 Regardless of the nature of the requirements, testing requires the determination of the expected
behaviour of the program under test. The observed behaviour of the program is compared with the
expected behaviour to determine if the program functions as desired.

Input Domain and Program Correctness


 A program is considered correct if it behaves as desired on all possible test inputs. Usually, the set of
all possible inputs is too large for the program to be executed on each input.
 For example, if each of the two integer inputs lies in the range -32,768 to 32,767, exhaustively testing the program requires 2^32 executions.
 Testing a program on all possible inputs is known as “exhaustive testing”.
 If the requirements are complete and unambiguous, it should be possible to determine the set of all
possible inputs.

Definition: Input Domain


 The set of all possible inputs to program P is known as the input domain, or input space, of P.
 Modified requirement 2: It is required to write a program that inputs a sequence of integers and
outputs the integers in this sequence sorted in either ascending or descending order. The order of
the output sequence is determined by an input request character, which should be "A" when an
ascending sequence is desired, and "D" otherwise. While providing input to the program, the request
character is entered first, followed by the sequence of integers to be sorted; the sequence is
terminated with a period.
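A minimal sketch of a sort program meeting this modified requirement (the input handling and function name below are illustrative assumptions, not part of the notes):

# sort_program: request character first ('A' ascending, 'D' descending),
# then the integers to be sorted, terminated with a period.
def sort_program(tokens):
    request_char = tokens[0]
    numbers = [int(t) for t in tokens[1:-1]]   # everything up to the terminating '.'
    return sorted(numbers, reverse=(request_char == 'D'))

print(sort_program("A 19 13 7 .".split()))     # [7, 13, 19]
print(sort_program("D 19 13 7 .".split()))     # [19, 13, 7]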

Definition: Correctness

A program is considered correct if it behaves as expected on each element of its input domain.

Valid and Invalid Inputs:


 The input domains are derived from the requirements. It is difficult to determine the input domain for
incomplete requirements.
 Identifying the set of invalid inputs and testing the program against these inputs are important parts of
the testing activity. Even when the requirements fail to specify the program behaviour on invalid inputs,
the programmer does treat these in one way or another. Testing a program against invalid inputs might
reveal errors in the program.
Ex: sort program
< E 7 19...>
The sort program enters into an infinite loop and neither asks the user for any input nor responds to
anything typed by the user. This observed behaviour points to a possible error in sort.

Correctness versus reliability:


 Though correctness of a program is desirable, it is almost never the objective of testing.
 To establish correctness via testing would imply testing a program on all elements in the input domain,
which is impossible to accomplish in most cases that are encountered in practice.
 Thus, correctness is established via mathematical proofs of programs.
 While correctness attempts to establish that the program is error-free, testing attempts to find if there
are any errors in it.
 Thus, completeness of testing does not necessarily demonstrate that a program is error-free.
 Removal of errors from the program usually improves the chances, or the probability, of the program
executing without any failure.
 Also testing, debugging and the error-removal process together increase confidence in the correct
functioning of the program under test.
 Example:
Integer x, y
Input x, y
If (x < y)      ← this condition should be x ≤ y
{
Print f(x, y)
}
Else
{
Print g(x, y)
}
 Suppose that function f produces an incorrect result whenever it is invoked with x = y and that f(x, y) ≠ g(x, y)
when x = y. In its present form the program fails when tested with equal input values because function g is invoked
instead of function f. When the error is removed by changing the condition x < y to x ≤ y, the program fails
again when the input values are the same. The latter failure is due to the error in function f. When the error in f
is also removed, the program will be correct, assuming that all other code is correct.
 A comparison of program correctness and reliability reveals that while correctness is a binary metric,
reliability is a continuous metric over a scale from 0 to 1. A program can be either correct or incorrect; its
reliability can be anywhere between 0 and 1. Intuitively, when an error is removed from a program, the
reliability of the program so obtained is expected to be higher than that of the one that contains the error.
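A runnable sketch of the example above (the bodies of f and g are assumptions, chosen only so that f(x, y) ≠ g(x, y) when x = y):

# The program below contains the error described above: the faulty condition x < y
# routes equal inputs to g, whereas the expected output for x = y comes from f.
def f(x, y):
    return x + y

def g(x, y):
    return x - y

def program(x, y):
    if x < y:              # error: the condition should be x <= y
        return f(x, y)
    else:
        return g(x, y)

print(program(2, 2))       # prints g(2, 2) = 0, but the expected output is f(2, 2) = 4: a failure

The notes additionally assume that f itself is faulty for x = y, so the program fails once more after the condition is corrected; that second fault is not modelled in this sketch.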

Program Use and Operational Profile:


 An operational profile is a numerical description
of how a program is used. In accordance with the
above definition, a program might have several
operational profiles depending on its users.
 Example: sort program


Testing and Debugging


 (Testing is the process of determining if a program behaves as expected.) In the process one may
discover errors in the program under test. However, when testing reveals an error, (the process used to
determine the cause of this error and to remove it is known as debugging.) As illustrated in figure,
testing and debugging are often used as two related activities in a cyclic manner.
Steps are
1. Preparing a test plan
2. Constructing test data
3. Executing the program
4. Specifying program behaviour
5. Assessing the correctness of program behaviour
6. Construction of oracle

 Preparing a test plan:


(A test cycle is often guided by a test plan. When relatively small programs are being tested, a test plan is
usually informal and in the tester’s mind or there may be no plan at all.)
A test plan typically considers items such as the method to be used for testing, the method for evaluating the
adequacy of test cases, and the method to determine whether a program has failed or not.
Test plan for sort:
The sort program is to be tested to meet the requirements given in example
1. Execute the program on at least two input sequences, one with "A" and the other with "D" as the request
character.
2. Execute the program on an empty input sequence
3. Test the program for robustness against erroneous input such as “R” typed in as the request character.
4. All failures of the program under test should be recorded in a suitable file using the company's failure report
form.


 Constructing Test Data:


 A test case is a pair consisting of test data to be input to the program and the expected output.
 The test data is a set of values, one for each input variable.
 A test set is a collection of zero or more test cases.
Program requirements and the test plan help in the construction of test data. Execution of the program
on test data might begin after all or a few test cases have been constructed.
Based on the results obtained, the testers decide whether to continue the construction of additional test
cases or to enter the debugging phase.
The following test cases are generated for the sort program using the test plan given above.
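The figure listing these test cases is not reproduced in the notes; a sketch of what they would look like, derived from the test plan above (the concrete values are illustrative assumptions):

# Test cases for sort: (input, expected output)
test_set = [
    ("A 12 -29 32 .", [-29, 12, 32]),    # plan item 1: ascending request character
    ("D 12 -29 32 .", [32, 12, -29]),    # plan item 1: descending request character
    ("A .",           []),               # plan item 2: empty input sequence
    ("R 3 17 .",      "error message"),  # plan item 3: robustness against an invalid request character
]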

 Executing the program:


 Execution of a program under test is the next significant step in testing. Execution of this step for
the sort program is most likely a trivial exercise. The complexity of actual program execution is
dependent on the program itself.
 Testers might be able to construct a test harness to aid in program execution. The harness initializes any
global variables, inputs a test case, and executes the program. The output generated by the program
may be saved in a file for subsequent examination by a tester.

In preparing this test harness assume that:


(a) Sort is coded as a procedure

(b) The get_input procedure reads the request character and the sequence to be sorted into the variables
request_char, num_items and in_number. The test_setup procedure is invoked first to set up the test; this includes
identifying and opening the file containing the tests.
 The check_output procedure serves as the oracle that checks whether the program under test behaves correctly.
 Report_failure is invoked when the output from sort is incorrect; the failure may be reported via a message or saved in a file.
 Print_sequence prints the sequence generated by the sort program; this too can be saved in a file for
subsequent examination.
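A minimal sketch of such a harness in Python (the procedure names follow the notes; their bodies, the file format, and the driver are assumptions):

# Minimal test-harness sketch for the sort program.
def test_setup(path):
    # identify and open the file containing the tests (assumed format: one test per line)
    return open(path)

def get_input(line):
    # read the request character and the sequence to be sorted
    tokens = line.split()
    return tokens[0], [int(t) for t in tokens[1:-1]]

def check_output(produced, expected):
    # the oracle: does the program under test behave correctly on this test?
    return produced == expected

def report_failure(test, produced):
    print("FAILURE:", test, "produced", produced)   # or save to a failure report file

def print_sequence(seq):
    print(seq)                                      # can also be saved for later examination

def run_tests(tests, sort_fn):
    # tests: iterable of (input line, expected output); sort_fn: the procedure under test
    for line, expected in tests:
        request_char, in_number = get_input(line)
        produced = sort_fn(request_char, in_number)
        if check_output(produced, expected):
            print_sequence(produced)
        else:
            report_failure(line, produced)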

 Specifying program behaviour:

State vector: collecting the current values of program variables into a vector known as the state vector.
An indication of where the control of execution is at any instant of time can be given by using an identifier
associated with the next program statement.

A state sequence diagram can be used to specify the behavioural requirements. This same specification can then
be used during testing to check whether the application conforms to the requirements.

 Assessing the correctness of program behaviour:

It has two steps:
1. Observe the behaviour.
2. Analyze the observed behaviour.

This task is extremely complex for large distributed systems.


The entity that performs the task of checking the correctness of the observed behaviour is known as an oracle.

 A human oracle is the best available oracle.

 Oracles can also be programs designed to check the behaviour of other programs.


 Construction of oracles:
 Construction of automated oracles, such as one to check a matrix multiplication program or a sort
program, requires determination of the I/O relationship. When tests are generated from models such as
finite-state machines (FSMs) or statecharts, both the inputs and the corresponding outputs are available.
This makes it possible to construct an oracle while generating the tests.

Example: Consider a program named Hvideo that allows one to keep track of home videos. In the data
entry mode, it displays a screen in which the user types in information about a DVD. In search mode, the
program displays a screen into which a user can type some attribute of the video being searched for and
set up a search criterion.
 To test Hvideo we need to create an oracle that checks whether the program functions correctly in data
entry and search modes. The input generator generates a data entry request. The input generator then
requests the oracle to test whether Hvideo performed its task correctly on the input given for data entry.

 The oracle uses the input to check if the information to be entered into the database has been entered
correctly or not. The oracle returns a pass or no pass to the input generator.
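For the sort program discussed earlier, a simple automated oracle can be written directly from the I/O relationship (a sketch; the function signature is an assumption):

# Oracle for sort: the produced output must equal the input sequence ordered as requested.
def sort_oracle(request_char, in_number, produced):
    expected = sorted(in_number, reverse=(request_char == 'D'))
    return produced == expected            # True -> pass, False -> no pass

print(sort_oracle('A', [19, 13, 7], [7, 13, 19]))   # True
print(sort_oracle('A', [19, 13, 7], [7, 19, 13]))   # False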

TEST METRICS
 The term metric refers to a standard of measurement. In software testing, there exist a variety of metrics.

There are four general core areas that assist in the design of metrics: schedule, quality, resources, and size.

Schedule related metrics:


Measure actual completion times of various activities and compare these with estimated time to
completion.

Quality related metrics:


Measure quality of a product or a process

Resource related metrics:


Measure items such as cost in dollars, manpower, and tests executed.

Size-related metrics:
Measure size of various objects such as the source code and number of tests in a test suite

Organizational metrics:
Metrics at the level of an organization are useful in overall project planning and management.
Ex: the number of defects reported after product release, averaged over a set of products developed and
marketed by an organization, is a useful metric of product quality at the organizational level.


 Organizational metrics allow senior management to monitor the overall strength of the organization
and point to areas of weakness. Thus, these metrics help senior management in setting new goals and
planning for the resources needed to realize these goals.

Project metrics:
 Project metrics relate to a specific project, for example the I/O device testing project or a compiler
project. These are useful in the monitoring and control of a specific project.
1. Actual/planned system test effort is one project metric. Test effort could be measured in terms
of tester_man_months.
2. Another project metric is the ratio:
(number of successful tests) / (total number of tests in the system test phase)

Process metrics:
 Every project uses some test process. The big-bang approach is well suited for small, single-person projects.
The goal of a process metric is to assess the goodness of the process.
 When the test process consists of several phases, such as unit test, integration test, and system test, one can
measure how many defects were found in each phase. It is well known that the later a defect is found, the
costlier it is to fix.

Product metrics: Generic


Cyclomatic complexity
Halstead metrics

Cyclomatic complexity
V(G) = E - N + 2P
for a program (flow graph) G containing N nodes, E edges, and P connected components (procedures).
A larger value of V(G) implies higher program complexity; such a program is more difficult to understand and test
than one with a smaller value.
V(G) values of 5 or less are recommended.

Halstead complexity
The number of errors (B) found is estimated from the program size (S) and effort (E):
B = 7.6 E^0.667 S^0.333

Product metrics: OO software


Metrics are reliability, defect density, defect severity, test coverage, cyclomatic complexity, weighted
methods/class, response set, number of children.

Static and dynamic metrics:
Static metrics are those computed without having to execute the product.
Ex: the number of testable entities in an application. A dynamic metric requires code execution.
Ex: the number of testable entities actually covered by a test suite is a dynamic metric.

Testability:
 According to IEEE, testability is the “degree to which a system or component facilitates the
establishment of test criteria and the performance of tests to determine whether those criteria have
been met”.
 Two types:
 static testability metrics
dynamic testability metrics

Static testability metric:


Software complexity is one static testability metric. The more complex an application, the lower its testability,
that is, the higher the effort required to test it.

Dynamic metrics for testability include various code-based coverage criteria.
Ex: an application for which it is difficult to generate tests that satisfy the statement coverage criterion is
considered to have lower testability than one for which it is easier to construct such tests.

UNIT 1 QUESTION BANK

No. QUESTION YEAR MARKS


1 How do you measure Software Quality? Discuss Correctness versus Reliability Jan 10 10
Pertaining to Programs?
2 Discuss Various types of Metrics used in software testing and Relationship? Jan 10 10
3 Define the following June 10 4
i) Errors ii) Faults iii) Failure iv) Bug
4 Discuss Attributes associated with Software Quality? June 10 8
5 What is a test metric? List various test metrics and explain any two. June 10 8
6 Explain Static & Dynamic software quality Attributes? July 11 8
7 Briefly explain the different types of test metrics. July 11 8
8 What are input domain and program correctness? July 11 4
9 Why is it difficult for a tester to find all bugs in a system? Why might it not be Dec 11 10
necessary for a program to be completely free of defects before it is delivered to
its customers?
10 Define software quality. Distinguish between static quality attributes and Dec 11 10
dynamic quality attributes. Briefly explain any one dynamic quality attribute.


UNIT 2
BASICS OF SOFTWARE TESTING - 2
SOFTWARE AND HARDWARE TESTING
There are several similarities and differences between techniques used for testing software and hardware.
Software application:
 Does not degrade over time: a fault present in the application will remain, and no new faults will creep in
unless the application is changed.
 Built-in self test (BIST), meant for hardware products, can rarely be applied to software designs and code;
when applied, it only detects faults that were present when the last change was made.
Hardware product:
 Does degrade over time: a VLSI chip might fail over time due to a fault that did not exist at the time the
chip was manufactured and tested.
 BIST is intended to actually test for the correct functioning of a circuit.
 Hardware testers generate tests based on fault models.
Ex: stuck-at fault model - one can use a set of input test patterns to test whether a logic gate is functioning
as expected.
 Software testers generate tests to test for correct functionality.
 Sometimes such tests do not correspond to any general fault model
 For example: to test whether there is a memory leak in an application, one performs a combination of
stress testing and code inspection
 A variety of faults could lead to memory leaks
 Hardware testers use a variety of fault models at different levels of abstraction
 Example:
o transistor-level faults (low level)
o gate-level, circuit-level, function-level faults (higher level)
 Software testers might or might not use fault models during test generation, even though such models
exist.
 Mutation testing is a technique based on software fault models
 Test Domain: a major difference between tests for hardware and software is in the domain of tests.
 Tests for VLSI chips, for example, take the form of a bit pattern. For combinational circuits, for example a
multiplexer, a finite set of bit patterns will ensure the detection of any fault with respect to a circuit-
level fault model.
 For software, the domain of a test input is different than that for hardware. Even for the simplest of
programs, the domain could be an infinite set of tuples with each tuple consisting of one or more basic
data types such as integers and reals.

Example

Consider a simple two-input NAND gate in the figure.


A test bit vector V: (A=0, B=1), applied to a NAND gate whose A input is stuck at 1, leads to output 0, whereas the
correct output is 1. Thus V detects a single s-a-1 (stuck-at-1) fault at the A input of the NAND gate. There could be
multiple stuck-at faults also.
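A small simulation of this fault-detection idea (an illustrative sketch using the standard NAND truth table):

# Detecting a stuck-at-1 (s-a-1) fault on input A of a NAND gate.
def nand(a, b):
    return 0 if (a and b) else 1

def nand_A_stuck_at_1(a, b):
    return nand(1, b)                 # input A is forced to 1 regardless of the applied value

A, B = 0, 1                           # the test vector V
print(nand(A, B))                     # fault-free gate  -> 1
print(nand_A_stuck_at_1(A, B))        # faulty gate      -> 0, so V detects the fault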
 Test Coverage: It is practically impossible to completely test a large piece of software, for example an
OS, as well as a complex integrated circuit such as a modern 32- or 64-bit microprocessor. This leads to the
notion of acceptable test coverage. In VLSI testing such coverage is measured as the fraction of the
faults covered to the total that might be present with respect to a given fault model.

 The idea of fault coverage in hardware is also used in software testing, using program mutation. A
program is mutated by injecting a number of faults using a fault model that corresponds to mutation
operators. The effectiveness or adequacy of a test set is assessed as the fraction of the mutants covered
to the total number of mutants.
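A tiny illustration of program mutation (the program and the mutant below are assumed examples):

# Original program, and one mutant obtained by applying a mutation operator
# that replaces the relational operator < with <=.
def is_negative(x):
    return x < 0

def is_negative_mutant(x):
    return x <= 0

# The test input x = 0 distinguishes (kills) this mutant: the two versions disagree on it.
print(is_negative(0), is_negative_mutant(0))    # False True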

TESTING AND VERIFICATION


 Program verification aims at proving the correctness of programs by showing that they contain no errors.
 This is very different from testing, which aims at uncovering errors in a program.
 While verification aims at showing that a given program works for all possible inputs that satisfy a set
of conditions, testing aims to show that the given program is reliable in the sense that no errors of any
significance were found.
 Program verification and testing are best considered as complementary techniques.
 In the development of critical applications, such as smart cards or control of nuclear plants, one often
makes use of verification techniques to prove the correctness of some artifact created during the
development cycle, not necessarily the complete program.
 Regardless of such proofs, testing is used invariably to obtain confidence in the correctness of the
application.
 Testing is not a perfect process in that a program might contain errors despite the success of a set of
tests; verification might appear to be a perfect process as it promises to verify that a program is free
from errors.
 A closer look at verification, however, reveals that it has its own weaknesses.
 The person who verified a program might have made mistakes in the verification process; there might
be an incorrect assumption on the input conditions; incorrect assumptions might be made regarding
the components that interface with the program.
 Thus, neither verification nor testing is a perfect technique for proving the correctness of programs.

DEFECT MANAGEMENT
Defect Management is an integral part of a development and test process in many software development
organizations. It is a sub-process of the development process. It entails the following:
 Defect prevention
 Discovery
 Recording and reporting
 Classification
 Resolution
 Prediction

Defect Prevention
It is achieved through a variety of process and tools: They are,
 Good coding techniques.
 Unit test plans.
 Code Inspections.

Defect Discovery
 Defect discovery is the identification of defects in response to failures observed during dynamic testing
or found during static testing.
 It involves debugging the code under test.

Defect Classification
Defects found are classified and recorded in a database. Classification becomes important in dealing with the
defects. Classified into
 High severity: to be attended to first by the developer.


 Low severity.

Example: Orthogonal defect classification (ODC) is one defect classification scheme; it measures the types of
defects, their frequency, and their location in the development phase and documents.

Resolution
Each defect, when recorded, is marked as 'open', indicating that it needs to be resolved. Resolution requires careful
scrutiny of the defect, identifying a fix if needed, implementing the fix, testing the fix, and finally closing the
defect, indicating that it has been resolved. Every recorded defect must be resolved prior to release.

Defect Prediction
 Organizations often perform source code analysis to predict how many defects an application might contain
before it enters the test phase.
 Advanced statistical techniques are used to predict defects during the test process.
 Tools exist for recording defects, and for computing and reporting defect-related statistics.
o Bugzilla - open source
o FogBugz - commercially available

EXECUTION HISTORY
Execution history of a program, also known as execution trace, is an organized collection of information about
various elements of a program during a given execution. An execution slice is an executable subsequence of
execution history. There are several ways to represent an execution history,
 Sequence in which the functions in a given program are executed against a given test input,
 Sequence in which program blocks are executed.
 Sequence of objects and the corresponding methods accessed, for object-oriented languages such as Java.
An execution history may also include values of program variables.

 A complete execution history recorded from the start of a program’s execution until its termination
represents a single execution path through the program.
 It is also possible to record a partial execution history, in which some program elements, such as blocks or
values of variables, are recorded along a portion of the complete path.

TEST GENERATION STRATEGIES

Test generation uses a source document. In the most informal of test methods, the source document resides in
the mind of the tester who generates tests based on knowledge of the requirements.
Fig summarizes the several strategies for test generation. These may be informal techniques that assign
value to input variables without the use of any rigorous or formal methods. These could also be techniques that
identify input variables, capture the relationship among these variables, and use formal techniques for test
generation such as random test generation and cause effect graphing.
 Another set of strategies fall under the category of model based test generation. These strategies
require that a subset of the requirements be modelled using a formal notation.
 FSMs, statecharts, Petri nets, and timed I/O automata are some of the well-known formal
notations used for modelling various subsets of the requirements.
 Sequence & activity diagrams in UML also exist and are used as models of subsets of requirements.
 There also exist techniques to generate tests directly from the code i.e. code based test generation.
 It is useful when enhancing existing tests based on test adequacy criteria.
 Code based test generation techniques are also used during regression testing when there is often a
need to reduce the size of the suite or prioritize tests, against which a regression test is to be performed.

STATIC TESTING
 Static testing is carried out without executing the application under test.
 This is in contrast to dynamic testing that requires one or more executions of the application under test.
 It is useful in that it may lead to the discovery of faults in the application, as well as ambiguities and errors in the
requirements and other application-related documents, at a relatively low cost.
 This is especially so when dynamic testing is expensive.
 Static testing is complementary to dynamic testing.
 This is carried out by an individual who did not write the code or by a team of individuals.
 The test team responsible for static testing has access to the requirements document, the application, and all
associated documents such as the design document and user manual.
 Team also has access to one or more static testing tools.
A static testing tool takes the application code as input and generates a variety of data useful in the test
process.

WALKTHROUGHS
 Walkthroughs and inspections are an integral part of static testing.
 A walkthrough is an informal process to review any application-related document.
eg:
requirements are reviewed---->requirements walkthrough
code is reviewed---->code walkthrough
(or)
peer code review
Walkthrough begins with a review plan agreed upon by all members of the team.
Advantages:
 improves understanding of the application.

 both functional and non functional requirements are reviewed.


 A detailed report is generated that lists items of concern regarding the requirements.

INSPECTIONS
 Inspection is a more formally defined process than a walkthrough. This term is usually associated with
code.
 Several organizations consider formal code inspections as a tool to improve code quality at a lower cost
than incurred when dynamic testing is used.
Inspection plan:
i. statement of purpose
ii. work product to be inspected; this includes code and associated documents needed for inspection.
iii. team formation, roles, and tasks to be performed.
iv. rate at which the inspection task is to be completed
v. Data collection forms where the team will record its findings such as defects discovered, coding
standard violations and time spent in each task.

Members of inspection team


a) Moderator: in charge of the process and leads the review.
b) Reader: the actual code is read by the reader, perhaps with the help of a code browser and with monitors for all
in the team to view the code.
c) Recorder: records any errors discovered or issues to be looked into.
d) Author: the actual developer of the code.

It is important that the inspection process be friendly and non-confrontational.


Use of static code analysis tools in static testing
 Static code analysis tools can provide control flow and data flow information.
 Control flow information presented in terms of a CFG, is helpful to the inspection team in that it allows
the determination of the flow of control under different conditions.
 A CFG can be annotated with data flow information to make a data flow graph.
 This information is valuable to the inspection team in understanding the code as well as pointing out
possible defects.

Commercially available static code analysis tools include:


o Purify (IBM Rational)
o Klocwork (Klocwork)
o LAPSE (Lightweight Analysis for Program Security in Eclipse) - an open source tool

(a) The CFG clearly shows that the definition of x at block 1 is used at block 3 but not at block 5. In fact, the definition
of x at block 1 is considered killed due to its redefinition at block 4.
(b) The CFG indicates the use of variable y in block 3. If y is not defined along the path from Start to block 3, then
there is a data-flow error, as a variable is used before it is defined.
Several such errors can be detected by static analysis tools. Such tools can also compute complexity metrics, which
can be used as a parameter in deciding which modules to inspect first.
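A small example of the kind of data-flow anomaly such tools report (an assumed illustration, not from the notes):

# A static analyzer can warn that y may be used before it is defined:
# on the path where the condition is false, the use of y in the return has no prior definition.
def anomaly(x):
    if x > 0:
        y = 2 * x        # y is defined only on this branch
    return y             # possible use before definition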

Model-Based Testing and Model checking:
o Model based testing refers to the acts of modeling and the generation of tests from a formal model of
application behavior.
o Model checking refers to a class of techniques that allow the validation of one or more properties from a
given model of an application.

o The diagram above illustrates the process of model checking. A model, usually finite-state, is extracted from
some source. The source could be the requirements and, in some cases, the application code itself.
o One or more desired properties are then coded in a formal specification language. Often, such
properties are coded in temporal logic, a language for formally specifying timing properties. The model
and the desired properties are then input to a model checker. The model checker attempts to verify
whether the given properties are satisfied by the given model.
o For each property, the checker could come up with one of three possible answers:
o the property is satisfied
o the property is not satisfied
o or it is unable to determine
o In the second case, the model checker provides a counter example showing why the property is not
satisfied.
o The third case might arise when the model checker is unable to terminate after an upper limit on the
number of iterations has been reached.
o While both model checking and model-based testing use models, model checking uses finite-state models
augmented with local properties that must hold at individual states. The local properties are known as
atomic propositions, and the augmented model is known as a Kripke structure.
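As an illustrative example (not from the notes), a desired ordering property might be written in linear temporal logic as G(request -> F response), read as: it is always the case that whenever a request occurs, a response eventually follows. The model checker either confirms that every execution path of the model satisfies this formula or produces a counterexample path that violates it.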

CONTROL FLOW GRAPH


o A CFG captures the flow of control within a program. Such a graph assists testers in the analysis of a
program to understand its behaviour in terms of the flow of control. A CFG can be constructed
manually without much difficulty for relatively small programs, say those containing fewer than about 50
statements.
o However, as the size of the program grows, so does the difficulty of constructing its CFG and hence
arises the need for tools.
o A CFG is also known as a flow graph or program flow graph; it is not to be confused with the program
dependence graph (PDG).

Basic Block
 Let P denote a program written in a procedural programming language, be it high level, such as C or Java, or
low level, such as 80x86 assembly. A basic block, or simply a block, in P is a sequence of consecutive
statements with a single entry and a single exit point.
 Thus, a block has unique entry and exit points.
 Control always enters a basic block at its entry point and exits from its exit point. There is no possibility
of exit or a halt at any point inside the basic block except at its exit point. The entry and exit points of a
basic block coincide when the block contains only one statement.
 Example: the following program takes two integers x and y and outputs x^y.
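The listing referred to above is not reproduced in the notes as a figure; it appears in the Unit 2 question bank and is shown here for reference:

1) begin
2) int x, y, power;
3) float z;
4) input(x, y);
5) if(y<0)
6) power=-y;
7) else
8) power=y;
9) z=1;
10) while(power!=0){
11) z=z*x;
12) power=power-1;
13) }
14) if(y<0)
15) z=1/z;
16) output(z);
17) end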

 There are a total of 17 lines in this program including the begin and end. The execution of this program
begins at line 1 and moves through lines 2, 3 and 4 to the line 5 containing an if statement. Considering
that there is a decision at line 5, control could go to one of two possible destinations at line 6 and 8.
Thus, the sequence of statements starting at line 1 and ending at line 5 constitutes a basic block. Its only
entry point is at line 1 and the only exit point is at line 5.

Note: lines 7 and 13 are ignored from the listing because they are syntactic markers, as are begin and end, which
are also ignored.
Flow Graph: Definition and pictorial representation
 A flow graph G is defined as a finite set N of nodes and a finite set E of directed edges. In a flow graph
of a program P, we often use a basic block as a node, and edges indicate the flow of control across basic
blocks.
 A pictorial representation of a flow graph is often used in the analysis of control behaviour of a
program. Each node is represented by a symbol, usually an oval or a rectangular box. These boxes are
labelled by their corresponding block numbers. The boxes are connected by lines representing edges.
Arrows are used to indicate the direction of flow. These edges are labelled true or false to indicate the
path taken when the condition evaluates to true and false respectively.
 N={start,1,2,3,4,5,6,7,8,9,end}
 E={(start,1),(1,2),(1,3),(2,4),(3,4),(4,5),(5,6),(6,5),(5,7),(7,8),(7,9),(9,end)}

Path
 A path through a flow graph is considered complete if the first node along the path is Start and the
terminating node is End.
 A path p through a flow graph for a program P is considered feasible if there exists at least one test case
which, when input to P, causes p to be traversed. If no such test case exists, then p is considered
infeasible. Whether a given path p through a program P is feasible is, in general, an undecidable problem.
 This statement implies that it is not possible to write an algorithm that takes as inputs an arbitrary
program and a path through that program, and correctly determines whether the path is feasible.

TYPES OF TESTING
 The framework consists of a set of five classifiers that serve to classify testing techniques that fall under the
dynamic testing category. Dynamic testing requires the execution of the program under test. Static testing
consists of the review and analysis of the program.
 The five classifiers of testing are:
o 1. C1: source of test generation
o 2. C2: life cycle phase in which testing takes place
o 3. C3: goal of a specific testing activity
o 4. C4: characteristics of the artifact under test
o 5. C5: test process

Classifier C1: Source of test generation


 Black box Testing: Test generation is an essential part of testing. There are a variety of ways to generate
tests, listed in table. Tests could be generated from informally or formally specified requirements and
without the aid of the code that is under test. Such form of testing is commonly referred to as black box
testing.

Model based or specification based testing:


 Model-based or specification-based testing occurs when the requirements are formally specified, for
example, using one or more mathematical or graphical notations such as Z, statecharts, or event sequence
graphs.

White box testing:
 White box testing refers to the test activity where in code is used in the generation of or the assessment
of the test cases.
 Code could be used directly or indirectly for test generation.
o In the direct case, a tool, or a human tester, examines the code and focuses on a given path to be
covered. A test is generated to cover this path.
o In the indirect case, tests generated using some black box technique are assessed against some code-
based coverage criterion.
 Additional tests are then generated to cover the uncovered portions of the code by analyzing which
parts of the code are feasible.
 Control flow, data flow, and mutation testing can be used for direct as well as indirect code-based test
generation.

Interface testing:
 Tests are often generated using a component's interface.
 The interface itself forms a part of the component's requirements, and hence this form of testing is black box
testing. However, the focus on the interface leads us to consider interface testing in its own right.
Techniques such as
o --->pairwise testing
o --->interface mutation

Pairwise testing:
 Sets of values for each input are obtained from the component's requirements, and tests are generated so
that every pair of input values is covered (a small example appears after this list).

Interface mutation:
 The interface itself, such as a function coded in C or a CORBA component written in an IDL, serves to
extract the information needed to perform interface mutation.
o Pairwise testing is a black box technique.
o Interface mutation is a white box technique.
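As a small illustration of pairwise testing (an assumed example with three two-valued parameters): all 12 parameter-value pairs (3 parameter pairs, 4 value combinations each) can be covered with only four tests instead of the eight exhaustive combinations.

# Four tests that cover every pair of values of three boolean parameters.
from itertools import combinations

tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Verify: every pair of parameters takes on all four value combinations across the tests.
for i, j in combinations(range(3), 2):
    assert {(t[i], t[j]) for t in tests} == {(0, 0), (0, 1), (1, 0), (1, 1)}
print("all pairs covered by", len(tests), "tests")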
Ad-hoc testing:
 In ad hoc testing, a tester generates tests from the requirements but without the use of any systematic
method.

Random testing:
 Random testing uses a systematic method to generate tests. Generation of tests using random testing
requires modeling the input space and sampling it randomly.

Classifier C2: Life cycle phase


 Testing activities take place throughout the software life cycle.
 Each artifact produced is often subject to testing at different levels of rigor and using different testing
techniques.
Unit testing:
 Programmers write code during the early coding phase.
 They test their code before it is integrated with other system components.
 This type of testing is referred to as the unit testing.
System testing:
 When units are integrated and a large component or a subsystem is formed, programmers do integration
testing of the subsystem.
 System testing is to ensure that all the desired functionality is in the system and works as per its
requirements.
 Note: tests designed during unit testing are not likely to be used during integration and system testing.
Acceptance testing:
 two types:
o -alpha testing

o -beta testing
 A carefully selected set of customers is asked to test a system before commercialization.
 This form of testing is referred to as beta testing.
 In the case of contract software, the customer who contracted the development performs acceptance
testing prior to making the final decision as to whether to purchase the application for deployment.

Classifier C3: Goal-directed testing


There exists a variety of testing goals. Of course, finding any hidden errors is the prime goal of testing; goal-directed
testing looks for specific types of failures.
Robustness testing:
 Robustness testing refers to the task of testing an application for robustness against unintended inputs.
It differs from functional testing in that the tests for robustness are derived from outside of the valid (or
expected) input space, whereas in the former the tests are derived from the valid input space.
Stress testing:
 In stress testing, one checks for the behavior of an application under stress. Handling of overflow of
data storage, for example buffers, can be checked with the help of stress testing.
Performance testing:
 The term performance testing refers to that phase of testing where an application is tested specifically
with performance requirements in view.
 Ex: an application might be required to process 1,000 billing transactions per minute on a specific Intel
processor-based machine running a specific OS.
Load testing:
 The term load testing refers to that phase of testing in which an application is loaded with respect to
one or more operations. The goal is to determine if the application continues to perform as required
under various load conditions.
 Ex: a database server can be loaded with requests from a large number of simulated users.

Classifier C4: Artifact under test


Table 1.7 is a partial list of testing techniques named after the artifact that is being tested. For ex, during the
design phase one might generate a design using SDL notation. This form of testing is known as design testing.

While testing a batch processing application, it is also important to include an oracle that will check the result
of executing each test script. This oracle might be a part of the test script itself. It could, for example, query the
contents of a database after performing an operation that is intended to change the status of the database.

Classifier C5: Test process models


Software testing can be integrated into the software development life cycle in a variety of ways. This leads to
various models for the tests process listed in the table 1.8

Testing in the waterfall model:


 The waterfall model is one of the earliest, and least used, software life cycle models.
 Figure 1.23 shows different phases in a development process based on the waterfall model. While
verification and validation of documents produced in each phase is an essential activity, static as well as
dynamic testing occurs toward the end of the process.
 The waterfall model requires adherence to an inherently sequential process; defects introduced in the early
phases and discovered in the later phases could be costly to correct.
 There is a very little iterative or incremental development when using the waterfall model.

Testing in the V-model:


The v-model, as shown in the fig, explicitly specifies testing activities associated with each phase of the
development cycle. These activities begin from the start and continue until the end of life cycle. The testing
activities are carried out parallel with the development activities.


Spiral testing:
 The term spiral testing is not to be confused with the spiral model, though they are similar in that
both can be visually represented as a spiral of activities.
 In spiral testing, the sophistication of test activities increases with the stages of an evolving
prototype.
 In the early stages, when a prototype is used to evaluate how an application must evolve, one focuses on
test planning. The focus at this stage is on how testing will be performed in the remainder of the project.
 Subsequent iterations refine the prototype based on more precise set of requirements.
 Further test planning takes place and unit & integration tests are performed.
 In the final stage, when the requirements are well defined, testers focus on system and acceptance
testing.

Agile testing:
Agile testing involves, in addition to the usual steps such as test planning, test design, and test execution, a set of
practices. Agile testing promotes the following ideas:
 Include testing-related activities throughout a development project, starting from the requirements phase.
 Work collaboratively with the customer, who specifies requirements in terms of tests.
 Testers and developers must collaborate with each other rather than serve as adversaries.
 Test often and in small chunks.

THE SATURATION EFFECT


 The saturation effect is an abstraction of a phenomenon observed during the testing of complex
software systems.
 The horizontal axis of the figure refers to the test effort, which increases over time.

 The test effort can be measured as, for ex, the number of test cases executed or total person days spent
during the test and debug phase.
 The vertical axis refers to the true reliability (solid lines) and the confidence in correct behaviour
(dotted lines) of the application under test; these evolve with an increase in test effort due to error
correction.
 The vertical axis can also be labeled as the cumulative count of failures that are observed over time, that
is as the test effort increases.
 The error correction process usually removes the cause of one or more failures.

Confidence and true reliability:


Confidence in fig refers to the confidence of the test manager in the true reliability of the application under test.

 Reliability in the figure refers to the probability of failure free operation of the application under test in
its intended environment.
 The true reliability differs from the estimated reliability in that the latter is an estimate of the
application reliability obtained by using one of the many statistical methods.
o 0-indicates lowest possible confidence
o 1-the highest possible confidence
 Similarly,
o 0-indicates the lowest possible true reliability
o 1-the highest possible true reliability.

Saturation region:
-> Assume an application A is in the system test phase.
-> The test team needs to generate tests, set up the test environment, and run A against the tests.
1. Assume that the tests are generated using a suitable test generation method (TGA1) and that
each test either passes or fails.
2. If we measure the test effort as the combined effort of testing, debugging, and fixing the errors, the
true reliability increases as shown in the figure.
False sense of confidence:
 This false sense of confidence is due to the lack of discovery of new faults, which in turn is due to the
inability of the tests generated using TGA1 to exercise the application code in ways significantly
different from what has already been exercised.
 Thus, in the saturation region, the robust states of the application are being exercised, perhaps
repeatedly, whereas the faults lie in the other states.
Reducing delta:
 Empirical studies reveal that every single test generation method has its limitations, in that the resulting
test set is unlikely to detect all faults in an application.
 The more complex an application, the more unlikely it is that tests generated using any given method
will detect all faults.
 This is one of the prime reasons why testers use, or must use, multiple techniques for test generation.
Impact on test process:
 A knowledge and application of the saturation effect are likely to be of value to any test team while
designing and implementing a test process.

UNIT 2 QUESTION BANK

No. QUESTION YEAR MARKS


1 Define the following: June 10 4
i)Testability ii)Verification
2 What is defect management? List the different activities. Explain any two. June 10 8
3 Explain the following: June 10 8
i) Static testing ii) Model based testing and model checking.
4 Explain how CFG assists the tester in analysis of program to understand the June 11 10
behavior in terms of flow of control with examples?
5 Describe the following test classifiers: June 11 10
i) Source of test generation; ii) Life cycle phase; iii)Test process models.
6 Explain Variety of ways in which Software testing can be integrated into the Dec 11 10
Software development life cycle.
7 Consider the following program: Dec 11 6
1) begin
2) int x, y, power;
3) float z;
4) input(x, y);
5) if(y<0)
6) power=-y;
7) else
8) power=y;
9) z=1;
10) while(power!=0){
11) z=z*x;
12) power=power-1;
13) }
14) if(y<0)
15) z=1/z;
16) output(z);
17) end
Identify the basic blocks, their entry points and exit points. Draw the control flow
graph.
8 Write short notes on the saturation effect Dec 11 4
