Software Engineering_unit-4
UNIT-IV
Software Testing
• Two major categories of software testing
Black box testing
White box testing
Equivalence partitioning
• Divides all possible inputs into classes such that there is a finite number of equivalence classes.
• Equivalence class
-- A set of objects that can be linked by a relationship
• Reduces the cost of testing
• Example
• Suppose valid input consists of integers from 1 to 10
• Then the classes are n < 1, 1 <= n <= 10, and n > 10
• Choose one value from the valid class (within the allowed range) and one from each of the two
invalid classes: a value greater than the maximum and a value smaller than the minimum (see the
sketch below)
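A minimal sketch of this in Python, assuming a hypothetical validator accepts() that implements
the 1-to-10 rule from the example above:

# Hypothetical validator for the example above: accepts integers 1..10.
def accepts(n: int) -> bool:
    return 1 <= n <= 10

# One representative value per equivalence class suffices:
assert accepts(5)          # valid class: 1 <= n <= 10
assert not accepts(0)      # invalid class: n < 1
assert not accepts(11)     # invalid class: n > 10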
Boundary Value analysis
• Select inputs from the equivalence classes such that they lie at the edges of the classes
• Test data lies on the edge or boundary of a class of input data, or generates data that lies at the
boundary of a class of output data
Example
• If valid input satisfies 0.0 <= x <= 1.0
• Then use test cases 0.0 and 1.0 for valid input, and -0.1 and 1.1 for invalid input (see the
sketch below)
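A minimal sketch of these boundary cases in Python, assuming a hypothetical validator in_range()
for the rule above:

# Hypothetical validator for the example above: accepts 0.0 <= x <= 1.0.
def in_range(x: float) -> bool:
    return 0.0 <= x <= 1.0

# Boundary value analysis picks values at and just beyond the edges.
for x, expected in [(0.0, True), (1.0, True),     # on the boundary (valid)
                    (-0.1, False), (1.1, False)]: # just outside (invalid)
    assert in_range(x) is expected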
Orthogonal array Testing
• Applies to problems in which the input domain is relatively small but still too large for
exhaustive testing
Example
• Three inputs A, B, C, each having three values, would require 27 test cases for exhaustive testing
• An L9 orthogonal array reduces the number of test cases to 9, as shown below
A  B  C
1  1  1
1  2  2
1  3  3
2  1  2
2  2  3
2  3  1
3  1  3
3  2  1
3  3  2
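The point of the L9 array is that every pair of values occurs together for every pair of factors,
so all two-way interactions are exercised. A short Python check of this pairwise-coverage property
(the array literal simply transcribes the table above):

from itertools import combinations, product

# The L9 orthogonal array from the table above (rows are test cases,
# columns are the factors A, B, C).
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# For every pair of columns, all 9 value combinations must occur.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L9}
    assert pairs == set(product([1, 2, 3], repeat=2))

print("All 9 value pairs are covered for each pair of factors.")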
Loop Testing
• Focuses on the validity of loop constructs
• Four categories can be defined
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
Testing of simple loops
-- If N is the maximum number of allowable passes through the loop, test by skipping the loop
entirely, making one pass, two passes, m passes (where m < N), and N-1, N, and N+1 passes, as in
the sketch below
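A minimal sketch of this test set in Python, assuming a hypothetical loop under test run_loop()
and N = 10 as the maximum number of passes:

# Hypothetical loop under test: sums the first `passes` integers.
N = 10  # assumed maximum number of allowable passes

def run_loop(passes: int) -> int:
    total = 0
    for i in range(passes):
        total += i
    return total

# Standard simple-loop test set: skip, one pass, two passes,
# a typical count m < N, and the values around the maximum.
m = 5
for passes in [0, 1, 2, m, N - 1, N, N + 1]:
    assert run_loop(passes) == sum(range(passes))  # oracle for this toy loop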
• Cause Elimination (a debugging approach)
-- Based on the concept of binary partitioning
-- A list of all possible causes is developed, and tests are conducted to eliminate each
Software Quality
• Conformance to explicitly stated functional and performance requirements, explicitly documented
development standards, and implicit characteristics that are expected of all professionally
developed software.
• Factors that affect software quality can be categorized in two broad groups:
1. Factors that can be directly measured (e.g. defects uncovered during testing)
2. Factors that can be measured only indirectly (e.g. usability or maintainability)
• McCall's quality factors
1. Product operation
a. Correctness
b. Reliability
c. Efficiency
d. Integrity
e. Usability
2. Product Revision
a. Maintainability
b. Flexibility
c. Testability
3. Product Transition
a. Portability
b. Reusability
c. Interoperability
ISO 9126 Quality Factors
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Product metrics
• Product metrics for computer software help us to assess quality.
• Measure
-- Provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of
a product or process
• Metric (IEEE 93 definition)
-- A quantitative measure of the degree to which a system, component, or process possesses a given attribute
• Indicator
-- A metric or a combination of metrics that provides insight into the software process, a software
project, or the product itself
Product Metrics for Analysis, Design, Test, and Maintenance
• Product metrics for the Analysis model
Function Point Metric
First proposed by Albrecht
Measures the functionality delivered by the system
FP is computed from the following parameters:
1) Number of external inputs (EIs)
2) Number of external outputs (EOs)
3) Number of external inquiries (EQs)
4) Number of internal logical files (ILFs)
5) Number of external interface files (EIFs)
Each parameter is classified as simple, average, or complex, and weights are assigned as follows:
Information Domain Count           Simple   Average   Complex
External inputs (EIs)                 3        4         6
External outputs (EOs)                4        5         7
External inquiries (EQs)              3        4         6
Internal logical files (ILFs)         7       10        15
External interface files (EIFs)       5        7        10
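The weighted counts are summed into a count total, which is then scaled by fourteen
value-adjustment factors Fi (each rated 0 to 5) via the standard relationship
FP = count total x (0.65 + 0.01 x sum of Fi). A minimal Python sketch under that relationship;
the counts and ratings below are made up for illustration:

# Weights from the table above: (simple, average, complex).
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def function_points(counts, value_adjustment_factors):
    """counts maps parameter -> (n_simple, n_average, n_complex);
    value_adjustment_factors is the list of 14 Fi ratings (0..5)."""
    count_total = sum(
        n * w
        for name, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[name])
    )
    # Standard adjustment: FP = count_total * (0.65 + 0.01 * sum(Fi)).
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Illustrative (made-up) counts: e.g. 2 simple and 1 average external input.
counts = {"EI": (2, 1, 0), "EO": (1, 1, 0), "EQ": (1, 0, 0),
          "ILF": (1, 0, 0), "EIF": (0, 1, 0)}
fp = function_points(counts, [3] * 14)  # all 14 Fi rated "average" = 3
print(f"Adjusted FP = {fp:.1f}")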
SOFTWARE MEASUREMENT
Software measurement can be categorized in two ways.
(1) Direct measures of the software engineering process include cost and effort applied. Direct
measures of the product include lines of code (LOC) produced, execution speed, memory size, and
defects reported over some set period of time.
(2) Indirect measures of the product include functionality, quality, complexity, efficiency, reliability,
maintainability, and many other "-abilities".
Size-Oriented Metrics
Size-oriented software metrics are derived by normalizing quality and/or productivity measures by
considering the size of the software that has been produced.
To develop metrics that can be assimilated with similar metrics from other projects, we choose lines of
code as our normalization value. From rudimentary project data (effort, cost, KLOC, pages of
documentation, errors, and defects), a set of simple size-oriented metrics can be developed for each project:
Errors per KLOC (thousand lines of code).
Defects per KLOC.
$ per LOC.
Pages of documentation per KLOC.
In addition, other interesting metrics can be computed:
Errors per person-month.
LOC per person-month.
$ per page of documentation.
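A minimal sketch computing these size-oriented metrics in Python, using made-up project data for
illustration:

# Illustrative (made-up) project data.
loc          = 12_100        # lines of code delivered
effort_pm    = 24            # effort in person-months
cost_dollars = 168_000
doc_pages    = 365
errors       = 134           # found before release
defects      = 29            # found after release

kloc = loc / 1000
print(f"Errors per KLOC:         {errors / kloc:.2f}")
print(f"Defects per KLOC:        {defects / kloc:.2f}")
print(f"$ per LOC:               {cost_dollars / loc:.2f}")
print(f"Pages of doc per KLOC:   {doc_pages / kloc:.2f}")
print(f"Errors per person-month: {errors / effort_pm:.2f}")
print(f"LOC per person-month:    {loc / effort_pm:.0f}")
print(f"$ per page of doc:       {cost_dollars / doc_pages:.2f}")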
Function-Oriented Metrics
Function-oriented software metrics use a measure of the functionality delivered by the application
as a normalization value. Since "functionality" cannot be measured directly, it must be derived indirectly
using other direct measures. Function-oriented metrics were first proposed by Albrecht, who suggested a
measure called the function point. Function points are derived using an empirical relationship based on
countable (direct) measures of software's information domain and assessments of software complexity.
Proponents claim that FP is programming-language independent, making it ideal for applications
using conventional and nonprocedural languages, and that it is based on data that are more likely
to be known early in the evolution of a project, making FP more attractive as an estimation
approach.
Opponents claim that the method requires some "sleight of hand" in that the computation is based
on subjective rather than objective data, that counts of the information domain can be difficult
to collect after the fact, and that FP has no direct physical meaning: it's just a number.
Typical Function-Oriented Metrics:
errors per FP
defects per FP
$ per FP
pages of documentation per FP
FP per person-month
Measuring Quality
The measures of software quality are correctness, maintainability, integrity, and usability. These measures
will provide useful indicators for the project team.
Correctness. Correctness is the degree to which the software performs its required function. The
most common measure for correctness is defects per KLOC, where a defect is defined as a verified
lack of conformance to requirements.
Maintainability. Maintainability is the ease with which a program can be corrected if an error is
encountered, adapted if its environment changes, or enhanced if the customer desires a change in
requirements. A simple time-oriented metric is mean-time-to-change (MTTC), the time it takes to
analyze the change request, design an appropriate modification, implement the change, test it, and
distribute the change to all users.
Integrity. Integrity measures a system's ability to withstand attacks (both accidental and
intentional) on its security. Attacks can be made on all three components of software: programs,
data, and documents.
To measure integrity, two additional attributes must be defined: threat and security. Threat is the
probability (which can be estimated or derived from empirical evidence) that an attack of a
specific type will occur within a given time. Security is the probability (which can be estimated or
derived from empirical evidence) that the attack of a specific type will be repelled. The integrity of
a system can then be defined as
integrity = ∑ [1 – (threat × (1 – security))]
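As a worked instance of the formula, with the commonly quoted textbook values threat = 0.25 and
security = 0.95 for a single attack type, integrity = 1 – (0.25 × 0.05) = 0.9875. A minimal Python
sketch, summing over a made-up attack profile:

# Integrity summed over attack types, following the formula above.
# The (threat, security) pairs are illustrative; 0.25/0.95 is the
# commonly quoted textbook example.
attack_profile = [(0.25, 0.95)]

integrity = sum(1 - threat * (1 - security)
                for threat, security in attack_profile)
print(f"integrity = {integrity:.4f}")  # 0.9875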
Usability: Usability is an attempt to quantify user-friendliness and can be measured in terms of
four characteristics: (1) the physical and/or intellectual skill required to learn the system,
(2) the time required to become moderately efficient in the use of the system, (3) the net increase
in productivity measured when the system is used by someone who is moderately efficient, and
(4) a subjective assessment of users' attitudes toward the system.