Unit-4
Prepared by Y Sowjanya
Unit-4 Syllabus
• Testing Strategies: A strategic approach to
software testing, test strategies for conventional
software, Black-Box and White-Box testing,
Validation testing, System testing
• Product metrics: Software Quality, Metrics for the Analysis Model (function-based metrics), Metrics for the Design Model (object-oriented metrics, class-oriented metrics, component design metrics), Metrics for source code, Metrics for maintenance.
Testing Strategies:
A Strategic Approach for Software testing
• Testing is a process used to identify the correctness, completeness, and quality of developed computer software.
• Testing is a set of activities that can be planned in advance and
conducted systematically.
• It also helps to identify errors, gaps, or missing requirements.
• A strategy for software testing provides a road map that describes the
steps to be conducted as part of testing. (strategy means “approaches
and philosophies”).
• Testing is the process of executing a program with the intention of finding errors.
• The main objective of testing is to prove that the software product
meets a set of pre-established acceptance criteria under a prescribed
set of environment circumstances.
• Testing can be done either manually or by using software tools.
• Testing typically involves around 40% of the total project cost.
A Strategic Approach for Software Testing
• A Template for software testing—a set of steps into which
you can place specific test case design techniques and
testing methods.
• Testing strategies share the following generic characteristics:
– To perform effective testing, you should conduct effective technical reviews; by doing this, many errors will be eliminated before testing commences.
– Testing begins at the component level and works “outward”
toward the integration of the entire computer-based system.
– Different testing techniques are appropriate for different
software engineering approaches and at different points in time.
– Testing is conducted by the developer of the software and (for
large projects) an independent test group.
– Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy.
• In other words, software testing is a verification and validation process.
A Strategic Approach for Software Testing:
a) Verification and Validation (V&V)
• Verification refers to the set of tasks that ensure
that software correctly implements a specific
function. (Ex: Does the product meet its specifications?)
• Validation refers to a different set of tasks that
ensure that the software that has been built is
traceable to customer requirements. (Ex: “Does the
product perform as desired?”)
• Stated another way:
– Verification: “Are we building the product right?”
– Validation: “Are we building the right product?”
• V&V encompasses a wide array of Software
Quality Assurance Activities.
A Strategic Approach for Software Testing:
b) Organizing for Software Testing:
• There are a number of misconceptions that can be erroneously inferred:
– (1) that the developer of software should do no testing at all,
– (2) that the software should be “tossed over the wall” to strangers
who will test it mercilessly,
– (3) that testers get involved with the project only when the testing
steps are about to begin
• The software developer is always responsible for testing the
individual units (components) of the program.
• The role of an independent test group (ITG) is to remove the
inherent problems associated with letting the builder test the
thing that has been built.
• The developer and the ITG work closely throughout a software
project to ensure that thorough tests will be conducted.
Note: The ITG is part of the software development project team in the sense that it becomes involved during analysis and design and stays involved throughout a large project.
A Strategic Approach for Software Testing:
c) Software Testing Strategy—The Big Picture
• A strategy for software testing may also be
viewed in the context of the spiral
– Unit testing begins at the vortex of the spiral and
concentrates on each unit (e.g., component, class, or
WebApp content object) of the software.
– Integration testing, where the focus is on design and
the construction of the software architecture.
– Validation testing, where requirements established as
part of requirements modeling are validated against
the software that has been constructed.
– System testing, where the software and other system
elements are tested as a whole.
A Strategic Approach for Software Testing:
c) Software Testing Strategy—The Big Picture
• Unit testing makes heavy use of testing techniques that
exercise specific paths in a component’s control structure to
ensure complete coverage and maximum error detection
• Integration testing addresses the issues associated with the
dual problems of verification and program construction
(complete software package).
• After the software has been integrated (constructed), a set of
high-order tests is conducted
• Validation testing provides final assurance that software
meets all informational, functional, behavioral, and
performance requirements.
• The last high-order testing step falls outside the boundary of
software engineering and into the broader context of
computer system engineering.
• System testing verifies that all elements mesh properly and
that overall system function/performance is achieved
A Strategic Approach for Software Testing:
d) Criteria for Completion of Testing
Types of Testing
Acceptance Testing
System Testing
Validation Testing
Integration Testing
Unit Testing
Testing Strategies for Conventional Software
1) Unit Testing
– Unit-test considerations.
– Unit-test procedures.
2) Integration Testing
• Different incremental integration strategies are:
– Top-down integration
– Bottom-up integration
– Regression testing
– Smoke testing
1) Unit Testing:
• Unit testing focuses verification effort on the smallest unit of software design (the software component or module), using the component-level design description as a guide.
2) Integration Testing
• It is a systematic technique for constructing the
software architecture while at the same time
conducting tests to uncover errors associated
with interfacing.
• The objective is to take unit-tested components
and build a program structure that has been
dictated by design.
• Different incremental integration strategies are:
– Top-down integration
– Bottom-up integration.
– Regression testing
– Smoke testing
2) Integration Testing (cont..)
• Top-down integration → an incremental approach to construction of the software architecture.
• Modules are integrated by moving downward
through the control hierarchy, beginning with the
main control module (main program).
• Modules subordinate (and ultimately subordinate)
to the main control module are incorporated into
the structure in either a depth-first or breadth-first
manner.
2) Integration Testing –Top down Integration
• The top-down integration process is performed in five steps:
– The main control module is used as a test driver and stubs are substituted
for all components directly subordinate to the main control module.
– Depending on the integration approach selected (i.e., depth or breadth
first), subordinate stubs are replaced one at a time with actual
components.
– Tests are conducted as each component is integrated.
– On completion of each set of tests, another stub is replaced with the
real component.
– Regression testing may be conducted to ensure that new errors have not
been introduced.
• Top-down strategy sounds relatively uncomplicated, but in practice,
logistical problems can arise. The most common of these problems
occurs when processing at low levels in the hierarchy is required to
adequately test upper levels
Note: Stubs (called programs) stand in for the lower-level modules that are not yet integrated; they simulate the behavior and activity of the missing components so that the modules above them can be tested. A sketch follows.
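As a minimal sketch (the module and function names are hypothetical, not taken from the text), a stub standing in for a not-yet-integrated subordinate module might look like this:

```python
# Hypothetical example: top-down integration of a billing program.
# The real tax module is not yet integrated, so a stub stands in for it.

def tax_rules_stub(amount):
    """Stub for the subordinate tax module: returns a canned value
    so the main control module can be exercised."""
    return 0.25 * amount  # fixed, simplified behavior

def compute_invoice(amount, tax_fn=tax_rules_stub):
    """Main control module under test; the subordinate component is
    injected so the stub can later be replaced by the real module."""
    return amount + tax_fn(amount)

# test conducted as integration proceeds
assert compute_invoice(100.0) == 125.0
```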
2) Integration Testing (cont..)
• Bottom-up integration→
– begins construction and testing with atomic modules (i.e.,
components at the lowest levels in the program structure).
– Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is
always available and the need for stubs is eliminated.
• A bottom-up integration strategy may be implemented with
the following steps:
– Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction
– A driver (a control program for testing) is written to coordinate test
case input and output.
– The cluster is tested
– Drivers are removed and clusters are combined moving upward in
the program structure.
• Note: Drivers (calling programs) are used when the main module is not ready. They are usually more complex than stubs and are developed during the bottom-up approach; a sketch follows.
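Conversely, a minimal driver sketch (again with hypothetical names): the low-level component exists but the calling module does not, so a small driver coordinates test-case input and output:

```python
# Hypothetical example: bottom-up integration.
# The low-level component exists; the main program does not yet,
# so a driver coordinates test-case input and output.

def discount(amount, percent):
    """Low-level component, already unit-tested."""
    return amount * (1 - percent / 100.0)

def driver():
    """Test driver: feeds test cases to the cluster and checks results."""
    cases = [((200.0, 10), 180.0), ((50.0, 0), 50.0)]
    for (amount, percent), expected in cases:
        assert abs(discount(amount, percent) - expected) < 1e-9

driver()  # the driver is removed once the real calling module is integrated
```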
2) Integration Testing (cont..)
• Regression testing: the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
• Successful tests result in the discovery of errors, and errors must be corrected; each correction changes the software.
• It helps to ensure that changes (due to testing or for other
reasons) do not introduce unintended behavior or
additional errors.
• It uses automated capture/playback tools -- enable the
software engineer to capture test cases and results for
subsequent playback and comparison.
• It is impractical and inefficient to re-execute every test for every program function once a change has occurred, so a representative subset is selected; a minimal sketch follows.
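As a minimal sketch (the function and test data are hypothetical), regression testing amounts to re-running a stored suite after every change:

```python
# Minimal regression-test sketch (hypothetical function under test):
# the same recorded cases are re-executed after every change to catch
# unintended side effects.

def area(w, h):          # function that keeps evolving across releases
    return w * h

REGRESSION_SUITE = [((2, 3), 6), ((5, 5), 25), ((0, 7), 0)]

def run_regression():
    for args, expected in REGRESSION_SUITE:
        assert area(*args) == expected, f"regression in area{args}"

run_regression()  # re-run whenever the software changes
```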
2) Integration Testing (cont..)
• Smoke testing (Confidence Testing or Build Verification Testing) →
• The main aim of smoke testing is to detect major issues early.
• In smoke testing, we verify that the important features are working and that there are no showstoppers in the build under test.
• It is designed as a pacing mechanism for time-critical projects, allowing
the software team to assess the project on a frequent basis.
• The smoke-testing approach encompasses the following activities:
– Software components that have been translated into code are integrated
into a build. A build includes all data files, libraries, reusable modules, and
engineered components that are required to implement one or more
product functions.
– A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover
“showstopper” errors that have the highest likelihood of throwing the
software project behind schedule.
– The build is integrated with other builds, and the entire product (in its
current form) is smoke tested daily. The integration approach may be top
down or bottom up.
2) Integration Testing (Smoke testing) Cont..
• In this testing method, the development team deploys the build in QA. A subset of test cases is taken, and testers run these test cases on the build. The QA team tests the application against its critical functionalities. This series of test cases is designed to expose errors in the build. If these tests pass, the QA team continues with functional testing.
• Frequent tests give both managers and practitioners a
realistic assessment of integration testing progress.
• Smoke testing provides a number of benefits when it is
applied on complex, time-critical software projects:
– Integration risk is minimized.
– The quality of the end product is improved.
– Error diagnosis and correction are simplified
– Progress is easier to assess
SMOKE TEST
2) Integration Testing (cont..)– Strategic options
• The major disadvantage of the top-down approach is the need for stubs and
the attendant testing difficulties that can be associated with them.
• The major disadvantage of bottom-up integration is that “the program as an
entity does not exist until the last module is added”.
• Selection of an integration strategy depends upon software characteristics
and, sometimes, project schedule.
• A combined approach (sometimes called sandwich testing) that uses top-
down tests for upper levels of the program structure, coupled with bottom-up
tests for subordinate levels may be the best compromise.
• As integration testing is conducted, the tester should identify critical modules.
• A critical module has one or more of the following characteristics:
– (1) addresses several software requirements,
– (2) has a high level of control (resides relatively high in the program structure),
– (3) is complex or error prone, or
– (4) has definite performance requirements.
• Critical modules should be tested as early as is possible. In addition,
regression tests should focus on critical module function.
• An overall plan for integration of the software and a description of specific
tests is documented in a Test Specification
Validation Testing
• Validation testing begins at the culmination of integration testing. It focuses on user-visible actions and user-recognizable output; validation succeeds when the software functions in a manner that can be reasonably expected by the customer.
Validation Testing –Configuration Review
• An important element of the validation
process is a configuration review.
• The intent of the review is to ensure that all
elements of the software configuration have
been properly developed, are cataloged, and
have the necessary detail to bolster the
support activities.
• The configuration review is sometimes called an audit.
Validation Testing - Alpha and Beta Testing
• The alpha test is conducted at the developer’s site by a
representative group of end users. The software is used in a
natural setting with the developer “looking over the shoulder”
of the users and recording errors and usage problems. Alpha
tests are conducted in a controlled environment.
• The beta test is conducted at one or more end-user sites
(developer generally is not present)
– the beta test is a “live” application of the software in an
environment that cannot be controlled by the developer.
– The customer records all problems (real or imagined) that are
encountered during beta testing and reports these to the developer
at regular intervals.
– As a result of problems reported during beta tests, you make
modifications and then prepare for release of the software product
to the entire customer base.
System Testing
• System Testing is a type of software testing that
is performed on a complete integrated system to
evaluate the compliance of the system with the
corresponding requirements.
• Types of system tests:
– Recovery Testing
– Security Testing
– Stress Testing
– Performance Testing
System Testing- Recovery Testing
• Recovery testing is a system test that forces the
software to fail in a variety of ways and verifies
that recovery is properly performed.
• If recovery is automatic (performed by the system
itself), reinitialization, checkpointing mechanisms,
data recovery, and restart are evaluated for
correctness.
• If recovery requires human intervention, the
mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
System Testing- Security Testing
• Any computer-based system that manages sensitive information or
causes actions that can improperly harm (or benefit) individuals is a
target for improper or illegal penetration.
• Security testing attempts to verify that protection mechanisms built
into a system will, in fact, protect it from improper penetration.
• During security testing, the tester plays the role(s) of the individual
who desires to penetrate the system. Anything goes! The tester may
attempt
– To acquire passwords through external clerical means;
– may attack the system with custom software designed to break down any
defenses that have been constructed;
– may overwhelm the system, thereby denying service to others;
– may purposely cause system errors, hoping to penetrate during recovery;
– may browse through insecure data, hoping to find the key to system
entry.
System Testing- Stress Testing
• Stress tests are designed to confront programs with
abnormal situations.
• Stress testing executes a system in a manner that
demands resources in abnormal quantity,
frequency, or volume (like excessive memory
requests, database requests, many users at a time).
• A variation of stress testing is a technique called
sensitivity testing -- Sensitivity testing attempts to
uncover data combinations within valid input
classes that may cause instability or improper
processing.
System Testing- Performance Testing
• Performance testing is designed to test the
run-time performance of software within the
context of an integrated system.
• Performance testing occurs throughout all
steps in the testing process. Even at the unit
level, the performance of an individual
module may be assessed as tests are
conducted.
System Testing- Deployment Testing (configuration testing)
• Deployment testing (sometimes called configuration testing) exercises the software in each environment in which it is to operate; it may also examine the installation procedures and documentation that will be used by customers.
Black-Box and White-Box Testing
White Box testing
• Executes all loops at their boundaries and within their operational bounds
• Exercises all internal data structures to ensure their validity
• White box testing techniques
– 1.Basis path testing
• Flow Graph Notation
• Independent Program Paths
• Deriving Test Cases
• Graph Matrices
– 2.Control structure testing
• 1.Basis path testing→ Proposed by Tom McCabe
– Defines a basis set of execution paths based on the logical complexity of a procedural design
– Guarantees to execute every statement in the program at
least once
White Box testing-1. Basis path testing
• Steps of Basis Path Testing
1. Draw the flow graph from the flow chart of the program
2. Calculate the cyclomatic complexity of the resultant flow graph
3. Prepare test cases that will force execution of each independent path
• Three methods to compute the cyclomatic complexity number V(G) (a worked sketch follows the list):
– 1. V(G) = E − N + 2 (E is the number of edges, N is the number of nodes)
– 2. V(G) = number of regions
– 3. V(G) = number of predicates + 1
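As a worked sketch, V(G) = E − N + 2 computed over a small hand-built flow graph (a while loop containing an if-else; the graph is illustrative, not one from the text):

```python
# Cyclomatic complexity V(G) = E - N + 2 for a small hand-built flow
# graph: a while loop whose body holds an if-else (hypothetical example).
edges = [(1, 2), (2, 3), (2, 7),   # node 2: loop predicate
         (3, 4), (3, 5),           # node 3: if-else predicate
         (4, 6), (5, 6), (6, 2)]   # branches rejoin, then loop back

nodes = {n for edge in edges for n in edge}
v_g = len(edges) - len(nodes) + 2
print(v_g)  # 3, matching "number of predicates + 1" (2 predicates + 1)
```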
Flow Chart and Flow Graph
Flow Graph
Graph Matrix
• A graph matrix is a square matrix whose size (number of rows and columns) equals the number of nodes on the flow graph; each row and column corresponds to an identified node, and matrix entries correspond to the edges between nodes.
White Box testing-2.Control Structure testing
• Control structure testing broadens test coverage and improves the quality of white-box testing. It includes condition testing, data flow testing, and loop testing, described below.
A. Condition Testing
• Exercise the logical conditions contained in a
program module
• Focuses on testing each condition in the program to ensure that it does not contain errors
• Simple condition:
– E1 <relational operator> E2
• Compound condition:
– <simple condition> <Boolean operator> <simple condition>
B. Data flow Testing
• Selects test paths according to the locations of
definitions and use of variables in a program
• Aims to ensure that the definitions of variables and their subsequent uses are tested
• First construct a definition-use graph from the
control flow of a program
• For a statement with S as its statement number:
– DEF(S) = {X | statement S contains a definition of X}
– USE(S) = {X | statement S contains a use of X}
B. Data flow Testing
• DEF (definition): definition of a variable on the left-hand side of an assignment statement
• C-use: computational use of a variable, e.g., in a read or write statement or on the right-hand side of an assignment
• P-use: predicate use of a variable in a condition
• Every DU chain should be tested at least once; a small worked example follows.
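A tiny worked example (hypothetical code): DEF and USE sets per statement, with the du-chains that should each be covered at least once:

```python
# DEF/USE sets, statement by statement, for a tiny hypothetical fragment.
x = 7           # S1: DEF(S1) = {x},  USE(S1) = {}
y = x * 2       # S2: DEF(S2) = {y},  USE(S2) = {x}   (c-use of x)
if x > 0:       # S3: DEF(S3) = {},   USE(S3) = {x}   (p-use of x)
    print(y)    # S4: DEF(S4) = {},   USE(S4) = {y}
# du-chains to cover at least once: (x, S1, S2), (x, S1, S3), (y, S2, S4)
```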
C. Loop Testing
• Focuses on the validity of loop constructs
• Four categories can be defined
– 1.Simple loops
– 2.Nested loops
– 3.Concatenated loops
– 4.Unstructured loops
• Testing of simple loops (concrete test values are sketched below)
– N is the maximum number of allowable passes through the loop
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m passes through the loop, where m < N
5. N−1, N, and N+1 passes through the loop
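Sketched below with N = 4 as a hypothetical maximum, the five simple-loop tests translate into concrete pass counts:

```python
# Simple-loop test values for a loop allowing at most N passes
# (N = 4 is a hypothetical choice for illustration).
N = 4
pass_counts = [0,                  # 1. skip the loop entirely
               1,                  # 2. only one pass
               2,                  # 3. two passes
               3,                  # 4. m passes, m < N
               N - 1, N, N + 1]    # 5. boundary pass counts

def run_loop(passes, limit=N):
    count = 0
    while count < passes and count < limit:  # loop under test
        count += 1
    return count

for p in pass_counts:
    print(p, "->", run_loop(p))  # N + 1 is clamped to N by the guard
```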
C. Loop Testing (cont..)
• Nested Loops
– 1.Start at the innermost loop. Set all other loops to maximum
values
– 2.Conduct simple loop test for the innermost loop while holding
the outer loops at their minimum iteration parameter.
– 3.Work outward conducting tests for the next loop but keeping all
other loops at minimum.
• Concatenated loops
– Follow the approach defined for simple loops, if each of the loops is independent of the others.
– If the loops are not independent, then follow the approach for
the nested loops
• Unstructured Loops
– Redesign the program to avoid unstructured loops
Black-box Testing (behavioral testing)
• It focuses on the functional requirements of the
software.
• This method is applied at every level of software testing.
• It enables you to derive sets of input conditions that will fully exercise all functional requirements for a program.
• Black-box testing attempts to find errors in the following
categories:
– (1) incorrect or missing functions,
– (2) interface errors,
– (3) errors in data structures or external database access,
– (4) behavior or performance errors, and
– (5) initialization and termination errors
Black-box Testing - 1. Graph-Based Testing Methods
• Software testing begins by creating a graph of
important objects and their relationships and then
devising a series of tests that covers the graph.
• To accomplish these steps,
– begin by creating a graph—a collection of nodes that
represent objects, links that represent the relationships
between objects, node weights that describe the properties
of a node, and link weights that describe some
characteristic of a link.
• Derive test cases by traversing the graph and covering
each of the relationships shown. These test cases are
designed in an attempt to find errors in any of the
relationships
Black-box Testing - 2. Equivalence Partitioning
• Equivalence partitioning is a black-box testing method
that divides the input domain of a program into classes
of data from which test cases can be derived.
• Test-case design for equivalence partitioning is based on
an evaluation of equivalence classes for an input
condition.
• If a set of objects can be linked by relationships that are
symmetric, transitive, and reflexive, an equivalence class
is present.
– An equivalence class represents a set of valid or invalid
states for input conditions.
– An input condition is either a specific numeric value, a range
of values, a set of related values, or a Boolean condition.
• Reduces the cost of testing
Black-box Testing - 2. Equivalence Partitioning
• Equivalence classes may be defined according to the
following guidelines:
– 1. If an input condition specifies a range, one valid and two
invalid equivalence classes are defined.
– 2. If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined.
– 3. If an input condition specifies a member of a set, one valid
and one invalid equivalence class are defined.
– 4. If an input condition is Boolean, one valid and one invalid
class are defined.
• Example (derived test cases are sketched below):
– Input must lie in the range 1 to 10
– Then the classes are n < 1, 1 <= n <= 10, and n > 10
– Choose one valid class with a value within the allowed range and two invalid classes, one with a value greater than the maximum and one with a value smaller than the minimum.
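For the 1-to-10 example, a sketch of the derived test cases, one representative value per equivalence class (the accept function is hypothetical):

```python
# Equivalence classes for an input required to lie in 1..10:
#   invalid: n < 1,  valid: 1 <= n <= 10,  invalid: n > 10
def accept(n):
    return 1 <= n <= 10

# one representative test value per equivalence class
assert accept(5) is True     # valid class
assert accept(0) is False    # invalid class: below the range
assert accept(11) is False   # invalid class: above the range
```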
Black-box Testing - 3. Boundary Value Analysis
• A greater number of errors occurs at the
boundaries of the input domain rather than in
the “center.”
• Boundary value analysis leads to a selection of test cases that exercise bounding values, i.e., values at the “edges” of each class; it also derives test cases from the output domain. A sketch follows.
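Continuing the same 1-to-10 range, a boundary-value sketch exercises values at and on either side of each edge, instead of a single interior value:

```python
# Boundary value analysis for the range 1..10: test just below, at,
# and just above each edge of the valid class.
def accept(n):
    return 1 <= n <= 10

for n, expected in [(0, False), (1, True), (2, True),
                    (9, True), (10, True), (11, False)]:
    assert accept(n) is expected
```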
Black-box Testing - 4. Orthogonal Array Testing
• Orthogonal array testing can be applied to problems in
which the input domain is relatively small but too large
to accommodate exhaustive testing.
• The orthogonal array testing method is particularly
useful in finding region faults—an error category
associated with faulty logic within a software
component.
• Example:
– Three inputs A,B,C each having three values will require 27
test cases
– L9 orthogonal array testing will reduce the number of test cases to 9, as shown below
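A standard L9 layout for three three-level factors is sketched below (a conventional array, assumed rather than copied from the original figure); the assertion verifies the defining property that every pair of factors covers all nine level combinations:

```python
from itertools import combinations, product

# A standard L9 orthogonal array for three factors (A, B, C), each with
# three levels (1..3); a conventional layout, assumed for illustration.
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

# Orthogonality check: every pair of factors covers all 9 level pairs.
for i, j in combinations(range(3), 2):
    assert {(row[i], row[j]) for row in L9} == set(product((1, 2, 3), repeat=2))

print(len(L9), "test cases instead of 3**3 = 27")
```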
Product Metrics: Software Quality
• Software quality can be defined as an effective software
process applied in a manner that creates a useful product
that provides measurable value for those who produce it
and those who use it.
• The definition serves to emphasize three important points:
– Software requirements are the foundation from which
quality is measured. Lack of conformance to requirements is
lack of quality.
– Specified standards define a set of development criteria that
guide the manner in which software is engineered. If the
criteria are not followed, lack of quality will almost surely
result.
– There is a set of implicit requirements that often goes
unmentioned (e.g., the desire for ease of use). If software
conforms to its explicit requirements but fails to meet implicit
requirements, software quality is suspect.
Software Quality: 1. McCall’s Quality Factors
• Factors that affect software quality can be categorized in two broad groups:
– 1. Factors that can be directly measured (e.g. defects uncovered during testing)
– 2. Factors that can be measured only indirectly (e.g. usability or maintainability)
• The McCall software quality model was introduced in 1977. The model incorporates many attributes, termed software factors, which influence software. It distinguishes between two levels of quality attributes: quality factors and quality criteria.
• The product quality factor categories are: Product Operation, Product Revision, Product Transition
• 1) Product Operation: includes five software quality factors, which are related to requirements that directly affect the operation of the software
– Correctness – The extent to which the software meets its requirements specification.
– Efficiency – The amount of hardware resources and code the software needs to perform a function.
– Integrity – The extent to which the software can control access to the data or software by unauthorized persons.
– Reliability – The extent to which a software performs its intended functions
without failure.
– Usability – The extent of effort required to learn, operate and understand the
functions of the software.
Software Quality: McCall’s Quality Factors (Cont..)
McCall’s Quality Factors: (cont..)
• 2. Product Revision: includes three software quality factors, which are required for testing and maintenance of the software
– Maintainability – The effort required to detect and correct an error during the maintenance phase.
– Flexibility – The effort needed to improve an operational software program.
– Testability – The effort required to verify a software to ensure that it meets the
specified requirements.
• 3. Product Transition: includes three software quality factors, which allow the software to adapt to a change of environment, such as a new platform or technology.
– Portability – The effort required to transfer a program from one platform to
another.
– Re-usability – The extent to which the program’s code can be reused in other
applications.
– Interoperability – The effort required to integrate two systems with one another.
2. The Transition to a Quantitative View:
– Subjectivity and specialization also apply to determining software quality.
– To help solve this problem, a more precise definition of software quality is needed, as well as a way to derive quantitative measurements of software quality for objective analysis.
Software Quality: 3. ISO 9126 Quality Factors- an attempt to identify quality attributes for computer software
• The standard identifies six key quality attributes: functionality, reliability, usability, efficiency, maintainability, and portability.
Metrics for Analysis Model: Function-Based Metrics
• The function point (FP) metric can be used effectively as a means for
measuring the functionality delivered by a system.
• The FP metric can then be used to (measure the standard worth of the software)
– (1) estimate the cost or effort required to design, code, and test the software;
– (2) predict the number of errors that will be encountered during testing; and
– (3) forecast the number of components and/or the number of projected source
lines in the implemented system.
• Information domain values are defined in the following manner:
– Number of external inputs (EIs).
– Number of external outputs (EOs).
– Number of external inquiries (EQs).
– Number of internal logical files (ILFs).
– Number of external interface files (EIFs).
• Each parameter is classified as simple, average or complex and weights are assigned.
• To compute function points (FP):
– FP = count total × [0.65 + 0.01 × Σ(Fi)], where count total is the sum of all weighted FP entries obtained from the table.
– The Fi (i = 1 to 14) are value adjustment factors (VAF) based on responses to fourteen questions, each answered on a scale from 0 (not important or applicable) to 5 (absolutely essential). A worked sketch follows.
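A worked sketch with hypothetical information-domain counts and the standard "average" complexity weights (the weights are assumed from common FP practice, not given in the text):

```python
# Function-point sketch with hypothetical counts and the standard
# "average" complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7).
counts  = {"EI": 3, "EO": 2, "EQ": 2, "ILF": 1, "EIF": 4}
weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

count_total = sum(counts[k] * weights[k] for k in counts)  # 12+10+8+10+28 = 68
F = [3] * 14                        # hypothetical VAF answers, each 0..5
fp = count_total * (0.65 + 0.01 * sum(F))
print(round(fp, 2))                 # 68 * (0.65 + 0.42) = 72.76
```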
Function-Based Metrics Example
Fig: Computing Function Points
Metrics for Design Model – a) Object oriented Metrics
Whitmire describes nine distinct and measurable characteristics of an OO
design:
• 1) Size: It is defined in terms of four views: population, volume, length,
and functionality.
– Population is a static count of OO entities (classes or operations).
– Volume measures are identical to population but are collected
dynamically—at a given instant of time.
– Length is a chain of interconnected design elements (e.g., the depth of an
inheritance tree).
– Functionality is an indirect indication of the value delivered to the customer by an OO application
• 2) Complexity: examines how the classes of an OO design are interrelated to one another.
• 3) Coupling: The physical connections between elements of the OO
design - the number of collaborations between classes
• 4) Sufficiency: “the degree to which an abstraction possesses the
features required of it, or the degree to which a design component
possesses features in its abstraction, from the point of view of the
current application.”
Metrics for Design Model – a) Object oriented Metrics (cont..)
The remaining five characteristics are, in brief:
• 5) Completeness: the degree to which an abstraction or design component possesses all the features required of it, considered from multiple (all stakeholder) points of view; an indirect indication of potential reuse.
• 6) Cohesion: the degree to which all operations (and attributes) of a class work together to achieve a single, well-defined purpose.
• 7) Primitiveness: the degree to which an operation is atomic, i.e., cannot be constructed out of a sequence of other operations within the class.
• 8) Similarity: the degree to which two or more classes are similar in structure, function, behavior, or purpose.
• 9) Volatility: the likelihood that a change will occur in the design or implementation of a class.
Metrics for Design Model – b) Class-oriented Metrics
1) The CK Metrics Suite: Chidamber and Kemerer proposed six class-based design metrics for OO systems.
• a) Weighted methods per class (WMC)-Assume that n methods of
complexity c1, c2, . . ., cn are defined for a class C. The specific complexity
metric that is chosen (ex. cyclomatic complexity) should be normalized
so that nominal complexity for a method takes on a value of 1.0.
WMC = ∑ci for i = 1 to n. The number of methods and their complexity
are reasonable indicators of the amount of effort required to implement
and test a class.
• b) Depth of the inheritance tree (DIT): This metric is “the maximum
length from the node to the root of the tree”.
– the value of DIT for the class hierarchy shown is 4.
• c) Number of children (NOC): The subclasses that are immediately subordinate to a class in the class hierarchy are termed its children.
– Class C2 has three children—subclasses C21, C22, and C23. As the number of children grows, reuse increases, but the amount of testing required also increases (a DIT/NOC sketch follows).
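As a sketch, DIT and NOC can be read directly off a small hypothetical class hierarchy (named to echo the C2/C21/C22/C23 example) using Python's own introspection:

```python
# DIT and NOC computed over a small hypothetical class hierarchy.
class C:        pass
class C1(C):    pass
class C2(C):    pass
class C21(C2):  pass
class C22(C2):  pass
class C23(C2):  pass

def dit(cls):
    """Depth of inheritance tree: superclass links from cls up to the
    root user-defined class (Python's built-in object is not counted)."""
    return len(cls.__mro__) - 2

def noc(cls):
    """Number of children: immediate subclasses of cls."""
    return len(cls.__subclasses__())

print(dit(C21))  # 2  (C21 -> C2 -> C)
print(noc(C2))   # 3  (C21, C22, C23)
```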
Metrics for Design Model – b) Class-oriented Metrics(Cont..)
2) The MOOD Metrics Suite
i) Method inheritance factor (MIF): the degree to which the class architecture of an OO system makes use of inheritance:
– MIF = ΣMi(Ci) / ΣMa(Ci), where the summations occur over i = 1 to TC
– Ci is a class within the architecture, and Ma(Ci) = Md(Ci) + Mi(Ci), where Md(Ci) is the number of methods declared in class Ci and Mi(Ci) is the number of methods inherited (and not overridden) in Ci.
The value of MIF provides an indication of the impact of inheritance on the OO software.
The MOOD Metrics Suite (cont..)
ii) Coupling factor (CF)
• Coupling is an indication of the connections between
elements of the OO design. The MOOD metrics suite
defines coupling in the following way:
– CF = [Σi Σj is_client(Ci, Cj)] / (TC² − TC)
• where the summations occur over i = 1 to TC and j = 1 to TC. The function
» is_client = 1, if and only if a relationship exists between the client class Cc and the server class Cs, and Cc ≠ Cs
» is_client = 0, otherwise
– It is reasonable to conclude that, as the value for CF increases, the complexity of the OO software will also increase, and understandability, maintainability, and the potential for reuse may suffer as a result. A computation sketch follows.
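A computation sketch for CF over a hypothetical four-class system (the client/server pairs are invented for illustration):

```python
# Coupling factor CF for a hypothetical system of TC = 4 classes.
# A pair (i, j) is present iff class i uses (is a client of) class j,
# with i != j; here the actual client/server pairs are simply enumerated.
TC = 4
client_pairs = {(0, 1), (0, 2), (2, 3)}   # 3 pairs out of TC**2 - TC = 12 possible

CF = len(client_pairs) / (TC**2 - TC)
print(CF)  # 0.25 -- higher values mean more inter-class coupling
```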
Metrics for Design Model – c) Component-Level Design Metrics
• Component-level design metrics for conventional software components focus on internal characteristics of the component: measures of module cohesion, coupling, and complexity.
c) Component-Level Design Metrics (Cont..) 2) Coupling Metrics
• Coupling metrics provide an indication of the “connectedness” of a module to other modules, to global data, and to the outside environment.
Metrics For Source Code (cont..)
• Halstead's “software science” measures are computed from four primitive counts: n1, the number of distinct operators; n2, the number of distinct operands; N1, the total number of operator occurrences; and N2, the total number of operand occurrences. From these, program length N = N1 + N2 and program volume V = N log2(n1 + n2) are derived.
• The volume ratio (L) can be calculated by using the following equation:
– L = (volume of the most compact form of a program) / (volume of the actual program)
– where the value of L must be less than 1. The volume ratio can also be estimated by:
• L = (2/n1) × (n2/N2)
• Program difficulty level (D) and effort (E) can be calculated by using the following equations (a combined sketch follows):
– D = (n1/2) × (N2/n2)
– E = D × V
• Note that program volume depends on the programming language used and represents the volume of information (in bits) required to specify a program
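Putting the Halstead relations together in one sketch (the token counts are hypothetical):

```python
import math

# Hypothetical token counts for a small program:
n1, n2 = 10, 8     # distinct operators / distinct operands
N1, N2 = 40, 30    # total operator / operand occurrences

N = N1 + N2                      # program length
V = N * math.log2(n1 + n2)       # program volume (bits)
L = (2 / n1) * (n2 / N2)         # estimated volume ratio, must be < 1
D = (n1 / 2) * (N2 / n2)         # program difficulty
E = D * V                        # effort
print(f"N={N}  V={V:.1f}  L={L:.3f}  D={D:.2f}  E={E:.0f}")
```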