
ACM '81, November 9-11, 1981 Tutorial Abstract

Metrics In Software Quality Assurance

J. E. Gaffney, Jr.
IBM Corporation
Federal Systems Division
Manassas, Va.

Abstract

The nature of "software quality" and some software metrics are defined, and their relationship to traditional software indicators such as "maintainability" and "reliability" is suggested. Recent work in the field is summarized and an outlook for software metrics in quality assurance is provided. The material was originally presented as a tutorial at the "ACM SIGMETRICS Workshop/Symposium on Measurement and Evaluation of Software Quality" on March 25, 1981.

Key words and phrases: software quality assurance, software metrics, software research, software quality factors

Software Quality and the Quality Assurance Organization

What is "software quality"? It is the focus of the software quality assurance organization. "Quality" is an aspect of (software) "product integrity". "Product integrity" includes other things, such as adherence to schedule and cost, not covered in the present article, which are, nonetheless, of great importance (1). Most simply, (software) "quality" may be defined as "conformance to requirements". Such "conformance" means, most generally, that the product meets the needs of the user and satisfies stated performance criteria.

The "user needs" that the software product is to satisfy should be provided in written form to the developer before he begins his design. They should be expressed in terms of the functions that the software product is to provide. Such a written expression of "user needs" can be used as the basis for discussion between user and developer in clarifying whether a design appears appropriate as a step in implementing the functions desired by the user.

Software performance criteria can include a wide variety of items, such as fewer than some number of software defects being noted during a sell-off demonstration, fewer than some stated number of defects being found during design and/or code inspections, etc.

Before focusing on the application of metrics to software quality assurance, let us briefly consider the principal functions of a software quality assurance organization. First, it defines the standards for the software products developed in its organizational unit. These standards may include ones established by the Government, by a higher organizational unit such as a corporate or divisional headquarters, or by the particular software quality assurance organization itself.

The second major function of the software quality assurance organization is to specify and implement tools or aids for assessing software product quality. The tools may be as simple as checkoff lists or as sophisticated as ones that automatically count the occurrence of such software measurables as the number of unique instruction types in a program, the number of conditional jumps in it, or other such elements that may have a bearing on software quality.

The third major function of the software quality assurance organization is to apply these tools to assess the degree to which the software products developed by its organizational unit adhere to the standards it has established as appropriate to each product. The assessment may be qualitative, such as certifying the adherence of the software development group to certain development approaches, such as top-down programming and other modern programming practices. The assessment may be quantitative, such as recording the number of major defects (such as a non-terminating loop) found in inspections of the software design and/or the actual code.


Software Metrics and Quality Assurance

A software metric may be defined as "an objective, mathematical measure of software that is sensitive to differences in software characteristics. It provides a quantitative measure of an attribute which the body of software exhibits." It should be "objective" in that it is not a measure of my feelings or yours but is, insofar as possible, a reproducible measure of or about the software product of interest. The number of major defects found during a sell-off test, and a comparison of that figure with a pre-established threshold of "goodness" or "badness", is objective. Saying that the software "has a lot of defects" is not. The range of values of a software metric should reflect differences with respect to one or more dimensions of quality among the software products to which it is applied.

Software development is increasingly being accomplished more in line with established engineering and scientific principles and less as an art form. Quantification of the software development process and the resultant software product is mandatory in order for software engineering to truly be a scientific discipline. The use of software metrics will contribute to this desired objective of an increased level of quantification. Without such quantification, the integrity of the software product, which was considered above, cannot be what it should or otherwise has the potential to be.

What Lord Kelvin said in 1891 applies here:

"When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science."

Or, the message to us in the software community is:

If you can't measure it, you can't manage it.

Software metrics are of interest for several reasons. Numerical measures of the software product can be transformed to indicators, such as "reliability" and "maintainability", of interest to both users and software development management. Some of these measures are defined in the section, "Some Software Metrics". A number of indicators, as defined by McCall (2), are presented in the section, "Quality Factors and Metrics". Also, software metrics are of interest because they might suggest modifications to the software development process. For example, the number of conditional jumps used should be minimized because the amount of development testing required is proportional to that figure.

The quantitative evaluation of software quality can address two principal problem types encountered in software products:

A. Those problems having to do with the static aspects of software, which are addressable (at least potentially) by software-based or "program linguistics"-oriented metrics. Such metrics are the subject of the remainder of this presentation.

B. Those problems having to do with the dynamic aspects of software, such as that a program is difficult to operate and/or to integrate with other programs. Such problems are not considered further in this presentation.

Quality Factors and Metrics

Software quality focuses on the degree of correctness of an implementation of a function conceived to "meet the user's needs" (see above). It is also concerned with the "goodness" of such an implementation. Ideally, this measure of "goodness" should be quantifiable, indicating how well the software is designed and coded according to measurable, quantifiable criteria. This is where "metrics" fit into software quality assurance. They should relate to software quality "attributes" or "factors" of interest acknowledged by the community of software developers and users.

J. A. McCall (2) has listed some "software quality factors", some of which can be related to "software metrics", as is done for two of them, "maintainability" and "testability", in the section, "Some Software Metrics". McCall's "software quality factors" (using his definitions) are:

1. Correctness: Extent to which a program satisfies its specifications and fulfills the user's mission objectives.

2. Reliability: Extent to which a program can be expected to perform its intended function with required precision.

3. Efficiency: The amount of computing resources and code required by a program to perform a function.

4. Integrity: Extent to which access to software or data by unauthorized persons can be controlled.

5. Usability: Effort required to learn, operate, prepare input for, and interpret output of a program.

6. Maintainability: Effort required to locate and fix an error in an operational program.

7. Testability: Effort required to test a program to ensure it performs its intended function.

8. Flexibility: Effort required to modify an operational program.

9. Portability: Effort required to transfer a program from one hardware configuration and/or software system environment to another.


10. Reusability: Extent to which a program can be used in other applications; related to the packaging and scope of the functions that programs perform.

11. Interoperability: Effort required to couple one system with another.

G. J. Myers (3) has defined some other items which certainly can be considered "software quality factors" in the same sense that the eleven cited above are. They are:

Coupling: The degree of interconnectedness of modules.

Strength: The degree of cohesiveness of a module.

These measures relate to the degree of propagation of changes and, hence, to "maintainability". Cruickshank and Gaffney (4) have developed quantitative measures of these items.

Some Software Metrics

In this section, several important metrics are defined and an example of the relationship between them and some of the qualitative "software quality factors" (defined above) is provided. Some of the basic work done by the late Professor Maurice Halstead (5) of Purdue University in software metrics is briefly summarized.

Among the metrics developed by Halstead are these four:

1. Potential Volume or Intelligence: The minimum amount of "information" in an algorithm; a function of the conceptually unique number of inputs and outputs to a software procedure or module. Its unit is "bits" or "binary digits".

2. Volume: The actual amount of information in a program; a function of the number of unique operators (instructions) and the number of unique operands (data labels) used. Its unit is "bits".

3. Difficulty (or what might be called the "Expansion Ratio"): Volume/Intelligence; the "size" of the program relative to its minimum "size"; a measure of redundancy.

4. Effort: Volume times difficulty; relates to the difficulty a person finds in understanding a program; relates to the degree of difficulty one may find in modifying a program; also relates to the error proneness of a program.

Among the metrics developed by others are these three:

5. Division: Proportion or number of conditional jumps; relates to testing effort; inversely proportional to overall productivity; a measure of control complexity. Various workers, including McCabe (6), Chen (7), Paige (8), and Gaffney (9), have indicated its significance. (A small counting sketch follows this list.)

6. Information Flow Complexity: Length of a procedure (number of instructions) times the square of the number of possible combinations of an input source to an output destination. Kafura et al. (10) have developed this metric.

A metric of particular significance to questions relating to software maintenance is:

7. Proportion of Modules Changed in an Update: A measure of the complexity of the software; relates to the difficulty of modifying the software (maintenance). This metric was developed by Belady (11). It appears to be related to "coupling" and "strength" (see above), perhaps quantifying them to some extent, as they deal with questions about the propagation of changes.
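To make the 'Division' metric concrete, here is a minimal sketch (not part of the original tutorial) that counts conditional jumps in a toy assembly-like listing and reports both the count and the proportion. The branch mnemonics and the program fragment are assumptions made purely for illustration.

    # Illustrative sketch of the 'Division' metric: count conditional
    # jumps in a toy listing. Mnemonics and fragment are invented.
    CONDITIONAL_JUMPS = {"JZ", "JNZ", "JG", "JL", "BEQ", "BNE"}  # assumed set

    def division_metric(listing):
        """Return (count, proportion) of conditional jumps in a listing."""
        ops = [line.split()[0].upper() for line in listing if line.strip()]
        jumps = sum(1 for op in ops if op in CONDITIONAL_JUMPS)
        return jumps, jumps / len(ops)

    program = [
        "LOAD A",
        "SUB  B",
        "JZ   EQUAL",   # conditional jump
        "JG   BIGGER",  # conditional jump
        "STORE C",
    ]

    count, proportion = division_metric(program)
    print(count, proportion)  # 2 conditional jumps in 5 instructions -> 0.4

On this fragment, two of the five instructions are conditional jumps, giving a 'Division' proportion of 0.4; a real measurement tool would, of course, need the branch vocabulary of the actual instruction set being measured.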
Among the metrics developed by Halstead are these
four: Various metrics of the group defined above can be
related to "software quality factors," such as
1. Potential Volume ~he minimum amount of listed earlier. One can think of an hierarchy in
or Intelligence "information" an algorithm; increasing order of detail and quantifiability. An
function of conceptually example of such an hierarchy is where the complex
unique number of inputs concept of "maintainability" is repetitively de-
and outputs to a software composed until it is described by various m e t r ~
procedure or module. Its in this case, 'Effort' and 'Division'. Gordon" )
unit is "bits" or "binary showed a relationship between 'effort' and the
digits". 'understandability' of a program, which is a major
attribute of 'maintainability'.
In the remainder of this section, mathematical definitions of the four Halstead metrics qualitatively defined earlier in this section are provided.

Consider a program to be composed of a sequence of 'operators' and 'operands'. For example, in the instruction "ADD A", 'ADD' would be an operator and 'A' an operand. Halstead (5) made the following definitions:

n1 = No. of operator types used.
n2 = No. of operand types used.
n2* = Minimum no. of operand types (= the conceptually unique no. of inputs and outputs).
n = n1 + n2 = Total 'vocabulary' size.
N1 = Total no. of operators used.
N2 = Total no. of operands used.
N = N1 + N2 = Total program length.
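As an illustration of these definitions, the following sketch (again an illustrative assumption, not anything from the paper) tallies operator and operand types in a toy three-instruction fragment in the "ADD A" style; actual Halstead counting conventions vary with the language being measured.

    from collections import Counter

    # Toy fragment in the "ADD A" style; the instruction set and the
    # counting conventions are assumptions made for illustration.
    fragment = [("LOAD", "A"), ("ADD", "B"), ("STORE", "C")]

    operators = Counter(op for op, _ in fragment)
    operands = Counter(arg for _, arg in fragment)

    n1, n2 = len(operators), len(operands)                     # types
    N1, N2 = sum(operators.values()), sum(operands.values())   # total uses
    n, N = n1 + n2, N1 + N2                                    # vocabulary, length

    print(n1, n2, N1, N2, n, N)  # 3 3 3 3 6 6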
Then, he mathematically defined the first of the metrics given above as:


1. Potential volume, V* = (2 + n2*) log2 (2 + n2*); the minimum program size, a measure of the intrinsic 'size' of the algorithm to be programmed.

2. Volume, V = N log2 n; a measure of program size.

3. Difficulty (D), or expansion ratio, = V/V*.

4. Effort, E = V x D; the number of decisions required to develop the program.
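A minimal sketch of these four formulas, applied to the counts from the toy fragment above and assuming n2* = 2 (one conceptually unique input and one output), is:

    import math

    # Halstead metrics from the counts above; n2_star = 2 is an
    # assumed value (one conceptual input, one output) for illustration.
    n1, n2, N1, N2 = 3, 3, 3, 3
    n, N = n1 + n2, N1 + N2
    n2_star = 2

    V_star = (2 + n2_star) * math.log2(2 + n2_star)  # potential volume: 4*log2(4) = 8.0 bits
    V = N * math.log2(n)                             # volume: 6*log2(6), about 15.5 bits
    D = V / V_star                                   # difficulty (expansion ratio), about 1.94
    E = V * D                                        # effort, about 30.1

    print(V_star, V, D, E)

The particular numbers are illustrative only; the point is that all four metrics follow mechanically once the operator and operand counts are in hand.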
4. Applying metrics at the software design stage
Some Recent Metrics Work

A great deal of work is being done with software metrics in various universities and industrial organizations. Some of this work, as applied to software quality assurance, is summarized below:

1. Fitsos and Smith, IBM, GPD, Santa Teresa: Demonstrated a strong relationship between the 'Difficulty' metric and the number of defects in code.

2. Kafura, Iowa State (10): Developed the 'information flow' metric; found a high correlation with the number of defects.

3. Ottenstein, Michigan Technological Univ. (13): Estimated the number of defects in code as a function of the 'program volume' metric.

4. Bailey and Dingee, Bell Labs., Denver (14): Applied 'software science' metrics to some switching system software; found defects not to be a linear function of the 'program volume' metric.

5. Gaffney, IBM, FSD, Manassas (15): Applied 'software science' metrics to some signal processing and control software; found potential for 'goodness/badness' relations; found a relation between the 'volume' and 'potential volume' metrics and the number of conditional jumps and, hence, testability (per McCabe (6) and Paige (8)).

6. Belady, IBM, Research, Yorktown Hts. (11): Developing measures of software complexity and progressive deterioration based on the propagation of changes.

7. Basili, Univ. of Maryland (16): Determining relationships among various software metrics with software engineering laboratory data.

Outlook for Software Metrics In Quality Assurance

Metrics are promising and are potentially very valuable for providing a basis for objective comparison of software products and possibly for providing a basis for establishing standards of 'goodness/badness'. They should prove useful in supplementing the 'software defect count' models, such as that developed by Musa (17).

Much work is yet to be done for metrics to be extensively applied on a practical basis in software development and maintenance environments. Areas in which work needs to be done include:

1. Refining the metrics and selecting the most valuable ones; standardizing a set to be used, if possible.

2. Establishing the validity of the metrics used in the particular environment in which they are employed.

3. Establishing 'goodness/badness' thresholds for the metrics used.

4. Applying metrics at the software design stage if possible (a greater payoff is potentially possible due to the lower expense of earlier modifiability).

5. Building up a base of metrics application experience on a set of programs of application size (not just 'toy programs').

6. Conducting cost-benefit analyses of metrics applications.

7. Obtaining acceptance of the utility of metrics by the software development community.

Some 'social' issues in the practical application of software quality metrics are:

1. There is no accepted measurement practice.

2. Analysts and programmers are often relatively autonomous.

3. Management has a need for control.

4. There is resistance by most software professionals to quantitative measurement of one's work product.

5. Often, there is organizational inertia and resistance to change in the way in which business is done.

6. Who should do the measuring?

7. Who should use the measures?

8. How should the measures be used?

Summary

Software "quality" is an aspect of "product integrity". A definition of a "software metric" was given and several "metrics" were defined. The utility of "metrics", including the quantification of certain attributes of software "quality factors", such as "maintainability", was outlined. Mathematical definitions of four of the metrics developed by Halstead were provided. Some recent work in the software metrics field related to software quality was summarized. An outlook for the application of software metrics in quality assurance was provided; their use is promising, but much work needs to be done if their full potential is to be realized.


Bibliography

1. Bersoff, E., Henderson, V., and Siegel, S., "Software Configuration Management, An Investment in Product Integrity," Prentice-Hall, 1980.

2. McCall, J. A., "An Introduction to Software Quality Metrics," in J. D. Cooper and M. J. Fisher (Eds.), "Software Quality Management," Petrocelli, 1979.

3. Myers, G. J., "Reliable Software Through Composite Design," Petrocelli/Charter, 1975.

4. Cruickshank, R., and Gaffney, J., "Measuring the Development Process: Software Design Coupling and Strength Metrics," The Fifth Annual Software Engineering Workshop, November, 1980, NASA Goddard Space Flight Center.

5. Halstead, M., "Elements of Software Science," Elsevier, 1977.

6. McCabe, T., "A Complexity Measure," "IEEE Transactions on Software Engineering," December, 1976, pg. 308.

7. Chen, E., "Program Complexity and Programmer Productivity," "IEEE Transactions on Software Engineering," May, 1978, pg. 187.

8. Paige, M., "An Analytical Approach to Software Testing," Proceedings of the "IEEE Computer Software and Applications Conference," October, 1978, pg. 527.

9. Gaffney, J., "Program Control Complexity and Productivity," Proceedings of the "IEEE Workshop on Quantitative Software Models," October, 1979, pg. 140.

10. Kafura, D., Harris, K., and Henry, S., "On the Relationship Among Three Software Metrics," Proceedings of the "1981 ACM Workshop/Symposium on Measurement and Evaluation of Software Quality," March, 1981 (ACM SIGMETRICS, Volume 10, Number 1; Spring, 1981), pg. 81.

11. Belady, L., "An Anti-Complexity Experiment," IEEE Workshop on Quantitative Software Models, October, 1979, pg. 128.

12. Gordon, R., "A Measure of Mental Effort Related to Program Clarity," Ph.D. Thesis, Purdue University, 1977; University Microfilms International.

13. Ottenstein, L., "Predicting Software Development Errors Using Software Science Parameters," 1981 ACM Workshop, op. cit., pg. 157.

14. Bailey, C., and Dingee, W., "A Software Study Using Halstead Metrics," 1981 ACM Workshop, op. cit., pg. 189.

15. Gaffney, J., "Software Metrics: A Key to Improved Software Development Management," "Computer Science and Statistics: 13th Symposium on the Interface" (at Carnegie-Mellon University), March, 1981, to be in proceedings published by Springer-Verlag.

16. Basili, V., and Phillips, T-Y., "Evaluating and Comparing Software Metrics in the Software Engineering Lab," 1981 ACM Workshop, op. cit., pg. 95.

17. Musa, J., "Software Reliability Measurement," "The Journal of Systems and Software," 1, 1980, pg. 223.
