ACM '81, November 9-11, 1981 Tutorial Abstract
J. E. Gaffney, Jr.
IBM Corporation
Federal Systems Division
Manassas, Va.
The "user needs" that the software product is to satisfy should be provided in written form to the developer before he begins his design. They should be expressed in terms of the functions that the software product is to provide. Such a written expression of "user needs" can be used as the

The third major function of the software quality assurance organization is to apply the tools to assess the degree to which the software products developed by its organizational unit adhere to the standards it has established as appropriate to each product. The assessment may be qualitative, such as certifying the adherence of the software development group to certain development approaches, such as top-down programming and other modern programming practices. The assessment may be quantitative, such as recording the number of major defects (such as a non-terminating loop) found in inspections of the software design and/or the actual code.
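The quantitative form of assessment described above can be sketched briefly. This is a hypothetical illustration, not a procedure from the tutorial: the module names, defect counts, and threshold value are all invented for the example.

```python
# Hypothetical sketch of quantitative quality assessment: record major
# defects found in design/code inspections per module and compare each
# count against a pre-established 'goodness/badness' threshold.

MAJOR_DEFECT_THRESHOLD = 3  # assumed threshold, per module (illustrative)

# (module, major defects found in inspection) -- invented data
inspection_results = {
    "input_handler": 1,
    "control_loop": 5,   # e.g., includes a non-terminating loop
    "report_writer": 0,
}

def assess(results, threshold):
    """Return the modules whose major-defect count exceeds the threshold."""
    return [m for m, defects in results.items() if defects > threshold]

failing = assess(inspection_results, MAJOR_DEFECT_THRESHOLD)
print(failing)  # modules failing the objective criterion
```

The point of the sketch is the objectivity argument made later in the text: "5 defects against a threshold of 3" is a reproducible judgment, whereas "has a lot of defects" is not.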
Software Metrics and Quality Assurance

A software metric may be defined as "an objective, mathematical measure of software that is sensitive to differences in software characteristics. It provides a quantitative measure of an attribute which the body of software exhibits." It should be "objective" in that it is not a measure of my feelings or yours but is, insofar as possible, a reproducible measure of or about the software product of interest. The number of major defects found during a sell-off test, and a comparison of that figure with a pre-established threshold of "goodness" or "badness", is objective. Saying that the software "has a lot of defects" is not. The range of values of a software metric should reflect differences with respect to one or more dimensions of quality among the software products to which it is applied.

Software development is increasingly being accomplished more in line with established engineering and scientific principles and less as an art form. Quantification of the software development process and the resultant software product is mandatory if software engineering is truly to be a scientific discipline. The use of software metrics will contribute to this desired objective of an increased level of quantification. Without such quantification, the integrity of the software product, which was considered above, cannot be what it should or otherwise has the potential to be.

What Lord Kelvin said in 1891 applies here:

"When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science."

Or, the message to us in the software community is:

If you can't measure it, you can't manage it.

Software metrics are of interest for several reasons. Numerical measures of the software product can be transformed into indicators, such as "reliability" and "maintainability", of interest to both users and software development management. Some of these measures are defined in the section "Some Software Metrics". A number of indicators, as defined by McCall (2), are presented in the section "Quality Factors and Metrics". Also, software metrics are of interest because they might suggest modifications to the software development process. For example, the number of conditional jumps used should be minimized because the amount of development testing required is proportional to that figure.

The quantitative evaluation of software quality can address two principal problem types encountered in software products:

A. Those problems having to do with the static aspects of software, which are addressable (at least potentially) by software-based or "program linguistics" oriented metrics. Such metrics are the subject of the remainder of this presentation.

B. Those problems having to do with the dynamic aspects of software, such as that a program is difficult to operate and/or to integrate with other programs. Such problems are not considered further in this presentation.

Quality Factors and Metrics

Software quality focuses on the degree of correctness of an implementation of a function conceived to "meet the user's needs" (see above). It also is concerned with the "goodness" of such an implementation. Ideally, this measure of "goodness" should be quantifiable, indicating how well the software is designed and coded according to measurable, quantifiable criteria. This is where "metrics" fit into software quality assurance. They should relate to software quality "attributes" or "factors" of interest acknowledged by the community of software developers and users.

J. A. McCall (2) has listed some "software quality factors", some of which can be related to "software metrics", as is done for two of them, "maintainability" and "testability", in the section "Some Software Metrics". McCall's "software quality factors" (using his definitions) are:

1. Correctness - Extent to which a program satisfies its specifications and fulfills the user's mission objectives.

2. Reliability - Extent to which a program can be expected to perform its intended function with required precision.

3. Efficiency - The amount of computing resources and code required by a program to perform a function.

4. Integrity - Extent to which access to software or data by unauthorized persons can be controlled.

5. Usability - Effort required to learn, operate, prepare input for, and interpret output of a program.

6. Maintainability - Effort required to locate and fix an error in an operational program.

7. Testability - Effort required to test a program to ensure it performs its intended function.

8. Flexibility - Effort required to modify an operational program.

9. Portability - Effort required to transfer a program from one hardware configuration and/or software system environment to another.
10. Reusability - Extent to which a program can be used in other applications; related to the packaging and scope of the functions that programs perform.

11. Interoperability - Effort required to couple one system with another.

G. J. Myers (3) has defined some other items which certainly can be considered "software quality factors" in the same sense that the eleven cited above are. They are:

Coupling - The degree of interconnectedness of modules.

Strength - The degree of cohesiveness of a module.

These measures relate to the degree of propagation of changes and, hence, to "maintainability". Cruickshank and Gaffney (4) have developed quantitative measures of these items.

Among the metrics developed by others are these three:

5. Division - Proportion or number of conditional jumps; relates to testing effort; inversely proportional to overall productivity; a measure of control complexity. Various works, including those of McCabe, Paige, and Gaffney, have indicated its significance.

6. Information Flow Complexity - Length of procedure (number of instructions) times the square of the number of possible combinations of an input source to an output destination. Kafura et al. (10) have developed this metric.

A metric of particular significance to questions relating to software maintenance is:
1. Potential volume, V* = (2 + n2*) log2 (2 + n2*), the minimum program size; a measure of the intrinsic 'size' of the algorithm to be programmed.

2. Volume, V = N log2 n, a measure of program size.

3. Difficulty (D), or expansion ratio, = V/V*.

4. Effort, E = V x D, the number of decisions required to develop the program.

Some Recent Metrics Work

A great deal of work is being done with software metrics in various universities and industrial organizations. Some of this work, as applied to software quality assurance, is summarized below:

1. Fitsos and Smith, IBM, GPD, Santa Teresa - Demonstrated a strong relationship between the 'Difficulty' metric and the number of defects in code.

2. Kafura, Iowa State (10) - Developed the 'information flow' metric; found a high correlation with the number of defects.

3. Ottenstein, Michigan Technological Univ. (13) - Estimated the number of defects in code as a function of the 'program volume' metric.

4. Bailey and Dingee, Bell Labs., Denver (14) - Applied 'software science' metrics to some switching system software; found defects not to be a linear function of the 'program volume' metric.

5. Gaffney, IBM, FSD, Manassas (15) - Applied 'software science' metrics to some signal processing and control software; found potential for 'goodness/badness' relations; found a relation between the 'volume' and 'potential volume' metrics and the number of conditional jumps and, hence, testability (per McCabe and Paige).

6. Belady, IBM, Research, Yorktown Hts. (11) - Developing measures of software complexity and progressive deterioration based on propagation of changes.

7. Basili, Univ. of Maryland (16) - Determining relationships among various software metrics with software engineering laboratory data.

Outlook for Software Metrics in Quality Assurance

Metrics are promising and are potentially very valuable for providing a basis for objective comparison of software products, and possibly for providing a basis for establishing standards of 'goodness/badness'. They should prove useful in supplementing the 'software defect count' models, such as that developed by Musa (17).

Much work is yet to be done for metrics to be extensively applied on a practical basis in the software development and maintenance environments. Areas in which work needs to be done include:

1. Refining the metrics and selecting the most valuable ones; standardizing a set to be used, if possible.

2. Establishing the validity of the metrics used in the particular environment in which they are employed.

3. Establishing 'goodness/badness' thresholds for the metrics used.

4. Applying metrics at the software design stage, if possible (a greater payoff is potentially possible due to the lower expense of earlier modification).

5. Building up a base of metrics application experience on a set of programs of application size (not just 'toy programs').

6. Conducting cost-benefit analyses of metrics applications.

7. Obtaining acceptance of the utility of metrics by the software development community.

Some 'social' issues in the practical application of software quality metrics are:

1. There is no accepted measurement practice.

2. Analysts and programmers are often relatively autonomous.

3. Management has a need for control.

4. There is resistance to quantitative measurement of one's work product by most software professionals.

5. Often, there is organizational inertia and resistance to change in the way in which business is done.

6. Who should do the measuring?

7. Who should use the measures?

8. How should the measures be used?

Summary

Software "quality" is an aspect of "product integrity". A definition of a "software metric" was given and several "metrics" were defined. The utility of "metrics", including the quantification of certain attributes of software "quality factors", such as "maintainability", was outlined. Mathematical definitions of four of the metrics developed by Halstead were provided. Some recent work in the software metrics field related to software quality was summarized. An outlook for the application of software metrics in quality assurance was provided; their use is promising, but much work needs to be done if their full potential is to be realized.
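The four Halstead 'software science' measures defined in the text can be illustrated with a short calculation, following the definitions given there (difficulty taken as the expansion ratio V/V*). This is a sketch, not code from the tutorial; the operator/operand counts below are invented for the example.

```python
import math

def halstead(n1, n2, N1, N2, n2_star):
    """Halstead measures as defined in the text.
    n1, n2: counts of distinct operators and operands;
    N1, N2: total occurrences of operators and operands;
    n2_star: conceptually necessary input/output operands."""
    n = n1 + n2            # vocabulary
    N = N1 + N2            # program length
    V = N * math.log2(n)                             # volume
    V_star = (2 + n2_star) * math.log2(2 + n2_star)  # potential volume
    D = V / V_star         # difficulty (expansion ratio)
    E = V * D              # effort
    return V, V_star, D, E

# Invented counts for a small program, purely illustrative
V, V_star, D, E = halstead(n1=10, n2=7, N1=30, N2=25, n2_star=3)
print(round(V, 1), round(V_star, 1), round(D, 2), round(E, 1))
```

Note that a larger D means the implementation is much bigger than the minimal expression of its algorithm, which is the sense in which the recent-work items above relate 'Difficulty' and 'volume' to defect counts.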
2. McCall, J. A., "An Introduction to Software Quality Metrics," in J. D. Cooper and M. J. Fisher (Eds.), "Software Quality Management," Petrocelli, 1979.

3. Myers, G. J., "Reliable Software Through Composite Design," Petrocelli/Charter, 1975.

4. Cruickshank, R., and Gaffney, J., "Measuring the Development Process: Software Design Coupling and Strength Metrics," The Fifth Annual Software Engineering Workshop, November 1980, NASA Goddard Space Flight Center.

16. Basili, V., and Phillips, T-Y., "Evaluating and Comparing Software Metrics in the Software Engineering Lab," 1981 ACM Workshop, op. cit., pg. 95.

17. Musa, J., "Software Reliability Measurement," The Journal of Systems and Software 1, 1980, pg. 223.