
A Conceptual Model of Software Testing

A. C. Marshall
Centre for Mathematical Software Research, University of Liverpool,
PO Box 147, Liverpool L69 3BX, U.K.

Abstract
As code is executed correctly under test, confidence in the correctness of
the code increases. In this context, an intuitive conceptual model of the
process of software testing, which draws upon experience gained with
mutation analysis, is presented. The model is used to explain how the testing
of one path can influence confidence in other (possibly unexecuted) paths. It
is also discussed in relation to software reliability and systematic structural
testing, and is shown to be consistent with observations made during these
forms of testing.
Keywords/Phrases: Mutation Testing, Software Reliability, Structural
Testing.

1.0 Introduction
This paper addresses the question: "what knowledge is gained, in terms of confidence of correctness,
when a program is successfully executed?" An attempt is made to answer, or at least provoke
discussion regarding, this question by presenting a conceptual model which uses the notions of basic
blocks and mutation analysis [1,2] in order to explain the process of software testing and the subsequent
reliability growth. This model is described and then shown to be applicable in a number of situations.
It is assumed that a program contains a number of paths each consisting of a connected set of basic
blocks, and that each basic block resides on a set of distinct program paths. A basic block contains one
or more statements which, if untested, could be totally incorrect. The process of testing is therefore, in
effect, a way of demonstrating that the possible errors are not actually present, and that the code which
is present, is correct. The idea that untested statements contain potential errors was the inspiration
behind mutation testing.
Journal of Software Testing, Verification and Reliability, Vol. 1, Issue No. 3

Mutation analysis can generally be performed in two distinct ways; the first, strong mutation [1],
involves inserting one specific error into a program, compiling, linking and then executing the code
with test data designed to expose the recently inserted error. If test data can be designed which exposes
the error and makes the program behave incorrectly, then it can be stated that this particular error
cannot possibly be present in a correct program and therefore may be ruled out from any further
consideration. In theory, an exhaustive set of potential errors could be defined, individually introduced
into the code and then exposed, thus showing that the program does not contain any errors (except
errors of omission which, this author believes, can only be discovered by analysis under actual, or
simulated, operating conditions, for example, by random testing from an operational profile [3,4]).
The above form of mutation analysis is obviously impractical for any life-sized program since the
amount of CPU time required would be prohibitively large. However, Howden [2], attempted to avoid
this overwhelming deficiency by the introduction of weak mutation which is best described as a cross
between strong mutation and dynamic structural testing.
Weak mutation testing relies on defining a series of test data criteria which must be satisfied in order to
discount the existence of possible errors. As an example, consider the statement if (i.GT.3) then . . .; if i
is integral then the test data set which removes the possibility of an error in the relational operator
would be i E (2, 3, 4} corresponding to just before-, on- and just after- the border between true and
false. This test data is such that if a different relational operator were used in place of .GT. then the
logical result of the predicate would differ on at least one of the test cases. Other test data requirements
can be defined for other classes of errors [2,5].
The test data set given above is said to be reliable for exposing a relational operator error in the above
statement. Use of the word 'reliable' to describe 'probing' test data is not ideal because confusion easily
arises with areas of software reliability. Indeed, in this paper, the author does not refer to individual
basic blocks being 'reliable' but instead writes of degrees of 'confidence' of particular blocks depending
upon the amount of testing which the block has received.
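The relational-operator example above can be sketched in executable form. The helper `kills` and the mutant list below are illustrative constructions for this sketch, not anything defined in the paper:

```python
# Weak-mutation sketch: show that the test set {2, 3, 4} distinguishes the
# predicate (i > 3) from every relational-operator mutant of it.
import operator

original = operator.gt                        # the predicate under test: i > 3
mutants = [operator.lt, operator.le, operator.ge,
           operator.eq, operator.ne]          # relational-operator mutants

def kills(test_set, mutant):
    """A mutant is killed if some test case makes its truth value
    differ from that of the original predicate."""
    return any(original(i, 3) != mutant(i, 3) for i in test_set)

test_set = [2, 3, 4]        # just before, on, and just after the border
assert all(kills(test_set, m) for m in mutants)

# A single test case, by contrast, leaves some mutants live:
assert not all(kills([5], m) for m in mutants)
```

In the paper's terms, {2, 3, 4} is 'reliable' for relational-operator errors in this statement, while the single case leaves live mutants and hence residual uncertainty.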

2.0 The Consequences Of A Successful Execution

2.1 The execution of a basic block on one path many times


One successful execution of a given program path does not establish that the path is 100% correct. By
considering a simple program which contains only one computationally complex basic block, it is easy
to see that, for anything other than a trivial computation, a single input case is not 'reliable' test data (in
a weak mutation sense). A number of possible errors (live mutants) will still remain, generating an
unspecified amount of uncertainty in the blocks, and consequently, the program's correctness [6]. In
order to reduce this uncertainty (and increase confidence) a number of additional executions are
necessary.
However, for a general program with many paths, the successful execution of one path will, to a certain
degree, induce an increase in confidence in all other paths which contain basic blocks in common with
the tested path. An attempt to illustrate this has been made in Figure 1.
Referring to the figure, the basic block, BBj, is represented as a rectangle which is divided into strips.
Each vertical strip corresponds to one of the paths that pass through the basic block. The shading of a
strip, which results from a successful execution, demonstrates the degree of confidence which has been

obtained by testing that block on that path; areas which remain unshaded still correspond to potential
errors which have not been shown to be definitely absent. It can be seen that most of the shading is on
path 1, the path which control passed down, but that there is also a degree of implied confidence in the
other paths. This is because a successful execution of the block on one path implies confidence that the
block will also perform correctly on other paths. The different types of shading correspond to the
individual executions of that block along path 1.

[Figure 1: Execution of a single basic block many times on one path (not to scale). The block is drawn as a rectangle; the horizontal axis gives the path number (1 to m) and the vertical axis the confidence (0 to 1). Legend: confidence achieved after the first execution of path 1; extra confidence gained following the second execution of path 1; extra confidence gained following the third execution of path 1; the unshaded area represents potential errors.]

On average, the first execution of a previously unexecuted basic block, BBj, gives the largest single
step increase in the confidence of correctness of that particular block, with this increase being, in some
way, proportional to the number of mutants killed or, more specifically, mutant complexity. Assuming
that all paths can be viewed in the same way, then the first successful execution on path i will cause the
confidence to increase from zero to cji (which will be fairly close to one). (Arguably, the initial
confidence would be greater than zero as, presumably, the code would have been written by a
competent programmer [1], and would have already been subjected to a number of static readings,
walkthroughs, analyses and, presumably, compilations; however, this will be ignored as the first
'proper' execution of the code will instantly absorb this small amount of confidence.) Subsequent
executions of this basic block on the same path will, on average, give smaller and smaller increases in
the confidence of correctness as more mutants die, but proportionally fewer with each execution - an
example of the law of diminishing returns.
An intuitive and convenient model to apply here is one in which, on average, the increase in confidence
in the correctness of a basic block on a given path between successive executions of the path is
proportional to the remaining amount of uncoverage of the block on that particular path. (This should be
compared to a similar assumption made, and justified, in [7].) In other words, at each successful
execution of a block BBj along path i, a constant proportion, 0 < λji < 1, of the residual uncertainty,
(1 - κji), is removed. To formalise this, consider BBj executed once on path i, then:

κji = cji

A second execution on path i yields:

κji = cji + (1 - cji)λji

and a third execution gives:

κji = cji + (1 - cji)λji + (1 - (cji + (1 - cji)λji))λji

    = 1 + (cji - 1)(1 - λji)^2

By induction, after n executions of BBj on path i, κji achieves the value:

κji = 1 + (cji - 1)(1 - λji)^(n-1)     (1)

The growth curve of κji during the testing of one block on one path is depicted in Figure 2.
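The recurrence and the closed form of equation (1) can be checked against each other numerically; the values c = 0.8 and λ = 0.4 below are arbitrary illustrations, not figures from the paper:

```python
# Sketch of the confidence-growth model of Section 2.1: each successful
# execution removes a constant fraction lam of the residual uncertainty.

def kappa_iterative(c, lam, n):
    """Confidence after n executions, applying the recurrence
    kappa <- kappa + (1 - kappa) * lam at each further execution."""
    kappa = c                      # first execution lifts confidence to c
    for _ in range(n - 1):
        kappa += (1 - kappa) * lam
    return kappa

def kappa_closed(c, lam, n):
    """Closed form of equation (1): kappa = 1 + (c - 1)(1 - lam)^(n-1)."""
    return 1 + (c - 1) * (1 - lam) ** (n - 1)

c, lam = 0.8, 0.4                  # illustrative assumptions
for n in range(1, 8):
    assert abs(kappa_iterative(c, lam, n) - kappa_closed(c, lam, n)) < 1e-12
print(round(kappa_closed(c, lam, 5), 4))   # 0.9741: diminishing returns toward 1
```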

[Figure 2: Growth of κji for the testing of one block on one path (not to scale). The horizontal axis gives the number of executions (1 to 5); the curve rises from cji towards 1 with diminishing increments.]

If, as is usually the case, basic block BBj lies not just on path i but on some other path, k say, then, as
has been argued above, the relationship κji > 0 implies that κji > κjk > 0.
This implies that it will not be necessary to test every path in a program, because when enough paths
through the block have been executed, the remaining unexecuted ones will already have gained a
sufficiently large amount of implied confidence to make any further testing superfluous.
Therefore, in theory, only a carefully selected critical subset of paths needs to be tested; however,
explicit determination of this subset is almost certainly not possible! The number of executions needed
to achieve a confidence of approximately unity in block BBj will be related to the number of theoretical
mutants of BBj. Given a comprehensively defined set of mutant productions (for example, see [5]), the
theoretical number of possible mutants could be estimated statically, and specific factors which govern
the mutant death rate could also be assessed, such as the degrees of poly-/multinomials [1,6]; code
complexity; the number of variables active in the block; the number and values of boolean variables
needed to kill all mutant boolean expressions [5]; the maximum dimension of arrays, etc.
As an illustrative example of how the confidence can vary between two statements of different
computational complexity, assume there are two single basic block programs Pr and Ps. Pr is a one line
computation:

y = a0 + a1x

and Ps is another one line computation:

y = a0 + a1x + a2x^2 + ... + a9x^9

It can be shown, see [6], that a polynomial of degree d requires the successful execution of (d+1)
unique test data instances in order to verify the correctness of its operators, coefficients and exponents.
Clearly, Pr requires two distinct test data cases, whereas Ps needs ten.
From [6], assuming that the roots lie between 1 and X, one successful execution of a polynomial of
degree 1 gives a (1 - 1/X) probability that the computation is correct, and one successful execution of a
polynomial of degree 9 gives a (1 - 9/X) chance of correctness. This illustrates how the values of λji
and cji can vary from block to block.
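The (d+1)-point argument can be illustrated with exact rational arithmetic: a degree-9 polynomial that agrees with the program on ten distinct inputs is forced to agree everywhere. The coefficients and inputs below are invented for illustration:

```python
# Sketch of the (d+1)-point argument: a polynomial of degree d is uniquely
# determined by d+1 points, so d+1 agreeing test cases pin the computation
# down exactly. Uses exact Lagrange interpolation over the rationals.
from fractions import Fraction

def lagrange(points):
    """Return a callable for the unique polynomial of degree <= len(points)-1
    passing through the given (x, y) points."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

def poly(coeffs):
    """Evaluate a polynomial given low-to-high coefficients."""
    return lambda x: sum(c * x**k for k, c in enumerate(coeffs))

p_s = poly([3, -1, 0, 2, 0, 0, 0, 0, 0, 5])     # a degree-9 "program" Ps
xs = list(range(10))                             # ten distinct test inputs
reconstructed = lagrange([(x, p_s(x)) for x in xs])

# Agreement on the ten points forces agreement everywhere:
assert all(reconstructed(x) == p_s(x) for x in range(-5, 20))
```

Nine or fewer agreeing points would still admit a distinct degree-9 mutant, which is why Ps needs ten cases where Pr needs only two.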

2.2 Execution of a basic block on more than one path


It may be argued, in much the same way as above, that, in general, every time a basic block, BBj, is
executed on a new path there is an increase in confidence in that basic block. As before, on average, the
greatest increase in confidence will come with the very first execution of the block on any path; again
this will be cji. Subsequent executions of BBj on other paths will provide smaller and smaller increases
in a similar fashion to that described in section 2.1. The cumulative confidence in BBj that is achieved
by executing different paths through the basic block will be denoted ρj, and an attempt to depict its
variation with the execution of BBj along such paths is given in Figure 3.
Figure 3 should be interpreted in much the same way as Figure 1: the block is again represented as a
rectangle that is divided into strips corresponding to the different program paths through the block.
Unlike the previous case, a different path is executed by each test case so that the shading is much more
evenly distributed. Note how the execution of one path still implies confidence in others.

[Figure 3: Execution of a basic block once on each path passing through it (not to scale). As in Figure 1, the horizontal axis gives the path number (1 to m) and the vertical axis the confidence in basic block j. Legend: confidence achieved after the first execution of the basic block on path 1; confidence gained following the second execution of the basic block on path 2; confidence gained following the third execution of the basic block on path 3; the unshaded area represents potential errors.]

An expression for ρj can be formulated in much the same way as before: when BBj is executed once on
path i, then

ρj = cji

If, subsequently, BBj is executed on path i+1,

ρj = 1 + (cji - 1)(1 - μj(i+1))



Here, μj(i+1) is used to denote the proportional increase in confidence in BBj as a result of executing it
along a previously untested path. It should be noted that μj(i+1) ≥ λj(i+1), since the proportional increase
achieved by executing a different path is greater than that achieved by executing the same path again:
executing each of n > 1 different paths in a program once only results in a greater confidence than
executing one path n times.
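This claim can be checked numerically under the model's geometric form by taking some μ > λ; the parameter values below are illustrative only:

```python
# Sketch comparing the two growth regimes: with a larger per-execution
# proportion mu (new path each time) versus lam (same path repeated),
# the same 1 + (c - 1)(1 - rate)^(n-1) form gives strictly more confidence.

def confidence(c, rate, n):
    """Confidence after n executions when each execution removes the
    fraction `rate` of the residual uncertainty."""
    return 1 + (c - 1) * (1 - rate) ** (n - 1)

c, lam, mu = 0.7, 0.3, 0.5      # mu > lam: a new path teaches more
for n in range(2, 10):
    assert confidence(c, mu, n) > confidence(c, lam, n)
```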
If BBj is next executed on path i+2,

ρj = cji + (1 - cji)μj(i+1) + (1 - (cji + (1 - cji)μj(i+1)))μj(i+2)

   = 1 + (cji - 1)(1 - μj(i+1))(1 - μj(i+2))

Continuing then to execute BBj on paths (i+3), (i+4), ..., (i+n-1), it can be shown by induction that:

ρj = 1 + (cji - 1) Π (m = 2 to n) (1 - μj(i+m-1))     (2)

Equation (2) involves several undetermined constants: cji and the μji. Clearly, the more of these that
need to be estimated, the less practicable the formula becomes. However, equation (2) can be
simplified. Consider executing BBj on path i and then subsequently on path k ≠ i. This yields ρj = ρ*,
where:

ρ* = cji + (1 - cji)μjk

If, on the other hand, BBj is executed on the same two paths but in reverse order, this gives ρj = ρ+,
where:

ρ+ = cjk + (1 - cjk)μji

Clearly, the order of execution is unimportant to the resulting confidence in BBj; that is,
ρ* = ρ+, whence:

cji + (1 - cji)μjk = cjk + (1 - cjk)μji

implying:

(cji - 1)/(cjk - 1) = (1 - μji)/(1 - μjk)

By assuming, not unreasonably, that the rise in confidence due to the first execution is path
independent, then cji = cjk = cj (say), which implies μji = μjk = μj (say).
Using the above simplification, equation (2) now becomes:

ρj = 1 + (cj - 1)(1 - μj)^(n-1)     (3)

As a result, the confidence growth curve for basic block BBj will look like Figure 2, except that the
vertical axis will denote ρj.
A similar simplification can also be made to equation (1). For, consider BBj executed n times on path i,
whence:

κji = 1 + (cji - 1)(1 - λji)^(n-1)

If, instead, BBj is executed n times on path k, then:

κjk = 1 + (cjk - 1)(1 - λjk)^(n-1)

If it is now assumed that κji = κjk, which is to say that equal confidence is achieved for n executions
along the same path, regardless of which path is executed, it is found that:

(cji - 1)/(cjk - 1) = (1 - λjk)^(n-1)/(1 - λji)^(n-1)

and by assuming, as above, that cji = cjk = cj, then λji = λjk = λj (say), and κji = κjk = κj (say).
Whence, equation (1) becomes:

κj = 1 + (cj - 1)(1 - λj)^(n-1)     (4)

2.3 Execution of a basic block


The two different perspectives of basic block execution, as represented by equations (3) and (4) above,
can be combined to form just one relationship between basic block confidence and number of
executions; see [7] and [8] for corroboration.
In much the same way as above, an equation for this 'Achieved Block Confidence', Pj, in basic block
BBj can be derived as:

Pj = 1 + (cj - 1)(1 - μj)^(p-1)(1 - λj)^q

Here, p is the number of different paths through BBj which have been executed at least once, and q is
the total number of executions of paths through BBj minus p, that is, the total number of executions
which are not the first executions of a path.
Pj reflects both the number of times the testing of any path is duplicated as well as the number of
different paths that have been tested. (It should be noted that a graph similar to that in Figure 2 can also
be drawn for Pj.) The author anticipates that after a small number of executions, the value of Pj would
be in excess of 0.99. If all the blocks in a program were to have Pj = 0.99, then paths involving, for
example, 70 basic blocks, would be predicted to fail at least 50% of the time.
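A small numerical sketch of the combined formula, with invented parameter values, also reproduces the closing observation about 70-block paths:

```python
# Sketch of the 'Achieved Block Confidence' of Section 2.3:
# P_j = 1 + (c_j - 1)(1 - mu_j)^(p-1) (1 - lam_j)^q, where p counts
# first-time path executions and q counts repeat executions.
# The parameter values are illustrative assumptions, not from the paper.

def achieved_block_confidence(c, mu, lam, p, q):
    return 1 + (c - 1) * (1 - mu) ** (p - 1) * (1 - lam) ** q

P = achieved_block_confidence(c=0.8, mu=0.5, lam=0.3, p=4, q=6)
assert 0.99 < P < 1.0      # high block confidence after only ten runs

# Even P_j = 0.99 per block leaves a 70-block path succeeding only
# about half the time:
path_success = 0.99 ** 70
assert path_success < 0.5
print(round(path_success, 3))   # 0.495
```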

3.0 Implications
How does the above model of software testing relate to current testing / reliability estimation techniques
such as software reliability modelling and structural testing?

3.1 Structural testing


Structural testing is a highly regarded and well-used testing strategy consisting of a number of
hierarchical testing goals. One of the most common testing strategies is to attempt to drive the first few
'test effectiveness ratios', or 'Ter metrics', to unity.
It is often suggested that the first goal after a clean compilation of a program should be to design test
data such that Ter1 achieves unity, where:

Ter1 = (number of statements executed at least once) / (total number of executable statements in code)

Traditionally, after having executed all statements, the tester will attempt to perform exhaustive branch
testing, which means that:

Ter2 = (number of branches executed at least once) / (total number of branches present in program)

should be forced to unity. Note that Ter2 = 1 implies Ter1 = 1, but Ter1 = 1 does not imply Ter2 = 1.
All feasible LCSAJs (Linear Code Sequence And Jumps, or 'jump to jump' paths [9]) may be the next
testing goal, where:

Ter3 = (number of LCSAJs executed at least once) / (total number of LCSAJs in the program)

By definition, Ter3 = 1 implies Ter2 = 1, but the converse is not true.


A whole hierarchical family of test metrics can be defined; the first three members are as above, and an
empirical definition for Tern is:

Tern = (number of distinct subpaths of length ≤ (n-2) LCSAJs executed) / (total number of distinct subpaths of length ≤ (n-2) LCSAJs)

The hierarchical structure implies that:

Tern = 1 ⇒ Ter(n-1) = 1 ⇒ ... ⇒ Ter2 = 1 ⇒ Ter1 = 1

The satisfaction of Ter1 = 1 ensures that every basic block has been executed at least once. Thus, for
each block, BBj, a degree of confidence of at least Pj = cj will have been achieved, and as some blocks
will have been executed more than once, confidence in such blocks will exceed cj - the execution of a
block more than once is not really reflected by the Ter1 metric.
The achievement of Ter2 = 1 corresponds to a further increase in confidence since different paths
through the program have been executed. According to Hennell et al. [9], this means that most blocks
will have been executed, on average, at least twice. As each statement is a member of, on average, a
minimum of eight LCSAJs [9], obtaining Ter3 = 1 means each block will have been executed on about
eight different paths, giving a fairly high confidence in most blocks. (Note: [9] outlines the classes of
errors which, it is thought, the satisfaction of the first three testing goals will remove.)
Using the conceptual model outlined in this paper, it can be seen that each level of the coverage
hierarchy represents a general level of achieved confidence and, even though it is not yet possible to
quantify this confidence, maximising the highest attainable Ter metric is a sound policy which is likely
to give an increased level of confidence in the program.
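As a sketch of how the Ter family might be computed from coverage counts, consider the fragment below; the unit counts and helper names are invented for illustration, and a real tool would obtain the counts by instrumenting the program:

```python
# Sketch of the hierarchical Ter (test effectiveness ratio) metrics.

def ter(executed, total):
    """A Ter metric is the fraction of structural units covered."""
    return executed / total

# Hypothetical program: 120 statements, 34 branches, 80 LCSAJs.
ter1 = ter(executed=120, total=120)   # statements: all executed
ter2 = ter(executed=30,  total=34)    # branches: four still untested
ter3 = ter(executed=41,  total=80)    # LCSAJs: roughly half covered

def hierarchy_consistent(ters):
    """Ter_n = 1 is only possible when Ter_(n-1) = 1 as well."""
    return all(ters[i - 1] == 1.0 for i in range(1, len(ters))
               if ters[i] == 1.0)

assert hierarchy_consistent([ter1, ter2, ter3])
assert not hierarchy_consistent([0.9, 1.0, 0.5])   # impossible combination
```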

3.2 Software reliability models


Software reliability models, such as that of Littlewood [10], rely on failure data gathered under normal
or simulated operating conditions. Such failure data was originally quantified in terms of calendar time
but was later expressed in terms of the much more meaningful CPU time. Furthermore, in [11], it is
shown that 'cumulative counts' of structural units (such as statements, basic blocks, branches or, better
still, LCSAJs [8]) that are executed can be used instead of CPU time.
If a program is artificially harnessed so that it can be subjected to repeated random testing from an
operational profile [4] then, in general - and it must be emphasised that, due to the random nature of
the operational profile, this is a general trend and not a strict relationship - the following will happen:
the very first successful execution will induce a great leap in confidence in all the basic blocks which
are executed and therefore, implicitly, in the program as a whole. The second execution, assuming that
no errors are found, will also induce a similarly substantial increase in confidence, due to the very large
probability that a different path (and, therefore, different basic blocks) will have been exercised. However, it is
also likely that there will be a certain amount of commonality between the two paths which means that,
in general, this second increase in confidence will be slightly less than the first. After a third successful
execution, a similar situation will obtain, with some blocks having undergone their third execution,
some their second and the majority their first, and again, in general, the increase in confidence on this
execution will be slightly less than on the previous one.
Following the above logic, the number of new potential errors shown not to be present per execution
decreases each time that the code is run. Clearly, if this is the case, the failure rate, as perceived by the
tester, is reduced even when no changes are made to the software: the fact that there have been no
failures means that confidence has been gained.
This inferred increase in confidence which results from a period of fault-free execution can be seen to
be Bayesian in nature. It is exactly this behaviour that Littlewood captured in his highly regarded
model, [10].

3.3 Error discovery


Thus far, no mention has been made of actually discovering an error. According to the above model,
how might the probability of finding an error be expected to change with respect to the number of
executions, or to the CPU time?
Each basic block in a program is assumed to contain a large number of possible errors which, in the
main, are shown to be absent by the process of software testing. Occasionally, one such potential error
will manifest itself and cause the program to deviate from its expected course, thereby causing a failure.
If it is assumed that the actual errors are of a similar size, are evenly distributed amongst the potential
errors, and occur in each basic block with a probability, 0 ≤ t ≤ 1, then, as testing proceeds (as coverage
of the block increases), using Bayesian logic, the probability of finding an error in a particular block
decreases. In other words, the probability of finding an error in a given block decreases as the number
of executions of that block increases. If n is the number of executions and Qj denotes the probability of
an error in block BBj, then

Qj = t(1 - Pj)
   = t(1 - cj)(1 - μj)^(p-1)(1 - λj)^q

If the behaviour detailed by the above relationship is applied to all the blocks on a particular path then,
by following the arguments given in the previous subsection, the probability of locating a bug (or the
size of the 'area' of code which could possibly contain a bug) decreases with each execution, by
successively smaller amounts; see Figure 4. Therefore, according to the model, the probability of an error
occurring traces an exponential decay with respect to the number of executions. Again, this can be seen
to fit in with expected behaviour [10].
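A short sketch, using the invented parameter values below, confirms the geometric decay of Qj under the model:

```python
# Sketch of the error-probability decay of Section 3.3:
# Q_j = t (1 - P_j) falls geometrically as executions accumulate.
# The values of t, c, mu and lam are illustrative assumptions.

def q_error(t, c, mu, lam, p, q):
    """Probability of a live error in the block after p first-time path
    executions and q repeat executions."""
    return t * (1 - c) * (1 - mu) ** (p - 1) * (1 - lam) ** q

t, c, mu, lam = 0.1, 0.7, 0.5, 0.3
probs = [q_error(t, c, mu, lam, p=n, q=0) for n in range(1, 8)]

# Strictly decreasing, and each new path removes a constant fraction:
assert all(a > b for a, b in zip(probs, probs[1:]))
ratios = [b / a for a, b in zip(probs, probs[1:])]
assert all(abs(r - 0.5) < 1e-12 for r in ratios)  # geometric, ratio 1 - mu
```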

[Figure 4: The decrease in probability of finding errors in a program as the number of executions increases. The area containing possible bugs falls from 100% towards 0% over successive executions.]

4.0 Summary
An attempt has been made to introduce a conceptual model of the process of software testing which
explains phenomena which are experienced in practice. It is intended that this model should help the
reader to visualise what happens during testing by presenting the process in a much less abstract way
than usual. The two popular, yet separate, disciplines of structural and random testing are related by this
model and the proposed relationship between reliability, or confidence in the context of this paper, and
coverage, given in [7] and [8], can also be explained in terms of potential-error removal by testing.

5.0 Acknowledgements
The author would like to thank all the partners of the ESPRIT TRUST project (Software Engineering
Services GmbH, City University, Liverpool Data Research Associates, John Bell Technical Systems
and Liverpool University) and, in particular, Dr. Alan Veevers and Dr. Derek Yates for their useful and
constructive comments.

6.0 References
[1] DeMillo R.A., Lipton R.J. and Sayward F.G. Hints on Test Data Selection: Help for the Practicing
Programmer. Computer, vol. 11, 1978.
[2] Howden W. Weak Mutation and the Completeness of Test Data Sets. IEEE Trans. on Soft. Eng., vol.
SE-8, 2, 1982.
[3] Fergus E., Marshall A.C., Veevers A., Hennell M.A. and Hedley D. The Quantification of Software
Reliability. Proc. Second IEE/BCS Conference on Software Engineering, Liverpool, 1988.
[4] Veevers A., Petrova E. and Marshall A.C. Statistical Methods for Software Reliability Assessment,
Past, Present and Future. In Achieving Safety and Reliability with Computer Systems, Daniels B.K.,
ed., Elsevier Applied Science, London, 1987.
[5] Wu D., Hennell M.A., Hedley D. and Riddell I.J. A Practical Method for Software Quality Control
via Program Mutation. IEEE Proc. Second Workshop on Software Testing, Verification and Analysis,
Banff, Canada, 1988.
[6] DeMillo R.A. and Lipton R.J. A Probabilistic Remark on Algebraic Program Testing. Inf. Proc.
Lett., vol. 7, 4, 1978.
[7] Veevers A. and Marshall A.C. A Relationship Between Software Coverage Metrics and Reliability.
Submitted to IEEE Trans. on Reliability.
[8] Veevers A. Software Coverage Metrics and Operational Reliability. In Safety of Computer Control
Systems, Daniels B.K., ed., Pergamon Press, London, 1990.
[9] Hennell M.A., Hedley D. and Riddell I.J. Assessing a Class of Software Tools. Proc. Seventh ICSE,
Orlando, Florida, 1984.
[10] Littlewood B. A Bayesian Differential Debugging Model for Software Reliability. Proc. COMPSAC
1980, Chicago, 1980.
[11] Marshall A.C., Beattie B. and Veevers A. An Investigation of Alternatives to Time Based Software
Reliability Measures. Technical Report, University of Liverpool, 1991.
