An Empirical Comparison of Backtracking Algorithms

This document presents an empirical comparison of backtracking algorithms, specifically zero-level, one-level, and two-level search methods. The authors establish criteria for selecting the most appropriate backtracking algorithm based on problem size, showing that different methods are optimal for small, moderate, and large problems. The paper also discusses the theoretical underpinnings and performance measurements of these algorithms in solving NP complete problems.

IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-4, no. 3, May 1982

An Empirical Comparison of Backtracking Algorithms

CYNTHIA A. BROWN AND PAUL W. PURDOM, JR.

Abstract—In this paper we report the results of experimental studies of zero-level, one-level, and two-level search rearrangement backtracking. We establish upper and lower limits for the size of problem for which one-level backtracking is preferred over zero-level and two-level methods, thereby showing that the zero-level method is best for very small problems, the one-level method is best for moderate size problems, and the two-level method is best for extremely large problems. Together with our theoretical asymptotic formulas, these measurements provide a useful guide for selecting the best search rearrangement method for a particular problem.

Index Terms—Backtracking, constraint satisfaction, search algorithms, tree search.

Manuscript received March 25, 1981; revised November 3, 1981. This work was supported in part by the National Science Foundation under Grant MCS 7906110. The authors are with the Department of Computer Science, Indiana University, Bloomington, IN 47405.

I. INTRODUCTION

An important task of computer scientists is devising general algorithms that can be used to solve any problem from a large set of related problems. Such sets of problems can be divided into two classes, sometimes called "easy" and "hard." The easy sets are those for which each problem in the set can be solved within a time which is a polynomial function of the problem size. An example of such an easy set is the computation of the shortest path between two nodes in a graph. An individual problem in the set consists of a graph whose arcs have nonnegative labels and a distinguished pair of vertices. There are well-known general methods for solving any problem in this set in a time proportional to the square of the number of nodes in the graph [7]. For naturally occurring easy problem sets, the degree of the polynomial time bound is usually no greater than three, so rapid solution of large problems is possible.

A hard problem set is one for which the best known algorithm takes more than polynomial time for some sequence of problems in the set. (For some hard problem sets, there are solution methods with small average time.) Many important hard problem sets are NP complete. Garey and Johnson [8] give a thorough discussion of the NP complete class and list many NP complete problem sets.

An examination of the problems listed by Garey and Johnson shows that most of them have a natural representation as a predicate of the form

    ∧_{1 ≤ i ≤ m} R_i(w_1, ..., w_v)    (1)

where each R_i is a relation that is simple in the sense that it depends on only a few of the variables and each w has a finite number of possible values.
Moreover, the work of Cook [5] and Karp [15] shows that solving any NP problem is equivalent to determining the satisfiability of a conjunctive normal form (CNF) predicate with three literals per term, where the size of the CNF formula is polynomially related to the size of the original problem. This means that any hard problem in NP can be represented in the form of (1), using simple relations for the R's. Fast general methods for solving problems of form (1) would be useful for any NP complete set. (See also [11].)

Problems of form (1) where each R depends on a small number of variables are called constrained labeling problems by some authors. Each R is then called a constraint, each w a unit, and each possible value for a unit is called a label. The problem is NP complete even when each R depends on only two variables and each variable has only three possible values [18], or when each R depends on three variables and each variable has two possible values [5].
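As a concrete illustration (a sketch added here, not part of the original paper), a problem in form (1) can be held as a list of small relations over variables with finite value sets. The four relations below are the clauses of the example predicate used in Fig. 1 later in this section; exhaustive checking confirms that this instance has seven solutions, which is the number of solution nodes reported for it in Fig. 2.

    # A minimal sketch of a problem in form (1): a list of simple relations
    # R_i, each depending on only a few of the variables.  The relations
    # below are the clauses of the example predicate from Fig. 1.
    from itertools import product

    # finite set of possible values (labels) for each unit
    domains = {"a": (False, True), "b": (False, True),
               "c": (False, True), "d": (False, True)}

    relations = [
        (("a", "c"), lambda a, c: a or not c),                        # A = a v -c
        (("a", "c", "d"), lambda a, c, d: (not a) or c or d),         # B = -a v c v d
        (("a", "b", "c", "d"),
         lambda a, b, c, d: (not a) or b or c or (not d)),            # C = -a v b v c v -d
        (("b", "c", "d"), lambda b, c, d: (not b) or c or (not d)),   # D = -b v c v -d
    ]

    def P(assignment):
        """Evaluate predicate (1) on a complete assignment of values."""
        return all(rel(*(assignment[x] for x in xs)) for xs, rel in relations)

    # Exhaustive check over all 2^4 assignments: this instance has 7 solutions.
    print(sum(P(dict(zip("abcd", vals)))
              for vals in product((False, True), repeat=4)))   # 7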
Backtracking is a natural general method for solving problems of form (1) (as well as more difficult problems, such as alternating Turing machine computations [4], that have a similar structure). We have described our theoretical analyses of various backtracking algorithms elsewhere [2], [19]. Here we report on empirical studies of these algorithms. The algorithms measured are the most efficient known simple methods for solving many moderate size problems from NP complete sets. The results of these studies, in conjunction with our theoretical work, provide practical criteria for selecting the most appropriate backtracking algorithm for a particular problem. Tree size estimation [16], [17] can then be used to judge how long the chosen method will take.

The remainder of this paper is organized as follows. Section II indicates some of the types of backtracking that have been studied by us and other investigators and summarizes the current theoretical results on their average running time. Section III gives a more detailed description of the search rearrangement algorithms whose performance we measured. Section IV discusses the model problem set on which the average performance of the algorithms was measured. Section V is a discussion of our experimental techniques, and the results are given in Section VI. Section VII presents our conclusions.

II. BACKTRACKING

Consider a problem that is expressed as a predicate P in form (1). A solution of the problem is a set of values W_1, ..., W_v for w_1, ..., w_v that make P true. An intermediate predicate P_j(w_1, ..., w_j) for P is a predicate such that P_j(W_1, ..., W_j) is false only if P does not have a solution with w_1 = W_1, ..., w_j = W_j. (When P_j is true, P still may not have such a solution.) An obvious choice for P_j is the conjunction of all the relations R that depend only on w_1, ..., w_j. If the intermediate predicates are frequently false, they can be used to greatly increase the efficiency of a search for solutions to P, using simple backtracking. In simple backtracking, we begin with all the variables unset (they may be regarded as having a special value, "undefined," at this point). Each variable in order beginning with the first is set to its first value. As variable w_j is set, the intermediate predicate P_j is tested. If the value of the predicate is false, the variable is set to its next value, until a value for which P_j is true is found. If the variable has no such value, it is returned to the list of unset variables (set to "undefined") and the previous variable is set to its next value. This continues recursively until all the values of variable w_1 have been tried.

In a problem with v binary variables, the number of potential solutions is 2^v, so an exhaustive search for solutions would take time exponential in v. Backtracking examines partial potential solutions as well as complete ones, so in the worst case, it examines twice as many sets of values as the exhaustive search method. In practice, however, backtracking usually performs much better than exhaustive search.
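The following is a short sketch (not the authors' program) of the simple backtracking just described, for Boolean variables in the representation of the earlier sketch; P_j is taken, as suggested above, to be the conjunction of the relations whose variables are all currently set.

    # Simple (zero-level) backtracking: variables are set in a fixed order,
    # and the search retreats as soon as an intermediate predicate is false.
    def zero_level(variables, relations):
        order = list(variables)                 # fixed order w_1, ..., w_v
        solutions = []

        def intermediate(assignment):
            # P_j: conjunction of the relations all of whose variables are set.
            return all(rel(*(assignment[x] for x in xs))
                       for xs, rel in relations
                       if all(x in assignment for x in xs))

        def extend(j, assignment):
            if j == len(order):
                solutions.append(dict(assignment))
                return
            w = order[j]
            for value in (False, True):         # try each value of w_j in turn
                assignment[w] = value
                if intermediate(assignment):    # descend only while P_j is true
                    extend(j + 1, assignment)
            del assignment[w]                   # return w_j to the unset variables

        extend(0, {})
        return solutions

    # len(zero_level("abcd", relations)) == 7 for the example predicate above.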
In [2] we analyzed the behavior of simple backtracking over an NP complete set of problems consisting of conjunctive normal form formulas over v variables with s literals per term and v^α terms, 1 < α < s. (The parameters s and α are arbitrary numbers; varying them varies the characteristics of the problems being considered.) We considered the version of backtracking that looks for all solutions to a predicate. The average time for such problems is exp Θ(v^((s-α)/(s-1))) [2]. (To say that an item g(v) is Θ(f(v)) means that there exist positive constants C_1 and C_2 such that C_1 f(v) ≤ g(v) ≤ C_2 f(v) for all values of v greater than some v_0. The related notation O(f(v)) means that there exists a positive constant C such that g(v) ≤ C f(v) for all values of v greater than some v_0.) Since exhaustive search requires time exp Θ(v), and since (s − α)/(s − 1) < 1, simple backtracking saves an exponential amount of time. (See also Haralick and Elliot [14] for a nonasymptotic analysis on a slightly different problem set.) Fig. 1 shows an example of a backtrack tree generated by simple backtracking.

Fig. 1. Zero-level backtrack tree for the predicate A ∧ B ∧ C ∧ D, where A = a ∨ ¬c, B = ¬a ∨ c ∨ d, C = ¬a ∨ b ∨ c ∨ ¬d, D = ¬b ∨ c ∨ ¬d. Each node where the predicate is false is labeled with a term that is false. Under each node, the variable that is introduced into the search at that point is given. In each case, the false branch is to the left.

While simple backtracking represents a huge improvement over exhaustive search, it can still be extremely slow. This has led to the development of a number of methods for improving the efficiency of backtracking. These methods can be divided into two categories: basic backtracking and predicate analysis backtracking.

Basic backtracking methods derive all their information through the evaluation of intermediate predicates. One advantage of these methods is their generality. Once a program for a particular method is written, it can be adapted to any problem set by adding routines to evaluate the appropriate intermediate predicates.

A simple, effective, basic backtracking technique was studied by Bitner and Reingold [1]. It involves testing each unset variable to find one with the fewest remaining values (the fewest values for which the intermediate predicate is not false) and introducing that variable next. Thus, instead of following a fixed search order, the order of variables is determined dynamically, and may vary from branch to branch of the backtrack tree. We analyzed the performance of a simple search rearrangement algorithm over the same problem set that we used for the analysis of simple backtracking [18]. The average time is

    exp Θ(v^((s-α-1)/(s-2)))    for 1 < α ≤ s/2

and

    exp Θ((ln v)^((s-1)/(s-2)) · v^((s-α-1)/(s-2)))    for s/2 < α < s − 1.

Thus, search rearrangement represents an exponential improvement over simple backtracking.
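As a worked instance of these exponents (added here for orientation, using only the formulas above): the measurements in Sections IV-VI use s = 3 or s = 4 with v = n^2 and t = n^3, so t = v^(3/2) and α = 3/2. For s = 3 this gives

    % Exponents of the average-time formulas for s = 3, \alpha = 3/2
    \frac{s-\alpha}{s-1} = \frac{3 - \tfrac{3}{2}}{2} = \frac{3}{4}
      \quad\text{(zero level: } \exp\Theta(v^{3/4})\text{)},
    \qquad
    \frac{s-\alpha-1}{s-2} = \frac{3 - \tfrac{3}{2} - 1}{1} = \frac{1}{2}
      \quad\text{(one level: } \exp\Theta(v^{1/2})\text{)}.

These are the values a4 = 0.75 and a4 = 0.50 quoted for the zero-level and Analyzed algorithms in Section VI.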
To obtain more detailed information on the performance of search rearrangement, we carried out extensive measurements on the behavior of three search rearrangement algorithms, including the analyzed algorithm. This paper reports the results of these measurements.

More complex basic backtracking algorithms are described by Purdom, Brown, and Robertson [18]. In this paper we report preliminary measurements that indicate that the more complex algorithms are more efficient for solving large problems. The algorithms that Haralick et al. develop [11] can also be programmed as basic backtracking algorithms.

The second approach to improving backtracking involves a direct analysis of the structure of P. Such predicate analysis methods are probably significantly faster than the basic methods, although they can also be more difficult to use.

The most interesting of the predicate analysis methods is the Davis-Putnam procedure [6]. Goldberg [10] analyzed the average behavior of a simplified version of the Davis-Putnam procedure on an NP complete problem set. For predicates where there is a fixed probability that each literal appears in a given clause, it was shown that the Davis-Putnam procedure takes polynomial average time.

Some of the algorithms of Gaschnig [9] and of Haralick and co-workers [11]-[14] are also predicate analysis methods. We hope to study the performance of various predicate analysis methods over the problem set we have used for our other analyses in the near future.

III. ONE-LEVEL SEARCH REARRANGEMENT

The search rearrangement algorithm of Bitner and Reingold [1] examines each unset variable, and selects the one with the fewest remaining values for introduction. A more sophisticated search rearrangement method would consider sets of unset variables of some predetermined size (say k), and introduce the variable which is the root of the smallest k-level subtree [18]. We call an algorithm that considers k element sets a k-level algorithm. Bitner and Reingold's method is thus a one-level algorithm and simple backtracking is zero-level.

There are several distinct one-level algorithms, which vary in the details of how ties are broken and in the order in which variables are tested, but whose general form is the same. We present the general form first, and then discuss the individual differences.

To use search rearrangement, it is necessary to have, for each subset S* of the set S of predicate variables, an intermediate predicate P_S*. This predicate must be consistent with P in the sense that it is false only for sets of values that cannot be extended to a solution to P. If P is in the form of (1), the natural choice for P_S* is the conjunction of all the relations that depend only on the variables in S*. In the following specification, let S' be the set of variables whose value is "undefined" (the unset variables) and S'' the set of variables with defined values. Let w range over the predicate variables (units). We denote the value of predicate variable w by Value[w]. The set S'' is maintained as a stack. In this specification, it is assumed that the predicate variables are Boolean; trivial changes are needed to accommodate a larger set of values.

One-Level Search Rearrangement

Step 1 (Initialize): Set S'' to empty and S' to S.

Step 2 (Solution?): If S' is not empty, go to Step 3. Otherwise, the current values in Value constitute a solution. Go to Step 6.

Step 3 (Find Best Variable): For each variable w in S', do the rest of this step. (The order in which the variables are tested depends on the specific algorithm.) For both Value[w] ← false and Value[w] ← true, compute P_{S'' ∪ {w}}. If the result is false in both cases, exit the loop and go to Step 6. If the result is false in one and true in the other, remember the value that gives true, exit the loop, and go to Step 5. If the result is true in both cases, continue the loop with the next variable. If every variable in S' has been tested and each has two remaining values, go to Step 4.

Step 4 (Binary Node): Choose an element w of S'. (The method for selecting this element depends on the specific algorithm.) Set S'' ← S'' ∪ {w} and S' ← S' − {w}. Set Value[w] ← false, mark w as binary, and go to Step 2.

Step 5 (Unary Node): Set Value[w] to the unique value that makes P_{S'' ∪ {w}} true. (This value is remembered from Step 3.) Set S'' ← S'' ∪ {w} and S' ← S' − {w}. Mark w as unary, and go to Step 2.

Step 6 (Next Value): If S'' is empty, stop. Otherwise, set w ← top(S''). If w is marked as binary and Value[w] = false, set Value[w] ← true and go to Step 2.

Step 7 (Backtrack): Set S'' ← S'' − {w}, S' ← S' ∪ {w}, and go to Step 6.

In our implementations of one-level search rearrangement, we found it convenient to encode the following four states into Value[w]: no value (w is an element of S'), unary false, binary false, and true. (There is no need to know whether a true node is unary or binary, since in either case it has no next value.)
To use search rearrangement, it is necessary to have, for each false, and true. (There is no need to know whether a true node
subset S* of the set S of predicate variables, an intermediate is unary or binary, since in either case it has no next value.)
predicate Ps *. This predicate must be consistent with P in the We measured the performance of three one-level search rear-
sense that it is false only for sets of values that cannot be ex- rangement algorithms. The specification given above for one-
tended to a solution to P. If P is in the form of (1), the natural level search rearrangement left certain details involving the order
choice for Ps* is the conjunction of all the relations that de- of testing and selecting variables vague. Specifying how these
pend only on the variables in S*. In the following specifica- details are to be handled completes a precise statement of a
tions, let S' be the set of variables whose value is "undefined" particular one-level search rearrangement algorithm. In the
(the unset variables) and S" the set of variables with defined following paragraphs, we provide these specifications for the
values. Let w range over the predicate variables (units). We three one-level algorithms whose performance we measured.
denote the value of predicate variable w by Value [w] . The set We call the best of these algorithms the Fast algorithm. In
S " is maintained as a stack. In this specification, it is assumed this version, S' is treated as a stack with regard to insertions
that the predicate variables are Boolean; trivial changes are (deletions can be done anywhere). Thus, in Step 7, all inser-
needed to accommodate a larger set of values. tions are at the top, and in Step 3, the variables are tested
from top to bottom. Normally the testing in Step 3 starts at
One-Level Search Rearrangement the top of the stack, but when Step 3 is entered from Step 5
(via Step 2), the search starts just after the former position of
Step 1 (Initialize): Set S " to empty and S' to S. the newly discovered unary node and wraps around from bot-
Step 2 (Solution?): If S' is not empty, go to Step 3. Other- tom to top if necessary, continuing until all variables have been
wise, the current values in Value constitute a solution. Go to tested. At Step 4, the top element of S' is chosen.
Step 6. Fig. 2 shows a backtrack tree generated by the Fast algo-
Step 3 (Find Best Variable): For each variable w in S', do the rithm. The moderate improvement in speed obtained by the
rest of this step. (The order in which the variables are tested Fast algorithm (in comparison with other one-level algorithms)
depends on the specific algorithm.) For both Value[w] v- false is the result of two factors. First, starting the searches in Step
and Value[w] true, compute PS- u {w}. If the result is 3 just after the point where a unary node was discovered often
false in both cases, exit the loop and go to Step 6. If the result reduces the time to find another unary node, since the part of
is false in one and true in the other, remember the value that the stack that has not been tested recently is more likely to
gives true, exit the loop, and go to Step 5. If the result is true contain a unary node. Second, using a stack for S' permits
in both cases, continue with testing the next value of w. Step 4 to select for a binary node a variable that was promising
Authorized licensed use limited to: UNIVERSIDADE FEDERAL DO ESPIRITO SANTO. Downloaded on April 03,2025 at 12:06:42 UTC from IEEE Xplore. Restrictions apply.
The Analyzed one-level algorithm has those modifications to the Fast algorithm required to ensure that the number of binary nodes in a search tree generated by that algorithm will be exactly the same as the number of binary nodes generated by the one-level algorithm we analyzed in [19]. To accomplish this, we change the Fast algorithm so that at Step 4, the variable with the smallest index is chosen (i.e., select v_i before v_j if i < j). The Analyzed algorithm tests for unary nodes in a different order from the algorithm in [19] and usually requires fewer predicate evaluations, but it selects binary nodes in exactly the same order. Fig. 3 shows a backtrack tree generated by the Analyzed algorithm.

Fig. 3. The backtrack tree produced by the Analyzed one-level algorithm for the predicate from Fig. 1. This tree has eight binary nodes, two unary nodes, two zero-degree nodes, and seven solution nodes.

The Simple one-level algorithm is the same as the Fast algorithm, except that at Step 3, the search always begins at the top of the stack. This is the first algorithm we measured. Since it is slower than the Fast algorithm and since no theoretical work has been done on its performance, we did not study it as thoroughly as the other two algorithms.

Our measurements suggest that the asymptotic formula in [19] [roughly exp Θ(v^((s-α-1)/(s-2)))] for the exponential part of the performance of the Analyzed algorithm applies to all the one-level algorithms considered in this paper. We have no mathematical proof of this, however.

IV. THE PROBLEM SET

Since backtracking is a general strategy applicable to a wide variety of problems, it was necessary to choose a representative problem set on which to study its behavior. We performed our measurements on the same problem set used in our theoretical analyses: conjunctive normal form formulas with literals chosen from a set of v variables and their negations, with s literals per term and t = v^α terms. The values of the parameters v, s, and α determine the nature of the problem set.

Conjunctive normal form formulas are a convenient choice for several reasons. They have natural intermediate predicates: for each subset S* of the variables, P_S* is the conjunction of the clauses of P that contain only literals of the variables in S*. For fixed α > 1, fixed s ≥ 3, and increasing v, the problem set is NP complete. We have done extensive analytic studies of these problem sets [2], [19], and they have many characteristics in common with problem sets for which backtracking is typically used. An interesting empirical question is the choice of the parameterization that is most representative of common problems. Choices that other investigators may wish to consider are t = av or s = log v. The analysis by Goldberg [10] of a simplified Davis-Putnam procedure uses a problem set similar to the one obtained by using a fixed value of t and s = av for fixed a. We used the problem set parameterized by t = v^α, s fixed, for our measurements in order to be able to relate them to our theoretical work. We believe that this is a realistic problem set.

TABLE I
THE CALCULATED AND MEASURED PERFORMANCE OF ZERO-LEVEL BACKTRACKING

3 Literals Per Term
Var.  Terms  Cal. Nodes    4.72v^(3/8)exp(.628v^(3/4))  Measured Nodes  Cal. Solutions
1     1      3.00          9.            3.00        1.750
4     8      24.36         47.           24.29       5.498
9     27     227.08        281.          233.        13.915
16    64     1954.80       2029.         2131.       12.735
25    125    17457.30      17679.        17557.      1.891
36    216    183062.69     184482.       175871.     2.046x10^-2
49    343    2275493.18    2285200.      2075872.    7.232x10^-6
64    512    3.326x10^7    3.331x10^7    3.0x10^7    3.750x10^-11
81    729    5.675x10^8    5.669x10^8                1.280x10^-18
100   1000   1.122x10^10   1.119x10^10               1.291x10^-28
121   1331   2.555x10^11   2.542x10^11               1.727x10^-41
144   1728   6.665x10^12   6.619x10^12               1.375x10^-57
169   2197   1.998x10^14   1.964x10^14               2.923x10^-77
196   2744   6.683x10^15   6.613x10^15               7.447x10^-101
225   3375   2.547x10^17   2.515x10^17               1.021x10^-128
256   4096   1.092x10^19   1.077x10^19               3.378x10^-161

4 Literals Per Term
Var.  Terms  Cal. Nodes    4.12v^(5/12)exp(.730v^(5/6))  Measured Nodes  Cal. Solutions
1     1      3.00          8.            3.00        1.875
4     8      28.38         75.           28.24       9.548
9     27     518.47        980.          523.        89.638
16    64     15168.57      20536.        15495.      1053.517
25    125    607176.45     680632.       617119.     10523.481
36    216    3.297x10^7    3.51x10^7     3.26x10^7   60656.248
49    343    2.608x10^9    2.76x10^9                 136967.520
64    512    3.083x10^11   3.26x10^11                82264.657

For zero-level backtracking, the number of nodes is equal to the number of predicate evaluations. The measurements consist of 110 runs for each case except the one-variable case. There were 100 runs for the one-variable case. The first column gives the number of variables in the predicates being measured, and the second column gives the number of terms. Column three gives the number of nodes predicted by the formula in [2] (calculated nodes). Column four shows the calculated result using the leading term of an asymptotic formula for the number of nodes [2]. Column five shows the average number of nodes obtained in the experimental measurements, and column six shows the average number of solutions per problem predicted by the formulas in [2].

V. EXPERIMENTAL TECHNIQUES

We studied the performance of the Fast, Analyzed, and Simple versions of one-level backtracking. For comparison, we also made some measurements on zero-level (simple backtracking) and two-level algorithms [18]. We studied each algorithm for s = 3 and s = 4. Starting with n = 1, we set v = n^2 and t = n^3 and increased n in steps of 1. The upper limit on n was determined by the memory capacity of our computer (n = 16 for s = 3) or by our patience. Our 48-bit linear congruential random number generator, used to generate random literals, was always started at the same initial value, so the various algorithms were tested on exactly the same problems when the runs were of the same length. Individual runs were for 10, 100, 1000, 10 000, or 2^16 problems of each size. The results reported in Tables I-V were obtained by combining the data from one to three runs.
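A sketch of how one problem from this model set can be generated (added for illustration; the paper's generator was a 48-bit linear congruential generator started from a fixed seed so that every algorithm saw the same problems, and the exact sampling details below, including the (variable, sign) literal representation, are assumptions rather than taken from the paper):

    import random

    def random_problem(n, s, seed=0):
        """One random CNF problem with v = n**2 variables and t = n**3 terms,
        each term being s literals drawn from the variables and their negations.
        Python's random module stands in for the paper's 48-bit linear
        congruential generator."""
        rng = random.Random(seed)
        v, t = n ** 2, n ** 3
        terms = [[(rng.randint(1, v), rng.choice((False, True)))   # (variable, sign)
                  for _ in range(s)]
                 for _ in range(t)]
        return v, terms

    # Largest s = 3 case measured: n = 16, i.e., v = 256 and t = 4096 terms.
    v, terms = random_problem(16, 3)
    print(v, len(terms))   # 256 4096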
The measurements were done over several months using a dedicated TI 980 minicomputer. In order to obtain the largest possible number of data points, a great deal of effort was devoted to making the program fast. For large problems, nearly all the time was spent doing intermediate predicate evaluations, so optimization of predicate evaluations received particular attention. Backtracking proceeds by setting or changing one variable at a time. For our purposes, a clause is true if at least one literal in it is true or unknown. We keep a list for each literal. Each clause is on the list for one of the literals that causes it to be true. Initially, when no variables are set, any literal in a clause causes it to be true. When a variable is set or changed, one literal becomes false. The clauses on the list for that literal are examined. If a clause contains other literals that cause it to be true, then the clause is moved to the list for that literal. If all the literals in a clause are false, then the predicate is false, and it is time to backtrack. After backtracking, the clause that caused the predicate to become false is again true, so it is not necessary to move it (or any other clauses remaining on its list) to a new list. Also, there is no need to move any other clauses when backtracking. Since a clause can be left on the list of any literal that makes it true, there is no need to restore other clauses to their former lists. This method of evaluating intermediate predicates allows us to evaluate large predicates rapidly. For v = 256, t = 4096, and s = 3, we estimate that only 35 μs are required for a typical evaluation.
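The scheme just described can be sketched as follows (an illustrative reimplementation, not the authors' code): each clause sits on the list of one literal that currently makes it true, and only the list of the literal that has just become false is examined when a variable is set or changed.

    class ClauseLists:
        """Incremental intermediate-predicate evaluation by per-literal clause
        lists.  A literal is a pair (variable, sign); a clause is a list of
        literals and is true if at least one of its literals is true or unknown."""

        def __init__(self, terms):
            self.value = {}                      # current partial assignment
            self.lists = {}                      # literal -> clauses parked on it
            for term in terms:                   # initially any literal will do
                self.lists.setdefault(term[0], []).append(term)

        def _is_false(self, lit):
            var, sign = lit
            return var in self.value and self.value[var] != sign

        def assign(self, var, val):
            """Set (or change) one variable.  Returns False if some clause now
            has all of its literals false, i.e., the predicate is false."""
            self.value[var] = val
            dead = (var, not val)                # the one literal that became false
            for clause in list(self.lists.get(dead, [])):
                keeper = next((l for l in clause if not self._is_false(l)), None)
                if keeper is None:
                    return False                 # time to backtrack
                self.lists[dead].remove(clause)  # move clause to a still-true literal
                self.lists.setdefault(keeper, []).append(clause)
            return True

        def unassign(self, var):
            # On backtracking nothing is moved back: a clause may stay on the
            # list of any literal that makes it true (see the text above).
            del self.value[var]

A zero- or one-level search would call assign as each variable is set or given its next value, and unassign when the variable is returned to the unset list.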
We also performed other operations on the predicate, such as removing tautological clauses and repeated literals within clauses. Although our methods examine the detailed structure of the predicate to facilitate rapid evaluation, they do so in a way that is compatible with our aim of measuring the performance of basic backtracking algorithms. We plan future studies of predicate analysis methods.
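A small companion sketch of that preprocessing step (again an illustration, not the paper's code):

    def simplify(terms):
        """Drop repeated literals within a term and discard tautological terms
        (terms containing some variable both plain and negated)."""
        cleaned = []
        for term in terms:
            literals = set(term)                                  # repeated literals
            if any((v, not s) in literals for v, s in literals):  # tautological term
                continue
            cleaned.append(sorted(literals))
        return cleaned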
TABLE II
THE MEASURED PERFORMANCE OF THE FAST ONE-LEVEL BACKTRACKING ALGORITHM

3 Literals Per Term
Var.  Terms  Runs   Binary Nodes   Unary Nodes    Evaluations      Solutions
1     1      65536  0.7492±.0017   0.2508±.0017   4.498±.0034      1.7492±.0017
4     8      76536  5.0480±.0081   4.1454±.0053   45.446±.046      5.5087±.0087
9     27     76536  16.697±.039    27.778±.041    273.70±.42       13.864±.039
16    64     76536  23.386±.082    81.30±.16      846.0±1.6        12.738±.074
25    125    76536  19.143±.056    140.76±.79     1800.4±3.5       1.900±.033
36    216    76536  23.849±.055    236.71±.49     3666.0±7.1       0.0237±.0041
49    343    76536  33.266±.076    392.7±.84      7125.±14.        0
64    512    76536  46.94±.11      641.3±1.4      13285.±27.       0
81    729    76536  66.17±.15      1028.0±2.3     23869.±49.       0
100   1000   76536  93.13±.21      1623.2±3.6     41634.±86.       0
121   1331   11000  132.35±.80     255.±15.       71691.±395.      0
144   1728   11000  185.4±1.1      3937.±24.      119644.±675.     0
169   2197   11000  260.7±1.6      6047.±36.      197455.±1109.    0
196   2744   1000   365.1±8.7      9179.±217.     319591.±6943.    0
225   3375   1000   526.±10.       14240.±282.    527996.±9860.    0
256   4096   1000   708.±15.       20570.±438.    810792.±16297.   0

4 Literals Per Term
Var.  Terms  Runs   Binary Nodes    Unary Nodes      Evaluations         Solutions
1     1      1000   0.880±.010      0.120±.010       4.760±.021          1.880±.010
4     8      1000   8.754±.079      3.617±.048       62.96±.33           9.463±.082
9     27     1000   95.97±.99       69.08±.57        946.4±7.2           90.54±1.00
16    64     1000   1164.±16.       1242.±12.        14904.±152.         1050.±16.
25    125    1000   12464.±220.     17892.±222.      207279.±2681.       10821.±209.
36    216    1000   78108.±2021.    173078.±2570.    2043044.±31322.     61795.±1870.
49    343    1000   251045.±6325.   1252922.±17575.  16243331.±220762.   136193.±5321.

The order of variables is controlled by a stack. A circular search is used to find the unary nodes.

TABLE III
THE MEASURED PERFORMANCE OF THE ANALYZED ONE-LEVEL BACKTRACKING ALGORITHM

3 Literals Per Term
Var.  Terms  Runs   Binary Nodes   Unary Nodes    Evaluations      Solutions
1     1      10000  0.750±.004     0.250±.004     4.500±.009       1.750±.004
4     8      11000  5.015±.022     4.148±.014     45.37±.12        5.495±.023
9     27     11000  16.59±.10      27.87±.11      278.5±1.1        13.73±.10
16    64     11000  23.15±.22      81.38±.44      876.6±4.5        12.59±.19
25    125    11000  19.09±.15      141.34±.81     1894.±10.        1.853±.082
36    216    11000  24.06±.15      236.8±1.3      3949.±21.        0.026±.013
49    343    11000  33.38±.21      390.1±2.3      7755.±44.        0
64    512    11000  47.51±.32      640.8±4.0      14730.±88.       0
81    729    11000  67.63±.47      1034.1±6.8     26953.±166.      0
100   1000   11000  94.27±.64      1618.±11.      46999.±287.      0
121   1331   11000  134.93±.95     2559.±17.      81931.±523.      0
144   1728   11000  189.5±1.5      3946.±29.      137759.±948.     0
169   2197   11000  267.8±2.0      6083.±44.      229475.±1554.    0
196   2744   1000   369.9±9.5      9098.±228.     367960.±8324.    0
225   3375   1000   518.±12.       13763.±324.    595712.±13221.   0
256   4096   1000   710.±18.       20172.±495.    923832.±21182.   0

4 Literals Per Term
Var.  Terms  Runs  Binary Nodes    Unary Nodes      Evaluations         Solutions
1     1      1000  0.880±.010      0.120±.010       4.760±.021          1.880±.011
4     8      1000  8.751±.079      3.624±.048       63.06±.33           9.463±.082
9     27     1000  95.81±.99       69.12±.59        965.4±7.4           90.54±1.00
16    64     1000  1166.±16.       1239.±13.        15611.±164.         1050.±16.
25    125    1000  12493.±221.     18026.±241.      222722.±2979.       10821.±209.
36    216    1000  78834.±2046.    178528.±3056.    2259275.±38904.     61795.±1870.
49    343    1000  255661.±6531.   1288497.±21337.  18727862.±297338.   136193.±5321.

The variables have a fixed order. A circular search is used to find the unary nodes. For s = 3, the following is the calculated number of binary nodes: one variable, 0.75 binary nodes; four variables, 5.021 binary nodes; nine variables, 16.75 binary nodes. This method was analyzed in [19].

TABLE IV
THE MEASURED PERFORMANCE OF THE SIMPLE LEVEL-1 BACKTRACKING ALGORITHM

3 Literals Per Term
Var.  Terms  Runs  Total Nodes    Evaluations        Solutions
1     1      100   3.00±0.        4.48±.09           1.74±.04
4     8      100   18.58±.47      43.3±1.2           5.28±.23
9     27     100   87.6±3.5       268.±10.           13.44±.79
16    64     100   179.±11.       780.±40.           8.76±2.7
25    125    100   344.±21.       2075.±113.         1.54±.52
36    216    100   637.±37.       4979.±253.         0.01±.01
49    343    100   927.±54.       9008.±476.         0
64    512    100   1615.±87.      18501.±899.        0
81    729    100   2520.±139.     34088.±1712.       0
100   1000   100   3945.±262.     61052.±3715.       0
121   1331   100   6021.±415.     105477.±6632.      0
144   1728   100   9025.±486.     177123.±8641.      0
169   2197   100   12568.±683.    274116.±13853.     0
196   2744   100   22794.±2550.   523414.±51058.     0
225   3375   100   33549.±1898.   851603.±44483.     0
256   4096   100   44533.±3236.   1232146.±82147.    0

4 Literals Per Term
Var.  Terms  Runs  Total Nodes       Evaluations          Solutions
1     1      100   3.00±0            4.72±.08             1.87±.03
4     8      100   25.24±.37         61.50±.99            9.21±.26
9     27     100   328.9±8.4         941.±23.             88.8±3.1
16    64     100   4794.±181.        15036.±527.          1022.±56.
25    125    100   60007.±2320.      210916.±22339.       10442.±592.
36    216    100   513737.±27591.    2212728.±100828.     62927.±5764.
49    343    100   3089249.±151617.  18415912.±796432.    148034.±19859.

The order of the variables is controlled by a stack. A linear search is used to find the unary nodes. This is the method published in [18].
VI. RESULTS

Tables I-V give the results of the measurements. Table I shows the performance of the zero-level algorithm as measured and as calculated from the formulas in [2]. The theory for zero-level backtracking is quite complete; the measurements were done to provide a test for the statistical methods that were used on the other cases. For zero-level backtracking, the number of nodes and the number of predicate evaluations are identical.

Tables II-IV give the measured performance of the Fast, Analyzed, and Simple algorithms. For the first two algorithms, the number of binary nodes, unary nodes, and predicate evaluations were measured. For the Simple algorithm "total" nodes were measured, where total nodes is defined to be 2 * (binary nodes + unary nodes) + 1. This definition corresponds to the one used in [18].

The number of predicate evaluations gives a good indication of the relative speed of the various methods. Even with our very rapid predicate evaluation methods, over 90 percent of the time was spent evaluating predicates. The measurements indicate that the Fast algorithm is indeed the fastest of the three. The differences, however, are not large, and they increase only slowly with problem size. For large problems, there are many more unary nodes than binary nodes, and there are many more evaluations than nodes. For s = 3 and problems of 16 variables or more, the one-level methods are much faster than zero-level. With s = 4, the same holds true for problems with 25 or more variables.

Table V gives preliminary measurements on the two-level algorithm published in [18]. It appears to be possible to speed up this algorithm by at least a factor of two. We plan to publish more extensive measurements after we have investigated this possibility, and after we have developed a theoretical analysis for two-level algorithms. Even without improvement, the two-level algorithm is preferred for very large problems.

TABLE V
THE MEASURED PERFORMANCE OF LEVEL-2 BACKTRACKING

3 Literals Per Term
Var.  Terms  Runs  Total Nodes    Evaluations      Solutions
1     1      100   3.00±0         4.48±.09         1.74±.04
4     8      100   17.16±.05      150.4±6.0        5.28±.23
9     27     100   61.1±3.1       1418.5±7.4       13.44±.79
16    64     100   67.5±7.2       3435.±34.        8.76±2.7
25    125    100   51.5±3.8       6107.±54.        1.54±.52
36    216    100   57.3±3.1       11803.±76.       0.01±.01
49    343    100   70.9±3.2       21217.±117.      0
64    512    100   89.4±4.3       42143.±224.      0
81    729    100   115.1±5.1      63582.±350.      0
100   1000   100   128.1±5.9      70672.±474.      0
121   1331   100   161.6±5.4      120407.±597.     0
144   1728   100   182.4±7.0      166134.±842.     0
169   2197   100   234.0±8.3      314029.±1265.    0
196   2744   100   256.±10.       428160.±1816.    0
225   3375   100   297.±11.       567570.±2194.    0
256   4096   100   343.±11.       772527.±2660.    0

4 Literals Per Term
Var.  Terms  Runs  Total Nodes      Evaluations           Solutions
1     1      100   3.00±0           4.74±.007             1.87±.03
4     8      100   24.52±.04        219.5±4.2             9.21±.26
9     27     100   286.1±7.8        5580.±128.            88.8±3.1
16    64     100   3393.±151.       65270.±3476.          1022.±56.
25    125    100   34329.±1655.     728498.±30133.        10442.±592.
36    216    100   204709.±15998.   5927770.±304356.      62927.±5764.
49    343    100   516182.±55303.   27663834.±1509539.    148034.±19859.

This is the method published in [18]. Refinements can probably reduce the number of predicate evaluations by a factor of two.

Our theoretical work [2], [19] suggests that the number of binary nodes for all the algorithms we measured (except perhaps the two-level) is asymptotically of the form a1·v^a2·exp(a3·v^a4). For one-level backtracking, the number of unary nodes must be between zero and v times the number of binary nodes, and the number of evaluations must be between one and v times the number of nodes (between one and v^2 times the number of binary nodes). All four constants, a1, a2, a3, and a4, are needed to reliably predict the actual number of nodes for large values of v. We attempted to determine these constants by doing a least squares fit to our data using STEPIT [3]. We found that it is difficult to do extrapolations with data of the type we have gathered. It is presently impractical to obtain much data for problems larger than the ones we studied. The asymptotic formula is not valid for small problems. Statistical fluctuations make it difficult to achieve high accuracy for any data point. With a limited range of v and with inaccurate data, many sets of values for a1, a2, a3, and a4 give nearly equally good fits: the collinearity problems are severe. For s = 3, our data are sufficient to determine two parameters when the other two are fixed. In a few cases, it may be adequate for three, but in no case can all four be determined. For s = 4, we have not extended our measurements to large enough v to obtain reliable asymptotic fits.
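To make the fitting problem concrete, here is a sketch of such a fit (the paper used STEPIT [3]; scipy is only a stand-in here, and the fit is done on the logarithm of the counts). The data are the measured binary node counts for the Fast algorithm with s = 3 and v ≥ 49, taken from Table II; running it illustrates how strongly correlated the four parameters are.

    import numpy as np
    from scipy.optimize import curve_fit

    v = np.array([49, 64, 81, 100, 121, 144, 169, 196, 225, 256], dtype=float)
    nodes = np.array([33.266, 46.94, 66.17, 93.13, 132.35,
                      185.4, 260.7, 365.1, 526.0, 708.0])

    def log_model(v, log_a1, a2, a3, a4):
        # log of a1 * v**a2 * exp(a3 * v**a4), fitted in log space
        return log_a1 + a2 * np.log(v) + a3 * v ** a4

    params, cov = curve_fit(log_model, v, np.log(nodes),
                            p0=(0.0, 0.0, 0.3, 0.5), maxfev=20000)
    print(params)    # many (a1, a2, a3, a4) combinations fit almost equally well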
The upper part of Table VI shows the results of fitting a3 and a4 when a1 and a2 are held fixed. (They are set to the natural values of one and zero, respectively.) For any of the algorithms, the values of a3 and a4 should be the same whether binary nodes or evaluations are being measured because these quantities are related polynomially. The variation in the fitted values gives one indication of the inaccuracy of the fitting process. For zero-level backtracking, the theoretical value of a4 is 0.75, so the fitted value is low by 0.15. For the Analyzed algorithm, the theoretical value of a4 is 0.50, so the fits are low by 0.11-0.14. Thus, although the fits to our quite extensive measurements give a rough indication of the upper exponent, the error is large compared to the range of values (zero to one) permitted by a naive analysis.

The middle part of Table VI gives fits for a2 and a3 with a4 set to its theoretical value and a1 set to one. (For the Fast and Simple algorithms, we use the theoretical value from the Analyzed algorithm.) For zero-level backtracking, the theoretical value of a3 is 0.730, so the fitted value is low by 0.20. For the Analyzed algorithm, theory gives 0.182 ≤ a3 ≤ 0.320 (for α > s/2). All fits for the various one-level algorithms were in this range, although the values for the various cases spanned most of the range. The measurements were consistent with a3 = 0.320, but not with a3 = 0.182 (except for the Simple algorithm, for which we do not have much data).

We suspect that the true value of a3 for one-level backtracking is equal to the upper limit. The lower part of Table VI shows the results of fitting a1 and a2 with a3 and a4 set according to this conjecture. The resulting formulas are probably the most reliable for predicting the performance for large v. For zero-level backtracking, the theoretical value of a2 is 3/8, so the fit is low by 0.12.

The various fits for one-level algorithms suggest that the number of unary nodes is proportional to v^(1/2) · (binary nodes). Tables II and III clearly indicate that the Fast algorithm is better than the Analyzed, but Table VI gives little indication of the functional form of the improvement. It is likely that some slowly growing function (such as log v) is involved.
TABLE VI
VARIOUS LEAST SQUARE FITS TO THE MEASURED NUMBER OF BINARY NODES, UNARY NODES, AND EVALUATIONS FOR VARIOUS BACKTRACKING ALGORITHMS ON PROBLEMS WITH THREE LITERALS PER TERM

First group:
  level zero:  nodes exp(1.41v^.601) [χ² = 2.16]; range 25-64
  fast:        binary exp(.763v^.387) [χ² = 10.5], unary exp(1.75v^.313) [χ² = 5.40], evaluations exp(3.20v^.261) [χ² = 5.23]; range 100-256
  analyzed:    binary exp(.748v^.392) [χ² = 0.907], unary exp(1.75v^.313) [χ² = 0.873], evaluations exp(3.24v^.261) [χ² = 1.70]; range 100-256
  simple:      total nodes exp(2.24v^.282) [χ² = 6.98], evaluations exp(3.29v^.262) [χ² = 5.56]; range 100-256
  level two:   total nodes exp(2.29v^.168) [χ² = 0.985], evaluations exp(5.43v^.165) [χ² = 0.220]; range 169-256

Second group:
  level zero:  nodes v^1.22 exp(.528v^(3/4)) [χ² = 6.81]; range 4-64
  fast:        binary v^.397 exp(.271v^(1/2)) [χ² = 8.14], unary v^1.05 exp(.257v^(1/2)) [χ² = 6.88], evaluations v^1.90 exp(.189v^(1/2)) [χ² = 23.2]; range 100-256
  analyzed:    binary v^.383 exp(.278v^(1/2)) [χ² = 1.39], unary v^1.05 exp(.258v^(1/2)) [χ² = 0.836], evaluations v^1.91 exp(.195v^(1/2)) [χ² = 2.87]; range 100-256
  simple:      total nodes v^1.33 exp(.209v^(1/2)) [χ² = 7.45], evaluations v^1.92 exp(.211v^(1/2)) [χ² = 6.42]; range 100-256

Third group:
  level zero:  nodes 6.88v^.256 exp(.628v^(3/4)) [χ² = 0.0435]; range 16-64
  fast:        binary 2.27v^.113 exp(.320v^(1/2)) [χ² = 15.5], unary 3.29v^.651 exp(.320v^(1/2)) [χ² = 14.0], evaluations 9.32v^1.130 exp(.320v^(1/2)) [χ² = 41.2]; range 49-256
  analyzed:    binary 2.19v^.124 exp(.320v^(1/2)) [χ² = 0.902], unary 3.16v^.661 exp(.320v^(1/2)) [χ² = 5.13], evaluations 8.52v^1.176 exp(.320v^(1/2)) [χ² = 12.1]; range 49-256
  simple:      total nodes 10.1v^.594 exp(.320v^(1/2)) [χ² = 7.04], evaluations 8.26v^1.233 exp(.320v^(1/2)) [χ² = 6.45]; range 49-256

For the simple and level-two algorithms, the fits are for total nodes and evaluations. The first group of fits is of the form exp(a1·v^a2). The second group is of the form v^a1·exp(a2·v^θ), where θ = 3/4 for level zero and θ = 1/2 for level one. The third group is of the form a1·v^a2·exp(γ·v^θ), where γ = 0.730, θ = 3/4 for level zero, and γ = 0.320, θ = 1/2 for level one. Good fits for level one were not obtained with the form a1·v^a2·exp(0.182·v^(1/2)). The "range" column shows the range of number of variables (range of v) over which the fits were done. Small values of v were omitted because the asymptotic behavior does not have much influence on the function value at these points. The range was chosen to be as large as possible subject to keeping χ² reasonably small while using the same range for the various level-one algorithms. For most cases, the statistical accuracy of the data was not adequate for stable fits with three parameters. For s = 4, the range of the data did not permit reliable fits to asymptotic formulas.

Fig. 4 is a graph comparing the performance of the zero-level, Fast, and two-level algorithms. Curves for the other one-level algorithms would be too close to the curve for the Fast for convenient display. The scales were chosen so that the function a1·v^a2·exp(a3·v^a4) would approach a straight line for large v.

Fig. 4. The average number of intermediate predicate evaluations used by various backtracking algorithms as a function of the number of variables. For each algorithm, the lower curve is for problems with three literals per term and the upper curve is for four literals per term. On the evaluation axis, linear distance is proportional to log evaluations. On the variable axis, linear distance is proportional to log log variables. These scales result in curves of the form a1·v^a2·exp(a3·v^a4) approaching a straight line for large v.

VII. CONCLUSIONS

The experimental studies reported in this paper supplement our theoretical results. We have established an upper and lower limit for the size of problem for which the one-level search rearrangement algorithms are preferred over zero-level and two-level methods. We have shown that our theoretical results are compatible with the measured performance of the various one-level algorithms, including the Fast algorithm. We have shown that the Fast algorithm is faster than the Analyzed algorithm. Our measurements indicate that the number of unary nodes is roughly v^(1/2) times the number of binary nodes, and the number of evaluations is roughly v times the number of binary nodes. Most of these results would have been extremely difficult to obtain using theoretical techniques.

On the other hand, these experimental studies are quite limited in their ability to establish the asymptotic behavior of any method. When choosing or developing an algorithm for a practical problem, both the theoretical asymptotic studies and the results of measurements on problems of reasonable size should be considered. The results in this paper combined with those in [2] and [19] should be of help in selecting the most appropriate basic backtracking algorithm for a particular problem.
The methods in [16] and [17] can then be used to estimate how long the chosen method will take to solve the problem.

In the future, we hope to perform theoretical analyses and practical measurements on two-level backtracking and on various predicate analysis techniques. Together with our present results, these will provide a rational basis for selecting an algorithm to solve a large problem.
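For orientation only (not part of the paper), the central idea of the estimation method in [16] can be sketched as follows: follow a single random path down the backtrack tree, multiply the branching degrees met along the way, and average the resulting unbiased estimates over many probes; partial backtracking [17] refines this idea. The node-counting convention below (nodes = consistent partial assignments, as in the earlier zero-level sketch) is an assumption of the illustration.

    import random

    def estimate_tree_size(variables, relations, probes=1000, rng=random):
        order = list(variables)

        def consistent(assignment):
            return all(rel(*(assignment[x] for x in xs))
                       for xs, rel in relations
                       if all(x in assignment for x in xs))

        total = 0.0
        for _ in range(probes):
            assignment, weight, estimate = {}, 1.0, 1.0
            for w in order:
                children = [val for val in (False, True)
                            if consistent(dict(assignment, **{w: val}))]
                if not children:
                    break
                weight *= len(children)      # inverse probability of this path
                estimate += weight           # expected number of nodes at this depth
                assignment[w] = rng.choice(children)
            total += estimate
        return total / probes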
REFERENCES

[1] J. R. Bitner and E. M. Reingold, "Backtrack programming techniques," Commun. Ass. Comput. Mach., vol. 18, pp. 651-655, 1975.
[2] C. A. Brown and P. W. Purdom, Jr., "An average time analysis of backtracking," SIAM J. Comput., vol. 10, pp. 583-593, 1981.
[3] J. P. Chandler, "STEPIT," Quantum Chem. Program Exchange, Dep. Chem., Indiana Univ., Bloomington.
[4] A. K. Chandra, D. C. Kozen, and L. J. Stockmeyer, "Alternation," J. Ass. Comput. Mach., vol. 28, pp. 114-133, 1981.
[5] S. A. Cook, "The complexity of theorem-proving procedures," in Proc. 3rd Annu. ACM Symp. Theory of Computing. New York: Ass. Comput. Mach., 1971, pp. 151-158.
[6] M. Davis and H. Putnam, "A computing procedure for quantification theory," J. Ass. Comput. Mach., vol. 7, pp. 201-215, 1960.
[7] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numer. Math., vol. 1, pp. 269-271, 1959.
[8] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA: Freeman, 1979.
[9] J. Gaschnig, "Performance measurement and analysis of certain search algorithms," Ph.D. dissertation, Carnegie-Mellon Univ., Pittsburgh, PA, 1979.
[10] A. Goldberg, P. Purdom, and C. A. Brown, "Average time analyses of simplified Davis-Putnam procedures," submitted to Inform. Processing Lett.
[11] R. M. Haralick, L. S. Davis, A. Rosenfeld, and D. L. Milgram, "Reduction operations for constraint satisfaction," Inform. Sci., vol. 14, pp. 199-219, 1978.
[12] R. M. Haralick and L. G. Shapiro, "The consistent labeling problem: Part I," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-1, pp. 173-184, 1979.
[13] ——, "The consistent labeling problem: Part II," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 193-203, 1980.
[14] R. M. Haralick and G. L. Elliot, "Increasing tree search efficiency for constraint satisfaction problems," Virginia Polytech. Inst., Blacksburg, Rep., 1979.
[15] R. M. Karp, "Reducibility among combinatorial problems," in Complexity in Computer Computations, R. E. Miller and J. W. Thatcher, Eds. New York: Plenum, 1972, pp. 85-103.
[16] D. E. Knuth, "Estimating the efficiency of backtracking programs," Math. Comput., vol. 29, pp. 121-136, 1975.
[17] P. W. Purdom, "Tree size by partial backtracking," SIAM J. Comput., vol. 7, pp. 481-491, 1977.
[18] P. W. Purdom, Jr., C. A. Brown, and E. L. Robertson, "Backtracking with multi-level search rearrangement," Acta Informatica, vol. 15, pp. 99-114, 1981.
[19] P. W. Purdom, Jr. and C. A. Brown, "An analysis of backtracking with search rearrangement," SIAM J. Comput., to be published.