Combinatorial Search: From Algorithms to Systems

Youssef Hamadi
Microsoft Research Cambridge
Cambridge, UK

Foreword
To solve a problem as efficiently as possible, a user selects a type of solver (MIP, CP,
SAT), then defines a model and selects a resolution method. The model expresses
the problem in a way the solver can understand. The resolution method can be
complete (one is certain not to miss solutions) or incomplete (it uses a heuristic, i.e.,
a method that improves the chances of finding a solution but offers no completeness
guarantee).
Ever since solvers have existed, researchers have tried to simplify the task of the
end user by helping with two key steps: the creation of the model, and the choice of
a resolution method. In this book, Youssef Hamadi helps the user with the second
point by presenting ways to automatically select and adjust resolution strategies.
This book proposes several methods for both SAT and CP solvers. First, the author
demonstrates the benefit of parallelism through the duplication of search strategies.
In the best case, this can provide superlinear speedup in the resolution process.
In most cases, it yields a more robust resolution method, to the point that such
a solver is never beaten by a solver using only the single best strategy. The solver
ManySAT, co-developed by Mr. Hamadi, is based on this idea and has won numerous
prizes in SAT competitions. Its fame extends far beyond the SAT solving community,
and this line of work is now a reference in the field.
Any resolution method must be guided by the user through the definition of a
resolution strategy, which typically defines the next decision to be made, i.e., which
variable should be assigned which value. This book considers the automatic learning
of the parameters of resolution strategies. It shows how to extract knowledge
from the information available during search. The difficulty is to determine the
relevant information and decide how it can be exploited. A particularly novel
approach is proposed: it considers the successive resolutions of similar problems to
gradually build an efficient strategy.
This is followed by the presentation of Autonomous Search, a major contribution
of the book. In that formalism, the solver itself determines the best way to find
solutions. This is a very important topic, one that has often been approached too
quickly, and which is finally well defined in this book. Many researchers should
benefit from this contribution.
This book is fun to follow and the reader can understand the continuity of the
proposed approaches. Youssef Hamadi is able to convey his passion and conviction.
It is a pleasure to follow him on his quest for a fully automated resolution procedure.
The problem is gradually understood and better resolved throughout the book.
The quality, diversity, and originality of the proposed methods should satisfy
many readers, and this book will certainly become a reference in the field. I highly
recommend reading it.
Nice, France Jean-Charles Régin
September 2013
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Boosting Distributed Constraint Networks . . . . . . . . . . . . . . 5
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Previous Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Technical Background . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 Distributed Constraint Satisfaction Problems . . . . . . . 8
2.3.2 DisCSP Algorithms . . . . . . . . . . . . . . . . . . . . 9
2.3.3 Performance of DisCSP Algorithms . . . . . . . . . . . . 10
2.4 Risks in Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4.1 Randomization Risk . . . . . . . . . . . . . . . . . . . . 11
2.4.2 Selection Risk . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Boosting Distributed Constraint Satisfaction . . . . . . . . . . . 14
2.5.1 Utilizing Competition with Portfolios . . . . . . . . . . . 15
2.5.2 Utilizing Cooperation with Aggregation . . . . . . . . . 16
2.5.3 Categories of Knowledge . . . . . . . . . . . . . . . . . 16
2.5.4 Interpretation of Knowledge . . . . . . . . . . . . . . . . 17
2.5.5 Implementation of the Knowledge Sharing Policies . . . 17
2.5.6 Complexity . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 Empirical Evaluation . . . . . . . . . . . . . . . . . . . . . . . 20
2.6.1 Basic Performance . . . . . . . . . . . . . . . . . . . . . 20
2.6.2 Randomization Risk . . . . . . . . . . . . . . . . . . . . 21
2.6.3 Selection Risk . . . . . . . . . . . . . . . . . . . . . . . 22
2.6.4 Performance with Aggregation . . . . . . . . . . . . . . 23
2.6.5 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.6.6 Idle Time . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3 Parallel Tree Search for Satisfiability . . . . . . . . . . . . . . . . . 27
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Previous Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.4.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.4.3 Computing Weak Dependencies . . . . . . . . . . . . . . 75
5.4.4 The domFD Dynamic Variable Ordering . . . . . . . . . 75
5.4.5 Complexities of domFD . . . . . . . . . . . . . . . . . . 76
5.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.5.1 The Problems . . . . . . . . . . . . . . . . . . . . . . . 78
5.5.2 Searching for All Solutions or for an Optimal Solution . . 78
5.5.3 Searching for a Solution with a Classical
Branch-and-Prune Strategy . . . . . . . . . . . . . . . . 79
5.5.4 Searching for a Solution with a Restart-Based
Branch-and-Prune Strategy . . . . . . . . . . . . . . . . 80
5.5.5 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6 Continuous Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.3 Technical Background . . . . . . . . . . . . . . . . . . . . . . . 85
6.3.1 Constraint Satisfaction Problems . . . . . . . . . . . . . 85
6.3.2 Supervised Machine Learning . . . . . . . . . . . . . . . 86
6.4 Continuous Search in Constraint Programming . . . . . . . . . . 87
6.5 Dynamic Continuous Search . . . . . . . . . . . . . . . . . . . . 88
6.5.1 Representing Instances: Feature Definition . . . . . . . . 88
6.5.2 Feature Pre-processing . . . . . . . . . . . . . . . . . . . 90
6.5.3 Learning and Using the Heuristics Model . . . . . . . . . 90
6.5.4 Generating Examples in Exploration Mode . . . . . . . . 91
6.5.5 Imbalanced Examples . . . . . . . . . . . . . . . . . . . 91
6.6 Experimental Validation . . . . . . . . . . . . . . . . . . . . . . 92
6.6.1 Experimental Settings . . . . . . . . . . . . . . . . . . . 92
6.6.2 Practical Performances . . . . . . . . . . . . . . . . . . 93
6.6.3 The Power of Adaptation . . . . . . . . . . . . . . . . . 96
6.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
7 Autonomous Search . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.2 Solver Architecture . . . . . . . . . . . . . . . . . . . . . . . . 101
7.2.1 Problem Modeling/Encoding . . . . . . . . . . . . . . . 102
7.2.2 The Evaluation Function . . . . . . . . . . . . . . . . . . 103
7.2.3 The Solving Algorithm . . . . . . . . . . . . . . . . . . 103
7.2.4 Configuration of the Solver: The Parameters . . . . . . . 104
7.2.5 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.2.6 Existing Classifications and Taxonomies . . . . . . . . . 105
7.3 Architecture of Autonomous Solvers . . . . . . . . . . . . . . . 107
7.3.1 Control by Self-adaptation . . . . . . . . . . . . . . . . . 107
7.3.2 Control by Supervised Adaptation . . . . . . . . . . . . . 108
7.3.3 Searching for a Solution vs. Solutions for Searching . . . 108
List of Figures
Fig. 4.7 Runtime comparison using parallel local search portfolios made
of respectively one, four, and eight identical copies of PAWS
(same random seed and no cooperation). Black diamonds indicate
the performance of four cores vs. one core. Red triangles indicate
the performance of eight cores vs. one core. Points above the blue
line indicate that one core is faster . . . . . . . . . . . . . . . . . 68
Fig. 5.1 Classic propagation engine . . . . . . . . . . . . . . . . . . . . . 73
Fig. 5.2 Variables and propagators . . . . . . . . . . . . . . . . . . . . . 75
Fig. 5.3 Schedule(Queue Q, Propagator p, Variable Xi ) . . . . . . . . . . 75
Fig. 6.1 Continuous search scenario . . . . . . . . . . . . . . . . . . . . 87
Fig. 6.2 dyn-CS: selecting the best heuristic at each restart point . . . . . 88
Fig. 6.3 Langford number (lfn): Number of instances solved in less than 5
minutes with dyn-CS, wdeg, and dom-wdeg. Dashed lines
illustrate the performance of dyn-CS for a particular instance
ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Fig. 6.4 Number of instances solved in less than 5 minutes . . . . . . . . 95
Fig. 7.1 The general architecture of a solver . . . . . . . . . . . . . . . . 102
Fig. 7.2 Control taxonomy proposed by Eiben et al. [EHM99] . . . . . . . 105
Fig. 7.3 Classification of hyper-heuristics by Burke et al. [BHK+09b] . . . 106
Fig. 7.4 The global architecture of an Autonomous Search System . . . . 108
Fig. 7.5 The solver and its action with respect to different spaces . . . . . 109
List of Tables
Chapter 1
Introduction
Combinatorial search algorithms are typically concerned with solving NP-hard
problems. Such problems are not believed to be efficiently solvable in general: there
is no known algorithm that efficiently solves all instances of an NP-hard problem.
However, tractability results from complexity theory, along with decades of
experimental analysis, suggest that instances coming from practical application
domains can often be solved efficiently. Combinatorial search algorithms are devised
to efficiently explore the usually large solution spaces of these instances. They rely
on several techniques to reduce the search space to feasible regions and use
heuristics to efficiently explore those regions.
Combinatorial search problems can be cast into general mathematical definitions.
They involve a finite set of homogeneous objects, or variables, whose states
must satisfy a finite set of constraints and preferences. Variables have a domain of
potential values, and constraints or preferences are used to either restrict or order
combinations of values between variables. Dedicated algorithms are able to
efficiently enumerate combinations, or potential solutions, over these definitions.
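The definition above can be made concrete with a toy example. The sketch below is illustrative only (it is not code from the book): three variables with small integer domains, two binary constraints, and a naive backtracking enumeration of the solutions.

```python
# Minimal CSP sketch: variables with finite domains, constraints as
# predicates over partial assignments, enumerated by backtracking.

def backtrack(assignment, variables, domains, constraints):
    """Extend a partial assignment; yield every complete solution."""
    if len(assignment) == len(variables):
        yield dict(assignment)
        return
    var = variables[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        # Keep this branch only if no constraint is violated so far.
        if all(c(assignment) for c in constraints):
            yield from backtrack(assignment, variables, domains, constraints)
        del assignment[var]

variables = ["x", "y", "z"]
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
# Each constraint is vacuously true until both of its variables are assigned.
constraints = [
    lambda a: "x" not in a or "y" not in a or a["x"] != a["y"],  # x != y
    lambda a: "y" not in a or "z" not in a or a["y"] < a["z"],   # y < z
]

solutions = list(backtrack({}, variables, domains, constraints))
print(len(solutions))  # 6 assignments satisfy both constraints
```

Checking constraints on partial assignments, as here, is the simplest form of the search-space reduction that real solvers implement far more aggressively through propagation.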
There are several mathematical formalisms used to express and tackle combinatorial
problems. This book will consider the Constraint Satisfaction Problem (CSP)
and the Propositional Satisfiability problem (SAT), two successful formalisms at the
intersection of Artificial Intelligence, Operations Research, and Propositional
Calculus. Although these formalisms can express exactly the same set of problems,
as proved by complexity theory, they can be differentiated by their practical
degree of expressiveness. CSP is able to exploit more general combinations
of values and more general constraints; SAT, on the other hand, focuses on Boolean
variables and on a single class of constraints. These degrees of expressiveness offer
different algorithmic trade-offs. SAT can rely on more specialized and finely tuned
data structures and heuristics. On the other hand, algorithms operating on CSP
models have to handle different classes of constraints and variables, and therefore
have to deal with the associated overhead. These algorithms, or constraint solvers,
though different, are based on the same principles. They apply search-space
reduction through inference techniques, use activity-based heuristics to guide their
exploration, diversify their search through frequent restarts, and often learn from
their mistakes.
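Two of these shared principles, inference-based search-space reduction and backtracking search, can be sketched on the SAT side with a toy DPLL-style procedure. This is a hypothetical illustration, not the book's algorithms: unit propagation plays the role of inference, and the fixed variable-selection rule stands in for the activity-based heuristics real solvers use.

```python
# Toy DPLL sketch: clauses are lists of non-zero integers, where a
# positive literal v means "variable v is True" and -v means "False".

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses (inference)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                   # conflict: clause falsified
            if len(unassigned) == 1:          # unit clause forces a value
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def dpll(clauses, assignment):
    """Return a satisfying assignment, or None if none exists."""
    assignment = unit_propagate(clauses, dict(assignment))
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment
    var = min(free)  # real solvers pick by activity, not by index
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = dpll(clauses, {})
print(model is not None)  # True: the formula is satisfiable
```

Modern solvers add the remaining principles on top of this skeleton: clause learning records the reason for each conflict, and frequent restarts replay the search with that learned knowledge retained.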