Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison
CHRISTIAN BLUM
Université Libre de Bruxelles
AND
ANDREA ROLI
Università degli Studi di Bologna
C. Blum acknowledges support by the “Metaheuristics Network,” a Research Training Network funded by the Improving Human Potential program of the CEC, contract HPRN-CT-1999-00106. A. Roli acknowledges support by the CEC through a “Marie Curie Training Site” fellowship, contract HPMT-CT-2000-00032.
The information provided is the sole responsibility of the authors and does not reflect the Community’s opinion. The Community is not responsible for any use that might be made of data appearing in this publication.
Authors’ addresses: C. Blum, Université Libre de Bruxelles, IRIDIA, Avenue Franklin Roosevelt 50, CP 194/6, 1050 Brussels, Belgium; email: [email protected]; A. Roli, DEIA—Università degli Studi di Bologna, Viale Risorgimento, 2-Bologna, Italy; email: [email protected].
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 1515 Broadway, New York, NY 10036 USA, fax: +1 (212) 869-0481, or [email protected].
© 2003 ACM 0360-0300/03/0900-0268 $5.00
ACM Computing Surveys, Vol. 35, No. 3, September 2003, pp. 268–308.
call ŝ a strict locally minimal solution if f(ŝ) < f(s) ∀ s ∈ N(ŝ).

In the last 20 years, a new kind of approximate algorithm has emerged which basically tries to combine basic heuristic methods in higher level frameworks aimed at efficiently and effectively exploring a search space. These methods are nowadays commonly called metaheuristics.² The term metaheuristic, first introduced in Glover [1986], derives from the composition of two Greek words. Heuristic derives from the verb heuriskein (ευρισκειν) which means “to find”, while the suffix meta means “beyond, in an upper level”. Before this term was widely adopted, metaheuristics were often called modern heuristics [Reeves 1993].

This class of algorithms includes³—but is not restricted to—Ant Colony Optimization (ACO), Evolutionary Computation (EC) including Genetic Algorithms (GA), Iterated Local Search (ILS), Simulated Annealing (SA), and Tabu Search (TS). Up to now there is no commonly accepted definition for the term metaheuristic. It is just in the last few years that some researchers in the field tried to propose a definition. In the following we quote some of them:

“A metaheuristic is formally defined as an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space, learning strategies are used to structure information in order to find efficiently near-optimal solutions.” [Osman and Laporte 1996].

“A metaheuristic is an iterative master process that guides and modifies the operations of subordinate heuristics to efficiently produce high-quality solutions. It may manipulate a complete (or incomplete) single solution or a collection of solutions at each iteration. The subordinate heuristics may be high (or low) level procedures, or a simple local search, or just a construction method.” [Voß et al. 1999].

“Metaheuristics are typically high-level strategies which guide an underlying, more problem specific heuristic, to increase their performance. The main goal is to avoid the disadvantages of iterative improvement and, in particular, multiple descent by allowing the local search to escape from local optima. This is achieved by either allowing worsening moves or generating new starting solutions for the local search in a more “intelligent” way than just providing random initial solutions. Many of the methods can be interpreted as introducing a bias such that high quality solutions are produced quickly. This bias can be of various forms and can be cast as descent bias (based on the objective function), memory bias (based on previously made decisions) or experience bias (based on prior performance). Many of the metaheuristic approaches rely on probabilistic decisions made during the search. But, the main difference to pure random search is that in metaheuristic algorithms randomness is not used blindly but in an intelligent, biased form.” [Stützle 1999b].

“A metaheuristic is a set of concepts that can be used to define heuristic methods that can be applied to a wide set of different problems. In other words, a metaheuristic can be seen as a general algorithmic framework which can be applied to different optimization problems with relatively few modifications to make them adapted to a specific problem.” [Metaheuristics Network Website 2000].

Summarizing, we outline fundamental properties which characterize metaheuristics:

—Metaheuristics are strategies that “guide” the search process.
—The goal is to efficiently explore the search space in order to find (near-)optimal solutions.
—Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.

² The increasing importance of metaheuristics is underlined by the biannual Metaheuristics International Conference (MIC). The 5th is being held in Kyoto in August 2003 (http://www-or.amp.i.kyoto-u.ac.jp/mic2003/).
³ In alphabetical order.
—Metaheuristic algorithms are approximate and usually non-deterministic.
—They may incorporate mechanisms to avoid getting trapped in confined areas of the search space.
—The basic concepts of metaheuristics permit an abstract level description.
—Metaheuristics are not problem-specific.
—Metaheuristics may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper level strategy.
—Today’s more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.

In short, we could say that metaheuristics are high level strategies for exploring search spaces by using different methods. Of great importance hereby is that a dynamic balance is maintained between diversification and intensification. The term diversification generally refers to the exploration of the search space, whereas the term intensification refers to the exploitation of the accumulated search experience. These terms stem from the Tabu Search field [Glover and Laguna 1997] and it is important to clarify that the terms exploration and exploitation are sometimes used instead, for example in the Evolutionary Computation field [Eiben and Schippers 1998], with a more restricted meaning. In fact, the notions of exploitation and exploration often refer to rather short-term strategies tied to randomness, whereas intensification and diversification also refer to medium- and long-term strategies based on the usage of memory. The use of the terms diversification and intensification in their initial meaning is becoming more and more accepted by the whole field of metaheuristics. Therefore, we use them throughout the article. The balance between diversification and intensification as mentioned above is important, on one side to quickly identify regions in the search space with high quality solutions and on the other side not to waste too much time in regions of the search space which are either already explored or which do not provide high quality solutions.

The search strategies of different metaheuristics are highly dependent on the philosophy of the metaheuristic itself. Comparing the strategies used in different metaheuristics is one of the goals of Section 5. There are several different philosophies apparent in the existing metaheuristics. Some of them can be seen as “intelligent” extensions of local search algorithms. The goal of this kind of metaheuristic is to escape from local minima in order to proceed in the exploration of the search space and to move on to find other, hopefully better, local minima. This is for example the case in Tabu Search, Iterated Local Search, Variable Neighborhood Search, GRASP and Simulated Annealing. These metaheuristics (also called trajectory methods) work on one or several neighborhood structure(s) imposed on the members (the solutions) of the search space.

We can find a different philosophy in algorithms like Ant Colony Optimization and Evolutionary Computation. They incorporate a learning component in the sense that they implicitly or explicitly try to learn correlations between decision variables to identify high quality areas in the search space. This kind of metaheuristic performs, in a sense, a biased sampling of the search space. For instance, in Evolutionary Computation this is achieved by recombination of solutions and in Ant Colony Optimization by sampling the search space in every iteration according to a probability distribution.

The structure of this work is as follows: There are several approaches to classify metaheuristics according to their properties. In Section 2, we briefly list and summarize different classification approaches. Section 3 and Section 4 are devoted to a description of the most important metaheuristics nowadays. Section 3 describes the most relevant trajectory methods and, in Section 4, we outline population-based methods. Section 5 aims at giving a unifying view on metaheuristics with respect to the way they achieve intensification and diversification. This is done by the
introduction of a unifying framework, the I&D frame. Finally, Section 6 offers some conclusions and an outlook to the future.

We believe that it is hardly possible to produce a completely accurate survey of metaheuristics that is doing justice to every viewpoint. Moreover, a survey of an immense area such as metaheuristics has to focus on certain aspects and therefore has unfortunately to neglect other aspects. Therefore, we want to clarify at this point that this survey is done from the conceptual point of view. We want to outline the different concepts that are used in different metaheuristics in order to analyze the similarities and the differences between them. We do not go into the implementation of metaheuristics, which is certainly an important aspect of metaheuristics research with respect to the increasing importance of efficiency and software reusability. We refer the interested reader to Whitley [1989], Grefenstette [1990], Fink and Voß [1999], Schaerf et al. [2000], and Voß and Woodruff [2002].

2. CLASSIFICATION OF METAHEURISTICS

There are different ways to classify and describe metaheuristic algorithms. Depending on the characteristics selected to differentiate among them, several classifications are possible, each of them being the result of a specific viewpoint. We briefly summarize the most important ways of classifying metaheuristics.

Nature-inspired vs. non-nature inspired. Perhaps the most intuitive way of classifying metaheuristics is based on the origins of the algorithm. There are nature-inspired algorithms, like Genetic Algorithms and Ant Algorithms, and non nature-inspired ones such as Tabu Search and Iterated Local Search. In our opinion this classification is not very meaningful for the following two reasons. First, many recent hybrid algorithms do not fit either class (or, in a sense, they fit both at the same time). Second, it is sometimes difficult to clearly attribute an algorithm to one of the two classes. So, for example, one might ask the question if the use of memory in Tabu Search is not nature-inspired as well.

Population-based vs. single point search. Another characteristic that can be used for the classification of metaheuristics is the number of solutions used at the same time: Does the algorithm work on a population or on a single solution at any time? Algorithms working on single solutions are called trajectory methods and encompass local search-based metaheuristics, like Tabu Search, Iterated Local Search and Variable Neighborhood Search. They all share the property of describing a trajectory in the search space during the search process. Population-based metaheuristics, on the contrary, perform search processes which describe the evolution of a set of points in the search space.

Dynamic vs. static objective function. Metaheuristics can also be classified according to the way they make use of the objective function. While some algorithms keep the objective function given in the problem representation “as it is”, some others, like Guided Local Search (GLS), modify it during the search. The idea behind this approach is to escape from local minima by modifying the search landscape. Accordingly, during the search the objective function is altered by trying to incorporate information collected during the search process.

One vs. various neighborhood structures. Most metaheuristic algorithms work on one single neighborhood structure. In other words, the fitness landscape topology does not change in the course of the algorithm. Other metaheuristics, such as Variable Neighborhood Search (VNS), use a set of neighborhood structures which gives the possibility to diversify the search by swapping between different fitness landscapes.

Memory usage vs. memory-less methods. A very important feature to classify metaheuristics is the use they make of the search history, that is, whether they use memory or not.⁴ Memory-less algorithms

⁴ Here we refer to the use of adaptive memory, in contrast to rather rigid memory, as used for instance in Branch & Bound.
to avoid cycles. The short term memory is implemented as a tabu list that keeps track of the most recently visited solutions and forbids moves toward them. The neighborhood of the current solution is thus restricted to the solutions that do not belong to the tabu list. In the following we will refer to this set as the allowed set. At each iteration the best solution from the allowed set is chosen as the new current solution. Additionally, this solution is added to the tabu list and one of the solutions that were already in the tabu list is removed (usually in FIFO order). Due to this dynamic restriction of allowed solutions in a neighborhood, TS can be considered as a dynamic neighborhood search technique [Stützle 1999b]. The algorithm stops when a termination condition is met. It might also terminate if the allowed set is empty, that is, if all the solutions in N(s) are forbidden by the tabu list.⁸

The use of a tabu list prevents the search from returning to recently visited solutions, therefore it prevents endless cycling⁹ and forces the search to accept even uphill moves. The length l of the tabu list (i.e., the tabu tenure) controls the memory of the search process. With small tabu tenures the search will concentrate on small areas of the search space. On the opposite, a large tabu tenure forces the search process to explore larger regions, because it forbids revisiting a higher number of solutions. The tabu tenure can be varied during the search, leading to more robust algorithms. An example can be found in Taillard [1991], where the tabu tenure is periodically reinitialized at random from the interval [lmin, lmax]. A more advanced use of a dynamic tabu tenure is presented in Battiti and Tecchiolli [1994] and Battiti and Protasi [1997], where the tabu tenure is increased if there is evidence for repetitions of solutions (thus a higher diversification is needed), while it is decreased if there are no improvements (thus intensification should be boosted). More advanced ways to create a dynamic tabu tenure are described in Glover [1990].

However, the implementation of short term memory as a list that contains complete solutions is not practical, because managing a list of solutions is highly inefficient. Therefore, instead of the solutions themselves, solution attributes are stored.¹⁰ Attributes are usually components of solutions, moves, or differences between two solutions. Since more than one attribute can be considered, a tabu list is introduced for each of them. The set of attributes and the corresponding tabu lists define the tabu conditions which are used to filter the neighborhood of a solution and generate the allowed set. Storing attributes instead of complete solutions is much more efficient, but it introduces a loss of information, as forbidding an attribute means assigning the tabu status to probably more than one solution. Thus, it is possible that unvisited solutions of good quality are excluded from the allowed set. To overcome this problem, aspiration criteria are defined which allow including a solution in the allowed set even if it is forbidden by tabu conditions. Aspiration criteria define the aspiration conditions that are used to construct the allowed set. The most commonly used aspiration criterion selects solutions which are better than the current best one. The complete algorithm, as described above, is reported in Figure 4.

Tabu lists are only one of the possible ways of taking advantage of the history of the search. They are usually identified with the usage of short term memory. Information collected during the whole search process can also be very useful, especially for a strategic guidance of the algorithm. This kind of long-term memory is usually added to TS by referring to four principles: recency, frequency, quality and influence. Recency-based memory records for each solution (or attribute)

⁸ Strategies to avoid stopping the search when the allowed set is empty include the choice of the least recently visited solution, even if it is tabu.
⁹ Cycles of higher period are possible, since the tabu list has a finite length l which is smaller than the cardinality of the search space.
¹⁰ In addition to storing attributes, some longer term TS strategies also keep complete solutions (e.g., elite solutions) in the memory.
s ← GenerateInitialSolution()
InitializeTabuLists(TL1, ..., TLr)
k ← 0
while termination conditions not met do
    AllowedSet(s, k) ← {s′ ∈ N(s) | s′ does not violate a tabu condition, or it satisfies at least one aspiration condition}
    s ← ChooseBestOf(AllowedSet(s, k))
    UpdateTabuListsAndAspirationConditions()
    k ← k + 1
endwhile

Fig. 4. Algorithm: Tabu Search (TS).
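To make the scheme of Figure 4 concrete, the following is a minimal, self-contained Python sketch of attribute-based tabu search. Everything problem-specific here is an illustrative assumption, not from the paper: the toy objective (minimizing the number of ones in a binary vector), the single-bit-flip neighborhood, and the tenure value. The tabu attributes are flipped bit positions, and the aspiration criterion admits a tabu move that improves on the best solution found so far.

```python
import random

def tabu_search(f, n_bits, tenure=7, max_iters=200, seed=0):
    """Attribute-based Tabu Search on binary vectors (minimization sketch)."""
    rng = random.Random(seed)
    s = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = s[:], f(s)
    tabu = {}  # attribute (bit index) -> iteration until which it stays tabu
    for it in range(max_iters):
        candidates = []
        for i in range(n_bits):
            neighbor = s[:]
            neighbor[i] ^= 1
            val = f(neighbor)
            is_tabu = tabu.get(i, -1) > it
            # Aspiration: a tabu move is allowed if it beats the best so far.
            if not is_tabu or val < best_val:
                candidates.append((val, i, neighbor))
        if not candidates:   # allowed set empty: stop (one possible strategy)
            break
        val, i, s = min(candidates)   # best solution in the allowed set
        tabu[i] = it + tenure         # flipping bit i again becomes tabu
        if val < best_val:
            best, best_val = s[:], val
    return best, best_val

# Toy objective: minimize the number of ones (optimum: the all-zero vector).
sol, val = tabu_search(lambda s: sum(s), n_bits=12)
```

Note that the tabu status is attached to the attribute (the flipped position), so after reaching the optimum the search keeps moving through strictly worse neighbors instead of cycling, exactly the uphill-accepting behavior described in the text.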
the most recent iteration it was involved in. Orthogonally, frequency-based memory keeps track of how many times each solution (attribute) has been visited. This information identifies the regions (or the subsets) of the solution space where the search was confined, or where it stayed for a high number of iterations. This kind of information about the past is usually exploited to diversify the search. The third principle (i.e., quality) refers to the accumulation and extraction of information from the search history in order to identify good solution components. This information can be usefully integrated in the solution construction. Other metaheuristics (e.g., Ant Colony Optimization) explicitly use this principle to learn about good combinations of solution components. Finally, influence is a property regarding choices made during the search and can be used to indicate which choices have shown to be the most critical. In general, the TS field is a rich source of ideas. Many of these ideas and strategies have been and are currently adopted by other metaheuristics.

TS has been applied to most CO problems; examples for successful applications are the Robust Tabu Search to the QAP [Taillard 1991], the Reactive Tabu Search to the MAXSAT problem [Battiti and Protasi 1997], and to assignment problems [Dell’Amico et al. 1999]. TS approaches dominate the Job Shop Scheduling (JSS) problem area (see, e.g., Nowicki and Smutnicki [1996]) and the Vehicle Routing (VR) area [Gendreau et al. 2001]. Further current applications can be found at [Tabu Search website 2003].

3.4. Explorative Local Search Methods

In this section, we present more recently proposed trajectory methods. These are the Greedy Randomized Adaptive Search Procedure (GRASP), Variable Neighborhood Search (VNS), Guided Local Search (GLS) and Iterated Local Search (ILS).

3.4.1. GRASP. The Greedy Randomized Adaptive Search Procedure (GRASP), see Feo and Resende [1995] and Pitsoulis and Resende [2002], is a simple metaheuristic that combines constructive heuristics and local search. Its structure is sketched in Figure 5. GRASP is an iterative procedure, composed of two phases: solution construction and solution improvement. The best found solution is returned upon termination of the search process.

The solution construction mechanism (see Figure 6) is characterized by two main ingredients: a dynamic constructive heuristic and randomization. Assuming that a solution s consists of a subset of a set of elements (solution components),
The simplest scheme is, trivially, to keep α constant; it can also be changed at each iteration, either randomly or by means of an adaptive scheme.

…cited in Pitsoulis and Resende [2002], and an example for a metaheuristic method using an adaptive greedy procedure depending on search history is Squeaky Wheel Optimization (SWO) [Joslin and Clements 1999].
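As an illustration of the role of α, here is a hedged Python sketch of a single GRASP construction phase. The element set and its costs are made-up toy data, and the value-based restricted candidate list (RCL) shown is one common way to implement the randomized greedy step; it is not the only possibility.

```python
import random

def grasp_construct(costs, alpha, rng):
    """One GRASP construction phase (generic sketch, minimization).

    At each step, build the RCL from the not-yet-chosen elements whose
    greedy cost is within an alpha-fraction of the current best, then
    pick one RCL member uniformly at random.
    alpha = 0 is purely greedy; alpha = 1 is purely random.
    """
    solution, remaining = [], set(costs)
    while remaining:
        cmin = min(costs[e] for e in remaining)
        cmax = max(costs[e] for e in remaining)
        threshold = cmin + alpha * (cmax - cmin)
        rcl = [e for e in remaining if costs[e] <= threshold]
        choice = rng.choice(rcl)      # the randomized greedy step
        solution.append(choice)
        remaining.remove(choice)
    return solution

rng = random.Random(1)
order = grasp_construct({"a": 3, "b": 1, "c": 2}, alpha=0.0, rng=rng)
```

With α = 0 the RCL contains only the cheapest element at every step, so the construction degenerates to the deterministic greedy order; keeping α constant versus adapting it per iteration corresponds exactly to the schemes discussed above.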
Select a set of neighborhood structures Nk, k = 1, ..., kmax
s ← GenerateInitialSolution()
while termination conditions not met do
    k ← 1
    while k < kmax do                      % Inner loop
        s′ ← PickAtRandom(Nk(s))           % Shaking phase
        s″ ← LocalSearch(s′)
        if (f(s″) < f(s)) then
            s ← s″
            k ← 1
        else
            k ← k + 1
        endif
    endwhile
endwhile

Fig. 7. Algorithm: Variable Neighborhood Search (VNS).

[Binato et al. 2001], the graph planarization problem [Resende and Ribeiro 1997] and assignment problems [Prais and Ribeiro 2000]. A detailed and annotated bibliography references many more applications [Festa and Resende 2002].

3.4.2. Variable Neighborhood Search. Variable Neighborhood Search (VNS) is a metaheuristic proposed in Hansen and Mladenović [1999, 2001], which explicitly applies a strategy based on dynamically changing neighborhood structures. The algorithm is very general and many degrees of freedom exist for designing variants and particular instantiations.¹²

At the initialization step, a set of neighborhood structures has to be defined. These neighborhoods can be arbitrarily chosen, but often a sequence |N1| < |N2| < ··· < |Nkmax| of neighborhoods with increasing cardinality is defined.¹³ Then an initial solution is generated, the neighborhood index is initialized and the algorithm iterates until a stopping condition is met (see Figure 7). VNS’ main cycle is composed of three phases: shaking, local search and move. In the shaking phase a solution s′ in the kth neighborhood of the current solution s is randomly selected. Then, s′ becomes the local search starting point. The local search can use any neighborhood structure and is not restricted to the set of neighborhood structures Nk, k = 1, ..., kmax. At the end of the local search process (terminated as soon as a predefined termination condition is verified) the new solution s″ is compared with s and, if it is better, it replaces s and the algorithm starts again with k = 1. Otherwise, k is incremented and a new shaking phase starts using a different neighborhood.

The objective of the shaking phase is to perturb the solution so as to provide a good starting point for the local search. The starting point should belong to the basin of attraction of a different local minimum than the current one, but should not be “too far” from s, otherwise the algorithm would degenerate into a simple random multi-start. Moreover, choosing s′ in the neighborhood of the current best solution is likely to produce a solution that maintains some good features of the current one.

The process of changing neighborhoods in case of no improvements corresponds to a diversification of the search. In particular the choice of neighborhoods of increasing cardinality yields a progressive diversification. The effectiveness of this dynamic neighborhood strategy can be explained by the fact that a “bad” place on the search landscape given by one neighborhood could be a “good” place on the search landscape given by another neighborhood.¹⁴ Moreover, a solution that is locally optimal with respect to a neighborhood is probably not locally optimal with respect to another neighborhood. These concepts are known as “One Operator, One Landscape” and explained in Jones [1995a, 1995b]. The core idea is that the neighborhood structure determines the topological properties of the search landscape, that is, each neighborhood defines one landscape. The properties of a landscape are in general different from those of other landscapes, therefore

¹² The variants described in the following are also described in Hansen and Mladenović [1999, 2001].
¹³ In principle they could be one included in the other, N1 ⊂ N2 ⊂ ··· ⊂ Nkmax. Nevertheless, such a sequence might produce an inefficient search, because a large number of solutions could be revisited.
¹⁴ A “good” place in the search space is an area from which a good local minimum can be reached.
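The scheme of Figure 7 can be transcribed almost line by line. The following Python sketch is illustrative only: modelling Nk as “flip k random bits” of a binary vector (so that cardinality grows with k), the toy objective, and the plain best-improvement descent used as LocalSearch are all assumptions made for the example, not part of VNS itself.

```python
import random

def vns(f, n_bits, k_max=4, max_iters=50, seed=0):
    """Variable Neighborhood Search following Figure 7 (minimization)."""
    rng = random.Random(seed)

    def local_search(s):
        # Best-improvement single-bit-flip descent to a local minimum.
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                t = s[:]
                t[i] ^= 1
                if f(t) < f(s):
                    s, improved = t, True
        return s

    def shake(s, k):
        # Pick a random solution in N_k(s): flip k distinct random bits.
        t = s[:]
        for i in rng.sample(range(n_bits), k):
            t[i] ^= 1
        return t

    s = local_search([rng.randint(0, 1) for _ in range(n_bits)])
    for _ in range(max_iters):
        k = 1
        while k < k_max:                    # inner loop of Figure 7
            s2 = local_search(shake(s, k))  # shaking, then local search
            if f(s2) < f(s):
                s, k = s2, 1                # move, restart from N_1
            else:
                k += 1                      # try a larger neighborhood
    return s

best = vns(lambda s: sum(s), n_bits=10)
```

The progressively larger shakes implement exactly the progressive diversification discussed in the text: the further the search is stuck, the further from s the restart point is drawn.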
Fig. 8. Two search landscapes defined by two different neighborhoods. On the landscape that is shown in the graphic on the left, the best improvement local search stops at ŝ₁, while it proceeds till a better local minimum ŝ₂ on the landscape that is shown in the graphic on the right.
a search strategy performs differently on them (see an example in Figure 8).

This property is directly exploited by a local search called Variable Neighborhood Descent (VND). In VND a best improvement local search (see Section 3.1) is applied, and, in case a local minimum is found, the search proceeds with another neighborhood structure. The VND algorithm can be obtained by substituting the inner loop of the VNS algorithm (see Figure 7) with the following pseudo-code:

s′ ← ChooseBestOf(Nk(s))
if (f(s′) < f(s)) then    % i.e., if a better solution is found in Nk(s)
    s ← s′
else                      % i.e., s is a local minimum
    k ← k + 1
endif

As can be observed from the description as given above, the choice of the neighborhood structures is the critical point of VNS and VND. The neighborhoods chosen should exploit different properties and characteristics of the search space, that is, the neighborhood structures should provide different abstractions of the search space. A variant of VNS is obtained by selecting the neighborhoods in such a way as to produce a problem decomposition (the algorithm is called Variable Neighborhood Decomposition Search—VNDS). VNDS follows the usual VNS scheme, but the neighborhood structures and the local search are defined on sub-problems. For each solution, all attributes (usually variables) are kept fixed except for k of them. For each k, a neighborhood structure Nk is defined. Local search only regards changes on the variables belonging to the sub-problem it is applied to. The inner loop of VNDS is the following:

s′ ← PickAtRandom(Nk(s))            % s and s′ differ in k attributes
s″ ← LocalSearch(s′, Attributes)    % only moves involving the k attributes are allowed
if (f(s″) < f(s)) then
    s ← s″
    k ← 1
else
    k ← k + 1
endif

The decision whether to perform a move can be varied as well. The acceptance criterion based on improvements is strongly steepest descent-oriented and it might not be suited to effectively explore the search space. For example, when local minima are clustered, VNS can quickly find the best optimum in a cluster, but it has no guidance to leave that cluster and find another one. Skewed VNS (SVNS) extends VNS by providing a more flexible acceptance criterion that takes also into account the distance from the current solution.¹⁵ The new acceptance criterion is the following: besides always accepting improvements, worse solutions can be accepted if the distance from the current one is less than a value αρ(s, s″). The function ρ(s, s″) measures the distance between s and s″

¹⁵ A distance measure between solutions has thus to be formally defined.
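A minimal executable sketch of the VND inner loop may help. The permutation neighborhoods (adjacent swaps versus arbitrary swaps) and the inversion-count objective below are illustrative assumptions, chosen so that the two neighborhood structures induce two different landscapes in the sense discussed above.

```python
def vnd(f, s, neighborhoods):
    """Variable Neighborhood Descent (minimization sketch).

    Best-improvement descent that, upon reaching a local minimum of
    neighborhood N_k, moves on to N_{k+1}, and restarts from N_1 after
    every improving move. Each entry of `neighborhoods` maps a solution
    to an iterable of its neighbors.
    """
    k = 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](s), key=f, default=s)
        if f(best) < f(s):     # better solution found in N_k(s)
            s, k = best, 0
        else:                  # s is a local minimum of N_k
            k += 1
    return s

# Illustrative neighborhood structures on permutations (assumptions):
def adjacent_swaps(s):
    for i in range(len(s) - 1):
        t = list(s)
        t[i], t[i + 1] = t[i + 1], t[i]
        yield tuple(t)

def all_swaps(s):
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            t = list(s)
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

def inversions(s):
    # Toy objective: number of out-of-order pairs (0 iff sorted).
    return sum(1 for i in range(len(s))
                 for j in range(i + 1, len(s)) if s[i] > s[j])

result = vnd(inversions, (3, 1, 4, 2, 0), [adjacent_swaps, all_swaps])
```

A solution that is locally optimal under the first (smaller) neighborhood need not be optimal under the second, which is precisely why VND only advances to the next structure after the current one is exhausted.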
Fig. 9. Basic GLS idea: Escaping from a valley in the landscape by increasing the objective function value of its solutions.
and α is a parameter that weights the importance of the distance between the two solutions in the acceptance criterion. The inner loop of SVNS can be sketched as follows:

if (f(s″) − αρ(s, s″) < f(s)) then
    s ← s″
    k ← 1
else
    k ← k + 1
endif

VNS and its variants have been successfully applied to graph based CO problems such as the p-Median problem [Hansen and Mladenović 1997], the degree constrained minimum spanning tree problem [Ribeiro and Souza 2002], the Steiner tree problem [Wade and Rayward-Smith 1997] and the k-Cardinality Tree (KCT) problem [Mladenović and Urošević 2001]. References to more applications can be found in Hansen and Mladenović [2001].

3.4.3. Guided Local Search. Tabu Search and Variable Neighborhood Search explicitly deal with dynamic neighborhoods with the aim of efficiently and effectively exploring the search space. A different approach for guiding the search is to dynamically change the objective function. Among the most general methods that use this approach is Guided Local Search (GLS) [Voudouris and Tsang 1999; Voudouris 1997].

The basic GLS principle is to help the search to gradually move away from local minima by changing the search landscape. In GLS, the set of solutions and the neighborhood structure are kept fixed, while the objective function f is dynamically changed with the aim of making the current local optimum “less desirable”. A pictorial description of this idea is given in Figure 9.

The mechanism used by GLS is based on solution features, which may be any kind of properties or characteristics that can be used to discriminate between solutions. For example, solution features in the TSP could be arcs between pairs of cities, while in the MAXSAT problem they could be the number of unsatisfied clauses. An indicator function Ii(s) indicates whether the feature i is present in solution s:

    Ii(s) = 1 if feature i is present in solution s, and Ii(s) = 0 otherwise.

The objective function f is modified to yield a new objective function f′ by adding a term that depends on the m features:

    f′(s) = f(s) + λ ∑_{i=1}^{m} pi · Ii(s),

where pi are called penalty parameters and λ is called the regularization
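The augmented objective f′ can be written directly in Python. Everything below other than the formula itself (the toy string solutions, the two features, and the chosen λ) is an illustrative assumption; the point is only to show how raising a penalty pi makes every solution containing feature i less desirable.

```python
def gls_objective(f, features, penalties, lam):
    """Build the GLS-augmented objective
    f'(s) = f(s) + lam * sum_i p_i * I_i(s).

    `features` is a list of indicator functions I_i(s) returning 0 or 1;
    `penalties` is the mutable list of penalty parameters p_i, which GLS
    increments on features of a local minimum to reshape the landscape.
    """
    def f_prime(s):
        return f(s) + lam * sum(p * I(s) for p, I in zip(penalties, features))
    return f_prime

# Toy instance: solutions are strings, features are character occurrences.
features = [lambda s: 1 if "x" in s else 0,
            lambda s: 1 if "y" in s else 0]
penalties = [0, 0]
f = lambda s: len(s)              # toy base objective
fp = gls_objective(f, features, penalties, lam=2)

base = fp("xy")                   # no penalties yet, so f'("xy") = f("xy") = 2
penalties[0] += 1                 # GLS penalizes feature "x" at a local minimum
after = fp("xy")                  # now f'("xy") = 2 + 2 * 1 = 4
```

Because f_prime closes over the penalties list, updating a pi immediately changes the landscape seen by the local search, which is exactly the escape mechanism pictured in Figure 9.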
Fig. 12. A desirable ILS step: the local minimum ŝ is perturbed, then LS is applied and a new local minimum is found.
the pheromone trails of the used components and/or connections according to the quality of the solution it has built. This is called online delayed pheromone update. Pheromone evaporation is the process by means of which the pheromone trail intensity on the components decreases over time. From a practical point of view, pheromone evaporation is needed to avoid a too rapid convergence of the algorithm toward a sub-optimal region. It implements a useful form of forgetting, favoring the exploration of new areas in the search space.

DaemonActions(): Daemon actions can be used to implement centralized actions which cannot be performed by single ants. Examples are the use of a local search procedure applied to the solutions built by the ants, or the collection of global information that can be used to decide whether it is useful or not to deposit additional pheromone to bias the search process from a nonlocal perspective. As a practical example, the daemon can observe the path found by each ant in the colony and choose to deposit extra pheromone on the components used by the ant that built the best solution. Pheromone updates performed by the daemon are called offline pheromone updates.

Within the ACO metaheuristic framework, as briefly described above, the currently best performing versions in practice are Ant Colony System (ACS) [Dorigo and Gambardella 1997] and MAX-MIN Ant System (MMAS) [Stützle and Hoos 2000]. In the following, we briefly outline the peculiarities of these algorithms.

Ant Colony System (ACS). The ACS algorithm has been introduced to improve the performance of AS. ACS is based on AS but presents some important differences. First, the daemon updates pheromone trails offline: At the end of an iteration of the algorithm—once all the ants have built a solution—pheromone is added to the arcs used by the ant that found the best solution from the start of the algorithm. Second, ants use a different decision rule to decide to which component to move next in the construction graph. The rule is called the pseudo-random-proportional rule. With this rule, some moves are chosen deterministically (in a greedy manner), while others are chosen probabilistically with the usual decision rule. Third, in ACS, ants perform only online step-by-step pheromone updates. These updates are performed to favor the emergence of solutions other than the best one so far.

MAX-MIN Ant System (MMAS). MMAS is also an extension of AS. First, the pheromone trails are only updated offline by the daemon (the arcs that were used by the iteration-best ant or the best ant since the start of the algorithm receive additional pheromone). Second, the pheromone values are restricted to an interval [τmin, τmax] and the pheromone trails are initialized to their maximum value τmax. Explicit bounds on the pheromone trails prevent the probability of constructing a solution from falling below a certain value greater than 0. This means that the chance of finding a global optimum never vanishes during the course of the algorithm.

Recently, researchers have been dealing with finding similarities between ACO algorithms and probabilistic learning algorithms such as EDAs. An important step in this direction was the development of the Hyper-Cube Framework for Ant Colony Optimization (HC-ACO) [Blum et al. 2001]. An extensive study on this subject has been presented in Zlochin et al. [2004], where the authors present a unifying framework for so-called Model-Based Search (MBS) algorithms. Also, the close relation of algorithms like Population-Based Incremental Learning (PBIL) [Baluja and Caruana 1995] and the Univariate Marginal Distribution Algorithm (UMDA) [Mühlenbein and Paaß 1996] to ACO algorithms in the Hyper-Cube Framework has been shown. We refer the interested reader to Zlochin et al. [2004] for more information on this subject. Furthermore, connections of ACO algorithms to Stochastic Gradient Descent (SGD) algorithms are shown in Meuleau and Dorigo [2002].

Successful applications of ACO include the application to routing in communication networks [Di Caro and Dorigo
1998], the application to the Sequential Ordering Problem (SOP) [Gambardella and Dorigo 2000], and the application to Resource-Constrained Project Scheduling (RCPS) [Merkle et al. 2002]. Further references to applications of ACO can be found in Dorigo and Stützle [2002, 2003].

5. A UNIFYING VIEW ON INTENSIFICATION AND DIVERSIFICATION

In this section, we take a closer look at the concepts of intensification and diversification as the two powerful forces driving metaheuristic applications to high performance. We give a view on metaheuristics that is characterized by the way intensification and diversification are implemented. Although the relevance of these two concepts is commonly agreed upon, so far there is no unifying description to be found in the literature. Descriptions are very generic and metaheuristic-specific. Therefore, most of them can be considered incomplete, and sometimes they are even opposing. Depending on the paradigm behind a particular metaheuristic, intensification and diversification are achieved in different ways. Even so, we propose a unifying view on intensification and diversification. Furthermore, this discussion could lead to the goal-directed development of hybrid algorithms combining concepts originating from different metaheuristics.

5.1. Intensification and Diversification

Every metaheuristic approach should be designed with the aim of effectively and efficiently exploring a search space. The search performed by a metaheuristic approach should be "clever" enough both to intensively explore areas of the search space with high-quality solutions, and to move to unexplored areas of the search space when necessary. The concepts for reaching these goals are nowadays called intensification and diversification. These terms stem from the TS field [Glover and Laguna 1997]. In other fields—such as the EC field—related concepts are often denoted by exploitation (related to intensification) and exploration (related to diversification). However, the terms exploitation and exploration have a somewhat more restricted meaning. In fact, the notions of exploitation and exploration often refer to rather short-term strategies tied to randomness, whereas intensification and diversification refer to rather medium- and long-term strategies based on the usage of memory. As the various different ways of using memory become increasingly important in the whole field of metaheuristics, the terms intensification and diversification are more and more adopted and understood in their original meaning.

An implicit reference to the concept of "locality" is often introduced when intensification and diversification are involved. The notion of "area" (or "region") of the search space and of "locality" can only be expressed in a fuzzy way, as they always depend on the characteristics of the search space as well as on the definition of metrics on the search space (distances between solutions).

The literature provides several high-level descriptions of intensification and diversification. In the following, we cite some of them.

"Two highly important components of Tabu Search are intensification and diversification strategies. Intensification strategies are based on modifying choice rules to encourage move combinations and solution features historically found good. They may also initiate a return to attractive regions to search them more thoroughly. Since elite solutions must be recorded in order to examine their immediate neighborhoods, explicit memory is closely related to the implementation of intensification strategies. The main difference between intensification and diversification is that during an intensification stage the search focuses on examining neighbors of elite solutions. [· · ·] The diversification stage on the other hand encourages the search process to examine unvisited regions and to generate solutions that differ in various significant ways from those seen before." [Glover and Laguna 1997]
Later in the same book, Glover and Laguna write: "In some instances we may conceive of intensification as having the function of an intermediate term strategy, while diversification applies to considerations that emerge in the longer run."

Furthermore, they write: "Strategic oscillation is closely linked to the origins of tabu search, and provides a means to achieve an effective interplay between intensification and diversification."

"After a local minimizer is encountered, all points in its attraction basin lose any interest for optimization. The search should avoid wasting excessive computing time in a single basin and diversification should be activated. On the other hand, in the assumptions that neighbors have correlated cost function values, some effort should be spent in searching for better points located close to the most recently found local minimum point (intensification). The two requirements are conflicting and finding a proper balance of diversification and intensification is a crucial issue in heuristics." [Battiti 1996]

"A metaheuristic will be successful on a given optimization problem if it can provide a balance between the exploitation of the accumulated search experience and the exploration of the search space to identify regions with high quality solutions in a problem specific, near optimal way." [Stützle 1999b]

"Intensification is to search carefully and intensively around good solutions found in the past search. Diversification, on the contrary, is to guide the search to unvisited regions. These terminologies are usually used to explain the basic elements of Tabu Search, but these are essential to all the metaheuristic algorithms. In other words, various metaheuristic ideas should be understood from the viewpoint of these two concepts, and metaheuristic algorithms should be designed so that intensification and diversification play balanced roles." [Yagiura and Ibaraki 2001]

"Holland frames adaption as a tension between exploration (the search for new, useful adaptations) and exploitation (the use and propagation of these adaptations). The tension comes about since any move toward exploration—testing previously unseen schemas or schemas whose instances seen so far have low fitness—takes away from the exploitation of tried and true schemas. In any system (e.g., a population of organisms) required to face environments with some degree of unpredictability, an optimal balance between exploration and exploitation must be found. The system has to keep trying out new possibilities (or else it could 'over-adapt' and be inflexible in the face of novelty), but it also has to continually incorporate and use past experience as a guide for future behavior."—M. Mitchell citing J. H. Holland in Mitchell [1998]

All these descriptions share the common view that there are two forces for which an appropriate balance has to be found. Sometimes these two forces were described as opposing forces. However, lately some researchers have raised the question of how opposing intensification and diversification really are.

In 1998, Eiben and Schippers [1998] started a discussion about this in the field of Evolutionary Computation. They question the common opinion about EC algorithms that the search space is explored by the genetic operators, while exploitation is achieved by selection. In their paper, they give examples of operators that one cannot unambiguously label as being either intensification or diversification. So, for example, an operator using a local search component to improve individuals is not merely a mechanism of diversification, because it also comprises a strong element of intensification (e.g., in Memetic Algorithms). Another example is the heuristically guided recombination of good-quality solutions. If the use of the accumulated search experience is identified with intensification, then a recombination operator is not merely a means of
[Fig. 18. The I&D frame provides a unified view on intensification and diversification in metaheuristics (OG = I&D components solely guided by the objective function, NOG = I&D components solely guided by one or more functions other than the objective function, R = I&D components solely guided by randomness).]
diversification; it also—as in the example above—has a strong intensification component.

Especially the TS literature advocates the view that intensification and diversification cannot be characterized as opposing forces. For example, in Glover and Laguna [1997], the authors write: "Similarly, as we have noted, intensification and diversification are not opposed notions, for the best form of each contains aspects of the other, along a spectrum of alternatives."

Intensification and diversification can be considered as effects of algorithm components. In order to understand similarities and differences among metaheuristics, a framework may be helpful in providing a unified view on intensification and diversification components. We define an I&D component as any algorithmic or functional component that has an intensification and/or a diversification effect on the search process. Accordingly, examples of I&D components are genetic operators, perturbations of probability distributions, the use of tabu lists, or changes in the objective function. Thus, I&D components are operators, actions, or strategies of metaheuristic algorithms.

In contrast to the still widely spread view that there are components that have either an intensification or a diversification effect, there are many I&D components that have both. In I&D components that are commonly labelled as intensification, the intensification component is stronger than the diversification component, and vice versa. To clarify this, we developed a framework to put I&D components of different metaheuristics into relation with each other. We called this framework—shown in Figure 18—the I&D frame.

We depict the space of all I&D components as a triangle with the three corners corresponding to three extreme examples of I&D components. The corner denoted by OG corresponds to I&D components solely guided by the objective function of the problem under consideration. An example of an I&D component that is located very close to the corner OG is the steepest descent choice rule in local search. The corner denoted by NOG covers all I&D components guided by one or more functions other than the objective function, again without using any random component. An example of such a component is a deterministic restart mechanism based on global frequency counts of solution components. The third corner, which is denoted by R, comprises all I&D components that are completely random. This means that they are not guided by anything. For example, a restart of an EC approach with random individuals is located in that corner. From the description of the corners, it becomes clear that corner OG corresponds to I&D components with a maximum intensification effect and a minimum diversification effect. On the other hand, corners NOG, R and the segment between the two corners correspond to I&D components with a maximum
diversification effect and a minimum intensification effect.[19] All I&D components can be located somewhere on or in between the three corners, where the intensification effect becomes smaller the further away a mechanism is located from OG. At the same time, the diversification effect grows. In step with this gradient is the use I&D components make of the objective function: the less an I&D component uses the objective function, the further away from corner OG it has to be located. There is also a second gradient to be found in this frame (which is shown in the second graphic of Figure 18). Corner R stands for complete randomness: the less randomness is involved in an I&D component, the further away from corner R it has to be located. Finally, a third gradient describes the influence of criteria different from the objective function, which generally stem from the exploitation of the search history that is in some form kept in memory. In the following, we analyze some basic I&D components intrinsic to the basic versions of the metaheuristics with respect to the I&D frame.

[19] There is no quantitative difference between corners NOG and R. The difference is rather qualitative.

5.2. Basic I&D Components of Metaheuristics

The I&D components occurring in metaheuristics can be divided into basic (or intrinsic) ones and strategic ones. The basic I&D components are the ones that are defined by the basic ideas of a metaheuristic. On the other side, strategic I&D components are composed of techniques and strategies the algorithm designer adds to the basic metaheuristic in order to improve the performance by incorporating medium- and long-term strategies. Many of these strategies were originally developed in the context of a specific metaheuristic. However, it becomes more and more apparent that many of these strategies can also be very useful when applied in other metaheuristics. In the following, we choose some exemplary basic I&D components that are inherent to a metaheuristic and explain them in the context of the I&D frame. With that, we show that most of the basic I&D components have an intensification character as well as a diversification character.

For many components and strategies of metaheuristics, it is obvious that they involve an intensification as well as a diversification component, because they make explicit use of the objective function. For example, the basic idea of TS is a neighbor choice rule using one or more tabu lists. This I&D component has two effects on the search process. The restriction of the set of possible neighbors in every step has a diversifying effect on the search, whereas the choice of the best neighbor in the restricted set of neighbors (the best non-tabu move) has an intensifying effect on the search. The balance between these two effects can be varied by the length of the tabu list. Shorter tabu lists result in a lower influence of the diversifying effect, whereas longer tabu lists result in an overall higher influence of the diversifying effect. The location of this component in Figure 18 is on the segment between corners OG and NOG. The shorter the tabu lists, the closer the location is to corner OG, and vice versa.

Another example of such an I&D component is the probabilistic acceptance criterion in conjunction with the cooling schedule in SA. The acceptance criterion is guided by the objective function and it also involves a changing amount of randomness. The decrease of the temperature parameter drives the system from diversification to intensification, eventually leading to convergence[20] of the system. Therefore, this I&D component is located in the interior of the I&D space between corners OG, NOG and R.

[20] Here, we use the term convergence in the sense of getting stuck in the basin of attraction of a local minimum.

A third example is the following one. Ant Colony Optimization provides an I&D component that manages the update of the pheromone values. This component has
the effect of changing the probability distribution that is used to sample the search space. It is guided by the objective function (solution components found in better solutions than others are updated with a higher amount of pheromone) and it is also influenced by a function applying the pheromone evaporation. Therefore, this component is located on the line between corners OG and NOG. The effect of this mechanism is basically the intensification of the search, but there is also a diversifying component that depends on the greediness of the pheromone update (the less greedy or deterministic it is, the higher the diversifying effect).

For other strategies and components of metaheuristics, it is not immediately obvious that they have both an intensification and a diversification effect. An example is the random selection of a neighbor from the neighborhood of a current solution, as is done, for example, in the kick-move mechanism of ILS. Initially one might think that there is no intensification involved and that this mechanism has a pure diversification effect caused by the use of randomness. However, for the following reason, this is not the case. Many strategies (such as the kick-move operator mentioned above) involve the explicit or implicit use of a neighborhood. A neighborhood structures the search space in the sense that it defines the topology of the so-called fitness landscape [Stadler 1995, 1996; Jones 1995a; Kauffman 1993], which can be visualized as a labelled graph. In this graph, nodes are solutions (labels indicate their objective function value) and arcs represent the neighborhood relation between states.[21]

[21] The discussion of definitions and analysis of fitness landscapes is beyond the scope of this article. We forward the interested reader to Stadler [1995, 1996]; Jones [1995a, 1995b]; Fonlupt et al. [1999]; Hordijk [1996]; Kauffman [1993]; and Reeves [1999].

A fitness landscape can be analyzed by means of statistical measures. One of the common measures is the auto-correlation, which provides information about how much the fitness will change when a move is made from one point to a neighboring one. Different landscapes differ in their ruggedness. A landscape with small (average) fitness differences between neighboring points is called smooth, and it will usually have just a few local optima. In contrast, a landscape with large (average) fitness differences is called rugged, and it will usually be characterized by many local optima. Most of the neighborhoods used in metaheuristics provide some degree of smoothness that is higher than that of a fitness landscape defined by a random neighborhood. This means that such a neighborhood is, in a sense, preselecting for every solution a set of neighbors for which the average fitness is not too different. Therefore, even when a solution is randomly selected from a set of neighbors, the objective function guidance is implicitly present. The consequence is that even for a random kick-move there is some degree of intensification involved, as long as a nonrandom neighborhood is considered.

For a mutation operator of an EC method that performs a random change of a solution, it is also not immediately clear that it can have both an intensification as well as a diversification effect. In the following, we assume a bit-string representation and a mutation operator that is characterized by flipping every bit of a solution with a certain probability. The implicit neighborhood used by this operator is the completely connected neighborhood. However, the neighbors have different probabilities of being selected. The ones that are (with respect to the Hamming distance) closer to the solution to which the operator is applied have a higher probability of being generated by the operator. With this observation, we can use the same argument as above in order to show an implicit use of objective function guidance. The balance between intensification and diversification is determined by the probability of flipping each bit. The higher this probability, the higher the diversification effect of the operator. In contrast, the lower this probability, the higher the intensification effect of the operator.

On the other side, there are some strategies that are often labelled as intensification supposedly without having any
diversifying effect. One example is the selection operator in EC algorithms. However, nearly all selection operators involve some degree of randomness (e.g., proportionate selection, tournament selection) and are therefore located somewhere between corners OG, NOG and R of the I&D frame. This means that they also have a diversifying effect. The balance between intensification and diversification depends on the function that assigns the selection probabilities. If the differences between the selection probabilities are quite high, the intensification effect is higher, and similarly for the other extreme of having only small differences between the selection probabilities.

Even an operator like the neighbor choice rule of a steepest descent local search, which might be regarded as pure intensification, has a diversifying component in the sense that the search is "moving" in the search space with respect to a neighborhood. A neighborhood can be regarded as a function other than the objective function, making implicit use of the objective function. Therefore, a steepest descent local search is located between corners OG and NOG, and has both a strong intensification effect and a weak diversification character.

Based on these observations, we conclude that probably most of the basic I&D components used in metaheuristics have both an intensification and a diversification effect. However, the balance between intensification and diversification might be quite different for different I&D components. Table 1 attempts to summarize the basic I&D components that are inherent to the different metaheuristics.

Table 1. I&D components intrinsic to the basic metaheuristics

    Metaheuristic   I&D component
    SA              acceptance criterion + cooling schedule
    TS              neighbor choice (tabu lists); aspiration criterion
    EC              recombination; mutation; selection
    ACO             pheromone update; probabilistic construction
    ILS             black box local search; kick-move; acceptance criterion
    VNS             black box local search; neighborhood choice; shaking phase; acceptance criterion
    GRASP           black box local search; restricted candidate list
    GLS             penalty function

5.3. Strategic Control of Intensification and Diversification

The right balance between intensification and diversification is needed to obtain an effective metaheuristic. Moreover, this balance should not be fixed or only changing in one direction (e.g., continuously increasing intensification); it should rather be dynamic. This issue is often treated in the literature, both implicitly and explicitly, when strategies to guide search algorithms are discussed.

The distinction between intensification and diversification is often interpreted with respect to the temporal horizon of the search. Short-term search strategies can be seen as the iterative application of tactics with a strong intensification character (for instance, the repeated application of greedy moves). When the horizon is enlarged, strategies referring to some sort of diversification usually come into play. Indeed, a general strategy usually proves its effectiveness especially in the long term.

The simplest strategy that coordinates the interplay of intensification and diversification, and can achieve an oscillating balance between them, is the restart mechanism: under certain circumstances (e.g., a local optimum is reached, no improvements occur after a specific number of algorithm cycles, stagnation, no diversity), the algorithm is restarted. The goal is to achieve a sufficient coverage of the search space in the long run; thus, the already visited regions should not be explored again. The computationally least expensive attempt to address this issue is a random restart. Every algorithm applying this naive diversification mechanism therefore incorporates an I&D component located in corner R of the I&D frame.
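The naive random-restart mechanism just described can be sketched as a generic wrapper around any search method. This is a minimal sketch; the interface (the `run_once`, `random_solution`, and `f` callables) and the improvement-only bookkeeping are illustrative assumptions, not a prescribed design:

```python
import random

def with_random_restarts(run_once, random_solution, f, n_restarts, seed=0):
    """Naive diversification by restarting: run the underlying search method from
    independent random starting solutions and keep the best result overall."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_restarts):
        s = run_once(random_solution(rng))  # one full run of the wrapped method
        if best is None or f(s) < f(best):
            best = s
    return best
```

Because the starting points ignore both the objective function and the search history, this component sits in corner R of the I&D frame; replacing `random_solution` with a history-biased constructor (e.g., using global frequency counts) would move it toward NOG.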
Usually, the most effective restart approaches make use of the search history. Examples of such restart strategies are the ones based on concepts such as global frequency and global desirability. The concept of global frequency is well known from TS applications. In this concept, the number of occurrences of solution components is counted during the run of the algorithm. These numbers, called the global frequency numbers, are then used for changing the heuristic constructive method, for example to generate a new population for restarting an EC method or the initial solution for restarting a trajectory method. Similarly, the concept of global desirability (which keeps, for every solution component, the objective function value of the best solution it has been a member of) can be used to restart algorithms with a bias toward good-quality solutions. I&D components based on global frequency can be located in corner NOG, while global desirability-based components are located along the segment NOG-OG. Examples of the use of nonrandom restarts can also be found in population-based methods. In EC algorithms, the new population can be generated by applying constructive heuristics[22] (line R-OG). In ACO, this goal is addressed by smoothing or resetting pheromone values [Stützle and Hoos 2000]. In the latter case, if the pheromone reset is also based on the search history, the action is located inside the I&D frame.

[22] See, for example, Freisleben and Merz [1996] and Grefenstette [1987].

There are also strategies explicitly aimed at dynamically changing the balance between intensification and diversification during the search. A fairly simple strategy is used in SA, where an increase in diversification and a simultaneous decrease in intensification can be achieved by "reheating" the system and then cooling it down again (which corresponds to increasing the parameter T and decreasing it again according to some scheme). Such a cooling scheme is called a nonmonotonic cooling scheme (e.g., see Lundy and Mees [1986] or Osman [1993]). Another example can be found in Ant Colony System (ACS). This ACO algorithm uses an additional I&D component aimed at introducing diversification during the solution construction phase. While an ant is walking on the construction graph to construct a solution, it reduces the pheromone values on the nodes/arcs of the construction graph that it visits. This has the effect of reducing, for the other ants, the probability of taking the same path. This additional pheromone update mechanism is called the step-by-step online pheromone update rule. The interplay between this component and the other pheromone update rules (online delayed pheromone update rules and online pheromone update rule) leads to an oscillating balance between intensification and diversification.

Some more advanced strategies can be found in the literature. Often, they are described with respect to the particular metaheuristic in which they are applied. However, many of them are very general and can easily be adapted and reused in a different context. A very effective example is Strategic Oscillation [Glover and Laguna 1997].[23] This strategy can be applied both to constructive methods and to improvement algorithms. Actions are invoked with respect to a critical level (oscillation boundary), which usually corresponds to a steady state of the algorithm. Examples of steady states of an algorithm are local minima, the completion of solution constructions, or the situation where no components can be added to a partial solution such that it can be completed to a feasible solution. The oscillation strategy is defined by a pattern indicating the way to approach the critical level, to cross it, and to cross it again from the other side. This pattern defines the distance of moves from the boundary and the duration of the phases (of intensification and diversification). Different patterns generate different strategies; moreover, they can also be adaptive and change depending on the current state and history of the search process.

[23] Indeed, in Glover and Laguna [1997] and in the literature related to TS, many strategies are described and discussed.

Other representative examples of general
strategies that dynamically coordinate intensification and diversification can be found in Battiti and Protasi [1997] and Blum [2002a, 2002b].

Furthermore, strategies are not restricted to single actions (e.g., variable assignments, moves), but may also guide the application of coordinated sequences of moves. Examples of such a strategy are given by so-called ejection chain procedures [Glover and Laguna 1997; Rego 1998, 2001]. These procedures provide a mechanism to perform compound moves, that is, compositions of different types of moves. For instance, in a problem defined over a graph (e.g., the VRP), it is possible to define two different moves: insertion and exchange of nodes; a compound move can thus be defined as the combination of an insertion and an exchange move. These procedures describe general strategies to combine the application of different neighborhood structures; thus, they provide an example of a general diversification/intensification interplay. Further examples of strategies that can be interpreted as mechanisms to produce compositions of interlinked moves can also be found in the literature concerning the integration of metaheuristics and complete techniques [Caseau and Laburthe 1999; Shaw 1998].

In conclusion, we would like to stress again that most metaheuristic components have both an intensification and a diversification effect. The higher the objective function bias, the higher the intensification effect. In contrast, diversification is achieved by following guiding criteria other than the objective function and also by the use of randomness. With the introduction of the I&D frame, metaheuristics can be analyzed by their signature in the I&D frame. This can be a first step toward the systematic design of metaheuristics, combining I&D components of different origin.

5.4. Hybridization of Metaheuristics

We conclude our work by discussing a very promising research issue: the hybridization of metaheuristics. In fact, many of the successful applications that we have cited in previous sections are hybridizations. In the following, we distinguish different forms of hybridization. The first one consists of including components from one metaheuristic into another one. The second form concerns systems that are sometimes labelled as cooperative search; they consist of various algorithms exchanging information in some way. The third form is the integration of approximate and systematic (or complete) methods. For a taxonomy of hybrid metaheuristics, see Talbi [2002].

Component Exchange Among Metaheuristics. One of the most popular ways of hybridization concerns the use of trajectory methods in population-based methods. Most of the successful applications of EC and ACO make use of local search procedures. The reason for that becomes apparent when analyzing the respective strengths of trajectory methods and population-based methods.

The power of population-based methods is certainly based on the concept of recombining solutions to obtain new ones. In EC algorithms and Scatter Search, explicit recombinations are implemented by one or more recombination operators. In ACO and EDAs, recombination is implicit, because new solutions are generated by using a distribution over the search space which is a function of earlier populations. This makes it possible to take guided steps in the search space, which are usually "larger" than the steps done by trajectory methods. In other words, a solution resulting from a recombination in population-based methods is usually more "different" from the parents than, say, a predecessor solution is from a successor solution (obtained by applying a move) in TS. We also have "big" steps in trajectory methods like ILS and VNS, but in these methods the steps are usually not guided (these steps are rather called "kick moves" or "perturbations," indicating the lack of guidance). It is interesting to note that in all population-based methods there are mechanisms in which good solutions found during the search influence the search process in the hope of finding better solutions in-between those
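The first form of hybridization, embedding components of one metaheuristic in another, can be illustrated by a memetic-style sketch in which a first-improvement local search (a trajectory-method component) refines every offspring of an evolutionary algorithm. Everything below is an illustrative assumption rather than an implementation from the text: the OneMax objective, the operator choices, and all parameters.

```python
import random

def hill_climb(bits, fitness):
    """First-improvement local search over the 1-flip neighborhood."""
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            before = fitness(bits)
            bits[i] ^= 1                   # flip one bit
            if fitness(bits) <= before:
                bits[i] ^= 1               # undo a non-improving flip
            else:
                improved = True
    return bits

def memetic(fitness, n=20, pop_size=12, gens=30, seed=3):
    """EC skeleton whose offspring are refined by local search: a
    'component exchange' hybrid (trajectory method inside a
    population-based method)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    pop = [hill_climb(ind, fitness) for ind in pop]
    for _ in range(gens):
        # tournament selection of two parents
        p1 = max(rng.sample(pop, 3), key=fitness)
        p2 = max(rng.sample(pop, 3), key=fitness)
        cut = rng.randrange(1, n)           # one-point recombination
        child = p1[:cut] + p2[cut:]
        if rng.random() < 0.2:              # mutation as a diversification step
            child[rng.randrange(n)] ^= 1
        child = hill_climb(child, fitness)  # intensification by local search
        # steady-state replacement of the worst individual
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)

# toy usage: OneMax (maximize the number of ones in the bit string)
best = memetic(sum)
print(sum(best))  # → 20 (the optimum for n = 20)
```

On OneMax the embedded local search alone already reaches the optimum; the point of the sketch is the architecture: recombination and mutation provide the "larger," guided steps discussed above, while the embedded local search supplies intensification around each offspring.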
heuristics based on the way they implement the two main concepts for guiding the search process: intensification and diversification. This comparison is founded on the I&D frame, where algorithmic components can be characterized by the criteria they depend upon (objective function, guiding functions, and randomization) and their effect on the search process. Although metaheuristics are different in the sense that some of them are population-based (EC, ACO) and others are trajectory methods (SA, TS, ILS, VNS, GRASP), and although they are based on different philosophies, the mechanisms to efficiently explore a search space are all based on intensification and diversification. Nevertheless, it is possible to identify "sub-

REFERENCES

BÄCK, T. 1996. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York.
BÄCK, T., FOGEL, D. B., AND MICHALEWICZ, Z., Eds. 1997. Handbook of Evolutionary Computation. Institute of Physics Publishing Ltd, Bristol, UK.
BALUJA, S. 1994. Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. Tech. Rep. No. CMU-CS-94-163, Carnegie Mellon University, Pittsburgh, Pa.
BALUJA, S. AND CARUANA, R. 1995. Removing the genetics from the standard genetic algorithm. In The International Conference on Machine Learning 1995, A. Prieditis and S. Russell, Eds. Morgan-Kaufmann Publishers, San Mateo, Calif., 38–46.
BAR-YAM, Y. 1997. Dynamics of Complex Systems. Studies in Nonlinearity. Addison-Wesley, Reading, Mass.
BATTITI, R. 1996. Reactive search: Toward self-tuning heuristics. In Modern Heuristic Search
DI CARO, G. AND DORIGO, M. 1998. AntNet: Distributed stigmergetic control for communication networks. J. Artif. Int. Res. 9, 317–365.
DORIGO, M. 1992. Optimization, learning and natural algorithms (in Italian). Ph.D. thesis, DEI, Politecnico di Milano, Italy. pp. 140.
DORIGO, M. AND DI CARO, G. 1999. The ant colony optimization meta-heuristic. In New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. McGraw-Hill, 11–32.
DORIGO, M., DI CARO, G., AND GAMBARDELLA, L. M. 1999. Ant algorithms for discrete optimization. Artif. Life 5, 2, 137–172.
DORIGO, M. AND GAMBARDELLA, L. M. 1997. Ant colony system: A cooperative learning approach to the travelling salesman problem. IEEE Trans. Evolut. Comput. 1, 1 (Apr.), 53–66.
DORIGO, M., MANIEZZO, V., AND COLORNI, A. 1996. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybernet.—Part B 26, 1, 29–41.
DORIGO, M. AND STÜTZLE, T. 2002. The ant colony optimization metaheuristic: Algorithms, applications and advances. In Handbook of Metaheuristics, F. Glover and G. Kochenberger, Eds. International Series in Operations Research & Management Science, vol. 57. Kluwer Academic Publishers, Norwell, MA, 251–285.
DORIGO, M. AND STÜTZLE, T. 2003. Ant Colony Optimization. MIT Press, Boston, MA. To appear.
DUECK, G. 1993. New optimization heuristics. J. Comput. Phys. 104, 86–92.
DUECK, G. AND SCHEUER, T. 1990. Threshold accepting: A general purpose optimization algorithm appearing superior to simulated annealing. J. Comput. Phys. 90, 161–175.
EIBEN, A. E., RAUÉ, P.-E., AND RUTTKAY, Z. 1994. Genetic algorithms with multi-parent recombination. In Proceedings of the 3rd Conference on Parallel Problem Solving from Nature, Y. Davidor, H.-P. Schwefel, and R. Männer, Eds. Lecture Notes in Computer Science, vol. 866. Springer, Berlin, 78–87.
EIBEN, A. E. AND RUTTKAY, Z. 1997. Constraint satisfaction problems. In Handbook of Evolutionary Computation, T. Bäck, D. Fogel, and M. Michalewicz, Eds. Institute of Physics Publishing Ltd, Bristol, UK.
EIBEN, A. E. AND SCHIPPERS, C. A. 1998. On evolutionary exploration and exploitation. Fund. Inf. 35, 1–16.
FELLER, W. 1968. An Introduction to Probability Theory and Its Applications. Wiley, New York.
FEO, T. A. AND RESENDE, M. G. C. 1995. Greedy randomized adaptive search procedures. J. Global Optim. 6, 109–133.
FESTA, P. AND RESENDE, M. G. C. 2002. GRASP: An annotated bibliography. In Essays and Surveys on Metaheuristics, C. C. Ribeiro and P. Hansen, Eds. Kluwer Academic Publishers, 325–367.
FINK, A. AND VOß, S. 1999. Generic metaheuristics application to industrial engineering problems. Comput. Indust. Eng. 37, 281–284.
FLEISCHER, M. 1995. Simulated annealing: past, present and future. In Proceedings of the 1995 Winter Simulation Conference, C. Alexopoulos, K. Kang, W. Lilegdon, and G. Goldsman, Eds. 155–161.
FOCACCI, F., LABURTHE, F., AND LODI, A. 2002. Local search and constraint programming. In Handbook of Metaheuristics, F. Glover and G. Kochenberger, Eds. International Series in Operations Research & Management Science, vol. 57. Kluwer Academic Publishers, Norwell, MA.
FOGEL, D. B. 1994. An introduction to simulated evolutionary optimization. IEEE Trans. Neural Netw. 5, 1 (Jan.), 3–14.
FOGEL, G. B., PORTO, V. W., WEEKES, D. G., FOGEL, D. B., GRIFFEY, R. H., MCNEIL, J. A., LESNIK, E., ECKER, D. J., AND SAMPATH, R. 2002. Discovery of RNA structural elements using evolutionary computation. Nucleic Acids Res. 30, 23, 5310–5317.
FOGEL, L. J. 1962. Toward inductive inference automata. In Proceedings of the International Federation for Information Processing Congress. Munich, 395–399.
FOGEL, L. J., OWENS, A. J., AND WALSH, M. J. 1966. Artificial Intelligence through Simulated Evolution. Wiley, New York.
FONLUPT, C., ROBILLIARD, D., PREUX, P., AND TALBI, E. 1999. Fitness landscapes and performance of meta-heuristics. In Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds. Kluwer Academic.
FREISLEBEN, B. AND MERZ, P. 1996. A genetic local search algorithm for solving symmetric and asymmetric traveling salesman problems. In International Conference on Evolutionary Computation. 616–621.
FREUDER, E. C., DECHTER, R., GINSBERG, M. L., SELMAN, B., AND TSANG, E. P. K. 1995. Systematic versus stochastic constraint satisfaction. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, IJCAI 1995. Vol. 2. Morgan-Kaufmann, 2027–2032.
GAMBARDELLA, L. M. AND DORIGO, M. 2000. Ant colony system hybridized with a new local search for the sequential ordering problem. INFORMS J. Comput. 12, 3, 237–255.
GAREY, M. R. AND JOHNSON, D. S. 1979. Computers and Intractability; A Guide to the Theory of NP-Completeness. W.H. Freeman.
GENDREAU, M., LAPORTE, G., AND POTVIN, J.-Y. 2001. Metaheuristics for the vehicle routing problem. In The Vehicle Routing Problem, P. Toth and D. Vigo, Eds. SIAM Series on Discrete Mathematics and Applications, vol. 9. 129–154.
GINSBERG, M. L. 1993. Dynamic backtracking. J. Artif. Int. Res. 1, 25–46.
GLOVER, F. 1977. Heuristics for integer programming using surrogate constraints. Dec. Sci. 8, 156–166.
GLOVER, F. 1986. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13, 533–549.
GLOVER, F. 1990. Tabu search—Part II. ORSA J. Comput. 2, 1, 4–32.
GLOVER, F. 1999. Scatter search and path relinking. In New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. Advanced Topics in Computer Science Series. McGraw-Hill.
GLOVER, F. AND LAGUNA, M. 1997. Tabu Search. Kluwer Academic Publishers.
GLOVER, F., LAGUNA, M., AND MARTÍ, R. 2000. Fundamentals of scatter search and path relinking. Cont. Cybernet. 29, 3, 653–684.
GLOVER, F., LAGUNA, M., AND MARTÍ, R. 2002. Scatter search and path relinking: Advances and applications. In Handbook of Metaheuristics, F. Glover and G. Kochenberger, Eds. International Series in Operations Research & Management Science, vol. 57. Kluwer Academic Publishers, Norwell, MA.
GOLDBERG, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison Wesley, Reading, MA.
GOLDBERG, D. E., DEB, K., AND KORB, B. 1991. Don't worry, be messy. In Proceedings of the 4th International Conference on Genetic Algorithms. Morgan-Kaufmann, La Jolla, CA.
GOLDBERG, D. E. AND RICHARDSON, J. 1987. Genetic algorithms with sharing for multimodal function optimization. In Genetic Algorithms and their Applications, J. J. Grefenstette, Ed. Lawrence Erlbaum Associates, Hillsdale, NJ, 41–49.
GOMES, C. P., SELMAN, B., CRATO, N., AND KAUTZ, H. 2000. Heavy-tailed phenomena in satisfiability and constraint satisfaction problems. J. Automat. Reason. 24, 67–100.
GREFENSTETTE, J. J. 1987. Incorporating problem specific knowledge into genetic algorithms. In Genetic Algorithms and Simulated Annealing, L. Davis, Ed. Morgan-Kaufmann, 42–60.
GREFENSTETTE, J. J. 1990. A user's guide to GENESIS 5.0. Tech. rep., Navy Centre for Applied Research in Artificial Intelligence, Washington, D.C.
HANSEN, P. 1986. The steepest ascent mildest descent heuristic for combinatorial programming. In Congress on Numerical Methods in Combinatorial Optimization. Capri, Italy.
HANSEN, P. AND MLADENOVIĆ, N. 1997. Variable neighborhood search for the p-median. Loc. Sci. 5, 207–226.
HANSEN, P. AND MLADENOVIĆ, N. 1999. An introduction to variable neighborhood search. In Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds. Kluwer Academic Publishers, Chapter 30, 433–458.
HANSEN, P. AND MLADENOVIĆ, N. 2001. Variable neighborhood search: Principles and applications. Europ. J. Oper. Res. 130, 449–467.
HARIK, G. 1999. Linkage learning via probabilistic modeling in the ECGA. Tech. Rep. No. 99010, IlliGAL, University of Illinois.
HARVEY, W. D. 1995. Nonsystematic backtracking search. Ph.D. thesis, CIRL, University of Oregon.
HARVEY, W. D. AND GINSBERG, M. L. 1995. Limited discrepancy search. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, IJCAI 1995 (Montréal, Qué., Canada), C. S. Mellish, Ed. Vol. 1. Morgan-Kaufmann, 607–615.
HERTZ, A. AND KOBLER, D. 2000. A framework for the description of evolutionary algorithms. Europ. J. Oper. Res. 126, 1–12.
HOGG, T. AND HUBERMAN, A. 1993. Better than the best: The power of cooperation. In SFI 1992 Lectures in Complex Systems. Addison-Wesley, 163–184.
HOGG, T. AND WILLIAMS, C. 1993. Solving the really hard problems with cooperative search. In Proceedings of AAAI93. AAAI Press, 213–235.
HOLLAND, J. H. 1975. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI.
HORDIJK, W. 1996. A measure of landscapes. Evolut. Comput. 4, 4, 335–360.
INGBER, L. 1996. Adaptive simulated annealing (ASA): Lessons learned. Cont. Cybernet.—Special Issue on Simulated Annealing Applied to Combinatorial Optimization 25, 1, 33–54.
JOHNSON, D. S. AND MCGEOCH, L. A. 1997. The traveling salesman problem: a case study. In Local Search in Combinatorial Optimization, E. Aarts and J. Lenstra, Eds. Wiley, New York, 215–310.
JONES, T. 1995a. Evolutionary algorithms, fitness landscapes and search. Ph.D. thesis, Univ. of New Mexico, Albuquerque, NM.
JONES, T. 1995b. One operator, one landscape. Santa Fe Institute Tech. Rep. 95-02-025, Santa Fe Institute.
JOSLIN, D. E. AND CLEMENTS, D. P. 1999. "Squeaky wheel" optimization. J. Artif. Int. Res. 10, 353–373.
JUSSIEN, N. AND LHOMME, O. 2002. Local search with constraint propagation and conflict-based heuristics. Artif. Int. 139, 21–45.
KAUFFMAN, S. A. 1993. The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press.
KILBY, P., PROSSER, P., AND SHAW, P. 1999. Guided local search for the vehicle routing problem with time windows. In Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds. Kluwer Academic, 473–486.
KIRKPATRICK, S., GELATT, C. D., AND VECCHI, M. P. 1983. Optimization by simulated annealing. Science 220, 4598 (13 May), 671–680.
LAGUNA, M., LOURENÇO, H., AND MARTÍ, R. 2000. Assigning proctors to exams with scatter search.
In Computing Tools for Modeling, Optimization and Simulation: Interfaces in Computer Science and Operations Research, M. Laguna and J. L. González-Velarde, Eds. Kluwer Academic Publishers, Boston, MA, 215–227.
LAGUNA, M. AND MARTÍ, R. 1999. GRASP and path relinking for 2-layer straight line crossing minimization. INFORMS J. Comput. 11, 1, 44–52.
LAGUNA, M., MARTÍ, R., AND CAMPOS, V. 1999. Intensification and diversification with elite tabu search solutions for the linear ordering problem. Comput. Oper. Res. 26, 1217–1230.
LARRAÑAGA, P. AND LOZANO, J. A., Eds. 2002. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer Academic Publishers, Boston, MA.
LOURENÇO, H. R., MARTIN, O., AND STÜTZLE, T. 2001. A beginner's introduction to iterated local search. In Proceedings of MIC'2001—Meta-heuristics International Conference. Vol. 1. Porto, Portugal, 1–6.
LOURENÇO, H. R., MARTIN, O., AND STÜTZLE, T. 2002. Iterated local search. In Handbook of Metaheuristics, F. Glover and G. Kochenberger, Eds. International Series in Operations Research & Management Science, vol. 57. Kluwer Academic Publishers, Norwell, MA, 321–353.
LUNDY, M. AND MEES, A. 1986. Convergence of an annealing algorithm. Math. Prog. 34, 1, 111–124.
MARTIN, O. AND OTTO, S. W. 1996. Combining simulated annealing with local search heuristics. Ann. Oper. Res. 63, 57–75.
MARTIN, O., OTTO, S. W., AND FELTEN, E. W. 1991. Large-step Markov chains for the traveling salesman problem. Complex Syst. 5, 3, 299–326.
MERKLE, D., MIDDENDORF, M., AND SCHMECK, H. 2002. Ant colony optimization for resource-constrained project scheduling. IEEE Trans. Evolut. Comput. 6, 4, 333–346.
METAHEURISTICS NETWORK WEBSITE. 2000. http://www.metaheuristics.net/. Visited in January 2003.
MEULEAU, N. AND DORIGO, M. 2002. Ant colony optimization and stochastic gradient descent. Artif. Life 8, 2, 103–121.
MICHALEWICZ, Z. AND MICHALEWICZ, M. 1997. Evolutionary computation techniques and their applications. In Proceedings of the IEEE International Conference on Intelligent Processing Systems (Beijing, China). Institute of Electrical & Electronics Engineers, Incorporated, 14–24.
MILANO, M. AND ROLI, A. 2002. On the relation between complete and incomplete search: An informal discussion. In Proceedings of CP-AI-OR'02—Fourth Int. Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (Le Croisic, France). 237–250.
MILLS, P. AND TSANG, E. 2000. Guided local search for solving SAT and weighted MAX-SAT problems. In SAT2000, I. Gent, H. van Maaren, and T. Walsh, Eds. IOS Press, 89–106.
MITCHELL, M. 1998. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA.
MLADENOVIĆ, N. AND UROŠEVIĆ, D. 2001. Variable neighborhood search for the k-cardinality tree. In Proceedings of MIC'2001—Meta-heuristics International Conference. Vol. 2. Porto, Portugal, 743–747.
MOSCATO, P. 1989. On evolution, search, optimization, genetic algorithms and martial arts: Toward memetic algorithms. Tech. Rep. Caltech Concurrent Computation Program 826, California Institute of Technology, Pasadena, Calif.
MOSCATO, P. 1999. Memetic algorithms: A short introduction. In New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. McGraw-Hill.
MÜHLENBEIN, H. 1991. Evolution in time and space—The parallel genetic algorithm. In Foundations of Genetic Algorithms, G. J. E. Rawlins, Ed. Morgan-Kaufmann, San Mateo, Calif.
MÜHLENBEIN, H. AND PAAß, G. 1996. From recombination of genes to the estimation of distributions. In Proceedings of the 4th Conference on Parallel Problem Solving from Nature—PPSN IV, H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, Eds. Lecture Notes in Computer Science, vol. 1141. Springer, Berlin, 178–187.
MÜHLENBEIN, H. AND VOIGT, H.-M. 1995. Gene pool recombination in genetic algorithms. In Proc. of the Metaheuristics Conference, I. H. Osman and J. P. Kelly, Eds. Kluwer Academic Publishers, Norwell, USA.
NEMHAUSER, G. L. AND WOLSEY, L. A. 1988. Integer and Combinatorial Optimization. Wiley, New York.
NOWICKI, E. AND SMUTNICKI, C. 1996. A fast taboo search algorithm for the job-shop problem. Manage. Sci. 42, 2, 797–813.
OSMAN, I. H. 1993. Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem. Ann. Oper. Res. 41, 421–451.
OSMAN, I. H. AND LAPORTE, G. 1996. Metaheuristics: A bibliography. Ann. Oper. Res. 63, 513–623.
PAPADIMITRIOU, C. H. AND STEIGLITZ, K. 1982. Combinatorial Optimization—Algorithms and Complexity. Dover Publications, Inc., New York.
PELIKAN, M., GOLDBERG, D. E., AND CANTÚ-PAZ, E. 1999a. BOA: The Bayesian optimization algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference GECCO-99 (Orlando, Fla.), W. Banzhaf, J. Daida, A. E. Eiben, M. H. Garzon, V. Honavar, M. Jakiela, and R. E. Smith, Eds. Vol. I. Morgan-Kaufmann Publishers, San Francisco, CA, 525–532.
PELIKAN, M., GOLDBERG, D. E., AND LOBO, F. 1999b. A survey of optimization by building and using probabilistic models. Tech. Rep. No. 99018, IlliGAL, University of Illinois.
PESANT, G. AND GENDREAU, M. 1996. A view of local search in constraint programming. In Principles and Practice of Constraint Programming—CP'96. Lecture Notes in Computer Science, vol. 1118. Springer-Verlag, 353–366.
PESANT, G. AND GENDREAU, M. 1999. A constraint programming framework for local search methods. J. Heuristics 5, 255–279.
PITSOULIS, L. S. AND RESENDE, M. G. C. 2002. Greedy randomized adaptive search procedure. In Handbook of Applied Optimization, P. Pardalos and M. Resende, Eds. Oxford University Press, 168–183.
PRAIS, M. AND RIBEIRO, C. C. 2000. Reactive GRASP: An application to a matrix decomposition problem in TDMA traffic assignment. INFORMS J. Comput. 12, 164–176.
PRESTWICH, S. 2002. Combining the scalability of local search with the pruning techniques of systematic search. Ann. Oper. Res. 115, 51–72.
RADCLIFFE, N. J. 1991. Forma analysis and random respectful recombination. In Proceedings of the Fourth International Conference on Genetic Algorithms, ICGA 1991. Morgan-Kaufmann, San Mateo, Calif., 222–229.
RAYWARD-SMITH, V. J. 1994. A unified approach to tabu search, simulated annealing and genetic algorithms. In Applications of Modern Heuristics, V. J. Rayward-Smith, Ed. Alfred Waller Limited, Publishers.
RECHENBERG, I. 1973. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog.
REEVES, C. R., Ed. 1993. Modern Heuristic Techniques for Combinatorial Problems. Blackwell Scientific Publishing, Oxford, England.
REEVES, C. R. 1999. Landscapes, operators and heuristic search. Ann. Oper. Res. 86, 473–490.
REEVES, C. R. AND ROWE, J. E. 2002. Genetic Algorithms: Principles and Perspectives. A Guide to GA Theory. Kluwer Academic Publishers, Boston (USA).
REGO, C. 1998. Relaxed tours and path ejections for the traveling salesman problem. Europ. J. Oper. Res. 106, 522–538.
REGO, C. 2001. Node-ejection chains for the vehicle routing problem: Sequential and parallel algorithms. Paral. Comput. 27, 3, 201–222.
RESENDE, M. G. C. AND RIBEIRO, C. C. 1997. A GRASP for graph planarization. Networks 29, 173–189.
RIBEIRO, C. C. AND SOUZA, M. C. 2002. Variable neighborhood search for the degree constrained minimum spanning tree problem. Disc. Appl. Math. 118, 43–54.
SCHAERF, A. 1997. Combining local search and look-ahead for scheduling and constraint satisfaction problems. In Proceedings of the 15th International Joint Conference on Artificial Intelligence, IJCAI 1997. Morgan-Kaufmann Publishers, San Mateo, CA, 1254–1259.
SCHAERF, A., CADOLI, M., AND LENZERINI, M. 2000. LOCAL++: A C++ framework for local search algorithms. Softw. Pract. Exp. 30, 3, 233–256.
SHAW, P. 1998. Using constraint programming and local search methods to solve vehicle routing problems. In Principles and Practice of Constraint Programming—CP98, M. Maher and J.-F. Puget, Eds. Lecture Notes in Computer Science, vol. 1520. Springer.
SIPPER, M., SANCHEZ, E., MANGE, D., TOMASSINI, M., PÉREZ-URIBE, A., AND STAUFFER, A. 1997. A phylogenetic, ontogenetic, and epigenetic view of bio-inspired hardware systems. IEEE Trans. Evolut. Comput. 1, 1, 83–97.
SONDERGELD, L. AND VOß, S. 1999. Cooperative intelligent search using adaptive memory techniques. In Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds. Kluwer Academic Publishers, Chapter 21, 297–312.
SPEARS, W. M., DE JONG, K. A., BÄCK, T., FOGEL, D. B., AND DE GARIS, H. 1993. An overview of evolutionary computation. In Proceedings of the European Conference on Machine Learning (ECML-93), P. B. Brazdil, Ed. Vol. 667. Springer Verlag, Vienna, Austria, 442–459.
STADLER, P. F. 1995. Towards a theory of landscapes. In Complex Systems and Binary Networks, R. López-Peña, R. Capovilla, R. García-Pelayo, H. Waelbroeck, and F. Zertuche, Eds. Lecture Notes in Physics, vol. 461. Springer-Verlag, Berlin, New York, 77–163. Also available as SFI preprint 95-03-030.
STADLER, P. F. 1996. Landscapes and their correlation functions. J. Math. Chem. 20, 1–45. Also available as SFI preprint 95-07-067.
STÜTZLE, T. 1999a. Iterated local search for the quadratic assignment problem. Tech. rep. AIDA-99-03, FG Intellektik, TU Darmstadt.
STÜTZLE, T. 1999b. Local Search Algorithms for Combinatorial Problems—Analysis, Algorithms and New Applications. DISKI—Dissertationen zur Künstlichen Intelligenz. infix, Sankt Augustin, Germany.
STÜTZLE, T. AND HOOS, H. H. 2000. MAX–MIN Ant System. Fut. Gen. Comput. Syst. 16, 8, 889–914.
SYSWERDA, G. 1993. Simulated crossover in genetic algorithms. In Proceedings of the 2nd Workshop on Foundations of Genetic Algorithms, L. Whitley, Ed. Morgan-Kaufmann Publishers, San Mateo, Calif., 239–255.
TABU SEARCH WEBSITE. 2003. http://www.tabusearch.net. Visited in January 2003.
TAILLARD, E. 1991. Robust taboo search for the quadratic assignment problem. Paral. Comput. 17, 443–455.
TALBI, E.-G. 2002. A taxonomy of hybrid metaheuristics. J. Heuristics 8, 5, 541–564.
TOULOUSE, M., CRAINIC, T., AND SANSÒ, B. 1999a. An experimental study of the systemic behavior of cooperative search algorithms. In Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds. Kluwer Academic Publishers, Chapter 26, 373–392.
TOULOUSE, M., THULASIRAMAN, K., AND GLOVER, F. 1999b. Multi-level cooperative search: A new paradigm for combinatorial optimization and application to graph partitioning. In Proceedings of the 5th International Euro-Par Conference on Parallel Processing. Lecture Notes in Computer Science. Springer-Verlag, New York, 533–542.
VAN KEMENADE, C. H. M. 1996. Explicit filtering of building blocks for genetic algorithms. In Proceedings of the 4th Conference on Parallel Problem Solving from Nature—PPSN IV, H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, Eds. Lecture Notes in Computer Science, vol. 1141. Springer, Berlin, 494–503.
VAN LAARHOVEN, P. J. M., AARTS, E. H. L., AND LENSTRA, J. K. 1992. Job shop scheduling by simulated annealing. Oper. Res. 40, 113–125.
VOSE, M. 1999. The Simple Genetic Algorithm: Foundations and Theory. Complex Adaptive Systems. MIT Press.
VOß, S., MARTELLO, S., OSMAN, I. H., AND ROUCAIROL, C., Eds. 1999. Meta-Heuristics—Advances and Trends in Local Search Paradigms for Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands.
VOß, S. AND WOODRUFF, D., Eds. 2002. Optimization Software Class Libraries. Kluwer Academic Publishers, Dordrecht, The Netherlands.
VOUDOURIS, C. 1997. Guided local search for combinatorial optimization problems. Ph.D. dissertation, Department of Computer Science, University of Essex. pp. 166.
VOUDOURIS, C. AND TSANG, E. 1999. Guided local search. Europ. J. Oper. Res. 113, 2, 469–499.
WADE, A. S. AND RAYWARD-SMITH, V. J. 1997. Effective local search for the Steiner tree problem. Studies in Locational Analysis 11, 219–241. Also in Advances in Steiner Trees, ed. by Ding-Zhu Du, J. M. Smith, and J. H. Rubinstein, Kluwer, 2000.
WATSON, R. A., HORNBY, G. S., AND POLLACK, J. B. 1998. Modeling building-block interdependency. In Late Breaking Papers at the Genetic Programming 1998 Conference, J. R. Koza, Ed. Stanford University Bookstore, University of Wisconsin, Madison, Wisconsin, USA.
WHITLEY, D. 1989. The GENITOR algorithm and selective pressure: Why rank-based allocation of reproductive trials is best. In Proceedings of the 3rd International Conference on Genetic Algorithms, ICGA 1989. Morgan-Kaufmann Publishers, 116–121.
YAGIURA, M. AND IBARAKI, T. 2001. On metaheuristic algorithms for combinatorial optimization problems. Syst. Comput. Japan 32, 3, 33–55.
ZLOCHIN, M., BIRATTARI, M., MEULEAU, N., AND DORIGO, M. 2004. Model-based search for combinatorial optimization: A critical survey. Ann. Oper. Res. To appear.