Metaheuristics: From Design To Implementation: Chap 2 Single-Solution Based Metaheuristics

This document discusses single-solution based metaheuristics. It begins by defining single-solution metaheuristics as those that improve a single solution through iterative exploration of neighborhoods. It then provides a taxonomy of common single-solution metaheuristics like hill climbing, simulated annealing, tabu search, and variable neighborhood search. The document also covers important concepts for single-solution metaheuristics like neighborhoods, initial solutions, incremental evaluation, and fitness landscape analysis.


Metaheuristics: From Design to Implementation

Chap 2: Single-Solution Based Metaheuristics

Wiley, 2009 (596pp)


ISBN: 978-0-470-27858-1
Metaheuristics E-G. Talbi
Single solution-based metaheuristics
• “Improvement” of a single solution
• Walks through neighborhoods or search trajectories in the landscape
• Exploitation oriented: iterative exploration of the neighborhood (intensification)
(Figure: the neighborhood of the current solution, selection among candidate neighbors, and convergence to a local optimum.)
High-level template of S-metaheuristics
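The high-level template shown on this slide can be sketched as a simple loop. This is a hedged illustration, not the book's exact pseudo-code; the function names and the toy instance below are invented for the example.

```python
def s_metaheuristic(init_solution, gen_candidates, select, f, max_iters=100):
    """Generic S-metaheuristic loop (minimization): generate candidate
    neighbors of the current solution, select one as the replacement,
    and keep track of the best solution found so far."""
    current = init_solution
    best = current
    for _ in range(max_iters):
        candidates = gen_candidates(current)   # explore the neighborhood
        current = select(current, candidates)  # replacement policy
        if f(current) < f(best):               # memorize the best-so-far
            best = current
    return best

# Toy instance: minimize f(x) = x^2 over the integers,
# neighbors of x are x - 1 and x + 1, greedy selection.
f = lambda x: x * x
result = s_metaheuristic(
    10,
    lambda x: [x - 1, x + 1],
    lambda cur, cands: min(cands, key=f),
    f,
)
print(result)  # 0
```

The three arguments correspond to the design choices discussed in the rest of the chapter: the neighborhood, the selection strategy, and the acceptance/replacement policy.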



Taxonomy (single solution based metaheuristics)

• S-metaheuristics include: hill climbing, simulated annealing, tabu search, and variable neighborhood search.


Outline
• Common concepts for S-metaheuristics
– Neighborhood, very large neighborhoods
– Initial solution
– Incremental evaluation of neighbors
• Fitness landscape analysis
• Local Search (Hill Climbing)
– Neighbor selection
• Simulated Annealing & Co
• Tabu Search
• Iterative local search
• Variable Neighborhood search
• Guided local search
• Smoothing & noisy methods, GRASP



Neighborhoods

• Fundamental property: locality. The effect on the solution (phenotype) when performing a move (small perturbation) on the representation (genotype). When small moves yield small phenotype changes, the representation has strong locality.





Neighborhoods

• Scheduling problems: the permutation represents a priority queue; the relative order in the sequence is important (2-opt → weak locality).
• Routing problems: the adjacency of the elements is important (2-opt → strong locality).
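A minimal sketch of the 2-opt move makes the locality contrast concrete (the function name is ours):

```python
def two_opt(tour, i, j):
    """2-opt move: reverse the segment tour[i..j]. Only the two boundary
    edges change, so the adjacency of the inner elements is preserved
    (strong locality for routing); their relative order, which is what
    matters in scheduling, is reversed (weak locality there)."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

print(two_opt([0, 1, 2, 3, 4, 5], 1, 3))  # [0, 3, 2, 1, 4, 5]
```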



Local optimum



k-distance / k-exchange neighborhoods



Permutation neighborhoods

• Position-based neighborhoods: insertion


• Order-based neighborhoods: exchange, inversion
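The insertion, exchange, and inversion moves can be sketched as follows (a minimal illustration; the function names are ours):

```python
def insertion(p, i, j):
    """Position-based move: remove the element at i, insert it at j."""
    q = p[:i] + p[i + 1:]
    return q[:j] + [p[i]] + q[j:]

def exchange(p, i, j):
    """Order-based move: swap the elements at positions i and j."""
    q = list(p)
    q[i], q[j] = q[j], q[i]
    return q

def inversion(p, i, j):
    """Order-based move: reverse the subsequence p[i..j]."""
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

p = [1, 2, 3, 4, 5]
print(insertion(p, 0, 3))  # [2, 3, 4, 1, 5]
print(exchange(p, 0, 4))   # [5, 2, 3, 4, 1]
print(inversion(p, 1, 3))  # [1, 4, 3, 2, 5]
```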



Very large neighborhoods
• Compromise between the size (diameter) of the neighborhood (the computational complexity of exploring it) and the quality of the solutions found



Efficient algorithms to explore large neighborhoods

• Size of the neighborhood: high-order polynomial (n > 2) or exponential
• Main issue: identify improving neighbors or the best neighbor without enumerating the whole neighborhood



Heuristic search in large neighborhoods: ejection chain in routing
• Only a subset of the neighborhood is explored
– LK-heuristic for the TSP (3-opt)
– Ejection chain: sequence of coordinated moves (alternating path)
• Finding the best neighbor (local optimum) is not
guaranteed



Heuristic search in large neighborhoods: cyclic exchange in grouping



Exact search in very large neighborhoods: Dynasearch
• Exponential size of the neighborhood
• Finding an improving neighbor in polynomial time
– Path finding: shortest path, dynamic programming
– Matching: bi-partite matching



Polynomial-specific neighborhoods
• Restricting the class of input instances
• Adding/deleting constraints to the target problem
– Ex: many graph problems (Steiner tree, TSP) are polynomial for
specific instances (e.g. series-parallel, outerplanar, Halin graphs)



Initial solution

• Main strategies:
  – Random solution
  – Heuristic solution (e.g. greedy)
  – Partially or completely initialized by a user (e.g. an expert)
• Tradeoff: quality vs. computational time
• Using better initial solutions will not always lead to better local optima
• Generating random solutions may be difficult for highly constrained problems



Incremental evaluation of the neighborhood
• Evaluation of a solution: Most expensive part of a
metaheuristic
• Naive evaluation: complete evaluation of every solution
of the neighborhood
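Incremental (delta) evaluation can be illustrated on the TSP with 2-opt: only two edges change, so the cost of a move can be computed in O(1) instead of re-evaluating the whole tour. This is a sketch with an invented toy distance matrix.

```python
def tour_cost(tour, dist):
    """Full O(n) evaluation of a cyclic tour."""
    n = len(tour)
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

def delta_two_opt(tour, dist, i, j):
    """Incremental evaluation of the 2-opt move reversing tour[i..j]:
    only edges (i-1, i) and (j, j+1) change, so the delta is O(1)."""
    n = len(tour)
    a, b = tour[i - 1], tour[i]
    c, d = tour[j], tour[(j + 1) % n]
    return (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])

# Toy symmetric instance: 4 cities on a line, dist[i][j] = |i - j|.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
tour = [0, 2, 1, 3]
print(delta_two_opt(tour, dist, 1, 2))  # -2: the move improves the tour
```

The delta agrees with the difference of two full evaluations, which is the standard sanity check when implementing incremental evaluation.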



Local search
(Hill climbing)



Local search (Hill-climbing)
• Oldest and simplest S-metaheuristic method
• Hill-climbing, descent, iterative improvement, and so on
• Replaces the current solution with an improving one



Local search

Repeat: replace σ by a better solution in its neighborhood N(σ), until no improving neighbor exists.

(Figure: σ0 → σ1 → σ2 → σ3 → σ4, from the initial solution σ0 to the local optimum σ4, each step picking a better solution in the current neighborhood N(σk).)



Local search: Successive improving solutions



Local search: Toward a local optimum



Pros and Cons
• Easy to implement.
• It only leads to local optima.
• The local optimum found depends on the initial solution.
• No means to estimate the relative error from the global optimum.
• No means to bound the computation time: the complexity is acceptable in practice, but the worst case is exponential!

(Figure: objective values over the solution space, showing several local optima and the global optimum.)
Selection of the neighbor
• Best improvement: deterministic/full. Choose the best neighbor (i.e. the one that improves the objective function the most).
• First improvement: deterministic/partial. Choose the first evaluated neighbor that is better than the current solution.
• Random selection: stochastic/full or partial. Select a better neighbor at random.
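The first two strategies can be sketched as follows (illustrative helper names; the toy objective is ours):

```python
def best_improvement(x, neighbors, f):
    """Deterministic/full: scan the whole neighborhood and return the
    best improving neighbor (or x itself if none improves)."""
    best = min(neighbors(x), key=f)
    return best if f(best) < f(x) else x

def first_improvement(x, neighbors, f):
    """Deterministic/partial: return the first improving neighbor found."""
    for y in neighbors(x):
        if f(y) < f(x):
            return y
    return x

f = lambda v: v * v
nbrs = lambda v: [v - 2, v - 1, v + 1, v + 2]
print(best_improvement(5, nbrs, f))   # 3: the largest improvement
print(first_improvement(5, nbrs, f))  # 3: first improving neighbor scanned
```

Best improvement always pays the full neighborhood scan; first improvement trades solution quality per step for speed.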





2-dimensional packing problem

Input: a set of n rectangles I = {1, 2, ..., n}, characterized by (width, length, ...).
Output: coordinates x, y of the rectangles.

Place all the rectangles in the plane without overlap so as to minimize the surface used.

Applications: textile, metal, wood, ...



Packing rectangles into a small area



LS for the N-Queens problem

(Figure: successive board configurations with f = 5, f = 2, and f = 0.)



Fitness Landscape
• Search space: graph G = (S, E)
  – S: set of solutions
  – E: neighborhood relation
• f: objective function
• Landscape: tuple l = (G, f)

(Figure: a small search graph with objective values attached to the nodes.)

• Goal: find the most adapted algorithm for a given class of problems.
• No Free Lunch (NFL) theorem: no optimization algorithm is superior to any other on all possible optimization problems.
• Landscape analysis helps to design better representations, neighborhoods, objective functions, hybridizations, ...
Fitness landscape: Geographic metaphor

• S-metaheuristic: trajectory (walk) in the landscape


Fitness landscape analysis: connectivity



Landscape properties

• Global indicators: information about the structure of the entire landscape. Metaheuristics, however, focus on good solutions.
• Local indicators: a local view of the landscape as explored by a metaheuristic.



Distribution measures
• Distribution in the search space
  – Average distance (local optima, random solutions):

    D_G(P) = \frac{1}{|P|^2 \, d(G)} \sum_{s \in P} \sum_{t \in P} dist(s, t)

  – Entropy (diversity):

    E_G(P) = \frac{-1}{n \log n} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{n_{ij}}{|P|} \log \frac{n_{ij}}{|P|}

• Distribution in the objective space
  – Amplitude:

    Amp(P) = \frac{|P| \, (\max_{s \in P} f(s) - \min_{s \in P} f(s))}{\sum_{s \in P} f(s)}

  – Gap:

    Gap(P) = \frac{\sum_{s \in P} (f(s) - f(\Pi^*))}{|P| \, f(\Pi^*)}


Fitness landscape analysis



Correlation measures

• Lengths of the walks
• Autocorrelation:

  \varphi(h) = \frac{\frac{1}{l-h} \sum_{t=1}^{l-h} \left( f(x_t) - \bar{f} \right) \left( f(x_{t+h}) - \bar{f} \right)}{\frac{1}{l} \sum_{t=1}^{l} \left( f(x_t) - \bar{f} \right)^2}

• Correlation length
• Fitness / distance correlation:

  FDC(P) = \frac{Cov_P(f, \Delta)}{\sigma_P(f) \, \sigma_P(\Delta)}




Easy or hard problem

(Figure: objective value versus distance to the global optimum, for random configurations and local optima. The TSP shows a deep valley structure; the QAP shows no valley.)



Case of the Traveling Salesman: Search operators

• Different neighborhoods:
  – City-swap: hard problem
  – 2-opt: big (single) valley
• => design a hybrid metaheuristic:
  – Initialize the population of a GA with randomized local optima
  – Use recombination to keep solutions in the valley



Case of Job Shop: Objective function

• The landscape presents large plateaus.
• Use a discriminating second objective function:
  – H1 = max Em(x) (makespan)
  – H2 = Σ [Em(x)]² (sum of delays)
• The landscape becomes smoother.



Case of Quadratic Assignment: Type of instances

• Classification of benchmarks
• Using two measures: entropy and average distance (diameter)

               type I     type II         type III
  entropy:     high       low             high
  diameter:    high       low             high
  structure:   uniform    single massif   multi-massif



Fitness landscape analysis
• Quadratic assignment problem



Advanced local search

Escape from local optima



Escaping from local optima



Simulated Annealing



Simulated Annealing (1/2)
• Mimics the physical annealing process (statistical mechanics).
• Material is heated and slowly cooled toward a strong crystalline structure (instead of metastable states).
• The first SA-style acceptance algorithm was developed in 1953 (Metropolis).
• Kirkpatrick et al. (1983) and V. Cerny applied SA to optimization problems:
  – Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P. 1983. "Optimization by Simulated Annealing". Science, vol. 220, no. 4598, pp. 671-680.
Simulated annealing

Analogy between the physical system and the optimization problem



Simulated Annealing
• SA can accept moves that worsen the objective.
• A move is selected at random and its acceptance is conditional (stochastic, Boltzmann distribution).





Move acceptance

  P(\Delta E) = e^{-(eval(v_n) - eval(v_c))/T} = e^{-\Delta E / T}

• Role of the temperature:
  – T small: local search (end of the search)
  – T large: random search (beginning of the search)
• Role of the quality of the evaluation:
  – The smaller the difference in quality, the larger the probability of acceptance.



Move acceptance: Scenario

A worsening move is accepted when P = exp(-c/T) > r, where:
• c is the change in the evaluation function,
• T is the current temperature,
• r is a random number between 0 and 1.

Example:

  Change  Temp  exp(-c/T)      Change  Temp  exp(-c/T)
  0.2     0.95  0.810157735    0.2     0.1   0.135335283
  0.4     0.95  0.656355555    0.4     0.1   0.018315639
  0.6     0.95  0.531751530    0.6     0.1   0.002478752
  0.8     0.95  0.430802615    0.8     0.1   0.000335463
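The acceptance rule behind the table can be checked directly; `sa_accept` is an invented helper implementing the standard Metropolis criterion.

```python
import math
import random

def sa_accept(delta, T, rng=random.random):
    """Metropolis criterion: always accept improving moves (delta <= 0),
    accept worsening moves with probability exp(-delta / T)."""
    return delta <= 0 or rng() < math.exp(-delta / T)

# Reproducing two entries of the table above (rounded):
print(round(math.exp(-0.2 / 0.95), 6))  # 0.810158
print(round(math.exp(-0.8 / 0.1), 6))   # 0.000335
```

Note how, at the same cost increase c = 0.8, the hot temperature T = 0.95 still accepts the move about 43% of the time while the cold T = 0.1 practically never does.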
Cooling schedule

• Main design questions:
  – Initial temperature,
  – Equilibrium state,
  – Cooling,
  – Stopping condition.



Initial temperature
• Should not be so hot that we conduct a random search for a period of time.
• Hot enough to allow moves to almost any neighbouring state (otherwise we get plain hill climbing): typically 50%-60% of worse moves are accepted.
• If we know the maximum change in the cost function, we can use it to estimate the initial temperature.



Equilibrium state
• A sufficient number of moves at each temperature:
  – In theory, this might be exponential.
  – In practice, it depends on the neighborhood size.
• Static: fixed a priori.
• Dynamic:
  – Small number of iterations at high temperatures,
  – Large number of iterations at low temperatures.
• Adaptive: depends on the search (improvements).



Cooling
• Compromise: quality / search time.
• Different strategies:
– Linear
– Geometric
– Logarithmic
– Very slow decrease
– Non-monotonic
– Adaptive
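Two of these strategies can be sketched as generators (an illustration; the parameter values are arbitrary):

```python
def geometric_cooling(T0, alpha, steps):
    """Geometric schedule: T_{k+1} = alpha * T_k, with 0 < alpha < 1."""
    T = T0
    for _ in range(steps):
        yield T
        T *= alpha

def linear_cooling(T0, dT, steps):
    """Linear schedule: T_{k+1} = T_k - dT, floored at zero."""
    T = T0
    for _ in range(steps):
        yield T
        T = max(T - dT, 0.0)

print(list(geometric_cooling(100.0, 0.5, 4)))  # [100.0, 50.0, 25.0, 12.5]
print(list(linear_cooling(100.0, 30.0, 4)))    # [100.0, 70.0, 40.0, 10.0]
```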



Stopping criteria

• Reaching a low temperature (e.g. Tmin=0.01)

• Number of iterations without improvement of the


best found solution or the current solution

• …



SA Family: Similar methods



Exercise: Santa Fe trail problem

• 32×32 toroidal grid.
• An artificial ant is placed on a cell.
• Food pellets are mapped on some cells of the grid.
• Solution: path planning (move operators).
• Objective: maximize the food intake, i.e. the number of food pellets lying on the path minus the amount of food the ant eats during the move.



Tabu Search



Main characteristics (1/2)
• Proposed independently by Glover (1986) and Hansen (1986).
• It behaves like a hill-climbing algorithm,
• but it accepts non-improving solutions in order to escape from local optima (where all the neighbouring solutions are non-improving).
• Deterministic algorithm.



Main characteristics (2/2)
• After exploring the neighbouring solutions, we
accept the best one even if it decreases the cost
function.
→ A worse move may be accepted.
• Three goals in the use of memory:
– Preventing the search from revisiting previously
visited solutions (tabu list).
– Exploring the unvisited areas of the solution space
(diversification).
– Exploiting the elite (best found) solutions
(intensification).



Tabu search



Design questions

• Tabu list (Short Term Memory)


– Attributive
– Recency-based
• Aspiration criterion
• Medium term memory (Intensification)
• Long Term Memory (Diversification)
– Frequency-based



Tabu list (Short-term memory)

• Prevent cycling:
  – Storing complete solutions is time and memory consuming → store the k last solutions, hash codes, ...
  – Tabu list (short-term memory): records a limited number of attributes of solutions (moves, selections, assignments, etc.).
  – Multiple tabu lists: some ingredients of the visited solutions and/or the moves are stored.
  – Tabu tenure (length of the tabu list): number of iterations a tabu move remains tabu:
    • Static
    • Dynamic
    • Adaptive
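A minimal tabu search sketch with a recency-based tabu list of move attributes and a simple aspiration criterion (all names and the toy instance are invented for illustration):

```python
from collections import deque

def tabu_search(x0, moves, apply_move, f, tenure=3, iters=50):
    """Minimal tabu search: the tabu list stores move attributes for
    `tenure` iterations; the best allowed move is taken even if it
    worsens the solution. Aspiration lifts the tabu status of a move
    that would beat the best solution found so far."""
    current, best = x0, x0
    tabu = deque(maxlen=tenure)              # recency-based short-term memory
    for _ in range(iters):
        def allowed(m):
            return m not in tabu or f(apply_move(current, m)) < f(best)
        candidates = [m for m in moves(current) if allowed(m)]
        if not candidates:
            break
        m = min(candidates, key=lambda mv: f(apply_move(current, mv)))
        tabu.append(m)                       # the move attribute becomes tabu
        current = apply_move(current, m)
        if f(current) < f(best):
            best = current
    return best

# Toy instance: minimize x^2, moves are integer increments.
print(tabu_search(5, lambda x: [-2, -1, 1, 2],
                  lambda x, m: x + m, lambda x: x * x))  # 0
```

Storing the move attribute (the increment) rather than whole solutions keeps the memory cheap, exactly the tradeoff discussed above.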



Aspiration criteria

• The tabu list may reject solutions that have not yet been visited.
• If a move is good, but it is tabu, do we still reject it?
• Aspiration criterion ⇔ accepting a solution even if it is generated by a tabu move:
  – Popular strategy: the move leads to a new best found solution.


Medium-term memory
• Memory related: recency memory, recording attributes of elite solutions to be used in:
  – Intensification: giving priority to attributes of a set of elite solutions (usually in a weighted probabilistic manner).
  – Extracting the common features of elite solutions and then intensifying the search around solutions sharing those features.



Long-term memory
• Memory related: frequency memory, recording attributes of visited solutions to be used in:
  – Diversification: discouraging attributes of visited solutions in order to diversify the search toward other areas of the solution space.
• Three approaches:
  – Restart diversification: new search in an unexplored region.
  – Continuous diversification: introduces a bias (e.g. penalizing the objective function).
  – Strategic oscillation: explore infeasible solutions.



Example: TSP using Tabu Search (1/3)
• Short-term memory:
  – Maintain a list of t edges and prevent them from being selected for consideration of moves for a number of iterations.
  – After a number of iterations, release those edges.
  – Make use of a matrix (indexed by the cities V1..V4 in the figure) to decide whether an edge is tabu or not.



Example: TSP using Tabu Search (2/3)
• Long-term memory:
  – Maintain a list of t edges which have been considered in the last k best (worst) solutions,
  – encouraging (or discouraging) their selection in future solutions,
  – using their frequency of appearance in the set of elite solutions, and the quality of the solutions in which they have appeared, in the selection function.



Example: TSP using Tabu Search (3/3)
• Aspiration:
  – If the next move is in the tabu list but generates a better solution than the current one,
  – accept that solution anyway,
  – and put it into the tabu list.



ILS: Iterative Local Search



Iterative local search



Variable Neighborhood Search



Variable Neighborhood Search
• Different neighborhoods
  – Ex: Nk → neighborhood at distance k
• Order of exploration
  – Ex: Nk(x) → set of solutions in the kth neighborhood of x.

(Figure: nested neighborhoods N1 ⊂ N2 ⊂ ... ⊂ Nk around x; when a move fails to improve, we change k.)


VND: Variable Neighbourhood Descent
• VND changes the neighbourhood Nk each time a local optimum is reached.
• It ends when no neighbourhood yields an improvement.
• The final solution provided by the algorithm is a local optimum with respect to all kmax neighbourhoods.
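The VND loop described above can be sketched as follows (an illustration with invented toy neighborhoods):

```python
def vnd(x0, neighborhoods, f):
    """Variable Neighborhood Descent: after every improvement, restart
    from the first neighborhood; stop when none of them improves x."""
    x, k = x0, 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](x), key=f, default=x)
        if f(best) < f(x):
            x, k = best, 0       # improvement: go back to N1
        else:
            k += 1               # no improvement: try the next neighborhood
    return x

n1 = lambda x: [x - 1, x + 1]    # small moves
n2 = lambda x: [x - 5, x + 5]    # larger moves
print(vnd(12, [n1, n2], lambda x: x * x))  # 0
```

The returned solution is, by construction, a local optimum with respect to every neighborhood in the list.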



VNS: Variable Neighbourhood Search

(Figure: neighborhoods N1, N2, ..., Nkmax around the current solution x. A solution x' is drawn (shaken) from Nk(x) and local search yields x''. If x'' improves on x, the search moves to x'' and restarts with N1; otherwise it keeps x and moves on to Nk+1.)


Guided Local Search
• Dynamic changing of the objective function according to
the generated local optima
• The features of the local optima are used to change the
objective function





Guided local search: Features

• Identification of suitable features for a given optimization problem:
  – Routing problems: edges (cost: distance, travel time)
  – Assignment problems: pairs (i, k), i: object, k: location, cost = assignment cost of i on k
  – Satisfiability (SAT): constraints (cost: constraint violation)



Smoothing techniques: Let’s have a dream
• Change the landscape of the target optimization problem.
• Reduce the number of local optima to smooth the landscape.



Successive Smoothing



Smoothing algorithm



Ex: Smoothing for the TSP
• All distances equal = easy to solve.
• Transform (with mean distance \bar{d}):

  d_{ij}(\alpha) =
  \begin{cases}
  \bar{d} + (d_{ij} - \bar{d})^{\alpha} & \text{if } d_{ij} \ge \bar{d} \\
  \bar{d} - (\bar{d} - d_{ij})^{\alpha} & \text{if } d_{ij} < \bar{d}
  \end{cases}

• Iterations: α = 6, 3, 2, 1.
• This transform clusters the distances around the mean.
• The ranking of the distances is preserved.
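The transform can be sketched directly. Note the assumption, implicit in the method, that distances are normalized so deviations from the mean stay below 1; only then does a large α flatten the landscape. The function name is ours.

```python
def smooth(dij, dbar, alpha):
    """Smoothing transform for TSP distances: pulls every distance toward
    the mean dbar. Assumes |dij - dbar| < 1 (normalized distances), so a
    larger alpha shrinks deviations and flattens the landscape."""
    if dij >= dbar:
        return dbar + (dij - dbar) ** alpha
    return dbar - (dbar - dij) ** alpha

print(smooth(0.9, 0.5, 1))            # alpha = 1: original distance 0.9
print(round(smooth(0.9, 0.5, 6), 6))  # alpha = 6: 0.504096, near the mean
```

Because x ↦ x^α is monotone on [0, 1), the ranking of the distances is indeed preserved, as the last bullet states.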



Smoothing Transformation



Noisy method

• Change the landscape of the target optimization problem (random noise).
• The data fluctuate, converging toward the original values.
• Succession of local searches applied to the perturbed data.





GRASP (Greedy Randomized Adaptive Search Procedure)

The GRASP metaheuristic has two phases:
• Constructive phase: constructs a good initial solution x.
• Improving phase: local search applied to the initial solution x.
• Both phases are repeated until the stopping criterion is met.





Greedy randomized algorithm



RCL: Restricted Candidate List
• Cardinality-based construction:
  – the p elements with the smallest incremental costs.
• Quality-based construction:
  – A parameter α defines the quality of the elements in the RCL.
  – The RCL contains the elements with incremental cost
    cmin ≤ c(e) ≤ cmin + α (cmax − cmin)
  – α = 0: pure greedy construction.
  – α = 1: pure randomized construction.
• Select at random from the RCL using a uniform probability distribution.
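The quality-based RCL rule can be sketched as follows (illustrative names; as stated above, α = 0 reduces to pure greedy and α = 1 to pure random construction):

```python
import random

def rcl_select(elements, cost, alpha, rng=random):
    """Quality-based RCL: keep elements e with
    cmin <= c(e) <= cmin + alpha * (cmax - cmin), pick one uniformly."""
    costs = {e: cost(e) for e in elements}
    cmin, cmax = min(costs.values()), max(costs.values())
    threshold = cmin + alpha * (cmax - cmin)
    rcl = [e for e in elements if costs[e] <= threshold]
    return rng.choice(rcl)

elements = ["a", "b", "c", "d"]
cost = {"a": 1, "b": 2, "c": 5, "d": 9}.get
print(rcl_select(elements, cost, 0.0))  # 'a': pure greedy construction
print(rcl_select(elements, cost, 1.0))  # any element: pure randomized
```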



Quadratic assignment problem (QAP)

• Given N facilities f1, f2, ..., fN and N locations l1, l2, ..., lN.
• Let A(N×N) = (ai,j) be a positive real matrix where ai,j is the flow between facilities fi and fj.
• Let B(N×N) = (bi,j) be a positive real matrix where bi,j is the distance between locations li and lj.



Quadratic assignment problem (QAP)

• Let p: {1, 2, ..., N} → {1, 2, ..., N} be an assignment of the N facilities to the N locations.
• Define the cost of assignment p to be

  c(p) = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} \, b_{p(i),p(j)}

• QAP: find a permutation vector p ∈ ∏N that minimizes the assignment cost:
min c(p): subject to p ∈ ∏N
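The cost function is direct to implement. Below it is checked against the 3-facility example used later in these slides (flow and distance values taken from it); since the matrices are symmetric, the double sum counts each pair twice, hence the division by 2.

```python
def qap_cost(p, A, B):
    """QAP objective: c(p) = sum_i sum_j A[i][j] * B[p[i]][p[j]],
    where p[i] is the location assigned to facility i (0-indexed)."""
    n = len(p)
    return sum(A[i][j] * B[p[i]][p[j]] for i in range(n) for j in range(n))

# Flows A between facilities and distances B between locations.
A = [[0, 1, 10], [1, 0, 5], [10, 5, 0]]
B = [[0, 10, 30], [10, 0, 40], [30, 40, 0]]
print(qap_cost([0, 1, 2], A, B) // 2)  # 510: initial assignment
print(qap_cost([1, 2, 0], A, B) // 2)  # 290: optimal assignment
```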
Quadratic assignment problem (QAP)

(Figure: three locations l1, l2, l3 with pairwise distances 10, 30, 40, and three facilities f1, f2, f3 with pairwise flows 1, 10, 5.)

Cost of the assignment: 10×1 + 30×10 + 40×5 = 510.



Quadratic assignment problem (QAP)

Swap the locations of facilities f2 and f3.

Cost of the assignment: 10×10 + 30×1 + 40×5 = 330.


Quadratic assignment problem (QAP)

Swap the locations of facilities f1 and f3.

Cost of the assignment: 10×10 + 30×5 + 40×1 = 290. Optimal!


Construction

• Stage 1: make two assignments {fi→lk ; fj→ll}.
• Stage 2: make the remaining N−2 assignments of facilities to locations, one facility/location pair at a time.



Stage 1 construction

• Sort distances bi,j in increasing order:
  bi(1),j(1) ≤ bi(2),j(2) ≤ ⋅⋅⋅ ≤ bi(N),j(N)
• Sort flows ak,l in decreasing order:
  ak(1),l(1) ≥ ak(2),l(2) ≥ ⋅⋅⋅ ≥ ak(N),l(N)
• Sort the products:
  ak(1),l(1)·bi(1),j(1), ak(2),l(2)·bi(2),j(2), ..., ak(N),l(N)·bi(N),j(N)
• Among the smallest products, select ak(q),l(q)·bi(q),j(q) at random, corresponding to the assignments {fk(q)→li(q) ; fl(q)→lj(q)}.



Stage 2 construction

• If Ω = {(i1,k1), (i2,k2), ..., (iq,kq)} are the q assignments made so far, then the cost of assigning fj→ll is

  c_{j,l} = \sum_{(i,k) \in \Omega} a_{i,j} \, b_{k,l}

• Of all possible assignments, one is selected at random from the assignments having the smallest costs and is added to Ω.



The Strip Packing Problem (SPP)

Consider:
• a strip of fixed width w and infinite height,
• a finite set of rectangles, with at least one of their sides less than w.

The SPP consists of packing the rectangles in the strip, minimizing the total height h. (The rectangles can be rotated by 90 degrees.)



Representing Solutions

Solutions are represented as [(ri, ai, bi), i = 1, ..., n].

(Figure: each rectangle i of width wi and height hi carries a rotation flag ri, with ri = 0 for the original orientation and ri = 1 for the 90-degree rotation, plus the labels ai, bi shown in the figure.)



The Bottom-left placing
• We use the usual bottom-left strategy:
  – each rectangle is placed at the deepest possible location and, within it, at the leftmost possible location.
• Each feasible solution of the problem is determined by:
  – the order in which the rectangles are introduced into the strip.



The Solution Space
Given the bottom-left strategy (each rectangle is placed at the deepest location and, within it, at the leftmost possible location), the space of solutions consists of:
• the permutations of the numbers [1, 2, ..., n],
• together with a binary vector [r1, ..., rn] indicating whether each rectangle is rotated or not.



High Quality Solutions

Good packings have:
• small wasted areas (relative to the strip width w),
• a smooth upper contour (relative to the height h).



The GRASP rule

• Choose (at random) one of the best items.
• Items are evaluated by:
  – the wasted areas,
  – the smoothness of the upper contour.
• The best items are those that best fit the upper contour.
• Construct a Restricted Candidate List that contains the best items, then choose one item from the list.



Exercise: Elite set in an iterative S-metaheuristic
• Design of an iterative S-metaheuristic (iterated local search or GRASP).
• Goal: maintain an elite set of solutions E.
• An iterative S-metaheuristic generates a single solution at each iteration.
• The elite set has a given maximum size k.
• First k iterations: the obtained solution is systematically integrated into the elite set.
• Which integration policies can we apply in later iterations, taking into account the quality of the solutions and the diversity of the elite set E?



Exercise: TS for the VRP
• Tabu list for the VRP.
• Move = moving a customer ck from route Ri to route Rj.
• The tabu list is represented by the move attributes.
• Propose three representations of increasing severity.



Guideline to design and implement an S-metaheuristic
