
Algorithmique d'aide à la décision

Katyanne Farias de Araújo


Who am I?
• BAC+5: production and mechanical engineering
• BAC+7: informatics - operational research
• Industrial engineer at norfil SA
• Ph.D. in operational research
• Post-doctoral in operational research and Industry 4.0
• Maître de conférences (associate professor)

Contact
• Email: [email protected]
• Office: F103
Katyanne Farias - Data Science for Industry 4.0 - Part I 2
Course content
To download support files: https://drive.uca.fr/d/e9802584e102489b8275/

1. Mathematical modeling
i. Motivations of modeling
ii. Linear Programming
iii. Integer Linear Programming: classical problems
iv. Solution methods
2. Solving methods for discrete optimization problems
I. Branch-and-Bound
II. Constructive Heuristics
III. Improvement Heuristics
IV. Metaheuristics
3. Cplex solver and heuristic implementation
I. Some examples
II. Mini-project
Katyanne Farias - Algorithmique d'aide à la décision 3
Mathematical modeling



Motivations of modeling

The modeling process:
1. Define the problem and collect data: define the objective, identify the constraints, select and process the data.
2. Formulate the mathematical model.
3. Implement and test the model; improve it as needed.
4. Implement its use in the company.


Motivations of modeling
• A wide variety of practical problems can be formulated and solved using integer
programming. For example:
Train scheduling
• Train schedules repeat every hour.
• Travel times between stations are known.
• Time spent in a station must lie within a given time interval.
• Two trains traveling on the same line must be separated by at least a given number
of minutes.
• Connections between two trains must be feasible: waiting time sufficiently long, but not excessive.


Motivations of modeling
Airline crew scheduling
• Given the schedule of flights for a particular aircraft type, design weekly schedules for
the crews.
• Each day, a crew must be assigned to a duty period: a set of one or more linking flights.
• Respecting several constraints: limited total flying time, minimum rests between flights,
etc.
• Weekly schedules must satisfy constraints of overnight rests,
flying time, returning the crew, etc.
• So that the amount paid to the crews is minimized (a function of flying time, length of duty periods, etc.).


Motivations of modeling
Production planning
• A multinational company holds a monthly planning meeting.
• Three-month production and shipping plan is drawn up based on estimated sales.
• There are 200-400 products produced in 5 different factories.
• There are shipments to 50 sales areas.
• Solutions must be generated on the spot, so only about 15 min of computation time is available.
• Each product has a minimum production quantity.
• Production is in batches.
• The objective is to maximize the contribution.
Motivations of modeling
Electricity generation planning
• Unit commitment problem: develop an hourly schedule spanning a day or a week to decide which generators will be producing and at what levels.
• Respecting constraints of
• satisfaction of estimated hourly or half-hourly demand,
• the capacity of the active generators (even if there is a sudden peak of demand),
• rate of change of the output of a generator is not excessive, etc.
• Generators have minimum on- and off-times.
• The start-up costs are a nonlinear function of the time they have been idle.
Motivations of modeling
Telecommunications
• The explosion of demand imposes the need to increase capacity so as to satisfy predicted demand for data/voice transmission.
• Given requirements between different centers, existing capacity, costs of installing new
antennas (available in discrete amounts).
• The objective is to minimize cost taking into account possible failures of lines and centers
due to breakdown or accident.



Motivations of modeling
Buses for the handicapped (Dial-a-Ride)
• A service is available in which handicapped subscribers can call several hours beforehand with a request to be taken from A to B at a certain time.
• Special facilities, such as space for a wheelchair, can be ordered.
• The problem consists of scheduling a fleet of specialized mini-buses so as to satisfy a maximum number of requests in the short term.
• One long-term problem is to decide the optimal size of the fleet.


Linear Programming
• An LP model is composed of:
• decision variables;
• an objective function, to be minimized or maximized;
• and a set of constraints.
• The objective function and the constraints are linear functions.
• An LP formulation is written in the standard form if:
• the objective function is to be maximized;
• all constraints are less-than-or-equal inequalities (Ax ≤ b);
• all decision variables are nonnegative.


Linear Programming
• Suppose that we have a Linear Program (LP):
  max {cx : Ax ≤ b, x ≥ 0}, where
• A is an m by n matrix,
• c an n-dimensional row vector,
• b an m-dimensional column vector,
• x an n-dimensional column vector of variables.

  max  Σ_{i=1..n} c_i x_i
  s.t. Σ_{i=1..n} a_{i,j} x_i ≤ b_j,  ∀ j ∈ {1, …, m}
       x_i ≥ 0,  ∀ i ∈ {1, …, n}


Linear Programming
• An LP model generally has infinitely many feasible solutions.
• We are interested in obtaining the best solution of all, i.e., the solution that optimizes (maximizes or minimizes) the given objective function.
• To optimize an LP model, several methods exist:
• Graphical method (only for very small problems)
• Simplex method
• Column generation
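The graphical method exploits the fact that an LP optimum, when one exists, is attained at a vertex of the feasible region. A minimal Python sketch of that idea for a two-variable model; the data are those of the production-mix example that appears later in the course:

```python
from itertools import combinations

# Production-mix LP from the slides: max 4xw + 6xa subject to
# 1.5xw + 4xa <= 24 (Cut), 3xw + 1.5xa <= 21 (Assembly),
# xw + xa <= 8 (Finishing), xw, xa >= 0.
# Each constraint is stored as (a, b, r), meaning a*x + b*y <= r;
# nonnegativity bounds are written the same way.
constraints = [
    (1.5, 4.0, 24.0),    # Cut
    (3.0, 1.5, 21.0),    # Assembly
    (1.0, 1.0, 8.0),     # Finishing
    (-1.0, 0.0, 0.0),    # xw >= 0
    (0.0, -1.0, 0.0),    # xa >= 0
]

def intersect(c1, c2):
    """Point where both constraints hold with equality (Cramer's rule)."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:               # parallel lines: no unique vertex
        return None
    return (r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det

def feasible(p, eps=1e-9):
    return all(a * p[0] + b * p[1] <= r + eps for a, b, r in constraints)

# For two variables it suffices to test every candidate vertex, i.e.,
# every feasible pairwise intersection of the constraint lines.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 4.0 * p[0] + 6.0 * p[1])
value = 4.0 * best[0] + 6.0 * best[1]
print(best, value)   # approximately (3.2, 4.8), objective 41.6
```

This is exactly what the graphical method does by hand: draw the region, then compare the objective over its corner points.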



Linear Programming
Example: Diet problem
• Imagine that you went to a nutritionist, and he prescribed you daily minimum
amounts of n nutrients to keep you in good health.
• Then you went straight to the market looking for products that contain these
nutrients.
• You notice that these nutrients are found in m different foods, in amounts q_{i,a} (nutrient i in food a), and each food a has a market price p_a.

• The big question is: which items should I buy in the supermarket to keep myself
minimally nourished, spending as little as possible?



Linear Programming
Example: Diet problem
Nutritional information table (qi,a), in mg
Vitamin Milk (l) Beef (kg) Fish (kg) Salad (100g) Qi (Min./day)
A 2 2 5 6 11
C 35 50 10 30 70
D 80 70 60 40 250

Cost (pa) 2,00 3,00 1,50 1,75



Linear Programming
Example: Diet problem

Variable(s)
• x_a: quantity of food a to be purchased.

  min  Σ_{a=1..m} p_a x_a
  s.t. Σ_{a=1..m} q_{i,a} x_a ≥ Q_i,  ∀ i ∈ {1, …, n}
       x_a ≥ 0,  ∀ a ∈ {1, …, m}

For the instance above (foods: milk l, beef b, fish f, salad s; vitamins A, C, D):

  min (p_l x_l + p_b x_b + p_f x_f + p_s x_s)
  s.t. q_{A,l} x_l + q_{A,b} x_b + q_{A,f} x_f + q_{A,s} x_s ≥ Q_A
       q_{C,l} x_l + q_{C,b} x_b + q_{C,f} x_f + q_{C,s} x_s ≥ Q_C
       q_{D,l} x_l + q_{D,b} x_b + q_{D,f} x_f + q_{D,s} x_s ≥ Q_D
       x_l, x_b, x_f, x_s ≥ 0


Linear Programming
Example: Diet problem (Excel solver)

Nutritional information table (q_{i,a}), in mg

Vitamin     Milk (l)  Beef (kg)  Fish (kg)  Salad (100g)  LHS      Min./day
A           2         2          5          6             14,73    11
C           35        50         10         30            70,00    70
D           80        70         60         40            250,00   250

Cost (p_a)  2,00      3,00       1,50       1,75

x_a         1,31      0,00       2,42       0,00          Total cost: 6,25
Is it a good solution?
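One quick sanity check is to recompute the cost and the vitamin totals directly from the table. A minimal sketch; the x values below are the rounded ones displayed by the solver, so the totals match the table only up to rounding:

```python
# Foods: milk (l), beef (kg), fish (kg), salad (100 g); vitamins A, C, D.
q = {                                # q[i] = mg of vitamin i per unit of food
    "A": [2, 2, 5, 6],
    "C": [35, 50, 10, 30],
    "D": [80, 70, 60, 40],
}
Q = {"A": 11, "C": 70, "D": 250}     # daily minimum per vitamin (mg)
p = [2.00, 3.00, 1.50, 1.75]         # price per unit of each food
x = [1.31, 0.00, 2.42, 0.00]         # rounded solver solution from the table

cost = sum(pa * xa for pa, xa in zip(p, x))
lhs = {i: sum(qa * xa for qa, xa in zip(q[i], x)) for i in q}
print(cost)   # total cost, about 6.25 as reported
print(lhs)    # each vitamin minimum is reached (up to rounding)
```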
Linear Programming
Example: Production mix
• A company manufactures 2 types of doors: wood and aluminum.
• Each door undergoes 3 operations: cutting, assembly and finishing.
• The time spent on each of these operations for each type of door is known.
• Determine the daily production of each type of door to maximize the company's profit, respecting the daily availability of the machines that perform each operation.
              Cut (h/door)  Assembly (h/door)  Finishing (h/door)  Profit/door
Wood          1,5           3,0                1,0                 € 4,00
Aluminum      4,0           1,5                1,0                 € 6,00
Availability  24h           21h                8h



Linear Programming
Example: Production mix

Variable(s)
• x_i: quantity of doors of type i to be manufactured.

Instance
• n: number of types of doors.
• m: number of operations.
• p_i: profit per door of type i.
• t_{i,o}: time of operation o for door i.
• T_o: availability of time for operation o.

  max  Σ_{i=1..n} p_i x_i
  s.t. Σ_{i=1..n} t_{i,o} x_i ≤ T_o,  ∀ o ∈ {1, …, m}
       x_i ≥ 0,  ∀ i ∈ {1, …, n}

For the two door types (w: wood, a: aluminum):

  max (p_w x_w + p_a x_a)
  s.t. t_{w,C} x_w + t_{a,C} x_a ≤ T_C   (Cut)
       t_{w,A} x_w + t_{a,A} x_a ≤ T_A   (Assembly)
       t_{w,F} x_w + t_{a,F} x_a ≤ T_F   (Finishing)
       x_w, x_a ≥ 0


Linear Programming
Example: Production mix (Excel solver)

              Cut (h/door)  Assembly (h/door)  Finishing (h/door)  Profit/door  x_i
Wood          1,5           3,0                1,0                 € 4,00       3,2
Aluminum      4,0           1,5                1,0                 € 6,00       4,8
Availability  24h           21h                8h
Use           24h           16,8h              8h                  Total profit: € 41,60

Is it a good solution?
Integer Linear Programming
• If some variables must be integer, we have a Mixed Integer Linear Program (MILP):

  max  cx + hy
  s.t. Ax + Gy ≤ b
       x ≥ 0,  y ∈ ℤ+, where
• G is an m by p matrix,
• h a p-dimensional row vector,
• y a p-dimensional column vector of integer variables.

  max  Σ_{i=1..n} c_i x_i + Σ_{k=1..p} h_k y_k
  s.t. Σ_{i=1..n} a_{i,j} x_i + Σ_{k=1..p} g_{j,k} y_k ≤ b_j,  ∀ j ∈ {1, …, m}
       x_i ≥ 0,  ∀ i ∈ {1, …, n}
       y_k ∈ ℤ+,  ∀ k ∈ {1, …, p}


Integer Linear Programming
• If all variables are integer, we have an Integer Linear Program (ILP):

  max  cx
  s.t. Ax ≤ b
       x ∈ ℤ+

• If all variables are restricted to 0-1 values, we have a 0-1 or Binary Integer Linear Program (BILP):

  max  cx
  s.t. Ax ≤ b
       x ∈ {0,1}

• Many problems involve the occurrence or not of an event.
• These situations are modeled by using binary variables.


Binary Integer Linear Program
There are several practical situations in which the occurrence of an event implies the occurrence of another event. For example:

If-then implications
• The production of an item (shipping goods, deciding to take a taxi, using a warehouse, etc.) implies a fixed cost.
• The fixed cost is activated only if at least one item is produced:

  K = sy + cx
  x ≤ My
  x ∈ ℤ+,  y ∈ {0,1}

• x is the amount produced;
• s represents the fixed cost;
• c is the unit cost;
• M is a big number.
• y = 0 implies x = 0; y = 1 permits x > 0.

(Figure: the total cost K jumps to s at x = 0 and then grows linearly with slope c.)
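The linking constraint can be checked mechanically. A small sketch; the fixed cost s = 50 and unit cost c = 3 are illustrative values, not from the slides:

```python
M = 10**6    # "big-M": any constant safely above the largest possible x

def respects_linking(x, y):
    """Does (x, y) satisfy x <= M*y with x in Z+, y in {0, 1}?"""
    return x >= 0 and y in (0, 1) and x <= M * y

def total_cost(x, y, s=50, c=3):
    """Fixed-charge cost K = s*y + c*x (s and c are illustrative)."""
    return s * y + c * x

# y = 0 forces x = 0: producing anything requires "paying" y = 1.
print(respects_linking(10, 0))   # False: x > 0 with y = 0 is cut off
print(respects_linking(10, 1), total_cost(10, 1))
```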


Binary Integer Linear Program
If-then implications
• Consider the case that if product 1 is manufactured, then at least m units of product 2 must also be manufactured.

  x1 ≤ My2
  x2 ≥ my2
  x ∈ ℤ+,  y ∈ {0,1}

• x1 > 0 implies y2 = 1; y2 = 1 implies x2 ≥ m.
• x is the amount produced; M is a big number.


Binary Integer Linear Program
Constraints enabled or disabled
• We can use binary variables to enable or disable problem constraints.
• Given an inequality in the form: f(x1, x2, …, xn) ≤ 0
• We can define a binary variable y:
  y = 1 implies the inequality is activated;
  y = 0 implies the inequality is disabled.
• The constraint can be expressed as:

  f(x1, x2, …, xn) ≤ M(1 − y)

• Note that if y = 0, f(x1, x2, …, xn) can take any value up to its limit M.


Binary Integer Linear Program
Disjunctive constraints
• Sometimes we only want to apply one constraint from a set of constraints.
• Example: I want a car that does 20 km/liter, or that reaches 100 km/h in 9 s.
• We define a binary variable y and rewrite the pair of constraints

  (C1) f(x1, x2, …, xn) ≤ 0;  or
  (C2) g(x1, x2, …, xn) ≤ 0

  as:

  f(x1, x2, …, xn) ≤ M(1 − y)
  g(x1, x2, …, xn) ≤ My

• If y = 1, only C1 is active; if y = 0, only C2 is active.


Binary Integer Linear Program
Representation of discrete values
• Consider a problem where a variable x can only take values from the discrete set {4, 6, 8, 12, 20, 24}.
• To represent this situation:
• We define the binary variables y_i, i = 1, …, 6.
• We set the constraints:

  x = 4y1 + 6y2 + 8y3 + 12y4 + 20y5 + 24y6
  y1 + y2 + y3 + y4 + y5 + y6 = 1
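These two constraints can be verified exhaustively: enumerating every 0-1 vector y with exactly one component set to 1 reproduces exactly the allowed set of values.

```python
from itertools import product

allowed = [4, 6, 8, 12, 20, 24]

# Enumerate every binary vector y; the constraint sum(y) == 1 keeps
# exactly one y_i active, and x = sum(allowed[i] * y[i]) is the value
# that the active y_i selects.
reachable = {
    sum(a * yi for a, yi in zip(allowed, y))
    for y in product((0, 1), repeat=len(allowed))
    if sum(y) == 1
}
print(sorted(reachable))   # [4, 6, 8, 12, 20, 24]
```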



Mathematical modeling
• What decisions should be taken?
• What constraints should the decisions to be made satisfy?
• What is the goal?
• What data is available?

The modeling process:
1. Define the problem and collect data: define the objective, identify the constraints, select and process the data.
2. Formulate the mathematical model.
3. Implement and test the model; improve it as needed.
4. Implement its use in the company.


Basic definitions
Hard Constraints: conditions that have to be satisfied. Ex.:
• Employee rostering problem: no employee can be allocated to two different
shifts at the same time.
• Knapsack Problem: The total weight of chosen items cannot exceed the
knapsack capacity.
• Vehicle Routing Problem: The vehicle cannot travel more than a certain
distance without refueling.
• etc.



Basic definitions
Soft Constraints: conditions that we would like to satisfy but which are not absolutely essential. Ex.:
• Employee rostering problem: employee preferences about which shifts they would like to work.
• Vehicle Routing Problem: route duration must respect labor laws, otherwise a penalty must be paid.
• Classrooms assignment problem.
• Outsourcing decisions.
• etc.
If one solution meets a soft condition better than another (e.g., more employees have their working preferences met), it is of higher quality.



Classical problems
1. Knapsack problem
2. Bin Packing Problem
3. Assignment Problem
4. Traveling Salesman Problem



Knapsack problem
• Given a set of items, each with a weight and a value.
• Determine which items to include in a collection (knapsack) so that the
total weight is less than or equal to a given limit (knapsack capacity)
and the total value is as large as possible.

Instance
• Number of items: 𝑛
• Set of items: 𝑁 = 1, 2, … , 𝑛
• (Item) values: 𝑣 = [𝑣1 , 𝑣2 , … , 𝑣𝑛 ]
• (Item) weights: 𝑤 = [ 𝑤1 , 𝑤2 , … , 𝑤𝑛 ]
• (Knapsack) capacity: 𝐶



Knapsack problem

Decision variable(s)
  x_i = 1, if item i is included in the knapsack; 0, otherwise.

Mathematical formulation

  (KP) max  Σ_{i=1..n} v_i x_i
       s.t. Σ_{i=1..n} w_i x_i ≤ C
            x_i ∈ {0,1},  ∀ i ∈ N
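For tiny instances, (KP) can be solved by exhaustive search: enumerate all 2^n subsets and keep the best one that fits. The item values and weights below are an illustrative instance, not from the slides:

```python
from itertools import combinations

v = [10, 13, 7, 8]    # item values (illustrative instance)
w = [5, 6, 4, 3]      # item weights
C = 10                # knapsack capacity
n = len(v)

best_value, best_subset = 0, ()
# Exhaustive search over all 2^n subsets: only viable for small n,
# but it makes the structure of (KP) explicit.
for k in range(n + 1):
    for subset in combinations(range(n), k):
        if sum(w[i] for i in subset) <= C:     # capacity constraint
            value = sum(v[i] for i in subset)  # objective function
            if value > best_value:
                best_value, best_subset = value, subset
print(best_value, best_subset)   # 21 (1, 3)
```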



Knapsack problem
Other variants
1. Multidimensional Knapsack Problem (multiple resources)

2. Multiple Knapsack Problem (more than one knapsack available)

3. Fractional Knapsack Problem (continuous variables)



Multidimensional Knapsack Problem
• The knapsack and the items have multiple (D) dimensions (e.g., length, width, thickness, volume, time, etc.).
• w_{i,d}: the weight associated with dimension d of item i.
• C_d: the capacity of the knapsack regarding dimension d.

Instance
• Number of items: n
• Set of items: N = {1, 2, …, n}
• Set of dimensions: D
• (Item) values: v = [v_1, v_2, …, v_n]
• (Item) weights: w_{i,d}, for each item i and dimension d
• (Knapsack) capacities: C_d, for each dimension d

  (MDKP) max  Σ_{i=1..n} v_i x_i
         s.t. Σ_{i=1..n} w_{i,d} x_i ≤ C_d,  ∀ d ∈ D
              x_i ∈ {0,1},  ∀ i ∈ N


Multiple Knapsack Problem
• m knapsacks of (not necessarily) distinct capacities are available.
• C_j: capacity of knapsack j.

Instance
• Number of items: n
• Set of items: N = {1, 2, …, n}
• (Item) values: v = [v_1, v_2, …, v_n]
• (Item) weights: w = [w_1, w_2, …, w_n]
• Number of knapsacks: m
• Set of knapsacks: M = {1, …, m}
• (Knapsack) capacities: C_j, for each knapsack j

Decision variable(s)
  x_{i,j} = 1, if item i is included in knapsack j; 0, otherwise.

  (MKP) max  Σ_{i=1..n} Σ_{j=1..m} v_i x_{i,j}
        s.t. Σ_{i=1..n} w_i x_{i,j} ≤ C_j,  ∀ j ∈ M
             Σ_{j=1..m} x_{i,j} ≤ 1,  ∀ i ∈ N
             x_{i,j} ∈ {0,1},  ∀ i ∈ N, j ∈ M

(The second set of constraints ensures each item is placed in at most one knapsack.)
Bin Packing Problem
• In the bin packing problem, items of different volumes must be packed into a finite number of bins or containers of capacity C in a way that minimizes the number of bins used.

Instance
• Number of items: n
• Set of items: N = {1, 2, …, n}
• Set of available bins: M = {1, 2, …, n}
• Item weights: w = [w_1, w_2, …, w_n]
• Bin capacity: C


Bin Packing Problem
Decision variable(s)
  y_j = 1, if bin j is used; 0, otherwise.
  x_{i,j} = 1, if item i is packed in bin j; 0, otherwise.


Bin Packing Problem
Mathematical formulation

  (BPP) min  Σ_{j=1..n} y_j
        s.t. Σ_{j=1..n} x_{i,j} = 1,  ∀ i ∈ N
             Σ_{i=1..n} w_i x_{i,j} ≤ C y_j,  ∀ j ∈ M
             y_j ∈ {0,1},  ∀ j ∈ M
             x_{i,j} ∈ {0,1},  ∀ i ∈ N, j ∈ M
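A classical constructive heuristic for the BPP (anticipating part 2 of the course) is first-fit decreasing. A minimal sketch with illustrative item weights:

```python
def first_fit_decreasing(weights, capacity):
    """Constructive heuristic for the BPP: place each item, largest
    first, into the first open bin where it fits; open a new bin
    when no existing bin can take the item."""
    bins = []                                  # each bin = list of weights
    for item in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:      # capacity constraint
                b.append(item)
                break
        else:                                  # no existing bin fits
            bins.append([item])
    return bins

bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], 10)
print(len(bins), bins)   # 2 [[8, 2], [4, 4, 1, 1]]
```

The heuristic is fast but offers no optimality guarantee in general; here it happens to reach the minimum of 2 bins.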
Bin Packing Problem
Other variants
1. {2D, 3D} BPP (with rotation);

2. BPP with load balancing and stability constraints;

3. BPP with fragile objects;

4. BPP with conflicts, etc.



Assignment Problem
• It consists of finding, in a weighted bipartite graph, a matching in
which the sum of weights of the edges is as large (or small) as
possible. Ex.:
• Assignment of activities to employees;
• Assignment of operators to machines;
• Assignment of drivers to vehicles;
• Assignment of production centers to distribution centers;
• Etc.


Assignment Problem
• We are given n tasks and n agents.
• Each task is executed by exactly one agent.
• Each agent executes exactly one task.
• The assignment/execution of task j to agent i costs 𝑐𝑖,𝑗
• Objective: Assign each task to exactly one agent and minimize the total
assignment cost.



Assignment Problem
Decision variable(s)
  x_{i,j} = 1, if task j is assigned to agent i; 0, otherwise.

Mathematical formulation

  (AP) min  Σ_{i=1..n} Σ_{j=1..n} c_{i,j} x_{i,j}
       s.t. Σ_{i=1..n} x_{i,j} = 1,  ∀ j ∈ {1, …, n}
            Σ_{j=1..n} x_{i,j} = 1,  ∀ i ∈ {1, …, n}
            x_{i,j} ∈ {0,1},  ∀ i, j ∈ {1, …, n}
Assignment Problem
• The linear relaxation of the AP has an integral optimal solution: its optimal value equals that of the integer problem.
• The AP can be solved in polynomial time, O(n⁴), using the well-known Hungarian algorithm (Kuhn, H.W., 1955).
• Hence, the AP is a computationally easy problem!
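Before reaching for the Hungarian algorithm, the structure of (AP) can be seen with a brute-force sketch: every feasible solution is a permutation of the tasks. The cost matrix below is illustrative:

```python
from itertools import permutations

# c[i][j]: cost of assigning task j to agent i (illustrative numbers).
c = [[4, 2, 8],
     [4, 3, 7],
     [3, 1, 6]]
n = len(c)

# Every feasible solution of (AP) is a permutation: perm[i] is the task
# executed by agent i. Brute force over all n! permutations.
best_cost, best_perm = min(
    (sum(c[i][perm[i]] for i in range(n)), perm)
    for perm in permutations(range(n))
)
print(best_cost, best_perm)   # minimum total cost 12
```

The n! enumeration is only for intuition; the Hungarian algorithm reaches the same optimum in polynomial time.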



Generalized Assignment Problem
• We are given m agents and n tasks, with m < n.
• Each task is executed by exactly one agent.
• Each agent may execute more than one task.
• Each agent i is associated with a resource capacity 𝑏𝑖 .
• The assignment/execution of task j to agent i costs 𝑐𝑖,𝑗 and requires 𝑎𝑖,𝑗 of the
resource of agent i.
• Objective: Assign each task to one agent and minimize the total assignment
cost without exceeding the resource capacity of each agent.



Generalized Assignment Problem
Decision variable(s)
  x_{i,j} = 1, if task j is assigned to agent i; 0, otherwise.

Mathematical formulation

  (GAP) min  Σ_{i=1..m} Σ_{j=1..n} c_{i,j} x_{i,j}
        s.t. Σ_{i=1..m} x_{i,j} = 1,  ∀ j ∈ {1, …, n}
             Σ_{j=1..n} a_{i,j} x_{i,j} ≤ b_i,  ∀ i ∈ {1, …, m}
             x_{i,j} ∈ {0,1},  ∀ i ∈ {1, …, m}, j ∈ {1, …, n}
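Unlike (AP), the capacity constraints make the GAP hard, but a tiny instance can still be enumerated. The data below (2 agents, 3 tasks) are illustrative:

```python
from itertools import product

# Illustrative GAP instance: m = 2 agents, n = 3 tasks.
c = [[2, 3, 4],    # c[i][j]: cost of task j on agent i
     [5, 2, 1]]
a = [[3, 3, 3],    # a[i][j]: resource task j consumes on agent i
     [2, 2, 2]]
b = [5, 4]         # resource capacity of each agent
m, n = 2, 3

best = None
# assign[j] = agent chosen for task j; m^n candidate assignments.
for assign in product(range(m), repeat=n):
    load = [sum(a[i][j] for j in range(n) if assign[j] == i)
            for i in range(m)]
    if all(load[i] <= b[i] for i in range(m)):     # capacity constraints
        cost = sum(c[assign[j]][j] for j in range(n))
        if best is None or cost < best[0]:
            best = (cost, assign)
print(best)   # (5, (0, 1, 1)): task 0 on agent 0, tasks 1 and 2 on agent 1
```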


Traveling Salesman Problem
• We are given n cities and their locations (geographical coordinates).
• The travel from city i to city j is associated with a cost 𝑐𝑖,𝑗 (e.g., distance,
duration, etc.).
• Objective: Visit all cities exactly once, return to the departure city, and
minimize the total traveling cost.

Applications:
• School bus service;
• Production Scheduling;
• 3D printing;
• Circuit boards;
• Geographic region monitoring, etc.



Traveling Salesman Problem
Instance
• Number of cities: 𝑛
• Set of cities: 𝑁 = 1, 2, … , 𝑛
• Travel cost from city i to city j: 𝑐𝑖,𝑗

Decision variable(s)
  x_{i,j} = 1, if the salesman travels from city i to city j; 0, otherwise.



Traveling Salesman Problem
Mathematical formulation 1 (Cut-Set Constraints)

  (TSP−CSC) min  Σ_{i∈N} Σ_{j∈N} c_{i,j} x_{i,j}                        (1)
            s.t. Σ_{i∈N: i≠j} x_{i,j} = 1,  ∀ j ∈ N                     (2)
                 Σ_{j∈N: j≠i} x_{i,j} = 1,  ∀ i ∈ N                     (3)
                 Σ_{i∈S} Σ_{j∉S} x_{i,j} ≥ 1,  ∀ S ⊂ N, S ≠ ∅           (4)
                 x_{i,j} ∈ {0,1},  ∀ i, j ∈ N                           (5)
Traveling Salesman Problem
Mathematical formulation 2 (Subtour Elimination Constraints)

  (TSP−SEC) min  Σ_{i∈N} Σ_{j∈N} c_{i,j} x_{i,j}                        (1)
            s.t. Σ_{i∈N: i≠j} x_{i,j} = 1,  ∀ j ∈ N                     (2)
                 Σ_{j∈N: j≠i} x_{i,j} = 1,  ∀ i ∈ N                     (3)
                 Σ_{i∈S} Σ_{j∈S} x_{i,j} ≤ |S| − 1,  ∀ S ⊂ N, 2 ≤ |S| ≤ n − 1   (6)
                 x_{i,j} ∈ {0,1},  ∀ i, j ∈ N                           (5)
Traveling Salesman Problem
Mathematical formulation 3 (Miller-Tucker-Zemlin (MTZ))
• The sets of constraints (4) and (6) grow exponentially with n.
• Alternatively, they can be replaced with (7), (8) and (9).

  (TSP−MTZ) min  Σ_{i∈N} Σ_{j∈N} c_{i,j} x_{i,j}                        (1)
            s.t. Σ_{i∈N: i≠j} x_{i,j} = 1,  ∀ j ∈ N                     (2)
                 Σ_{j∈N: j≠i} x_{i,j} = 1,  ∀ i ∈ N                     (3)
                 u_1 = 1                                                (7)
                 u_i − u_j + n x_{i,j} ≤ n − 1,  ∀ i, j ∈ N\{1}, i ≠ j  (8)
                 2 ≤ u_i ≤ n,  ∀ i ∈ N\{1}                              (9)
                 x_{i,j} ∈ {0,1},  ∀ i, j ∈ N                           (5)

(Constraint (8) is written so that traveling from i to j forces u_j ≥ u_i + 1, which rules out subtours not containing city 1.)
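Any of the three formulations describes the same set of tours, which for very small n can simply be enumerated: fix the departure city and try every order of the remaining ones, (n − 1)! tours in total. The symmetric cost matrix below is an illustrative instance:

```python
from itertools import permutations

# Symmetric travel costs between 4 cities (illustrative instance).
c = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
n = len(c)

# Fix city 0 as the departure city and try every order of the others:
# (n - 1)! tours, so this only works for very small n.
best_cost, best_tour = min(
    (sum(c[i][j] for i, j in zip((0,) + perm, perm + (0,))), (0,) + perm)
    for perm in permutations(range(1, n))
)
print(best_cost, best_tour)   # optimal tour cost 80
```

The factorial growth of this enumeration is precisely why the polyhedral formulations above, combined with methods such as Branch-and-Bound, matter.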



Basic definitions
Feasible solution: a solution that satisfies the hard constraints.
Infeasible solution: a solution that does not satisfy at least one hard constraint.

Evaluation Function: a mathematical expression that computes the cost of a solution. Ex.:
• Summation of the costs associated with each solution component.
• Summation of the penalty values for the soft constraints.
• Summation of the penalty values for the soft and hard constraints, where penalty values for the hard constraints are very high.
• Dynamic penalties: as the search progresses, the hard-constraint penalty values are gradually raised so that the search eventually only explores the feasible regions of the search space.
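The penalty-based variants can be sketched in a few lines. The weights below are illustrative; the only structural requirement is that the hard-constraint penalty dwarfs any ordinary cost:

```python
# Sketch of a penalty-based evaluation function (weights illustrative).
W_SOFT = 10        # penalty per violated soft constraint
W_HARD = 10**6     # much larger, so any infeasible solution scores worse

def evaluate(component_costs, soft_violations, hard_violations):
    """Cost of a solution: its own cost plus weighted penalty terms."""
    return (sum(component_costs)
            + W_SOFT * soft_violations
            + W_HARD * hard_violations)

print(evaluate([60, 40], soft_violations=2, hard_violations=0))   # 120
# A single hard violation dominates any amount of ordinary cost:
print(evaluate([60, 40], 0, 1) > evaluate([500, 400], 5, 0))      # True
```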



Basic definitions
Deterministic Search: a search method or algorithm which always returns the
same answer, given exactly the same input and starting conditions.

Exhaustive Search: process that enumerates every possible solution and then
returns the optimal (best) one.

Optimization: the process of attempting to find the best possible solution


amongst all those available.
• The task of optimization is to model your problem in terms of some evaluation
function and then employ a search algorithm to minimize (or maximize) the objective
function.
• However, many practical problems are so large that it is impossible to guarantee that the solution obtained is optimal.



Basic definitions
Complexity: refers to the study of how difficult search and optimization
problems are to solve.
• P: Problems solvable in polynomial time over the size of the instance.
• NP: Problems whose solutions can be checked in polynomial time over the size of the
instance and the solution.
• NP-Hard: A problem such that if there is a polynomial time algorithm to solve it, then
we can convert every problem in NP to this problem and solve every NP problem in
polynomial time.
• NP-Complete: A problem that is NP-hard and belongs to NP.



Basic definitions
Order (Big O Notation):
• Suppose we have two functions f(x) and g(x) where x is a variable.
• We say that g(x) is of the order of f(x) written g(x) = O(f(x)) if, for some constant value
K, g(x) ≤ K f(x) for all values of x which are greater than K.
• This notation is often used when discussing the time complexity of search algorithms.
• In a certain sense, f(x) bounds g(x) once the values of x get beyond the value of K.



Solution methods
• An algorithm is a recipe that shows step by step the procedures
necessary to solve a task.

Exact methods
• Guarantee the optimality of the solution (i.e., a solution attaining the maximum or minimum of the objective function).
• Usually require a high computational effort.



Solution methods
• An algorithm is a recipe that shows step by step the procedures
necessary to solve a task.

Heuristics
• Motivated by the need to develop an approach to obtain high-quality solutions, but
optimality cannot be guaranteed.
• It is a method which seeks good (i.e., near-optimal) solutions at a reasonable
computation cost without being able to guarantee optimality, and possibly not
feasibility.
• Unfortunately, it may not even be possible to state how close to optimality a
particular heuristic solution is.



Exact methods
Example: Production mix

Variable(s)
• x_i: quantity of doors of type i to be manufactured.

Instance
• n: number of types of doors.
• m: number of operations.
• p_i: profit per door of type i.
• t_{i,o}: time of operation o for door i.
• T_o: availability of time for operation o.

  max (4.0x_w + 6.0x_a)
  s.t. 1.5x_w + 4.0x_a ≤ 24   (Cut)
       3.0x_w + 1.5x_a ≤ 21   (Assembly)
       1.0x_w + 1.0x_a ≤ 8    (Finishing)
       x_w, x_a ∈ ℤ+




Exact methods
Example: Production mix
(Figure: the feasible region of the relaxed model, bounded by the three machine constraints.)
• The feasible solutions to the problem are all the integer points contained in this region of feasible solutions.


Exact methods
Example: Production mix
• Points with fractional values are not feasible, nor are points outside the region.


Exact methods
Example: Production mix
• The optimal solution (3.2; 4.8) without the integrality constraints is not even feasible!


Exact methods The optimal solution (3.2; 4.8)
without the integrality
Example: Production mix constraints is not even feasible!

𝑚𝑎𝑥(4.0𝑥𝑤 + 6.0𝑥𝑎 )
s.t.: 1.5𝑥𝑤 + 4.0𝑥𝑎 ≤ 24 (Cut)
3.0𝑥𝑤 + 1.5𝑥𝑎 ≤ 21 (Assembly)
1.0𝑥𝑤 + 1.0𝑥𝑎 ≤ 8 (Finishing)
𝑥𝑤 , 𝑥𝑎 ∈ ℤ+

This is the solution to the linear relaxation of the
problem, and we can use it to guide the solution of
the integer problem.



Exact methods
Linear relaxation: remove integrality constraints from an integer
programming problem.
• Its solution provides a dual bound for the optimal solution to the integer
programming problem.
• A feasible solution to the integer programming problem provides a primal
bound to the optimal solution.



Exact methods
In a maximization problem:
• Dual bound: upper bound for solving the problem.
• Primal bound: lower bound for solving the problem.

In a minimization problem:
• Dual bound: lower bound for solving the problem.
• Primal bound: upper bound for solving the problem.

Linear relaxation: dual bound


Feasible solution: primal bound



Exact methods
Continuous optimization
1. Simplex
2. Lagrangian relaxation
3. Column generation
4. Etc.

Discrete optimization
1. Branch-and-Bound
2. Branch-and-Cut
3. Branch-and-Price
4. Etc.



Exact methods: continuous optimization
Simplex method
• The simplex method was invented by G.B. Dantzig for solving LP problems that
arose in U.S. Air Force planning.
• The simplex method is based on the idea of iterative improvements.
• The process starts with a feasible solution and then looks for a new solution,
which is better in the sense that it has a larger objective function value.
• We continue this process until we arrive at a solution that cannot be
improved upon.
• This final solution is then an optimal solution of the LP.



Exact methods: continuous optimization
Simplex method: main idea
The main steps of the simplex method are:
1. Transform the LP problem into the standard form.
2. Transform the “≤” constraints into equalities by adding slack variables.
• Example: 𝑥1 + 5𝑥2 + 𝑥3 ≤ 8 becomes: 𝑥1 + 5𝑥2 + 𝑥3 + 𝑠1 = 8.
3. Obtain an initial solution.
• Set all the original variables to zero (it usually works), obtain the values for the slack
variables and for the objective function.
4. We ask whether this solution can be improved and search for an improved
solution.
• Look for a variable to “enter” the basis and improve the objective function.
• Keep improving until no improvements are possible.
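The geometric fact that simplex exploits, that some vertex of the feasible region is optimal, can be checked directly on the door model. The sketch below enumerates all intersections of pairs of constraint lines and keeps the best feasible one (an illustrative shortcut that only works for two variables; simplex itself pivots between adjacent vertices instead of enumerating them all):

```python
# Constraints a*x + b*y <= r, including the nonnegativity bounds.
cons = [(1.5, 4.0, 24.0),   # cut
        (3.0, 1.5, 21.0),   # assembly
        (1.0, 1.0, 8.0),    # finishing
        (-1.0, 0.0, 0.0),   # x >= 0
        (0.0, -1.0, 0.0)]   # y >= 0

best = None
for i in range(len(cons)):
    for j in range(i + 1, len(cons)):
        a1, b1, r1 = cons[i]
        a2, b2, r2 = cons[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:
            continue  # parallel lines: no vertex
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + 1e-7 for a, b, r in cons):
            z = 4.0 * x + 6.0 * y   # objective value at this vertex
            if best is None or z > best[0]:
                best = (z, x, y)

print(best)
```

The best vertex is (3.2; 4.8) with value 41.6, the linear-relaxation optimum used in the branch-and-bound example later.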



Exact methods: continuous optimization
Column generation
• The simplex method is very effective for solving LP models in a finite number of
iterations.
• However, when the LP has a huge (exponential) number of variables the
number of iterations may be prohibitive.
• For these cases, Column Generation (CG) algorithms can be effective.
• The main idea of CG algorithms is to decompose a problem into subproblems
that are easier to solve.
• Solve LPs without enumerating all variables.



Exact methods: continuous optimization
Column generation: main steps
1. Start with a small, manageable part of a LP problem (specifically, a small
subset of the original variables).
2. Solve this restricted LP problem to optimality (using the simplex method).
3. Analyze the solution to discover one or more attractive variables (based on
dual information) to add to the model.
4. Re-solve the enlarged model.
5. Repeat steps 3 and 4 until a satisfactory solution to the whole problem is
achieved.



Exact methods: continuous optimization
Lagrangian relaxation
• Lagrangian relaxation is a technique used to obtain good bounds for Integer
Programming models.
• Relies on the idea of relaxing complicating constraints.
• Suppose we are given an 𝐼𝑃 in the form:
𝑧 = 𝑚𝑎𝑥 𝑐 𝑥
(IP) s. 𝑡. 𝐴𝑥 ≤ 𝑏
𝑥 ∈ 𝑋 ⊆ ℤ𝑛

• If the problem is too difficult to solve directly, one possibility is to drop the
complicating constraints 𝐴𝑥 ≤ 𝑏 and add them to the objective function as a penalty.
• This is the main idea of the Lagrangian relaxation.



Exact methods: continuous optimization
Lagrangian relaxation
• The complicating constraints 𝐴𝑥 ≤ 𝑏 are added to the objective function
multiplied by the Lagrange multipliers 𝑢 = (𝑢1 , … , 𝑢𝑚 ), i.e., a penalty term.
• For any values of 𝑢 = (𝑢1 , … , 𝑢𝑚 ), we define the problem:
𝑧 𝑢 = 𝑚𝑎𝑥 𝑐 𝑥 + 𝑢(𝑏 − 𝐴𝑥)
(IP(𝑢))
s. 𝑡. 𝑥 ∈ 𝑋 ⊆ ℤ𝑛
• Problem 𝐼𝑃(𝑢) is a relaxation of problem 𝐼𝑃 for all 𝑢 ≥ 0, because:
• The feasible region of 𝐼𝑃(𝑢) is larger than the feasible region of 𝐼𝑃.
• This holds because {𝑥: 𝐴𝑥 ≤ 𝑏, 𝑥 ∈ 𝑋} ⊆ 𝑋.
• The objective value in 𝐼𝑃(𝑢) is at least as great as in 𝐼𝑃 for every solution that is
feasible in 𝐼𝑃.
• Since 𝑢 ≥ 0 and 𝑏 − 𝐴𝑥 ≥ 0 for every such 𝑥, 𝑐𝑥 + 𝑢(𝑏 − 𝐴𝑥) ≥ 𝑐𝑥.



Exact methods: continuous optimization
Lagrangian relaxation
• As 𝐼𝑃(𝑢) is a relaxation of 𝐼𝑃, 𝑧(𝑢) ≥ 𝑧 and we obtain an upper bound (UB) on
the optimal value of 𝐼𝑃.
• To find the best (smallest) UB over the infinitely many values of 𝑢 ≥ 0, we need
to solve the Lagrangian Dual Problem:
𝑤𝐿𝐷 = 𝑚𝑖𝑛 𝑧(𝑢)
(LD)
s. 𝑡. 𝑢 ≥ 0

• Solving the Lagrangian relaxation 𝐼𝑃(𝑢) may sometimes lead to an optimal
solution of the original problem 𝐼𝑃.
• Depending on the application, 𝐼𝑃(𝑢) may decompose into smaller subproblems
that are easier to solve (for example, by inspection) than the original 𝐼𝑃.
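To make the mechanics concrete, here is a tiny made-up 0-1 instance: max 10𝑥1 + 7𝑥2 s.t. 5𝑥1 + 4𝑥2 ≤ 6, 𝑥 ∈ {0,1}², whose integer optimum is 𝑧∗ = 10. Dualizing the single constraint makes 𝐼𝑃(𝑢) separable by variable, and a coarse grid search over 𝑢 (used here only for illustration; subgradient methods are the usual choice) approximates 𝑤𝐿𝐷:

```python
def z_u(u):
    # IP(u): max 10*x1 + 7*x2 + u*(6 - 5*x1 - 4*x2) over x in {0,1}^2.
    # The problem separates: set x_i = 1 only if its reduced profit is positive.
    return 6 * u + max(0.0, 10 - 5 * u) + max(0.0, 7 - 4 * u)

# Lagrangian dual: minimize z(u) over u >= 0 (grid search for illustration).
w_ld = min(z_u(k / 400) for k in range(0, 1201))
print(w_ld)  # best (smallest) upper bound found
```

The smallest bound found is 11.75, at 𝑢 = 1.75: a valid upper bound on 𝑧∗ = 10, though not tight here.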

Katyanne Farias - Algorithmique d'aide à la décision 78


Solving methods for
discrete optimization problems



Discrete optimization problems
• Taking the Traveling Salesman Problem (TSP) as an example:
• Data: 𝑛 cities and the distance matrix between them.
• Starting from one city (the origin), visit the other 𝑛 − 1 cities (each exactly once) and
return to the origin city.
• Objective: Travel the shortest distance possible.

• Considering the symmetric version, i.e., 𝑑𝑖,𝑗 = 𝑑𝑗,𝑖 , ∀𝑖, 𝑗 ∈ 𝑁, 𝑖 ≠ 𝑗, where 𝑁 is
the set of 𝑛 cities, i.e., 𝑁 = {1, 2, … , 𝑛}:

The number of possible solutions is (𝑛 − 1)!/2



Discrete optimization problems
𝑵 #Solutions Computational time
3 1.0 (units) 1 × 10^-8 seconds
4 3.0 (units) 3 × 10^-8 seconds
5 12.0 (dozens) 12 × 10^-8 seconds
6 60.0 (dozens) 60 × 10^-8 seconds
7 360.0 (hundreds) 36 × 10^-7 seconds
8 2 520.0 (thousands) 2.5 × 10^-5 seconds
9 20 160.0 (thousands) 2 × 10^-4 seconds
10 181 440.0 (thousands) 1.8 × 10^-3 seconds
15 43 589 145 600.0 (billions) 7.3 minutes
20 60 822 550 204 416 000.0 (quadrillions) ≈ 19 years
25 310 224 200 866 620 000 000 000.0 (sextillions) 98 million years
27 201 645 730 563 303 000 000 000 000.0 (septillions) 64 billion years
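The entries of this table follow directly from the (𝑛 − 1)!/2 count; a quick sanity check (the 10^-8 seconds per solution rate is the table's assumption):

```python
import math

def tsp_tours(n):
    """Number of distinct tours of a symmetric TSP with n cities."""
    return math.factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(n, tsp_tours(n))

# At 1e-8 seconds per solution, n = 20 already requires
# tsp_tours(20) * 1e-8 seconds, i.e. roughly 19 years.
years = tsp_tours(20) * 1e-8 / (365 * 24 * 3600)
print(years)
```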



"Intelligent" enumeration methods
• Using the information obtained from the solutions already evaluated, they
avoid evaluating certain solutions.
• Examples: algorithms based on techniques
1. Branch-and-Bound,
2. Branch-and-Cut,
3. Branch-and-Price,
4. Branch-Cut-and-Price.
• They allow solving problem instances of higher dimensions.
However, given the combinatorial nature of the problem, in the
worst case all solutions must still be analyzed.



Course content
1. Mathematical modeling
i. Motivations of modeling
ii. Linear Programming
iii. Integer Linear Programming: classic problems
iv. Solution methods
2. Solving methods for discrete optimization problems
I. Branch-and-Bound
II. Constructive Heuristics
III. Improvement Heuristics
IV. Metaheuristics
3. Cplex solver and heuristic implementation
I. Some examples
II. Mini-project
Branch-and-Bound
• The branch-and-bound method is based on solving the linear relaxation of a
problem.
• As the method runs, the primal and dual bounds are updated.
• When the primal and dual bounds are equal, we find an optimal solution
and we can finish the procedure.



Branch-and-Bound
Example: Production mix
Variable(s)
• 𝑥𝑖 : quantity of doors of type i to be manufactured. 𝑚𝑎𝑥(4.0𝑥𝑤 + 6.0𝑥𝑎 )
s.t.: 1.5𝑥𝑤 + 4.0𝑥𝑎 ≤ 24 (Cut)
Instance 3.0𝑥𝑤 + 1.5𝑥𝑎 ≤ 21 (Assembly)
• 𝑛: number of door types. 1.0𝑥𝑤 + 1.0𝑥𝑎 ≤ 8 (Finishing)
• 𝑚: number of operations. 𝑥𝑤 , 𝑥𝑎 ∈ ℤ+
• 𝑝𝑖 : sell price of door of type i.
• 𝑡𝑖,𝑜 : time of operation o for door i.
• 𝑇𝑜 : availability of time for operation o.



Branch-and-Bound
Example: Production mix
The optimal solution of the linear
relaxation: 𝑥ҧ = (3.2; 4.8), 𝑧ҧ = 41.6

𝑚𝑎𝑥(4.0𝑥𝑤 + 6.0𝑥𝑎 )
s.t.: 1.5𝑥𝑤 + 4.0𝑥𝑎 ≤ 24 (Cut)
3.0𝑥𝑤 + 1.5𝑥𝑎 ≤ 21 (Assembly)
1.0𝑥𝑤 + 1.0𝑥𝑎 ≤ 8 (Finishing)
𝑥𝑤 , 𝑥𝑎 ∈ ℤ+

This value 𝒛ത is a dual bound for the optimal solution.


• Maximization problem: the dual bound is an
upper bound.
• That is, the optimal solution has value 𝑧 ∗ ≤
𝑧ҧ = 41.6
Branch-and-Bound
• If the linear relaxation solution is fractional, we must create two new
subproblems through branching constraints.
• Then, choose a variable 𝑥𝑗 with a fractional value in the linear relaxation
solution and add:
• 𝑥𝑗 ≤ ⌊𝑥ҧ𝑗 ⌋ in one subproblem;
• 𝑥𝑗 ≥ ⌈𝑥ҧ𝑗 ⌉ in the other subproblem.
• In the example, we choose 𝑥𝑤 (the quantity of wooden doors).

In the example, we add constraints:


• 𝑥𝑤 ≤ 3
• 𝑥𝑤 ≥ 4



Branch-and-Bound
When should we not create new sub-problems?
1. When the relaxed problem is infeasible.
2. When the solution of the relaxed problem is integer.
3. When the value of any feasible solution of the relaxed problem is worse than the
value of the current best feasible solution.
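Putting the bounding and branching rules together, the door example can be solved end to end. In the sketch below each node's linear relaxation is solved by naive vertex enumeration (an illustrative shortcut that only works for this two-variable model; a real solver would use the simplex method), and nodes are pruned by the three rules above:

```python
from math import floor, ceil, inf, isclose

def lp_max(cons, c=(4.0, 6.0)):
    """Maximize c.x over {(x, y) : a*x + b*y <= r for each (a, b, r) in cons}
    by enumerating vertices; returns (z, x, y), or None if infeasible."""
    best = None
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            a1, b1, r1 = cons[i]
            a2, b2, r2 = cons[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-9:
                continue  # parallel lines: no vertex
            x = (r1 * b2 - r2 * b1) / det
            y = (a1 * r2 - a2 * r1) / det
            if all(a * x + b * y <= r + 1e-7 for a, b, r in cons):
                z = c[0] * x + c[1] * y
                if best is None or z > best[0]:
                    best = (z, x, y)
    return best

def branch_and_bound(cons):
    best_z, best_sol = -inf, None
    stack = [cons]
    while stack:
        node = stack.pop()
        sol = lp_max(node)
        if sol is None:                 # rule 1: relaxation infeasible
            continue
        z, x, y = sol
        if z <= best_z + 1e-9:          # rule 3: cannot beat the incumbent
            continue
        if all(isclose(v, round(v), abs_tol=1e-6) for v in (x, y)):
            best_z, best_sol = z, (round(x), round(y))  # rule 2: integer
            continue
        # branch on the first fractional variable
        k, v = (0, x) if not isclose(x, round(x), abs_tol=1e-6) else (1, y)
        a = (1.0, 0.0) if k == 0 else (0.0, 1.0)
        stack.append(node + [(a[0], a[1], float(floor(v)))])     # x_k <= floor(v)
        stack.append(node + [(-a[0], -a[1], -float(ceil(v)))])   # x_k >= ceil(v)
    return best_z, best_sol

doors = [(1.5, 4.0, 24.0), (3.0, 1.5, 21.0), (1.0, 1.0, 8.0),
         (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
print(branch_and_bound(doors))
```

The search reproduces the tree developed on the following slides and returns the optimum (4, 4) with value 40.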



Branch-and-Bound
Example: Production mix

𝑧𝐷 = +∞ P0


𝑧𝑃 = −∞

Since we do not have any information


about the solution to the original problem,
we define:
• Primal (lower) bound: −∞.
• Dual (upper) bound: +∞.



Branch-and-Bound
Example: Production mix

𝑧𝐷 = +∞ P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6

Solving P0 by Simplex we find:


• 𝑥ҧ𝑤 = 3.2 and 𝑥ҧ𝑎 = 4.8
• 𝑧0ҧ = 41.6



Branch-and-Bound
Example: Production mix

𝑧𝐷 = +∞ P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6

The value of the relaxed solution


𝑧0ҧ = 41.6 is a new (upper) dual
bound for the optimal solution of
the original problem.



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.6 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6

As the linear relaxation solution is


fractional (at least one variable has
a fractional value), we must create
two new subproblems (P1 and P2).



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.6 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6

P1 P2

We must select a variable to branch.


We select 𝑥𝑤 .



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.6 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

We must select a variable to branch.


We select 𝑥𝑤 .



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.6 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

We must choose a variable to
branch on. We choose 𝑥𝑤 .



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.6 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

P1 and P2 were solved, we can update the


dual (upper) bound by doing:
• 𝑧𝐷 = max 𝑧1ҧ , 𝑧2ҧ = max 𝟒𝟏. 𝟐𝟓; 40.0



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.25 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = −∞ 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

Since we found an integer solution in P2, P1 and P2 were solved, we can update the
we can update the primal (lower) bound: dual (upper) bound by doing:
𝑧𝑃 = 𝑧2ҧ = 40.0 • 𝑧𝐷 = max 𝑧1ҧ , 𝑧2ҧ = max 𝟒𝟏. 𝟐𝟓; 40.0



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.25 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

Since we found an integer solution in P2, P1 and P2 were solved, we can update the
we can update the primal (lower) bound: dual (upper) bound by doing:
𝑧𝑃 = 𝑧2ҧ = 40.0 • 𝑧𝐷 = max 𝑧1ҧ , 𝑧2ҧ = max 𝟒𝟏. 𝟐𝟓; 40.0



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.25 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

Now, we know that the optimal solution 𝑧 ∗


is: 𝟒𝟎. 𝟎𝟎 ≤ 𝒛∗ ≤ 𝟒𝟏. 𝟐𝟓



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.25 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 P4



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 41.25 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 2,667 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 36.0 𝑧4ҧ = 40.667

P3 and P4 were solved, we can update the


dual (upper) bound:
• 𝑧𝐷 = max 𝑧3ҧ , 𝑧4ҧ = max 36.0; 40.667



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 40.667 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 2,667 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 36.0 𝑧4ҧ = 40.667

Although in P3 we found an integer solution, P3 and P4 were solved, we can update the
we do not update the primal (lower) bound: dual (upper) bound:
𝑧3ҧ = 36.0 ≤ 𝑧𝑃 = 40.0 • 𝑧𝐷 = max 𝑧3ҧ , 𝑧4ҧ = max 36.0; 40.667



Branch-and-Bound
Example: Production mix

𝑧𝐷 = 40.667 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 2,667 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 36.0 𝑧4ҧ = 40.667

𝑥𝑤 ≤ 2 𝑥𝑤 ≥ 3
P5 𝑥ҧ𝑤 = 2.0 𝑥ҧ𝑎 = 5.25 P6
𝑧5ҧ = 39.5 Infeasible
Branch-and-Bound Optimal solution: P2
• 4 doors of each must be produced
Example: Production mix generating a profit of $40.

𝑧𝐷 = 40.667 P0 𝑥ҧ𝑤 = 3.2 𝑥ҧ𝑎 = 4.8


𝑧𝑃 = 40.00 𝑧0ҧ = 41.6
𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P1 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.875 P2 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.0
𝑧1ҧ = 41.25 𝑧2ҧ = 40.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 2,667 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 36.0 𝑧4ҧ = 40.667

𝑥𝑤 ≤ 2 𝑥𝑤 ≥ 3
P5 𝑥ҧ𝑤 = 2.0 𝑥ҧ𝑎 = 5.25 P6
𝑧5ҧ = 39.5 Infeasible
Branch-and-Bound
Example: Problem with a minimization objective function

Let 𝒛ത be a dual bound for the optimal integer solution.


• Maximization problem: the dual bound is an
upper bound.
• That is, the optimal solution has value 𝑧 ∗ ≤ 𝑧ҧ

Let 𝒛ത be a dual bound for the optimal integer solution.


• Minimization problem: the dual bound is a
lower bound.
• That is, the optimal solution has value 𝑧 ∗ ≥ 𝑧ҧ



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = −∞ P0


𝑧𝑃 = +∞

Since we do not have any information


about the solution to the original problem,
we define:
• Primal (upper) bound: +∞.
• Dual (lower) bound: −∞.



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = −∞ P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5

Solving P0 by Simplex we find:


• 𝑥ҧ𝑤 = 4.1 and 𝑥ҧ𝑎 = 3.8
• 𝑧0ҧ = 56.5



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = −∞ P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5

The value of the relaxed solution


𝑧0ҧ = 56.5 is a new (lower) dual
bound for the optimal solution of
the original problem.



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5

As the linear relaxation solution is


fractional (at least one variable has
a fractional value), we must create
two new subproblems (P1 and P2).



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5

P1 P2

We must select a variable to branch.


We select 𝑥𝑤 .



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 P2

We must select a variable to branch.


We select 𝑥𝑤 .



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

P1 and P2 were solved, we can update the


dual (lower) bound by doing:
• 𝑧𝐷 = 𝑚𝑖𝑛 𝑧1ҧ , 𝑧2ҧ = 𝑚𝑖𝑛 𝟓𝟕. 𝟑; 61.0



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = +∞ 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

Since we found an integer solution in P2, P1 and P2 were solved, we can update the
we can update the primal (upper) bound: dual (lower) bound by doing:
𝑧𝑃 = 𝑧2ҧ = 61.0 • 𝑧𝐷 = 𝑚𝑖𝑛 𝑧1ҧ , 𝑧2ҧ = 𝑚𝑖𝑛 𝟓𝟕. 𝟑; 61.0



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

Since we found an integer solution in P2, P1 and P2 were solved, we can update the
we can update the primal (upper) bound: dual (lower) bound by doing:
𝑧𝑃 = 𝑧2ҧ = 61.0 • 𝑧𝐷 = 𝑚𝑖𝑛 𝑧1ҧ , 𝑧2ҧ = 𝑚𝑖𝑛 𝟓𝟕. 𝟑; 61.0



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

Now, we know that the optimal solution 𝑧 ∗
satisfies: 56.5 ≤ 𝒛∗ ≤ 61.0



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 P4



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 56.5 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

P3 and P4 were solved, we can update the


dual (lower) bound:
• 𝑧𝐷 = 𝑚𝑖𝑛 𝑧3ҧ , 𝑧4ҧ = 𝑚𝑖𝑛 59.6; 60.25



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

We did not find a new integer solution, so we P3 and P4 were solved, we can update the
do not update the primal (upper) bound dual (lower) bound:
Now, we know that: 59.6 ≤ 𝒛∗ ≤ 61.0 • 𝑧𝐷 = 𝑚𝑖𝑛 𝑧3ҧ , 𝑧4ҧ = 𝑚𝑖𝑛 59.6; 60.25



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

Also, note that the optimal solution of 𝑃4


has 𝑧4ҧ = 60.25



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

Also, note that the optimal solution of 𝑃4 This fact indicates that if we create two
has 𝑧4ҧ = 60.25 new subproblems from 𝑃4, the best integer
solution we can find will have 𝑧 ≥ ⌈𝑧4ҧ ⌉
Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

Also, note that the optimal solution of 𝑃4 This fact indicates that if we create two
has 𝑧4ҧ = 60.25 new subproblems from 𝑃4, the best integer
solution we can find will have 𝑧 ≥ ⌈60.25⌉
Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

Also, note that the optimal solution of 𝑃4 This fact indicates that if we create two
has 𝑧4ҧ = 60.25 new subproblems from 𝑃4, the best integer
solution we can find will have 𝑧 ≥ 61
Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

So, we cannot improve our current best integer solution (𝑧𝑃 = 61) by
opening new nodes from P4.



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

So, we cannot improve our current best integer solution (𝑧𝑃 = 61) by
opening new nodes from P4.
• Thus, we mark P4 as a closed node.
Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P5 P6



Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4 Since we found an integer solution in P5,


we can update the primal (upper) bound:
P5 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.0 P6 But, since 𝑧5ҧ > 𝑧𝑃 , the value is not updated
𝑧5ҧ = 62.0 Infeasible
Optimal solution: P2
Branch-and-Bound
Example: Problem with a minimization objective function

𝑧𝐷 = 59.6 P0 𝑥ҧ𝑤 = 4.1 𝑥ҧ𝑎 = 3.8


𝑧𝑃 = 61.0 𝑧0ҧ = 56.5
𝑥𝑤 ≤ 4 𝑥𝑤 ≥5
P1 𝑥ҧ𝑤 = 4.0 𝑥ҧ𝑎 = 4.175 P2 𝑥ҧ𝑤 = 5.0 𝑥ҧ𝑎 = 3.0
𝑧1ҧ = 57.3 𝑧2ҧ = 61.0

𝑥𝑎 ≤ 4 𝑥𝑎 ≥ 5
P3 𝑥ҧ𝑤 = 3.75 𝑥ҧ𝑎 = 4.0 P4 𝑥ҧ𝑤 = 3.5 𝑥ҧ𝑎 = 5.0
𝑧3ҧ = 59.6 𝑧4ҧ = 60.25

𝑥𝑤 ≤ 3 𝑥𝑤 ≥ 4
P5 𝑥ҧ𝑤 = 3.0 𝑥ҧ𝑎 = 4.0 P6
𝑧5ҧ = 62.0 Infeasible
Branch-and-Bound
Branching strategies
• Given a set of nodes not yet explored, how to choose the next node to
explore?

Depth-first search
• Choose the active node that is deepest in the tree.
• The node that was generated last.
• Tends to find a first feasible solution faster.

Breadth-first search
• Choose the active node that is closest to the root
of the tree.
• The oldest unexplored node.
Branch-and-Bound
Branching strategies
• Given a set of nodes not yet explored, how to choose the next node to
explore?

Best-node first
• Choose the active node with the best bound.
• To minimize the total number of nodes evaluated.
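The strategies differ only in the data structure holding the unexplored nodes: a stack gives depth-first, a queue gives breadth-first, and a priority queue keyed by dual bound gives best-node first. A minimal sketch on a small hypothetical tree of node labels:

```python
from collections import deque

def visit_order(children, root, strategy):
    """Return the order in which nodes are explored."""
    frontier = deque([root])
    order = []
    while frontier:
        # stack for depth-first, queue for breadth-first
        node = frontier.pop() if strategy == "dfs" else frontier.popleft()
        order.append(node)
        frontier.extend(children.get(node, []))
    return order

tree = {"P0": ["P1", "P2"], "P1": ["P3", "P4"]}
print(visit_order(tree, "P0", "dfs"))  # dives into the node generated last
print(visit_order(tree, "P0", "bfs"))  # explores the oldest node first
```

For best-node first, replace the deque with a heap (e.g., Python's heapq) ordered by each node's bound.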



Branch-and-Bound
Selection of variables to branch
• Having selected the node to explore, which variable do we branch on,
among all the variables whose values are not integer?
• Several strategies have been proposed in the literature.
• However, their performance proves to be highly dependent on the specific
problem to which they are applied.



Branch-and-Bound
• Imagine you have a huge number of variables.
• Applying the branch-and-bound method on paper is not a good idea...

Cplex does it for you!



"Intelligent" enumeration methods
• Using the information obtained from the solutions already evaluated, they
avoid evaluating certain solutions.
• Examples: algorithms based on techniques
1. Branch-and-Bound,
2. Branch-and-Cut,
3. Branch-and-Price,
4. Branch-Cut-and-Price.
• They allow solving problem instances of higher dimensions.



Branch-and-Cut
• Solving method for integer programs (IP).
• The execution time of a branch-and-bound method can be long when there is a very large
number of variables or constraints.
• Successfully solving difficult IPs requires a combination of strategies.
• Branch-and-cut (B&C) methods are a combination of two main strategies:
• Branch-and-bound (B&B)
• Cutting planes methods
• A B&C algorithm is a B&B in which cutting planes are generated throughout
the B&B tree.
• At each node of the B&B tree much more work is done by finding and
adding cuts, in order to obtain tighter bounds.
• The idea is to significantly reduce the number of nodes in the tree.



Branch-and-Price
• Solving method for integer programs (IP).
• Usually applied to solve large IPs, i.e., IPs with a large number of variables.
• Branch-and-price (B&P) methods are a combination of two main strategies:
• Branch-and-bound (B&B)
• Column generation (CG)
• As already discussed, CG is applied to solve large LP.



Branch-and-Price
• If we are interested in solving an IP using a B&B, the first step of the
algorithm is to solve the linear relaxation.
• To do so, we may use a CG algorithm, and two situations may occur:
1. The LP optimal solution is integer.
2. The LP optimal solution is fractional.
• If (1): we are done, the IP is solved to optimality.
• If (2): branching is needed, as in the B&B.

• Thus, a B&P algorithm is a B&B algorithm in which each node of the tree is
solved by using a CG algorithm.



Solution methods
• For problems of a combinatorial nature, the use of exact methods (i.e.,
which guarantee the optimality of the generated solutions) becomes
quite restricted.
• The global optimum for this class of problems can only be found after
considerable computational effort.
• On the other hand, in practice, it is enough to find a “good” solution to
the problem.
• Challenge: to produce solutions as close as possible to the optimal
ones, in a short time.
Heuristics



Heuristics
Challenge: to produce solutions as close as possible to the optimal ones, in a
short time.
• Can be classified into:
• Constructive heuristics;
• Improvement heuristics.

Concepts
• Local Optimum: is a point in the search space where all neighboring solutions are
worse than the current solution.
• Global Optimum: is a point in the search space where all other points in the search
space are worse than (or equal to) the current one.



Heuristics
Constructive Heuristics: the process of building an initial solution element by
element.
• Ex.: University examination timetabling
• One way to generate a solution is to start with an empty timetable and gradually
schedule examinations until they are all timetabled (e.g., starting from the
examinations which are more difficult to schedule).
• Constructive heuristics are usually thought of as being fast because they often
represent a single-pass approach.
• Greedy algorithms
• Strategy: always make the choice that is best at the moment.
• They are not generally guaranteed to find globally optimal solutions.

Procedure Construction_heuristic( ):
𝑠 ≔ ∅
while 𝑠 is not a complete solution do
Choose a solution component 𝑐
Add the solution component 𝑐 to 𝑠
End Construction_heuristic



Heuristics
Improvement (Local Search) Heuristics: a heuristic mechanism where we
consider neighbors of the current solution as potential replacements. If we
accept a new solution from this neighborhood, then we move to that solution
and then consider its neighbors.
• Ex: Hill Climbing, Simulated annealing, Tabu Search, Variable Neighborhood Search,
etc.
• In contrast to the constructive heuristics, it moves from one solution to another.

Procedure LocalSearch_heuristic(𝑠):
while 𝑠 is not locally optimal do
Find 𝑠 ′ ∈ 𝑁(𝑠) with 𝑓 𝑠 ′ < 𝑓(𝑠);
𝑠 ← 𝑠′;
End LocalSearch_heuristic
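A classic instance of this scheme for the TSP is 2-opt local search: the neighborhood of a tour contains every tour obtained by reversing one segment, and we keep moving while some reversal shortens the tour. The distance matrix below is made up for illustration:

```python
def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def two_opt(tour, dist):
    """Local search: keep reversing segments while the tour gets shorter."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                # cost change if the segment tour[i..j] is reversed
                delta = (dist[tour[i - 1]][tour[j]] + dist[tour[i]][tour[j + 1]]
                         - dist[tour[i - 1]][tour[i]] - dist[tour[j]][tour[j + 1]])
                if delta < 0:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tour = two_opt([0, 2, 1, 3, 0], dist)
print(tour, tour_cost(tour, dist))
```

Starting from the poor tour 0-2-1-3-0 (cost 29), the search stops at the local (here also global) optimum 0-1-3-2-0 with cost 18.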



Heuristics
Metaheuristics: an iterative generation process which guides a subordinate
heuristic.
• It refers to a master strategy that guides and modifies other heuristics to
produce solutions beyond those that are normally generated in a quest for
local optimality.
• Evolutionary Methods
• Subset of the metaheuristic approaches which are typified by the fact that they
maintain a population of candidate solutions and that these solutions compete
for survival.
• Such approaches are inspired by evolution in nature.

Matheuristics: methods that hybridize metaheuristics with mathematical


programming techniques.
Course content
1. Mathematical modeling
i. Motivations of modeling
ii. Linear Programming
iii. Integer Linear Programming: classic problems
iv. Solution methods
2. Solving methods for discrete optimization problems
I. Branch-and-Bound
II. Constructive Heuristics
III. Improvement Heuristics
IV. Metaheuristics
3. Cplex solver and heuristic implementation
I. Some examples
II. Mini-project
Constructive Heuristics
Example: TSP

Nearest Neighbor Heuristic


• Start from the origin city.
• Add at each step the unvisited city whose distance to the last visited city is
the shortest possible.

Cheapest Insertion Heuristic


• We start from an initial subroute involving three cities.
• At each step, add a city 𝑘 not yet visited between cities 𝑖 and 𝑗 of the
subroute whose insertion cost 𝑠𝑖𝑗𝑘 = 𝑑𝑖𝑘 + 𝑑𝑘𝑗 − 𝑑𝑖𝑗 is the smallest possible.
Constructive Heuristics
Nearest Neighbor Heuristic

[Figure: step-by-step example of a tour built by the Nearest Neighbor heuristic.]


Constructive Heuristics
Nearest Neighbor Heuristic

Require: Set of nodes 𝑁 = {1, 2, …, 𝑛}, and cost matrix 𝑐 of size 𝑁 × 𝑁
Ensure: A Hamiltonian Cycle (𝑣𝑒𝑐𝑠𝑜𝑙)
1: 𝑣𝑒𝑐𝑠𝑜𝑙 ← ∅
2: Select an arbitrary node 𝑘 ∈ 𝑁
3: 𝑣𝑒𝑐𝑠𝑜𝑙.push_back(𝑘); 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 ← 0
4: Set 𝑙 = 𝑘 and 𝑊 = 𝑁\{𝑘}
5: while 𝑊 ≠ ∅ do
6:   𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡 = ∞
7:   for 𝑖 ∈ 𝑊 do
8:     if 𝑐𝑙𝑖 < 𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡 then
9:       𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡 = 𝑐𝑙𝑖
10:      𝑏𝑒𝑠𝑡𝑖 = 𝑖
11:  𝑣𝑒𝑐𝑠𝑜𝑙.push_back(𝑏𝑒𝑠𝑡𝑖); 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 ← 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 + 𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡
12:  𝑙 ← 𝑏𝑒𝑠𝑡𝑖; 𝑊 ← 𝑊\{𝑏𝑒𝑠𝑡𝑖}
13: 𝑣𝑒𝑐𝑠𝑜𝑙.push_back(𝑘); 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 ← 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 + 𝑐𝑙𝑘
14: return 𝑣𝑒𝑐𝑠𝑜𝑙
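The algorithm above can be sketched in Python as follows (a minimal illustration; the function name and the cost-matrix layout are assumptions, not part of the course material):

```python
def nearest_neighbor(c, start=0):
    """Nearest Neighbor heuristic for the TSP: from the last visited
    node, always move to the cheapest unvisited node (c: cost matrix)."""
    n = len(c)
    tour, cost = [start], 0
    unvisited = set(range(n)) - {start}
    last = start
    while unvisited:
        # greedy choice: the unvisited node closest to the last visited one
        nxt = min(unvisited, key=lambda j: c[last][j])
        cost += c[last][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
        last = nxt
    cost += c[last][start]  # close the Hamiltonian cycle
    tour.append(start)
    return tour, cost
```

As the slides note, this is a single-pass greedy procedure: fast, but with no guarantee of global optimality.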



Constructive Heuristics
Cheapest Insertion Heuristic

[Figure: initial sub-tour through cities i, j, o with travel costs 𝑑𝑖𝑗, 𝑑𝑗𝑜, 𝑑𝑖𝑜.]


Constructive Heuristics
Cheapest Insertion Heuristic

[Figure: candidate insertion of city k between i and j — edge (i, j) is replaced by edges (i, k) and (k, j), with costs 𝑑𝑘𝑖 and 𝑑𝑘𝑗.]


Constructive Heuristics
Cheapest Insertion Heuristic

Require: Set of nodes 𝑁 = {1, 2, …, 𝑛}, and cost matrix 𝑐 of size 𝑁 × 𝑁
Ensure: A Hamiltonian Cycle (𝑣𝑒𝑐𝑠𝑜𝑙)
1: 𝑣𝑒𝑐𝑠𝑜𝑙 ← ∅
2: Select two arbitrary nodes 𝑖, 𝑗 ∈ 𝑁
3: 𝑣𝑒𝑐𝑠𝑜𝑙.push_back(𝑖); 𝑣𝑒𝑐𝑠𝑜𝑙.push_back(𝑗); 𝑣𝑒𝑐𝑠𝑜𝑙.push_back(𝑖)
4: 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 ← 𝑐𝑖𝑗 + 𝑐𝑗𝑖
5: 𝑊 = 𝑁\{𝑖, 𝑗}
6: while 𝑊 ≠ ∅ do
7:   𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡 = ∞
8:   for 𝑘 ∈ 𝑊 do
9:     for 𝑝𝑜𝑠 = 0, …, 𝑣𝑒𝑐𝑠𝑜𝑙.size() − 2 do
10:      𝑐𝑜𝑠𝑡 ← 𝑐[𝑣𝑒𝑐𝑠𝑜𝑙[𝑝𝑜𝑠], 𝑘] + 𝑐[𝑘, 𝑣𝑒𝑐𝑠𝑜𝑙[𝑝𝑜𝑠+1]] − 𝑐[𝑣𝑒𝑐𝑠𝑜𝑙[𝑝𝑜𝑠], 𝑣𝑒𝑐𝑠𝑜𝑙[𝑝𝑜𝑠+1]]
11:      if 𝑐𝑜𝑠𝑡 < 𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡 then
12:        𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡 = 𝑐𝑜𝑠𝑡
13:        𝑏𝑒𝑠𝑡𝑘 = 𝑘
14:        𝑏𝑒𝑠𝑡𝑝 = 𝑝𝑜𝑠
15:  𝑣𝑒𝑐𝑠𝑜𝑙.insert(𝑣𝑒𝑐𝑠𝑜𝑙.begin() + 𝑏𝑒𝑠𝑡𝑝 + 1, 𝑏𝑒𝑠𝑡𝑘)
16:  𝑠𝑜𝑙𝑐𝑜𝑠𝑡 ← 𝑠𝑜𝑙𝑐𝑜𝑠𝑡 + 𝑏𝑒𝑠𝑡𝑐𝑜𝑠𝑡; 𝑊 ← 𝑊\{𝑏𝑒𝑠𝑡𝑘}
17: return 𝑣𝑒𝑐𝑠𝑜𝑙
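A minimal Python sketch of the same idea (function name and data layout are illustrative assumptions):

```python
def cheapest_insertion(c, i=0, j=1):
    """Cheapest Insertion heuristic for the TSP: grow a sub-tour by
    repeatedly inserting the unrouted city k with the smallest
    insertion cost c[a][k] + c[k][b] - c[a][b] over all tour edges (a, b)."""
    n = len(c)
    tour = [i, j, i]                 # initial sub-tour i -> j -> i
    cost = c[i][j] + c[j][i]
    unrouted = set(range(n)) - {i, j}
    while unrouted:
        best = None                  # (delta, city, insertion position)
        for k in unrouted:
            for p in range(len(tour) - 1):       # edge (tour[p], tour[p+1])
                a, b = tour[p], tour[p + 1]
                delta = c[a][k] + c[k][b] - c[a][b]
                if best is None or delta < best[0]:
                    best = (delta, k, p + 1)
        delta, k, pos = best
        tour.insert(pos, k)          # insert k between the edge endpoints
        cost += delta
        unrouted.remove(k)
    return tour, cost
```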
Constructive Heuristics: VRP
Vehicle Routing Problem
• A generic verbal definition of the family of vehicle routing problems can be the
following:
• Given: A set of transportation requests and a fleet of vehicles.
• The problem is then to find a plan for the following:
• Determine a set of vehicle routes to perform all (or some) transportation requests with
the given vehicle fleet at minimum cost.
• In particular, decide which vehicle handles which
requests in which sequence so that all vehicle routes
can be feasibly executed.



Constructive Heuristics: VRP
Vehicle Routing Problem: variants
1. Heterogeneous Fleet VRP: Vehicles may have different capacities and costs;
2. VRP with Time Windows: Customer demands must be served within a given time
interval;
3. Multi-depot VRP: Vehicles may depart from multiple depots;
4. Periodic VRP: Customer demands must be satisfied in
multiple periods (e.g., two days per week);
5. Green VRP: It includes environmental aspects to the
problem;
6. Etc.



Constructive Heuristics: VRP
Capacitated Vehicle Routing Problem (CVRP)
• It is the most studied version of the VRP.
• The transportation requests consist of the distribution of goods from a single
depot, denoted as point 0, to a given set of 𝑛 customers, 𝑁 = {1, 2, … , 𝑛}.
• The demand 𝑞𝑖 ≥ 0 has to be delivered to customer 𝑖 ∈ 𝑁.
• The fleet 𝐾 = {1, 2, … , 𝐾 } is assumed to be homogeneous: all vehicles have
the same capacity 𝑄 > 0 and are operating at
identical costs.
• A vehicle that services a customer subset 𝑆 ⊆ 𝑁
starts at the depot, moves once to each of the
customers in 𝑆, and finally returns to the depot.
• A vehicle moving from 𝑖 to 𝑗 incurs the travel cost 𝑐𝑖𝑗 .



Constructive Heuristics: VRP
Vehicle Routing Problem: constructive heuristics
• Sequential Insertion Strategy
• Only a single route is considered for insertion at each iteration.
• Parallel Insertion Strategy
• All routes are considered while evaluating the least-cost insertion.
• Cluster First Route Second
• Clusters of customers are created in such a way that the total demand does not
exceed the vehicle capacity.
• Then, for each cluster a TSP is solved.
• Clusters can be created by solving a capacitated p-median problem: k-means
algorithm.



Constructive Heuristics: VRP
Vehicle Routing Problem: constructive heuristics
• Route First Cluster Second
• Relax vehicle capacity to build a single route (“giant tour”), e.g., by solving a TSP.
• Then, the “giant tour” is split into feasible trips (i.e., satisfying vehicle capacities).



Constructive Heuristics: BPP
Bin Packing Problem
• Items of different volumes must be packed into a finite number of bins or
containers each of volume 𝑉 in a way that minimizes the number of bins used.

• Constructive heuristics
1. Next-Fit (NF).
2. First-Fit (FF).
3. Best-Fit (BF).
4. Next-Fit Decreasing (NFD).
5. First-Fit Decreasing (FFD).
6. Best-Fit Decreasing (BFD).



Constructive Heuristics: BPP
Bin Packing Problem: Constructive heuristics
• Next-Fit (NF)
• The first item is assigned to bin 1.
• Items 2, … , 𝑛 are then considered by increasing indices:
 each item is assigned to the current bin, if it fits;
 otherwise, it is assigned to a new bin, which becomes the current one.
• The time complexity of the algorithm is clearly 𝑂(𝑛).
• The worst-case performance ratio of NF is 𝑟(𝑁𝐹) = 2.
 This means: for every BPP instance 𝑙, the NF solution uses at most twice as many
bins as the optimal one, i.e., 𝑁𝐹(𝑙)/𝑧(𝑙) ≤ 2.
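The rule above can be sketched in Python (function name and bin representation are illustrative):

```python
def next_fit(items, V):
    """Next-Fit for the BPP: keep a single open bin; pack the item
    there if it fits, otherwise close the bin and open a new one. O(n)."""
    bins = [[]]      # list of bins, each a list of item sizes
    free = V         # residual capacity of the current (last) bin
    for w in items:
        if w <= free:
            bins[-1].append(w)
            free -= w
        else:
            bins.append([w])   # the new bin becomes the current one
            free = V - w
    return bins
```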



Constructive Heuristics: BPP
Bin Packing Problem: Constructive heuristics
• First-Fit (FF)
• It considers the items according to increasing indices and assigns each item to the
lowest indexed initialized bin into which it fits;
• Only when the current item cannot fit into any initialized bin, is a new bin
introduced.
• The time complexity of the algorithm is 𝑂(𝑛 log 𝑛).
• The worst-case performance ratio of FF is 𝑟(𝐹𝐹) = 17/10.



Constructive Heuristics: BPP
Bin Packing Problem: Constructive heuristics
• Best-Fit (BF)
• It assigns the current item to the feasible bin (if any) where it fits by leaving the
smallest residual capacity or into a new one if no open bin can accommodate it.
• It breaks ties in favor of the lowest indexed bin.
• The time complexity of the algorithm is 𝑂(𝑛 log 𝑛).
• The worst-case performance ratio of BF is 𝑟(𝐵𝐹) = 17/10.



Constructive Heuristics: BPP
Bin Packing Problem: Constructive heuristics
• {NF, FF, BF} Decreasing
• Assume now that the items are sorted so that 𝑤1 ≥ 𝑤2 ≥ ⋯ ≥ 𝑤𝑛 .
• Then NF or FF, or BF is applied.
• The time complexity of the algorithm is 𝑂(𝑛 log 𝑛).
• The worst-case performance ratio (WCPR) of the resulting algorithms FFD and
BFD is 𝑟(𝐹𝐹𝐷) = 𝑟(𝐵𝐹𝐷) = 3/2.
• No polynomial-time approximation algorithm for the BPP can have WCPR smaller
than 3/2, unless 𝑃 = 𝑁𝑃.



Course content
1. Mathematical modeling
i. Motivations of modeling
ii. Linear Programming
iii. Integer Linear Programming: classic problems
iv. Solution methods
2. Solving methods for discrete optimization problems
I. Branch-and-Bound
II. Constructive Heuristics
III. Improvement Heuristics
IV. Metaheuristics
3. Cplex solver and heuristic implementation
I. Some examples
II. Mini-project
Improvement Heuristics
• Also called local search techniques.
• They constitute a family of techniques based on the notion of
neighborhood.
• Neighborhood is a set of solutions that can be obtained from an initial solution
𝑠, by applying a neighborhood operator (or move).
• A neighborhood operator turns one solution 𝑠 into another 𝑠′.
Let 𝑆 be the set of all possible solutions.
𝑁 𝑠 ⊆ 𝑆 is the set of neighboring solutions (neighborhood) of 𝑠.
s ′ ∈ 𝑁 𝑠 is called neighbor of 𝑠.



Improvement Heuristics
• Start from any initial solution and walk, at each iteration, from neighbor to
neighbor.
• It can be seen as a procedure that traverses a path in a non-oriented graph
𝐺 = (𝑆, 𝐸), where
• 𝑆 is the set of solutions, and
• 𝐸 = { 𝑠, 𝑠 ′ : 𝑠 ∈ 𝑆, 𝑠 ′ ∈ 𝑁 𝑠 } is the set of edges.

Procedure LocalSearch_heuristic(𝑠):
while 𝑠 is not locally optimal do
Find 𝑠 ′ ∈ 𝑁(𝑠) with 𝑓 𝑠 ′ < 𝑓(𝑠);
𝑠 ← 𝑠′;
End LocalSearch_heuristic



Improvement Heuristics
Neighbor solutions
• It depends on the structure adopted to represent the solution.

Knapsack Problem
• Instance: 5 items
• Solution: 1 vector with 5 binary elements, e.g., [0 1 0 1 1]
• Choose items 2, 4, 5

TSP
• Instance: 5 cities
• Solution: 1 vector with 6 integer elements, e.g., [1 3 5 2 4 1]
• Visit order

Scheduling problem
• Instance: 2 machines, 9 jobs
• Solution: 2 integer vectors, e.g., [5 7 9 1 2] and [4 3 8 6]
• Sequence of jobs to be processed on each machine


Improvement Heuristics
Neighbor solutions
Example: Knapsack Problem (n items)
• Solution represented by a binary vector 𝑠 of 𝑛 elements.
• 𝑠[𝑖] = 0 indicates that item 𝑖 is not placed in the knapsack.
• 𝑠[𝑖] = 1 indicates that item 𝑖 is placed in the knapsack.
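With this representation, the natural neighborhood operator flips a single position (adding or removing one item); a minimal sketch:

```python
def flip_neighborhood(s):
    """All neighbors of a binary knapsack solution s obtained by
    flipping exactly one position (add or remove a single item)."""
    return [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))]
```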



Improvement Heuristics
Neighbor solutions
Example: Knapsack Problem (2 items)

[Figure: the four solutions 00, 01, 10, 11 arranged as a square; neighboring solutions differ in a single bit.]


Improvement Heuristics
Neighbor solutions
Example: Knapsack Problem (3 items)

[Figure: the eight solutions 000–111 arranged as a cube; neighboring solutions differ in a single bit.]


Improvement Heuristics
Neighbor solutions
Example: Knapsack Problem (4 items)

[Figure: the sixteen solutions 0000–1111 arranged as a 4-dimensional hypercube; neighboring solutions differ in a single bit.]


Improvement Heuristics
Neighborhood Operators
Example: TSP
• Reinsertion (or-opt): removes one element from the vector and (re)inserts it in a
different position.
• Exchange (swap): swaps the position of one element with another, and vice versa
(𝑝𝑎 ↔ 𝑝𝑏).
• 2-opt: removes two edges/arcs and reconnects the tour by adding two new
edges/arcs (i.e., it reverses a subsequence of visits).
• Others:
• Remove and reinsert 𝑘 consecutive elements.
• Swap two sets of consecutive elements.
• Remove and reinsert 𝑘 edges/arcs (k-opt).



Improvement Heuristics
Neighborhood Operators
Example: TSP — a sequence of swap moves on the tour [1 5 3 2 6 7 4 1]:
• Swap 5 with 4: [1 5 3 2 6 7 4 1] → [1 4 3 2 6 7 5 1]
• Swap 3 with 2: [1 4 3 2 6 7 5 1] → [1 4 2 3 6 7 5 1]
• Swap 3 with 6: [1 4 2 3 6 7 5 1] → [1 4 2 6 3 7 5 1]
[Figure: each move shown on a drawing of the seven cities.]
Improvement Heuristics
Neighborhood Operators: Reinsertion
Example: TSP
• Let 𝑣𝑒𝑐 be a solution vector for a TSP instance with 𝑛 cities.
• Ex: Let 𝑛 = 5 and 𝑣𝑒𝑐 = [1 3 5 4 2 1].
• The costs of all solutions in the or-opt neighborhood can be
assessed/evaluated by the algorithm below:
• Defining 0 as the first index of a vector: 𝑣𝑒𝑐[0] is the first element of the vector 𝑣𝑒𝑐.
Procedure OrOpt(𝑣𝑒𝑐)
1: for 𝑖 = 1 … 𝑛 − 1 do
2:   for 𝑝 = 1 … 𝑛 do
3:     if 𝑝 ≠ 𝑖 and 𝑝 ≠ 𝑖 + 1 then
4:       𝑛𝑒𝑤𝑐𝑜𝑠𝑡 = 𝑐𝑢𝑟𝑟𝑒𝑛𝑡𝑐𝑜𝑠𝑡
5:         − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖−1], 𝑣𝑒𝑐[𝑖]] − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖], 𝑣𝑒𝑐[𝑖+1]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖−1], 𝑣𝑒𝑐[𝑖+1]]
6:         − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑝−1], 𝑣𝑒𝑐[𝑝]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑝−1], 𝑣𝑒𝑐[𝑖]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖], 𝑣𝑒𝑐[𝑝]]
Improvement Heuristics
Neighborhood Operators: Swap
Example: TSP
• Let 𝑣𝑒𝑐 be a solution vector for a TSP instance with 𝑛 cities.
• Ex: Let 𝑛 = 5 and 𝑣𝑒𝑐 = [1 3 5 4 2 1].
• The costs of all solutions in the swap neighborhood can be
assessed/evaluated by the algorithm below:
• Defining 0 as the first index of a vector: 𝑣𝑒𝑐[0] is the first element of the vector 𝑣𝑒𝑐.

Procedure Swap(𝑣𝑒𝑐)
1: for 𝑖 = 1 … 𝑛 − 2 do
2:   for 𝑗 = 𝑖 + 1 … 𝑛 − 1 do
3:     𝑛𝑒𝑤𝑐𝑜𝑠𝑡 = 𝑐𝑢𝑟𝑟𝑒𝑛𝑡𝑐𝑜𝑠𝑡
4:       − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖−1], 𝑣𝑒𝑐[𝑖]] − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖], 𝑣𝑒𝑐[𝑖+1]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖−1], 𝑣𝑒𝑐[𝑗]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑗], 𝑣𝑒𝑐[𝑖+1]]
5:       − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑗−1], 𝑣𝑒𝑐[𝑗]] − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑗], 𝑣𝑒𝑐[𝑗+1]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑗−1], 𝑣𝑒𝑐[𝑖]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖], 𝑣𝑒𝑐[𝑗+1]]



Improvement Heuristics
Neighborhood Operators: 2-opt
Example: TSP
• Let 𝑣𝑒𝑐 be a solution vector for a TSP instance with 𝑛 cities.
• Ex: Let 𝑛 = 9 and 𝑣𝑒𝑐 = [1 3 5 4 2 9 7 8 6 1].
• The costs of all solutions in the 2-opt neighborhood can be
assessed/evaluated by the algorithm below:
• Defining 0 as the first index of a vector: 𝑣𝑒𝑐[0] is the first element of the vector 𝑣𝑒𝑐.

Procedure TwoOpt(𝑣𝑒𝑐) // Symmetric TSP
1: for 𝑖 = 1 … 𝑛 − 4 do
2:   for 𝑗 = 𝑖 + 4 … 𝑛 do
3:     𝑛𝑒𝑤𝑐𝑜𝑠𝑡 = 𝑐𝑢𝑟𝑟𝑒𝑛𝑡𝑐𝑜𝑠𝑡
4:       − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖−1], 𝑣𝑒𝑐[𝑖]] − 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑗−1], 𝑣𝑒𝑐[𝑗]]
5:       + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖−1], 𝑣𝑒𝑐[𝑗−1]] + 𝑑𝑖𝑠𝑡[𝑣𝑒𝑐[𝑖], 𝑣𝑒𝑐[𝑗]]

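The delta evaluation above can be written in Python as follows (symmetric distances assumed, as in the pseudocode; reversing vec[i..j−1] only changes the two boundary edges, so each move is evaluated in constant time — function names are illustrative):

```python
def tour_cost(vec, dist):
    """Total length of a tour given as a closed sequence of nodes."""
    return sum(dist[a][b] for a, b in zip(vec, vec[1:]))

def two_opt_delta(vec, dist, i, j):
    """Cost change of reversing vec[i..j-1] in a symmetric TSP tour:
    edges (vec[i-1], vec[i]) and (vec[j-1], vec[j]) are replaced by
    (vec[i-1], vec[j-1]) and (vec[i], vec[j])."""
    return (dist[vec[i - 1]][vec[j - 1]] + dist[vec[i]][vec[j]]
            - dist[vec[i - 1]][vec[i]] - dist[vec[j - 1]][vec[j]])
```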


Improvement Heuristics
• They replace a current solution with a solution from the neighborhood
based on an evaluation function.
• The evaluation function can be the objective function itself, or any other
function.
• Infeasible solutions usually are ignored (not accepted).
• They can be accepted by adding a penalization cost (considering the
evaluation function).
• Ex: Knapsack Problem: 𝑓(𝑠) = Σⱼ₌₁ⁿ 𝑝𝑗𝑠𝑗 − 𝛼 · max{0, Σⱼ₌₁ⁿ 𝑤𝑗𝑠𝑗 − 𝐶}
• 𝛼 is a penalty coefficient.
• Several strategies can be adopted to define how the neighborhoods will
be explored and/or which solutions will be accepted.



Improvement Heuristics
Best Improvement Method (BIM)
• 𝑠 is an initial solution.
• 𝑁(𝑠) is the set of neighbor solutions.
• Choose neighbor 𝑠′ that has the best solution value f(𝑠 ′ ).
• If 𝑓 𝑠 ′ is better than 𝑓 𝑠 , 𝑠 ← 𝑠′.
• This procedure is repeated until 𝑓 𝑠 ′ is no better than 𝑓 𝑠 .

Procedure Best_Improvement_Method(𝑠):
Choose 𝑠 ′ with best 𝑓 𝑠 ′ ∀𝑠 ′ ∈ 𝑁(𝑠)
While 𝑓 𝑠 ′ is better than 𝑓 𝑠 do
𝑠 ← 𝑠′
Choose 𝑠 ′ with best 𝑓 𝑠 ′ ∀𝑠 ′ ∈ 𝑁 𝑠
EndWhile
End Best_Improvement_Method
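A generic Python sketch of the method (minimization assumed; the objective f and the neighborhood generator are passed in, and the names are illustrative):

```python
def best_improvement(s, f, neighbors):
    """Best Improvement descent: repeatedly replace s with its best
    neighbor, as long as that neighbor strictly improves f (minimization)."""
    while True:
        best = min(neighbors(s), key=f, default=None)
        if best is None or f(best) >= f(s):
            return s        # s is a local optimum
        s = best
```

For example, minimizing f(x) = x² over the integers with neighbors x ± 1 converges to x = 0 from any starting point.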
Improvement Heuristics
Best Improvement Method (BIM)
[Figure: cost landscape over the solution space; best-improvement descent stops at a local optimum.]
Improvement Heuristics
First Improvement Method (FIM)
• 𝑠 is an initial solution.
• 𝑁(𝑠) is the set of neighbor solutions.
• Choose the first neighbor 𝑠′ that is better than 𝑠.
• This procedure is repeated until there is no neighbor better than 𝑠.
• In contrast to the previous method, the FIM avoids an exhaustive
exploration of the neighborhood.

Procedure First_Improvement_Method(𝑠):
  While there is 𝑠′ ∈ 𝑁(𝑠) with 𝑓(𝑠′) better than 𝑓(𝑠) do
    Scan 𝑁(𝑠) in order and let 𝑠′ be the first neighbor with 𝑓(𝑠′) better than 𝑓(𝑠)
    𝑠 ← 𝑠′
  EndWhile
End First_Improvement_Method
Course content
1. Mathematical modeling
i. Motivations of modeling
ii. Linear Programming
iii. Integer Linear Programming: classic problems
iv. Solution methods
2. Solving methods for discrete optimization problems
I. Branch-and-Bound
II. Constructive Heuristics
III. Improvement Heuristics
IV. Metaheuristics
3. Cplex solver and heuristic implementation
I. Some examples
II. Mini-project
Metaheuristics
• Metaheuristics are procedures designed to find a good solution, possibly
the optimal one, consisting in the application, at each step, of a
subordinate heuristic, which has to be modeled for each specific problem.
• They are general in nature and provided with mechanisms to avoid
getting stuck in local optima possibly far from the global optimum.
• Metaheuristics differ from each other basically by the mechanism used
to escape from local optima.



Metaheuristics
• They can be classified according to the number of solutions managed
during the solution search process:

• Single-solution Metaheuristics: exploration of the space of solutions is done by


means of moves, which are applied at each step on the current solution, generating
another promising solution in its neighborhood.

• Population-based Metaheuristics: consist of keeping a set of good solutions and


combining them in order to try to produce even better solutions.



Single-solution metaheuristics
Multi-Start
• Consists of sampling the space of solutions, applying an improvement
procedure to each generated solution.
• The samples are obtained through the generation of random solutions.
• With this procedure, there is a diversification in the search space, making it
possible to escape from local optima.
• The great advantage of the method is that it is easy to implement.



Multi-Start
Procedure MultiStart(𝑓(.), 𝑁(.), 𝑆𝑡𝑜𝑝𝐶𝑟𝑖𝑡𝑒𝑟𝑖𝑎, 𝑠)
1: 𝑓∗ ← ∞ // value associated with 𝑠∗
2: While 𝑆𝑡𝑜𝑝𝐶𝑟𝑖𝑡𝑒𝑟𝑖𝑎 not attended do
3:   𝑠 ← BuildSolution() // generate a random solution 𝑠
4:   𝑠 ← LocalSearch(𝑓(.), 𝑁(.), 𝑠) // apply a local search procedure to 𝑠
5:   If 𝑓(𝑠) is better than 𝑓(𝑠∗) then
6:     𝑠∗ ← 𝑠
7:     𝑓∗ ← 𝑓(𝑠)
8:   EndIf
9: EndWhile
10: 𝑠 ← 𝑠∗
11: Return 𝑠
End MultiStart



Single-solution metaheuristics
Greedy Randomized Adaptive Search Procedure (GRASP)
• GRASP procedure is an iterative method, proposed by Feo & Resende (1995).
• It consists of two phases:
1. Construction phase: a solution is generated, element by element.
2. Local search phase: a local optimum in the neighborhood of the built
solution is sought. The best solution found over all GRASP iterations
performed is returned as the result.



Greedy Randomized Adaptive Search Procedure (GRASP)

Procedure GRASP(𝑓(.), 𝑔(.), 𝑁(.), 𝐺𝑅𝐴𝑆𝑃𝑚𝑎𝑥, 𝑠)
1: 𝑓∗ ← ∞ // value associated with 𝑠∗
2: For 𝑖𝑡𝑒𝑟 = 1, 2, …, 𝐺𝑅𝐴𝑆𝑃𝑚𝑎𝑥 do
3:   𝑠 ← BuildSolution(𝑔(.), ∝, 𝑠) // greedy randomized construction of 𝑠
4:   𝑠 ← LocalSearch(𝑓(.), 𝑁(.), 𝑠) // apply a local search procedure to 𝑠
5:   If 𝑓(𝑠) is better than 𝑓(𝑠∗) then
6:     𝑠∗ ← 𝑠
7:     𝑓∗ ← 𝑓(𝑠)
8:   EndIf
9: EndFor
10: 𝑠 ← 𝑠∗
11: Return 𝑠
End GRASP

The difference in relation to Multi-Start is in the construction of the initial solution.


Single-solution metaheuristics
Greedy Randomized Adaptive Search Procedure (GRASP)
• In the building phase, a solution is iteratively constructed, element by element.
• At each iteration of this phase, the next candidate elements to be included in
the solution are placed in a candidate list 𝐶, following a predetermined
ordering criterion 𝑐.
• The benefits associated with choosing each element are updated at each
iteration of the build phase to reflect changes that come from selecting the
previous element.
• Each element is randomly selected from a restricted subset formed by the best
elements that make up the candidate list.
• This subset is called the Restricted Candidate List (RCL).
• It allows different solutions to be generated on each GRASP iteration.
Single-solution metaheuristics
Greedy Randomized Adaptive Search Procedure (GRASP)
Procedure BuildSolution(𝑔(.), ∝, 𝑠) // for a minimization problem
1: 𝑠 ← ∅
2: Initialize set 𝐶 of candidates
3: While 𝐶 ≠ ∅ do
4:   𝑔(𝑡𝑚𝑖𝑛) = min{𝑔(𝑡) | 𝑡 ∈ 𝐶}
5:   𝑔(𝑡𝑚𝑎𝑥) = max{𝑔(𝑡) | 𝑡 ∈ 𝐶}
6:   𝑅𝐶𝐿 = {𝑡 ∈ 𝐶 | 𝑔(𝑡) ≤ 𝑔(𝑡𝑚𝑖𝑛) + ∝ (𝑔(𝑡𝑚𝑎𝑥) − 𝑔(𝑡𝑚𝑖𝑛))}
7:   Randomly select an element 𝑡 ∈ 𝑅𝐶𝐿
8:   𝑠 ← 𝑠 ∪ {𝑡}
9:   Update candidate set 𝐶
10: EndWhile
11: Return 𝑠
End BuildSolution

The parameter ∝ controls the level of greediness and randomness of the procedure:
• ∝ = 0 generates purely greedy solutions.
• ∝ = 1 generates totally random solutions.
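A minimal Python sketch of this construction, instantiated for the TSP with g(t) = distance from the last visited city (the function name and instance layout are illustrative assumptions):

```python
import random

def grasp_construction(c, alpha, rng=None):
    """Greedy randomized TSP construction: at each step, the RCL holds
    the unvisited cities t with g(t) <= g_min + alpha * (g_max - g_min),
    and the next city is drawn from the RCL at random.
    alpha = 0 -> purely greedy; alpha = 1 -> purely random."""
    rng = rng or random.Random(0)
    n = len(c)
    tour = [0]
    candidates = set(range(1, n))
    while candidates:
        last = tour[-1]
        g = {t: c[last][t] for t in candidates}   # greedy criterion
        g_min, g_max = min(g.values()), max(g.values())
        threshold = g_min + alpha * (g_max - g_min)
        rcl = [t for t in sorted(candidates) if g[t] <= threshold]
        tour.append(rng.choice(rcl))
        candidates.remove(tour[-1])
    tour.append(0)   # close the cycle
    return tour
```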



Single-solution metaheuristics
Greedy Randomized Adaptive Search Procedure (GRASP)
• The solutions generated by the GRASP construction phase are not necessarily
locally optimal.
• Hence the importance of the local search phase, which aims to improve the
solution built.
• The efficiency of local search depends, in part, on the quality of the built
solution.



Single-solution metaheuristics
Variable Neighborhood Search (VNS)
• Proposed by Nenad Mladenovic and Pierre Hansen, it is a local search method
that consists of exploring the space of solutions through systematic exchanges
of neighborhood structures.
• It explores neighborhoods gradually more distant from the current solution
and focuses the search around a new solution if and only if an improvement
move is made.
• It also contains a local search procedure to be applied to the current solution.
• The local search procedure can also use different neighborhood structures.
• In its original version, the VNS method uses the Variable Neighborhood
Descent (VND) method to perform the local search.



Variable Neighborhood Search (VNS)
Procedure VNS(𝑓(.), 𝑁(.), 𝑆𝑡𝑜𝑝𝐶𝑟𝑖𝑡𝑒𝑟𝑖𝑎, 𝑠, 𝑟)
1: Let 𝑠0 be an initial solution
2: Let 𝑟 be the number of different neighborhood operators
3: 𝑠 ← 𝑠0 // current solution
4: While 𝑆𝑡𝑜𝑝𝐶𝑟𝑖𝑡𝑒𝑟𝑖𝑎 not attended do
5:   𝑘 ← 1 // type of current neighborhood operator
6:   While 𝑘 ≤ 𝑟 do
7:     Generate a random neighbor 𝑠′ ∈ 𝑁𝑘(𝑠)
8:     𝑠′′ ← LocalSearch(𝑠′)
9:     If 𝑓(𝑠′′) is better than 𝑓(𝑠) then
10:      𝑠 ← 𝑠′′
11:      𝑘 ← 1
12:    else
13:      𝑘 ← 𝑘 + 1
14:    EndIf
15:  EndWhile
16: EndWhile
17: Return 𝑠
End VNS

[Figure: cost landscape showing a shaking step from 𝑠 to 𝑠′ followed by a local search to 𝑠′′.]

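The scheme can be sketched compactly in Python (the operator list and the convex toy objective in the test are illustrative assumptions):

```python
import random

def vns(s, f, operators, local_search, max_iter=100, rng=None):
    """Basic VNS (minimization): shake with neighborhood k, local-search
    the shaken solution, restart at k = 1 on improvement, otherwise
    switch to the next (more distant) neighborhood operator."""
    rng = rng or random.Random(0)
    for _ in range(max_iter):
        k = 0
        while k < len(operators):
            s1 = operators[k](s, rng)      # shaking in N_k(s)
            s2 = local_search(s1, f)       # descend from the shaken point
            if f(s2) < f(s):
                s, k = s2, 0               # refocus the search around s2
            else:
                k += 1
    return s
```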


Single-solution metaheuristics
Iterated Local Search (ILS)
• The Iterated Local Search (ILS) method is based on the idea that a local search
procedure can be improved by generating new starting solutions, which are
obtained by means of perturbations.
• Composed of 4 components/procedures:
1. GenerateInitialSolution(): generates an initial solution 𝑠0 .
2. LocalSearch(): from a solution 𝑠′, returns a possibly improved solution 𝑠′′.
3. Perturbation(): modifies the current solution 𝑠 leading to an intermediate solution 𝑠′.
4. AcceptanceCriteria(): decides from which solution the next perturbation will be applied
OR if one solution is better than the other.



Iterated Local Search (ILS)

Procedure ILS(𝑆𝑡𝑜𝑝𝐶𝑟𝑖𝑡𝑒𝑟𝑖𝑎)
1: 𝑠0 ← 𝐺𝑒𝑛𝑒𝑟𝑎𝑡𝑒𝐼𝑛𝑖𝑡𝑖𝑎𝑙𝑆𝑜𝑙𝑢𝑡𝑖𝑜𝑛()
2: 𝑠 ← 𝐿𝑜𝑐𝑎𝑙𝑆𝑒𝑎𝑟𝑐ℎ(𝑠0 )
3: While 𝑆𝑡𝑜𝑝𝐶𝑟𝑖𝑡𝑒𝑟𝑖𝑎 not attended do
4: 𝑠′ ← Perturbation (historic, s)
5: 𝑠′′ ← LocalSearch(s′)
6: 𝑠 ← AcceptanceCriteria(𝑠, 𝑠′′,historic)
7: EndWhile
End ILS
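A minimal Python sketch of the loop above, using the "accept only if better" criterion as one common choice for AcceptanceCriteria (names and the toy objective are illustrative):

```python
import random

def ils(s0, f, local_search, perturb, max_iter=50, rng=None):
    """Iterated Local Search (minimization): perturb the incumbent,
    local-search the perturbed solution, keep it only if it improves."""
    rng = rng or random.Random(0)
    best = local_search(s0, f)
    for _ in range(max_iter):
        s1 = perturb(best, rng)     # escape the current local optimum
        s2 = local_search(s1, f)
        if f(s2) < f(best):         # acceptance criterion: "better"
            best = s2
    return best
```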



Single-solution metaheuristics
Iterated Local Search (ILS)
• The success of ILS is centered on the choice of the local search method, the
perturbation and the acceptance criteria.
• The performance of the ILS with respect to the quality of the final solution and
the speed of convergence strongly depends on the chosen local search
method.
• The perturbation should be:
• Strong enough to allow you to escape from the current local optimum.
• Weak enough to save features of the current local optimum.



Single-solution metaheuristics
Iterated Local Search (ILS)
• Perturbation and acceptance criteria control the intensification and
diversification procedures.
• Intensification
• Consists of staying in the region of the space where the search is located, seeking to
explore it more effectively.
• Ex: applying small perturbations.
• Diversification
• Consists of moving to other regions of the solution space.
• Ex: Accepting any solutions; applying major perturbations.



Single-solution metaheuristics
Tabu Search (TS)
• Local-search-based method.
• Starts from a solution 𝑠 and explores, at each iteration, a subset 𝑉 of the
neighborhood 𝑁(𝑠).
• Explores the space of solutions by moving from one solution 𝑠 to another 𝑠′ ∈ 𝑉
that is its best neighbor.
• Motivation: don't get stuck in a local optimum.
• However, this can make the algorithm cycle (𝑠1 → 𝑠2 → 𝑠1 → 𝑠2 → 𝑠1 → ⋯).
• To avoid cycling, the algorithm makes use of a memory structure to store
characteristics of the generated solutions.
• Such a structure is called the Tabu List.



Single-solution metaheuristics
Tabu Search (TS)
• Features of solutions (or moves) stored in the Tabu List 𝑇 are prohibited for |𝑇|
iterations.
• At the end of each iteration, add the movement (or feature of the obtained solution) at
the top of the list and remove the last one from the list.
• Reduces the possibility of cycling, but may prohibit movements for solutions
not yet visited.
• Aspiration function (A): Mechanism that removes the taboo status of a
movement.
• Ex.: Aspiration by objective – movement is accepted if the value of the generated solution
is better than the value of the best solution found until then (incumbent solution).



Single-solution metaheuristics
Tabu Search (TS)
• Stop criteria
• Number of iterations without improvement;
• Execution Time;
• Solution value; etc.
• Parameters
• Tabu list size (|𝑇|);
• Aspiration Function (A);
• Number of neighboring solutions explored |𝑉|;
• Maximum number of iterations without improvement (𝑇𝑆𝑚𝑎𝑥 ); etc.



Tabu Search (TS)
Procedure TS(𝑓 . , 𝑁 . , 𝐴 . , 𝑉 , 𝑓𝑔𝑜𝑜𝑑 , 𝑇 , 𝑇𝑆𝑚𝑎𝑥, 𝑠)
1: 𝑠 ∗ ← 𝑠 // best solution until then
2: 𝑖𝑡𝑒𝑟 ← 0 // iteration number counter
3: 𝑏𝑒𝑠𝑡𝐼𝑡𝑒𝑟 ← 0 // most recent iteration that provided 𝑠 ∗
4: 𝑇 ← ∅ // Tabu list
5: Initialize the aspiration function A
6: While 𝑓 𝑠 is better than 𝑓𝑔𝑜𝑜𝑑 and 𝑖𝑡𝑒𝑟 − 𝑏𝑒𝑠𝑡𝐼𝑡𝑒𝑟 ≤ 𝑇𝑆𝑚𝑎𝑥 do
7: 𝑖𝑡𝑒𝑟 ← 𝑖𝑡𝑒𝑟 + 1
8: Let 𝑠′ ← 𝑠 ⊕ 𝑚 be the best element of 𝑉 ⊆ 𝑁(𝑠) such that either the move 𝑚 is
not tabu (𝑚 ∉ 𝑇) or 𝑠′ meets the aspiration condition (𝑓(𝑠′) better than 𝐴(𝑓(𝑠)))
9: Update Tabu list 𝑇
10: 𝑠 ← 𝑠′
11: If 𝑓 𝑠 is better than 𝑓(𝑠 ∗ ) then
12: 𝑠∗ ← 𝑠
13: 𝑏𝑒𝑠𝑡𝐼𝑡𝑒𝑟 ← 𝑖𝑡𝑒𝑟
14: EndIf
15: Update aspiration function 𝐴
16: EndWhile
17: 𝑠 ← 𝑠 ∗
18: Return 𝑠
End TS
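A deliberately simplified Python sketch: whole solutions (rather than move attributes) are stored in a fixed-length tabu list, and aspiration accepts a tabu neighbor that beats the incumbent. The toy objective in the test has a local optimum that plain descent cannot leave but tabu search can (all names and the instance are illustrative assumptions):

```python
from collections import deque

def tabu_search(s, f, neighbors, tabu_len=5, max_iter=100):
    """Minimal tabu search (minimization): move to the best non-tabu
    neighbor even if it is worse; a tabu neighbor is still allowed when
    it improves the incumbent (aspiration by objective)."""
    best = s
    tabu = deque(maxlen=tabu_len)   # oldest entry dropped automatically
    for _ in range(max_iter):
        candidates = [s1 for s1 in neighbors(s)
                      if s1 not in tabu or f(s1) < f(best)]
        if not candidates:
            break
        s = min(candidates, key=f)
        tabu.append(s)
        if f(s) < f(best):
            best = s
    return best
```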
Single-solution metaheuristics
Simulated Annealing (SA)
• It makes an analogy with thermodynamics, by simulating the cooling of a set of
heated atoms (annealing).
• Let 𝑠 ∗ be the best solution found during execution and 𝑠 the current solution.
• At each iteration, a neighboring solution 𝑠 ′ ∈ 𝑁(𝑠) is generated.
• Calculate ∆ = 𝑓 𝑠 ′ − 𝑓(𝑠)
• Considering a minimization problem
• If ∆ < 0, 𝑠′ becomes the new current solution.
• If ∆ > 0, 𝑠′ can be accepted as the new current solution, with a probability 𝑒 −∆/𝑇 .
• Where 𝑇 is a parameter (current temperature).
• 𝑇 controls the probability of accepting worse solutions.



Single-solution metaheuristics
Simulated Annealing (SA)
[Figure: acceptance probability 𝑒^(−1/𝑇) as a function of the temperature 𝑇, increasing from 0 towards 1 as 𝑇 grows; for 𝑇 = 1, 𝑒^(−1) = 1/2.7183 = 0.3679.]


Single-solution metaheuristics
Simulated Annealing (SA)
• The temperature is initialized with a high value 𝑇0 .
• After a fixed number of iterations, the temperature is updated:
• 𝑇𝑘 ←∝ 𝑇𝑘−1
• ∝ is a cooling parameter: 0 < ∝ < 1
The closer to 0, the faster the cooling.
• As the temperature tends to zero, the probability of accepting worse solutions
decreases (the method approaches a pure descent).
• When the temperature reaches a value close to zero, the system is in equilibrium,
indicating that a local optimum has been found.
• Parameters: cooling ratio(∝), number of iterations for each temperature (𝑆𝐴𝑚𝑎𝑥)
and the initial temperature (𝑇0 ).



Simulated Annealing (SA)

Procedure SA(𝑓(.), 𝑁(.), ∝, 𝑆𝐴𝑚𝑎𝑥, 𝑇0, 𝑠)
1: 𝑠 ∗ ← 𝑠 // best solution until then
2: 𝑖𝑡𝑒𝑟𝑇 ← 0 // number of iterations in temperature 𝑇
3: 𝑇 ← 𝑇0 // current temperature
4: While 𝑇 > 𝑇𝑚𝑖𝑛 do // stop at a small 𝑇𝑚𝑖𝑛 > 0: with geometric cooling, 𝑇 never reaches exactly 0
5: While 𝑖𝑡𝑒𝑟𝑇 < 𝑆𝐴𝑚𝑎𝑥 do
6: 𝑖𝑡𝑒𝑟𝑇 ← 𝑖𝑡𝑒𝑟𝑇 + 1
7: Generate a random neighbor 𝑠 ′ ∈ 𝑁 𝑠
8: ∆ = 𝑓 𝑠′ − 𝑓 𝑠
9: If (∆< 0)
10: Then
11: 𝑠 ← 𝑠′
12: If (𝑓 𝑠 ′ < 𝑓(𝑠 ∗ )) then 𝑠 ∗ ← 𝑠′
13: Else
14: Get 𝑥 ∈ 0,1
15: If (𝑥 < 𝑒 −∆/𝑇 ) Then 𝑠 ← 𝑠 ′
16: EndIf
17: EndWhile
18: 𝑇 ←∝×𝑇
19: 𝑖𝑡𝑒𝑟𝑇 ← 0
20: EndWhile
21: 𝑠 ← 𝑠 ∗
22: Return 𝑠
End SA
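A minimal Python sketch of the procedure above; a small T_min threshold is used as the stopping test, since geometric cooling never reaches exactly zero (names and parameter values are illustrative):

```python
import math
import random

def simulated_annealing(s, f, neighbor, T0=5.0, alpha=0.9,
                        sa_max=100, T_min=0.01, rng=None):
    """SA for minimization: improvements are always accepted; a
    worsening move is accepted with probability exp(-delta / T);
    geometric cooling T_k = alpha * T_{k-1}."""
    rng = rng or random.Random(0)
    best = s
    T = T0
    while T > T_min:
        for _ in range(sa_max):             # iterations at temperature T
            s1 = neighbor(s, rng)
            delta = f(s1) - f(s)
            if delta < 0 or rng.random() < math.exp(-delta / T):
                s = s1
                if f(s) < f(best):
                    best = s
        T *= alpha                          # cooling step
    return best
```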
Single-solution metaheuristics
Simulated Annealing (SA)
• They usually include reheating, when the amount of solutions rejected
consecutively is high.
• The initial temperature can be determined by:
Simulation
• Start from a solution 𝑠 and a low temperature 𝑡.
• Count the number 𝑘 of neighbors accepted at temperature 𝑡.
• If 𝑘 ≥ 𝛾 × 𝑆𝐴𝑚𝑎𝑥 (e.g., 𝛾 = 95%), 𝑡 is the initial temperature. Otherwise,
increase 𝑡 and repeat the procedure.



Single-solution metaheuristics
Simulated Annealing (SA)
• They usually include reheating, when the amount of solutions rejected
consecutively is high.
• The initial temperature can be determined by:
The cost of the solutions
• Calculate the cost of all neighbors for a set of initial solutions 𝑆𝑖𝑛𝑖𝑡 =
{𝑠1 , 𝑠2 , … , 𝑠 𝑖𝑛𝑖𝑡 }.

• The highest cost found is an estimate for 𝑇0 , i.e., 𝑇0 = max′
{f s }
𝑠∈𝑆𝑖𝑛𝑖𝑡 ,𝑠 ∈𝑁(𝑠)



Simulated Annealing

Procedure InitialTemperature(𝑓(.), 𝑁(.), 𝛽, 𝛾, 𝑆𝐴𝑚𝑎𝑥, 𝑇0, 𝑠)
1: 𝑇 ← 𝑇0 // current temperature
2: 𝑖𝑡𝑒𝑟𝑇 ← 0 // number of iterations at temperature 𝑇
3: 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑒 ← True
4: While 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑒 do
5:   𝐴𝑐𝑐𝑒𝑝𝑡𝑒𝑑 ← 0 // number of neighbors accepted at temperature 𝑇
6:   For 𝑖𝑡𝑒𝑟𝑇 = 1, …, 𝑆𝐴𝑚𝑎𝑥 do
7:     Generate a random neighbor 𝑠′ ∈ 𝑁(𝑠)
8:     ∆ = 𝑓(𝑠′) − 𝑓(𝑠)
9:     If (∆ < 0) Then
10:      𝐴𝑐𝑐𝑒𝑝𝑡𝑒𝑑 ← 𝐴𝑐𝑐𝑒𝑝𝑡𝑒𝑑 + 1
11:    Else
12:      Get 𝑥 ∈ [0, 1]
13:      If (𝑥 < 𝑒^(−∆/𝑇)) Then 𝐴𝑐𝑐𝑒𝑝𝑡𝑒𝑑 ← 𝐴𝑐𝑐𝑒𝑝𝑡𝑒𝑑 + 1
14:    EndIf
15:  EndFor
16:  If (𝐴𝑐𝑐𝑒𝑝𝑡𝑒𝑑 > 𝛾 × 𝑆𝐴𝑚𝑎𝑥) Then
17:    𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑒 ← False
18:  Else
19:    𝑇 ← 𝛽 × 𝑇
20:  EndIf
21:  𝑖𝑡𝑒𝑟𝑇 ← 0
22: EndWhile
23: Return 𝑇
End InitialTemperature
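The calibration procedure above can be sketched in Python as follows. The quadratic objective, the ±1 neighborhood, and the parameter values β = 1.1 and γ = 0.95 are illustrative assumptions.

```python
import math
import random

def initial_temperature(f, random_neighbor, s, T0, beta=1.1, gamma=0.95, SAmax=100):
    """Raise T until at least gamma * SAmax random neighbors are accepted at T."""
    T = T0
    while True:
        accepted = 0
        for _ in range(SAmax):
            s2 = random_neighbor(s)
            delta = f(s2) - f(s)
            if delta < 0 or random.random() < math.exp(-delta / T):
                accepted += 1
        if accepted > gamma * SAmax:
            return T
        T *= beta                      # not hot enough: increase T and try again

# Toy usage: from s = 0 every +/-1 move worsens f(x) = x^2 by 1, so T must rise
# until roughly e^(-1/T) > 0.95.
random.seed(1)
t0 = initial_temperature(lambda x: x * x,
                         lambda x: x + random.choice((-1, 1)),
                         s=0, T0=1.0)
```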
Population-based Metaheuristics
Genetic Algorithms (GA)
• Genetic Algorithms draw an analogy with natural evolutionary processes.
• Individuals with better genetic characteristics have greater chances of surviving
and of producing more, and fitter, offspring.
• Less fit individuals tend to disappear.

1. Chromosome (individual of population): problem solution.


2. Gene: solution component.
3. Allele: possible value that each component of the solution can assume.



Population-based Metaheuristics
Genetic Algorithms (GA)
• A reproduction mechanism (based on evolutionary processes) is applied to the
population.
• Objective: explore the search space and find better solutions.
• Individuals are evaluated by a fitness function, which measures how well adapted
an individual is to its environment.
• The higher the fitness, the better adapted the individual.
• Starts with a population {s_1^0, s_2^0, s_3^0, …, s_n^0}: the population at time 0.
• Procedure
• Creates a population {s_1^{t+1}, s_2^{t+1}, s_3^{t+1}, …, s_n^{t+1}} at time t + 1 from the population at time t.
• Obtained from a reproduction phase: selects individuals for recombination and/or
mutation.



Population-based Metaheuristics
Genetic Algorithms (GA)
• Recombination operation (crossover)
• The genes of two chromosomes are combined to generate child chromosomes
(usually two).
• Each child contains a set of genes from each parent.
• Mutation operation
• Randomly changes part of the genes of a chromosome (components of the
solution).

• Both operations are carried out with a certain probability:
• Recombination: high probability (e.g., 80%)
• Mutation: low probability (e.g., 1 to 2%)
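Both operators are short enough to write down directly. A sketch for binary chromosomes; the cut point after the 4th gene reproduces the offspring (1 0 1 0 0 0) and (1 1 1 0 1 0) used as an example in these slides.

```python
import random

def crossover(p1, p2, point):
    """Single-point crossover: each child takes one segment from each parent."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, rate, rng):
    """Flip each binary gene independently with probability `rate`."""
    return [1 - g if rng.random() < rate else g for g in chrom]

# Usage with the slide's parents and a cut after the 4th gene.
o1, o2 = crossover([1, 0, 1, 0, 1, 0], [1, 1, 1, 0, 0, 0], point=4)
```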
Population-based Metaheuristics
Genetic Algorithms (GA)
• After the new population at time t + 1 is generated:
• The surviving population is defined (i.e., the n solutions that will form the
new population), based on the individuals' fitness.
• Criteria for choosing the surviving chromosomes:
1. Random;
2. Roulette: chance of survival proportional to fitness;
3. Mixed: combination of Random and Roulette; it also accepts the selection of
some of the least fit individuals (in order to escape from local optima).

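Roulette selection can be sketched as a cumulative-sum draw. The sketch below assumes maximization ("the higher the fitness, the better adapted"); the two-individual usage example is a made-up illustration.

```python
import random

def roulette(population, fitness, rng):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness(s) for s in population)
    r = rng.uniform(0, total)          # a point on the roulette wheel
    acc = 0.0
    for s in population:
        acc += fitness(s)
        if acc >= r:
            return s
    return population[-1]              # guard against floating-point round-off

# Toy usage: an individual with zero fitness is (almost) never selected.
rng = random.Random(0)
picked = roulette(["weak", "strong"], lambda s: {"weak": 0.0, "strong": 5.0}[s], rng)
```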


Population-based Metaheuristics
Genetic Algorithms (GA)
• Stop criteria
• A certain number of generations is reached.
• A number of iterations without improvement.
• The population's standard deviation falls below a threshold (i.e., the population
has become homogeneous).



Genetic Algorithms (GA)

Procedure GA(stopCriteria)
1: t ← 0
2: Generate initial population P(t)
3: Evaluate P(t)
4: While stopCriteria is not met do
5:   t ← t + 1
6:   Generate P(t) from P(t − 1)
7:   Evaluate P(t)
8:   Define the surviving population
9: EndWhile
End GA

• Parameters:
• population size n, crossover probability, mutation probability, number of
generations, number of iterations without improvement, etc.
• Representation of solutions.
• Classic crossover operator: union of gene segments from each parent to form the
child chromosomes (offspring). E.g.:
• p1 = (1 0 1 0 1 0)
• p2 = (1 1 1 0 0 0)
• o1 = (1 0 1 0 0 0)
• o2 = (1 1 1 0 1 0)
• Classic mutation operator: alter one or more genes. E.g.:
• p1 = (1 0 1 0 1 0) → (1 1 1 0 1 0)

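Putting the pieces together, the GA loop above can be sketched in Python. The OneMax fitness (count of 1-genes), the uniform parent selection, and the elitist survivor rule are illustrative assumptions, not prescribed by the slides.

```python
import random

def genetic_algorithm(n_pop=20, n_genes=10, p_cross=0.8, p_mut=0.02,
                      generations=50, seed=42):
    """GA following the slide's pseudocode; the OneMax fitness is a placeholder."""
    rng = random.Random(seed)
    fitness = lambda c: sum(c)                    # maximize the number of 1-genes
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(n_pop)]
    for _ in range(generations):
        children = []
        while len(children) < n_pop:
            p1, p2 = rng.sample(pop, 2)           # parent selection (uniform here)
            if rng.random() < p_cross:            # single-point crossover
                cut = rng.randrange(1, n_genes)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                    # mutation, gene by gene
                children.append([1 - g if rng.random() < p_mut else g for g in c])
        # Survivors: the n_pop fittest among parents and children (an elitist choice).
        pop = sorted(pop + children, key=fitness, reverse=True)[:n_pop]
    return pop[0]

best = genetic_algorithm()
```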


Population-based Metaheuristics
Ant Colony Optimization (ACO)
• Simulates the behavior of a set of ants (agents) that cooperate with each other
to solve an optimization problem.
• Cooperation is through the pheromone deposited by each ant when moving in
the search space.
• The pheromone trail is used as information for other ants.



Population-based Metaheuristics
Ant Colony Optimization (ACO)
Algorithm Operation
• An ant colony moves concurrently and asynchronously building paths in
the search space.
• Ants move by applying a stochastic local decision policy.
• They make use of pheromone trails and heuristic information.
• As they move, they build new solutions to the optimization problem.
• Once a solution is built (or during the construction of a solution), the ant
evaluates the solution (partial or complete) and deposits a pheromone trail
on the components or connections used along the way.
• This information (from pheromone) is used to direct the search for other
ants.
Population-based Metaheuristics
Ant Colony Optimization (ACO)
• 𝑚 − number of ants;
• Q − amount of pheromone deposited by an ant after completing a route;
• Γ0 − initial amount of pheromone in each arc;
• 𝑑𝑖𝑗 − distance between cities 𝑖 and 𝑗;
• Γ𝑖𝑗 − amount of pheromone in each arc (𝑖, 𝑗);
• ΔΓ𝑖𝑗𝑘 − amount of pheromone deposited by ant 𝑘 on the arc (𝑖, 𝑗);
• 𝜌 − pheromone evaporation rate;
• ΔΓij − amount of pheromone deposited by all ants on the arc (i, j).



Ant Colony Optimization (ACO)

Procedure AntColony( )
1:  Let Q and Γ0 be constants
2:  f* ← ∞
3:  Set ΔΓij ← 0 and Γij ← Γ0 for every arc (i, j)
4:  For k = 1, …, m do
5:    Select the starting city of the k-th ant
6:    Build a route R^k for ant k
7:    Let L^k be the length of route R^k
8:    If (L^k < f*) then s* ← R^k and f* ← L^k
9:    Compute the amount of trail left by ant k:
10:     If (i, j) ∈ R^k
11:     Then ΔΓij^k ← dij × Q / L^k
12:     Else ΔΓij^k ← 0
13:     EndIf
14:   Set ΔΓij ← ΔΓij + ΔΓij^k
15: EndFor
16: Set Γij ← (1 − ρ) × Γij + ΔΓij, ∀(i, j)
17: If (the best route s* has not changed in the last iterMax iterations)
18: Then STOP: s* is the best solution
19: Else return to step 4
End AntColony
Population-based Metaheuristics
Ant Colony Optimization (ACO)
• To obtain a route for an ant k (step 6):
• Being in city i, city j is chosen among the unvisited ones (i.e., j ∈ NV_i^k) with
probability:

p_ij^k = (Γij^α × ηij^β) / Σ_{l ∈ NV_i^k} (Γil^α × ηil^β),  ∀j ∈ NV_i^k

• Where ηij = 1/dij is heuristic information available a priori.
• α and β are parameters that determine the influence of the pheromone trail and of
the heuristic information, respectively.
• α = 0: the selection probability is proportional to ηij^β; closer cities are more
likely to be chosen.
• β = 0: only the pheromone has an influence, leading to rapid stagnation of the
search (i.e., ants following the same path and generating the same solutions).
Population-based Metaheuristics
Ant Colony Optimization (ACO)
• To obtain the amount of pheromone deposited by ant k on each arc:
• Proportional to the arc length (i.e., the unit quantity of pheromone Q/L^k deposited
by ant k is multiplied by the arc length dij).
• Unlike the real situation, the ant only deposits the pheromone after completing the
route, not during the course.
• After the m ants generate their routes/solutions, the amount of pheromone is updated
(evaporation + deposit of the new amounts):
• Γij ← (1 − ρ) × Γij + ΔΓij, ∀(i, j)
• Γij ← (1 − ρ) × Γij + ρ × ΔΓij^best, ∀(i, j), where ΔΓij^best is the pheromone trail
left by the ant that produced the best solution (elitist variant).
• The algorithm terminates after iterMax consecutive iterations without improvement.
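The whole scheme (probabilistic route construction plus evaporation and deposit, using the slides' dij × Q/L^k deposit rule) can be sketched on a toy symmetric TSP. The 4-city instance, the parameter values, and the symmetric deposit on both arc directions are illustrative assumptions.

```python
import random

def ant_colony_tsp(dist, m=10, Q=100.0, tau0=1.0, rho=0.5,
                   alpha=1.0, beta=2.0, iterations=50, seed=0):
    """ACO sketch following the slides' pseudocode; `dist` is a symmetric matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[tau0] * n for _ in range(n)]                 # pheromone Γij on each arc
    best_route, best_len = None, float("inf")
    for _ in range(iterations):
        delta = [[0.0] * n for _ in range(n)]            # ΔΓij accumulated this round
        for _k in range(m):
            start = rng.randrange(n)                     # step 5: starting city
            route, current = [start], start
            while len(route) < n:                        # step 6: build the route
                unvisited = [j for j in range(n) if j not in route]
                weights = [tau[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
                           for j in unvisited]
                current = rng.choices(unvisited, weights=weights)[0]
                route.append(current)
            length = sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))
            if length < best_len:                        # step 8: keep the best tour
                best_route, best_len = route, length
            for i in range(n):                           # steps 10-14: trail of ant k
                a, b = route[i], route[(i + 1) % n]
                delta[a][b] += dist[a][b] * Q / length
                delta[b][a] += dist[a][b] * Q / length
        for i in range(n):                               # step 16: evaporation + deposit
            for j in range(n):
                tau[i][j] = (1 - rho) * tau[i][j] + delta[i][j]
    return best_route, best_len

# Placeholder instance: 4 cities on a unit square (optimal tour = perimeter = 4).
d = 2 ** 0.5
dist = [[0, 1, d, 1], [1, 0, 1, d], [d, 1, 0, 1], [1, d, 1, 0]]
route, length = ant_colony_tsp(dist)
```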
Cplex solver and
heuristic implementation



Cplex solver

Modeling workflow:
1. Define the problem and collect data
• Define the objective
• Identify the constraints
• Select and process the data
2. Formulate the mathematical model
3. Implement and test the model
4. Improve the model
5. Implement and use it in the company


References
1. Souza, M.J.F. Inteligência Computacional para Otimização. Notas de Aula. Departamento de
Computação, Universidade Federal de Ouro Preto. 2011. (Download link:
InteligenciaComputacional.pdf)
2. Delorme, M; Iori, M; Martello, S. Bin packing and cutting stock problems: Mathematical models
and exact algorithms. European Journal of Operational Research, 255, 1-20, 2016.
3. Martello, S. Bin Packing Problem. University of Bologna.
4. Prins, C. The route-first cluster-second principle in vehicle routing. International Workshop on
Vehicle Routing in Practice. Oslo, 2008.
5. Toth, P.; Vigo, D. The vehicle routing problem - Problems, Methods, and Applications. Second
Edition. SIAM Series on Optimization. 2014.
6. Mladenovic, N. A tutorial on Variable Neighborhood Search. Les Cahiers du GERAD, 2004.
7. Hansen, P., Mladenovic, N. Variable Neighborhood Search Methods. Les Cahiers du GERAD, 2009.



Algorithmique d'aide
à la décision

Katyanne Farias de Araújo
