Cours 3

Méthodes de Résolution des Problèmes (Problem-Solving Methods)
Prof. Nadjet KAMEL
Single-solution metaheuristics
◦ They start the search from a single initial solution.

◦ They rely on the notion of neighborhood to improve the quality of the current solution: the initial solution undergoes a series of modifications based on its neighborhood. The goal of these local modifications is to explore the neighborhood of the current solution so as to progressively improve its quality over the iterations.

◦ The neighborhood of a solution s is the set of solutions that can be obtained by applying an elementary modification to s itself.

◦ The quality of the final solution depends strongly on the modifications performed by the neighborhood operators. Bad transformations of the initial solution drive the search into the valley of the local optimum of a given neighborhood (possibly a bad one), which blocks the search and yields a solution of insufficient quality.
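To make the notion of neighborhood concrete, here is a minimal Python sketch of one possible neighborhood operator; the list encoding, the swap move, and the helper name are illustrative assumptions, not part of the course material.

from itertools import combinations

def swap_neighborhood(s):
    """All solutions reachable from s by one swap move.
    Assumes s is encoded as a list (e.g. a permutation of cities
    in a traveling salesman tour); other encodings need other moves."""
    neighbors = []
    for i, j in combinations(range(len(s)), 2):
        n = list(s)
        n[i], n[j] = n[j], n[i]   # elementary modification: exchange two positions
        neighbors.append(n)
    return neighbors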
Hill Climbing
 The Hill Climbing algorithm is a local search algorithm.
 It is used to find the peak of a mountain, or the best solution to a problem, by continuously moving in the direction of increasing elevation.
 It terminates when it reaches a peak, i.e. a state where no neighbor has a higher value.
 It is also called greedy local search, as it only looks at its immediate neighbor states and not beyond them.
 The search moves in the direction which optimizes the cost.
 No backtracking: it does not backtrack in the search space, as it does not remember the previous states.
 Uses an improvement technique
 Start from a single point (current state) in the search
space
 At each iteration a new point is selected from the
neighborhood of the current point
 If the new point is better, it becomes the current point; otherwise another neighbor is selected and tested
 Terminates when there is no more improvement in the neighborhood
Hill Climbing

 Current state: It is the state in the landscape diagram where the agent is currently present.
 Global Maximum: It is the best possible state of the state-space landscape. It has the highest value of the objective function.
 Local Maximum: It is a state which is better than its neighbor states, but there is another state which is higher than it.
 Flat local maximum: It is a flat region of the landscape where all the neighbor states of the current state have the same value.
 Shoulder: It is a plateau region which has an uphill edge.
Hill Climbing

Begin
    Start with an initial solution s
    Evaluate s: f(s)
    While the neighborhood of s contains an improving solution do
        Select s' from the neighborhood of s
        Evaluate s': f(s')
        If f(s') is better than f(s) then
            Replace s by s'
        Endif
    Endwhile
    Return s
End
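A runnable counterpart of this pseudocode, as a minimal Python sketch; the objective f, the neighborhood function, and the maximization convention are assumptions of the example.

def hill_climbing(s0, f, neighbors):
    """Simple hill climbing for a maximization problem: accept the
    first improving neighbor; stop when the whole neighborhood has
    been scanned without improvement (a local optimum)."""
    s = s0
    improved = True
    while improved:
        improved = False
        for n in neighbors(s):        # scan the neighborhood of s
            if f(n) > f(s):           # improving move: accept it
                s, improved = n, True
                break                 # restart from the new current point
    return s

# Toy usage: maximize f(x) = -(x - 3)^2 over the integers
print(hill_climbing(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1]))  # 3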
Hill Climbing

 Problems in Hill Climbing

◦ Local optima

◦ Plateau (flat local maximum)

◦ Ridge
Hill Climbing

 Fitness = number of elements in the wrong cell, relative to the goal configuration
 Minimization problem (the goal state has F(s) = 0)
[Figure: 8-puzzle states with their fitness values. The initial state has F(s) = 5; hill climbing reaches a state with F(s) = 4 that is a local optimum: all of its neighbors have F(s) = 5 or 6, so the search is stuck before reaching the goal.]
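The fitness of this example can be written directly; a small sketch, assuming states are flat tuples read cell by cell, with 0 standing for the blank (the goal layout below is an illustrative assumption).

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, 0 = blank

def misplaced_tiles(state, goal=GOAL):
    """Number of tiles in the wrong cell (the blank is not counted):
    the minimization fitness F(s) used in the 8-puzzle example."""
    return sum(1 for t, g in zip(state, goal) if t != 0 and t != g)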
 How to escape from local optima ?
 Single solution
◦ Simulated Annealing
◦ Tabu Search
◦…

 Population-based
◦ Genetic Algorithms
◦ Particle Swarm Optimization
◦…
Intensification & Diversification
 Intensification
◦ Favor the search toward the optimum (but it may be a local one...)
 Diversification
◦ Try to look at other parts of the search space (but it may be useless)
 Too much intensification → local optimum
 Too much diversification → random search
Simulated Annealing

 Behaves like a greedy algorithm and always accepts improving moves
 If a neighbor solution j is worse than the current solution i, j may still be accepted, depending on the 'temperature' parameter T and on the amount of deterioration f(i) − f(j)
 Early on, when the 'temperature' is high, accepting a non-improving move is more likely than later on
 Poorer solutions are accepted with probability exp(−(f(i) − f(j)) / T)
 The system 'temperature' is controlled in accordance with the cooling schedule
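In code, this acceptance rule can be sketched as follows (Python, maximization convention, matching the deterioration f(i) − f(j) above; the function name is illustrative).

import math
import random

def accept_move(f_i, f_j, T):
    """Always accept an improving move; accept a deterioration
    f_i - f_j > 0 with probability exp(-(f_i - f_j) / T),
    which shrinks as the temperature T cools down."""
    if f_j >= f_i:                                  # improving (or equal) move
        return True
    return random.random() < math.exp(-(f_i - f_j) / T)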
Simulated Annealing
 Based on statistical mechanics
◦ Analogy with the simulation of the annealing of solids

 In order to escape local optima, SA allows moves to neighbors of lower quality than the current solution.

 The probability of moving to such a solution is decreased during the search.
Simulated Annealing
 S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi in 1983, and V. Černý in 1985.
 A local search method using a strategy to avoid local optima
 It is inspired by the process used in thermodynamics: alternating cycles of heating and slow cooling of a solid.
 This process is transposed to Optimization in order to find the extrema of a function.
 Analogy :
◦ Solutions of the combinatorial problem = configurations of particles
◦ Objective function = energy
◦ Move to neighboring solution = state change
◦ Control Parameter = Temperature
Simulated Annealing
 A variable T simulates the heating temperature.
◦ It is initialized to a high value, then decreased (cooling)
 We accept solutions worse than the current solution while the temperature is high. Thus, we allow the algorithm to escape local optima at the start of the process
 The chance of accepting a bad solution decreases as the temperature decreases. Thus, we allow the algorithm to focus on an area where a near-optimum solution can be found
Simulated Annealing
 The Simulated Annealing algorithm iteratively traverses the space of solutions.
 We start with a solution s0, initially generated in a random manner, to which corresponds an initial energy E0, and with an initial temperature T0 which is generally high.
 At each iteration of the algorithm, an elementary change is made to the solution; this modification changes the energy E of the system by ΔE.

◦ If this variation is negative (the new solution improves the objective function and reduces the energy of the system), it is accepted.
◦ If the solution found is worse than the previous one, it is accepted with a probability P computed according to the following Boltzmann distribution:

P = exp(−ΔE / T)
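Gathering the steps above, a minimal Python sketch of this loop (minimization, energy = objective value; the random neighbor generator, the geometric cooling, and the default parameters are assumptions of the example).

import math
import random

def simulated_annealing(s0, energy, random_neighbor,
                        T0=100.0, alpha=0.95, T_min=1e-3):
    """SA for minimization: accept any move that lowers the energy E,
    accept a worsening move with Boltzmann probability exp(-dE/T)."""
    s, E = s0, energy(s0)
    best, best_E = s, E
    T = T0
    while T > T_min:
        s2 = random_neighbor(s)            # elementary change of the solution
        dE = energy(s2) - E                # variation of the system energy
        if dE < 0 or random.random() < math.exp(-dE / T):
            s, E = s2, E + dE              # move accepted
            if E < best_E:
                best, best_E = s, E        # remember the best solution seen
        T *= alpha                         # cooling step
    return best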
Simulated Annealing
 The choice of the temperature is essential to guarantee the balance between intensification and diversification of the search in the solution space.
 The choice of the initial temperature depends on the quality of the starting solution. If this solution is chosen randomly, a relatively high temperature must be taken.
 The following (geometric) cooling rule is often used:

T(k+1) = α · T(k), where α ∈ [0, 1]

 The temperature can be raised when the search seems blocked in a region of the search space. A large increase in temperature acts as a diversification process, while a decrease in temperature corresponds to an intensification process.
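A sketch of this temperature control (the 'blocked' test and the reheat factor are illustrative assumptions, not part of the basic algorithm).

def update_temperature(T, alpha=0.95, blocked=False, reheat=2.0):
    """Geometric cooling T <- alpha * T (intensification); when the
    search seems blocked, reheat T <- reheat * T (diversification)."""
    return T * reheat if blocked else T * alpha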
Tabu Search
 Tabu search (TS) is a local search method combined with a set of techniques to
avoid being trapped in a local minimum or repeating a cycle

 Optimizing the solution with tabu search relies on two ideas:

◦ the use of the notion of neighborhood, and
◦ the use of a memory allowing intelligent guidance of the search process
 In order to escape local optima and cycles, it accepts moves to worse solutions.
 It uses a tabu list of the k last visited nodes (solutions): a partial, short-term memory
 Moving to a solution in the tabu list is forbidden
Tabu Search
 Tabu search does not stop at the first local optimum encountered.
 It always retains the best neighbor solution s', even if it is of poorer quality than the current solution s.
 Poor-quality solutions can have good neighborhoods and can therefore guide the search towards better solutions.
 However, this strategy can create a cycling phenomenon.
 To overcome this problem, tabu search uses a memory that stores the last solutions encountered, so that they are not visited in the next iterations and the search does not fall into repetitive cycling.
Tabu Search
sBest ← s0
bestCandidate ← s0
tabuList ← []
tabuList.push(s0)
while (not stoppingCondition())
    sNeighborhood ← getNeighbors(bestCandidate)
    bestCandidate ← null
    for (sCandidate in sNeighborhood)
        // keep the best neighbor that is not tabu
        if ((not tabuList.contains(sCandidate)) and
            ((bestCandidate = null) or (fitness(sCandidate) > fitness(bestCandidate))))
            bestCandidate ← sCandidate
        end
    end
    if (bestCandidate = null)
        break    // every neighbor is tabu: stop the search
    end
    if (fitness(bestCandidate) > fitness(sBest))
        sBest ← bestCandidate
    end
    tabuList.push(bestCandidate)
    if (tabuList.size > maxTabuSize)
        tabuList.removeFirst()    // forget the oldest tabu solution
    end
end
return sBest
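An equivalent runnable sketch in Python (maximization, as in the pseudocode above; fitness, get_neighbors, the iteration-count stopping rule, and the tabu-list size are assumptions of the example).

from collections import deque

def tabu_search(s0, fitness, get_neighbors, max_iters=1000, max_tabu_size=50):
    """Always move to the best non-tabu neighbor, even if it is worse
    than the current solution; the bounded tabu list (short-term
    memory) forbids revisiting recent solutions and breaks cycles."""
    s_best = s0
    current = s0
    tabu = deque([s0], maxlen=max_tabu_size)    # oldest entries drop out automatically
    for _ in range(max_iters):
        candidates = [n for n in get_neighbors(current) if n not in tabu]
        if not candidates:                      # the whole neighborhood is tabu
            break
        current = max(candidates, key=fitness)  # best admissible neighbor
        if fitness(current) > fitness(s_best):
            s_best = current                    # keep the best solution found so far
        tabu.append(current)
    return s_best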
