Hill climbing

1. Hill climbing is a simple optimization technique that starts with a random state and iteratively moves to a neighboring state with a better score until no better neighbor exists.
2. It is prone to getting stuck in local optima rather than finding the global optimum. Variations include restarting from random states multiple times or selecting random neighbors rather than always choosing the best.
3. The key design decision is how to define the neighborhood of states, i.e. the set of neighboring states to consider moving to. This impacts efficiency and the ability to escape local optima.


Hill climbing, simulated annealing, genetic algorithm

Xiaojin Zhu
[email protected]
Computer Sciences Department
University of Wisconsin, Madison

[Based on slides from Andrew Moore http://www.cs.cmu.edu/~awm/tutorials]


slide 1
Optimization problems
• Previously we wanted a path from start to goal
  ▪ Uninformed search: g(s): Iterative Deepening
  ▪ Informed search: g(s)+h(s): A*
• Now a different setting:
  ▪ Each state s has a score f(s) that we can compute
  ▪ The goal is to find the state with the highest score, or a reasonably high score
  ▪ We do not care about the path
  ▪ This is an optimization problem
  ▪ Enumerating the states is intractable
  ▪ Even the previous search algorithms are too expensive

slide 2
Examples
• N-queen: f(s) = number of conflicting queens in state s
  Note: here we want the s with the lowest score, f(s)=0. The techniques are the same; low or high should be obvious from context.

slide 3
Examples
• N-queen: f(s) = number of conflicting queens in state s
  Note: here we want the s with the lowest score, f(s)=0. The techniques are the same; low or high should be obvious from context.

• Traveling salesperson problem (TSP)
  ▪ Visit each city once, return to the first city
  ▪ State = order of cities, f(s) = total mileage

slide 4
Examples
• N-queen: f(s) = number of conflicting queens in state s
  Note: here we want the s with the lowest score, f(s)=0. The techniques are the same; low or high should be obvious from context.

• Traveling salesperson problem (TSP)
  ▪ Visit each city once, return to the first city
  ▪ State = order of cities, f(s) = total mileage
• Boolean satisfiability (e.g., 3-SAT)
  ▪ State = assignment to the variables
  ▪ f(s) = # satisfied clauses

  Example clauses:
  A ∨ B ∨ C
  ¬A ∨ C ∨ D
  B ∨ D ∨ ¬E
  ¬C ∨ ¬D ∨ ¬E
  ¬A ∨ ¬C ∨ E

slide 5
1. HILL CLIMBING

slide 6
Hill climbing
• Very simple idea: start from some state s,
  ▪ Move to a neighbor t with a better score. Repeat.
• Question: what's a neighbor?
  ▪ You have to define that!
  ▪ The neighborhood of a state is the set of its neighbors
  ▪ Also called the 'move set'
  ▪ Similar to the successor function in search

slide 7
Neighbors: N-queen
• Example: N-queen (one queen per column). One possibility:

  [Figure: a state s with f(s)=1 and its neighborhood of states]

slide 8
Neighbors: N-queen
• Example: N-queen (one queen per column). One possibility (tie breaking: more promising?):
  ▪ Pick the right-most most-conflicting column;
  ▪ Move the queen in that column vertically to a different location (sketched in code below).

  [Figure: a state s with f(s)=1 and its neighboring states, with scores such as f=1 and f=2]

slide 9
Neighbors: TSP
• state: A-B-C-D-E-F-G-H-A
• f = length of tour

slide 10
Neighbors: TSP
• state: A-B-C-D-E-F-G-H-A
• f = length of tour
• One possibility: 2-change. Remove two edges and reverse the segment between them (sketched in code below):

  A-B-C-D-E-F-G-H-A
        ↓ flip the segment B…E
  A-E-D-C-B-F-G-H-A

slide 11
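A short Python sketch of the 2-change neighborhood (illustrative, not from the slides); a tour is a list of city labels, with the return to the first city left implicit, and the function name two_change_neighbors is an assumption.

# Sketch: all 2-change neighbors of a tour (reverse one contiguous segment).
from itertools import combinations
from typing import List

def two_change_neighbors(tour: List[str]) -> List[List[str]]:
    n = len(tour)
    result = []
    for i, j in combinations(range(1, n), 2):   # keep city 0 fixed as the start
        result.append(tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:])
    return result

tour = list("ABCDEFGH")
# One of the neighbors is the slide's example A-E-D-C-B-F-G-H (segment B..E reversed).
print(["".join(t) for t in two_change_neighbors(tour)][:5])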
Neighbors: SAT
• State: (A=T, B=F, C=T, D=T, E=T)
• f = number of satisfied clauses
• Neighbor:

  A ∨ B ∨ C
  ¬A ∨ C ∨ D
  B ∨ D ∨ ¬E
  ¬C ∨ ¬D ∨ ¬E
  ¬A ∨ ¬C ∨ E

slide 12
Neighbors: SAT
• State: (A=T, B=F, C=T, D=T, E=T)
• f = number of satisfied clauses
• Neighbor: flip the assignment of one variable (sketched in code below)

  Neighbors:                    Clauses:
  (A=F, B=F, C=T, D=T, E=T)     A ∨ B ∨ C
  (A=T, B=T, C=T, D=T, E=T)     ¬A ∨ C ∨ D
  (A=T, B=F, C=F, D=T, E=T)     B ∨ D ∨ ¬E
  (A=T, B=F, C=T, D=F, E=T)     ¬C ∨ ¬D ∨ ¬E
  (A=T, B=F, C=T, D=T, E=F)     ¬A ∨ ¬C ∨ E

slide 13
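The same example in a small Python sketch (not from the slides): clauses are lists of (variable, wanted value) pairs, so ("E", False) stands for ¬E; the names CLAUSES, f and neighbors are chosen for this illustration.

# Sketch: f(s) and the flip neighborhood for the 3-SAT example above.
from typing import Dict, List, Tuple

Clause = List[Tuple[str, bool]]   # ("E", False) means the literal ¬E

CLAUSES: List[Clause] = [
    [("A", True), ("B", True), ("C", True)],      # A ∨ B ∨ C
    [("A", False), ("C", True), ("D", True)],     # ¬A ∨ C ∨ D
    [("B", True), ("D", True), ("E", False)],     # B ∨ D ∨ ¬E
    [("C", False), ("D", False), ("E", False)],   # ¬C ∨ ¬D ∨ ¬E
    [("A", False), ("C", False), ("E", True)],    # ¬A ∨ ¬C ∨ E
]

def f(state: Dict[str, bool]) -> int:
    """Number of satisfied clauses."""
    return sum(any(state[v] == want for v, want in clause) for clause in CLAUSES)

def neighbors(state: Dict[str, bool]) -> List[Dict[str, bool]]:
    """Flip the assignment of one variable at a time."""
    return [{**state, v: not state[v]} for v in state]

s = {"A": True, "B": False, "C": True, "D": True, "E": True}
print(f(s), [f(t) for t in neighbors(s)])   # 4, and the scores of the five flip neighbors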
Hill climbing
• Question: What's a neighbor?
  ▪ (vaguely) Problems tend to have structure. A small change produces a neighboring state.
  ▪ The neighborhood must be small enough for efficiency
  ▪ Designing the neighborhood is critical. This is the real ingenuity – not the decision to use hill climbing.
• Question: Pick which neighbor?
• Question: What if no neighbor is better than the current state?

slide 14
Hill climbing
• Question: What's a neighbor?
  ▪ (vaguely) Problems tend to have structure. A small change produces a neighboring state.
  ▪ The neighborhood must be small enough for efficiency
  ▪ Designing the neighborhood is critical. This is the real ingenuity – not the decision to use hill climbing.
• Question: Pick which neighbor? The best one (greedy)
• Question: What if no neighbor is better than the current state? Stop. (Doh!)

slide 15
Hill climbing algorithm

1. Pick initial state s
2. Pick t in neighbors(s) with the largest f(t)
3. IF f(t) ≤ f(s) THEN stop, return s
4. s = t. GOTO 2.

• Not the most sophisticated algorithm in the world.
• Very greedy.
• Easily stuck.

slide 16
Hill climbing algorithm

1. Pick initial state s
2. Pick t in neighbors(s) with the largest f(t)
3. IF f(t) ≤ f(s) THEN stop, return s
4. s = t. GOTO 2.

• Not the most sophisticated algorithm in the world.
• Very greedy.
• Easily stuck. Your enemy: local optima! (A code sketch of the loop follows below.)

slide 17
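A minimal Python rendering of this loop (a sketch: f and neighbors are assumed to be supplied per problem, and the toy landscape at the end is made up for illustration).

# Sketch: greedy hill climbing, moving to the best neighbor until none improves f.
import random
from typing import Callable, List, TypeVar

S = TypeVar("S")

def hill_climb(s: S, f: Callable[[S], float], neighbors: Callable[[S], List[S]]) -> S:
    while True:
        candidates = neighbors(s)
        if not candidates:
            return s
        t = max(candidates, key=f)          # step 2: best-scoring neighbor
        if f(t) <= f(s):                    # step 3: no improvement -> local optimum
            return s
        s = t                               # step 4: accept the move and repeat

# Toy usage: maximize f(x) = -(x - 7)^2 over integers, moving by ±1.
best = hill_climb(random.randint(0, 20),
                  f=lambda x: -(x - 7) ** 2,
                  neighbors=lambda x: [x - 1, x + 1])
print(best)   # climbs to 7, the single peak of this toy landscape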
Local optima in hill climbing
• Useful conceptual picture: the f surface = ‘hills’ in state space
  [Figure: f plotted against the state, with the global optimum marked “where we want to be”]
• But we can’t see the landscape all at once. We only see the neighborhood. Climb in fog.
  [Figure: the same landscape, with fog hiding everything outside the current neighborhood]
slide 18
Local optima in hill climbing
• Local optima (there can be many!)
  [Figure: a landscape with several local optima; at one of them: “Declare top-of-the-world?”]
• Plateaux
  [Figure: a flat stretch of the landscape: “Where shall I go?”]

slide 19
Local optima in hill climbing
• Local optima (there can be many!)
  [Figure: the same local optimum, now in fog: “Declare top of the world?”]
• Plateaux
  [Figure: the same plateau, now in fog: “Where shall I go?”]

The rest of the lecture is about escaping local optima.

slide 20
Not every local minimum should be escaped

slide 21
Repeated hill climbing with random restarts
• Very simple modification:
  1. When stuck, pick a random new start and run basic hill climbing from there.
  2. Repeat this k times.
  3. Return the best of the k local optima.

• Can be very effective
• Should be tried whenever hill climbing is used (a code sketch follows below)

slide 22
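A sketch of the restart wrapper (illustrative, not from the slides): it reuses the hill_climb sketch shown earlier, and random_state is an assumed problem-specific generator of fresh starting states.

# Sketch: run basic hill climbing from k random starts, keep the best local optimum.
from typing import Callable, List, TypeVar

S = TypeVar("S")

def hill_climb_with_restarts(random_state: Callable[[], S],
                             f: Callable[[S], float],
                             neighbors: Callable[[S], List[S]],
                             k: int = 10) -> S:
    best = None
    for _ in range(k):
        candidate = hill_climb(random_state(), f, neighbors)   # hill_climb from the earlier sketch
        if best is None or f(candidate) > f(best):
            best = candidate
    return best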
Variations of hill climbing
• Question: How do we make hill climbing less greedy?

slide 23
Variations of hill climbing
• Question: How do we make hill climbing less greedy?
  ▪ Stochastic hill climbing
    • Randomly select among the better neighbors
    • The better, the more likely
    • Pros / cons compared with basic hill climbing?

slide 24
Variations of hill climbing
• Question: How do we make hill climbing less greedy?
  ▪ Stochastic hill climbing
    • Randomly select among the better neighbors
    • The better, the more likely
    • Pros / cons compared with basic hill climbing?

• Question: What if the neighborhood is too large to enumerate? (e.g. N-queen if we need to pick both the column and the move within it)

slide 25
Variations of hill climbing
• Question: How do we make hill climbing less greedy?
  ▪ Stochastic hill climbing
    • Randomly select among the better neighbors
    • The better, the more likely
    • Pros / cons compared with basic hill climbing?

• Question: What if the neighborhood is too large to enumerate? (e.g. N-queen if we need to pick both the column and the move within it)
  ▪ First-choice hill climbing (sketched in code below)
    • Randomly generate neighbors, one at a time
    • If better, take the move
    • Pros / cons compared with basic hill climbing?

slide 26
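A sketch of first-choice hill climbing (illustrative, not from the slides): random_neighbor draws one random neighbor rather than enumerating the whole neighborhood, and the max_tries stopping rule is an assumption.

# Sketch: accept the first randomly generated neighbor that improves f.
import random
from typing import Callable, TypeVar

S = TypeVar("S")

def first_choice_hill_climb(s: S,
                            f: Callable[[S], float],
                            random_neighbor: Callable[[S], S],
                            max_tries: int = 1000) -> S:
    tries = 0
    while tries < max_tries:
        t = random_neighbor(s)
        if f(t) > f(s):
            s, tries = t, 0        # improvement found: move and reset the counter
        else:
            tries += 1             # give up after max_tries non-improving samples
    return s

# Toy usage on the same toy landscape as before.
result = first_choice_hill_climb(0, lambda x: -(x - 7) ** 2,
                                 lambda x: x + random.choice([-1, 1]))
print(result)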
Variations of hill climbing
• We are still greedy! Only willing to move upwards.
• Important observation in life:

  Sometimes one needs to temporarily step back in order to move forward
  =  sometimes one needs to move to an inferior neighbor in order to escape a local optimum.

slide 27
Variations of hill climbing
WALKSAT [Selman]
• Pick a random unsatisfied clause
• Consider 3 neighbors: flip each of its variables
• If any improves f, accept the best
• If none improves f:
  ▪ 50% of the time pick the least bad neighbor
  ▪ 50% of the time pick a random neighbor

This is the best known algorithm for satisfying Boolean formulae (a code sketch follows below).

  A ∨ B ∨ C
  ¬A ∨ C ∨ D
  B ∨ D ∨ ¬E
  ¬C ∨ ¬D ∨ ¬E
  ¬A ∨ ¬C ∨ E

slide 28
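A sketch of one WALKSAT move (an approximation of the recipe above, reusing CLAUSES and f from the earlier SAT sketch; the 100-iteration loop and other details are assumptions for illustration).

# Sketch: one WALKSAT step = repair a random unsatisfied clause.
import random
from typing import Dict

def walksat_step(state: Dict[str, bool], p_random: float = 0.5) -> Dict[str, bool]:
    unsat = [c for c in CLAUSES
             if not any(state[v] == want for v, want in c)]
    if not unsat:
        return state                               # already a satisfying assignment
    clause = random.choice(unsat)
    # The 3 neighbors: flip each variable of the chosen clause.
    flips = [{**state, v: not state[v]} for v, _ in clause]
    best = max(flips, key=f)
    if f(best) > f(state):
        return best                                # greedy repair if it improves f
    # Otherwise: 50% the least-bad neighbor, 50% a random neighbor (the random-walk part).
    return best if random.random() < 1 - p_random else random.choice(flips)

s = {"A": False, "B": False, "C": False, "D": False, "E": False}
for _ in range(100):                               # iterate until satisfied or bored
    s = walksat_step(s)
print(f(s), s)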
2. SIMULATED ANNEALING

slide 29
Simulated Annealing
anneal
• To subject (glass or metal) to a process of heating
and slow cooling in order to toughen and reduce
brittleness.

slide 30
Simulated Annealing

1. Pick initial state s
2. Randomly pick t in neighbors(s)
3. IF f(t) is better THEN accept s ← t.
4. ELSE /* t is worse than s */
5.   accept s ← t with a small probability
6. GOTO 2 until bored.

slide 31
Simulated Annealing

1. Pick initial state s
2. Randomly pick t in neighbors(s)
3. IF f(t) is better THEN accept s ← t.
4. ELSE /* t is worse than s */
5.   accept s ← t with a small probability
6. GOTO 2 until bored.

How to choose the small probability?
  idea 1: p = 0.1

slide 32
Simulated Annealing

1. Pick initial state s
2. Randomly pick t in neighbors(s)
3. IF f(t) is better THEN accept s ← t.
4. ELSE /* t is worse than s */
5.   accept s ← t with a small probability
6. GOTO 2 until bored.

How to choose the small probability?
  idea 1: p = 0.1
  idea 2: p decreases with time

slide 33
Simulated Annealing

1. Pick initial state s
2. Randomly pick t in neighbors(s)
3. IF f(t) is better THEN accept s ← t.
4. ELSE /* t is worse than s */
5.   accept s ← t with a small probability
6. GOTO 2 until bored.

How to choose the small probability?
  idea 1: p = 0.1
  idea 2: p decreases with time
  idea 3: p decreases with time, and also as the ‘badness’ |f(s) − f(t)| increases

slide 34
Simulated Annealing
• If f(t) is better than f(s), always accept t
• Otherwise, accept t with probability

    p = exp( −|f(s) − f(t)| / Temp )        (the Boltzmann distribution)

slide 35
Simulated Annealing
• If f(t) is better than f(s), always accept t
• Otherwise, accept t with probability

    p = exp( −|f(s) − f(t)| / Temp )        (the Boltzmann distribution)

• Temp is a temperature parameter that ‘cools’ (anneals) over time, e.g. Temp ← Temp * 0.9, which gives Temp = T0 * 0.9^#iterations
  ▪ High temperature: almost always accept any t
  ▪ Low temperature: behaves like first-choice hill climbing
• If the ‘badness’ (formally, the energy difference) |f(s) − f(t)| is large, the probability is small.

slide 36
SA algorithm
// assuming we want to maximize f()
current = Initial-State(problem)
for t = 1 to ∞ do
  T = Schedule(t)    // T is the current temperature, monotonically decreasing with t
  if T = 0 then return current    // halt when the temperature reaches 0
  next = Select-Random-Successor-State(current)
  deltaE = f(next) - f(current)   // if positive, next is better than current; otherwise worse
  if deltaE > 0 then current = next    // always move to a better state
  else current = next with probability p = exp(deltaE / T)    // as T → 0, p → 0; as deltaE → −∞, p → 0
end
slide 37
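A runnable Python sketch of the pseudocode above (the geometric schedule, the tiny-temperature halting threshold, and the toy landscape are assumptions made for illustration, not part of the slides).

# Sketch: simulated annealing for maximizing f, with Temp = t0 * cooling**t.
import math
import random
from typing import Callable, TypeVar

S = TypeVar("S")

def simulated_annealing(initial: S,
                        f: Callable[[S], float],
                        random_neighbor: Callable[[S], S],
                        t0: float = 10.0,
                        cooling: float = 0.95) -> S:
    current = initial
    t = 0
    while True:
        temp = t0 * cooling ** t            # Schedule(t)
        if temp < 1e-6:                     # treat a tiny temperature as 0: halt
            return current
        nxt = random_neighbor(current)
        delta_e = f(nxt) - f(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / temp):
            current = nxt                   # always accept better; worse with prob e^(deltaE/T)
        t += 1

# Toy usage on the earlier toy landscape; typically ends near the peak at 7.
result = simulated_annealing(0, lambda x: -(x - 7) ** 2,
                             lambda x: x + random.choice([-1, 1]))
print(result)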
Simulated Annealing issues
• The cooling scheme is important
• Neighborhood design is the real ingenuity, not the decision to use simulated annealing.
• Not much to say theoretically
  ▪ With an infinitely slow cooling rate, it finds the global optimum with probability 1.
• Based on the Metropolis algorithm (1953) and the analogy that alloys manage to find a near-global minimum-energy state when annealed slowly.
• Easy to implement.
• Try hill climbing with random restarts first!

slide 38
GENETIC ALGORITHM

http://www.genetic-programming.org/
slide 39
Evolution
• Survival of the fittest, a.k.a. natural selection
• Genes encoded as DNA (deoxyribonucleic acid), sequence of
bases: A (Adenine), C (Cytosine), T (Thymine) and G (Guanine)
• The chromosomes from the parents exchange randomly by a process called crossover. Therefore, the offspring exhibit some traits of the father and some traits of the mother.
  ▪ Requires genetic diversity among the parents to ensure sufficiently varied offspring
• A rarer process called mutation also changes the genes (e.g. from a cosmic ray).
  ▪ Nonsensical/deadly mutated organisms die.
  ▪ Beneficial mutations produce “stronger” organisms.
  ▪ Neither: organisms aren’t improved.

slide 40
Natural selection
• Individuals compete for resources
• Individuals with better genes have a larger chance to
produce offspring, and vice versa
• After many generations, the population consists of many genes from the superior individuals, and fewer from the inferior individuals
• Superiority defined by fitness to the environment
• Popularized by Darwin
• Mistake of Lamarck: environment does not force an
individual to change its genes

slide 41
Genetic algorithm
• Yet another AI algorithm based on real-world analogy
• Yet another heuristic stochastic search algorithm
• Each state s is called an individual. Often (carefully)
coded up as a string.

(3 2 7 5 2 4 1 1)

• The score f(s) is called the fitness of s. Our goal is to


find the global optimum (fittest) state.
• At any time we keep a fixed number of states. They
are called the population. Similar to beam search.

slide 42
Individual encoding
• The “DNA”
• Satisfiability problem: (A B C D E) = (T F T T T)
• TSP: A-E-D-C-B-F-G-H-A

  Example clauses:
  A ∨ B ∨ C
  ¬A ∨ C ∨ D
  B ∨ D ∨ ¬E
  ¬C ∨ ¬D ∨ ¬E
  ¬A ∨ ¬C ∨ E

slide 43
Genetic algorithm
• Genetic algorithm: a special way to generate
neighbors, using the analogy of cross-over, mutation,
and natural selection.

slide 44
Genetic algorithm
• Genetic algorithm: a special way to generate
neighbors, using the analogy of cross-over, mutation,
and natural selection.

  [Figure: number of non-attacking pairs → fitness; prob. of reproduction ∝ fitness]

slide 45
Genetic algorithm
• Genetic algorithm: a special way to generate
neighbors, using the analogy of cross-over, mutation,
and natural selection.

  [Figure: number of non-attacking pairs → fitness; prob. of reproduction ∝ fitness → next generation]

slide 46
Genetic algorithm
• Genetic algorithm: a special way to generate
neighbors, using the analogy of cross-over, mutation,
and natural selection.

  [Figure: number of non-attacking pairs → fitness; prob. of reproduction ∝ fitness → next generation]

slide 47
Genetic algorithm (one variety)
1. Let s1, …, sN be the current population
2. Let pi = f(si) / Σj f(sj) be the reproduction probability
3. FOR k = 1; k < N; k += 2
   • parent1 = randomly pick according to p
   • parent2 = randomly pick another
   • randomly select a crossover point, swap the strings of parents 1 and 2 to generate children t[k], t[k+1]
4. FOR k = 1; k <= N; k++
   • randomly mutate each position in t[k] with a small probability (the mutation rate)
5. The new generation replaces the old: {s} ← {t}. Repeat. (A code sketch follows below.)

slide 48
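A sketch of one generation of this GA for fixed-length binary strings (illustrative, not from the slides; the fitness function, population size, mutation rate, and sampling parents with replacement are assumptions).

# Sketch: proportional selection + single-point crossover + bit-flip mutation.
import random
from typing import Callable, List

def next_generation(pop: List[str], f: Callable[[str], float],
                    mutation_rate: float = 0.01) -> List[str]:
    total = sum(f(s) for s in pop)
    probs = [f(s) / total for s in pop]                   # p_i = f(s_i) / sum_j f(s_j)
    children: List[str] = []
    while len(children) < len(pop):
        p1, p2 = random.choices(pop, weights=probs, k=2)  # parents by fitness (with replacement)
        cut = random.randint(1, len(p1) - 1)              # single crossover point
        children += [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
    # Mutation: flip each bit with a small probability.
    return ["".join(("1" if b == "0" else "0") if random.random() < mutation_rate else b
                    for b in child)
            for child in children[:len(pop)]]

# Toy run: evolve 8-bit strings toward all ones (fitness = number of ones, +1 to stay positive).
pop = ["".join(random.choice("01") for _ in range(8)) for _ in range(20)]
for _ in range(50):
    pop = next_generation(pop, f=lambda s: s.count("1") + 1)
print(max(pop, key=lambda s: s.count("1")))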
Proportional selection
• pi = f(si) / Σj f(sj)
• Example: fitness values 5, 20, 11, 8, 6, so Σj f(sj) = 5+20+11+8+6 = 50
• p1 = 5/50 = 10%, p2 = 20/50 = 40%, and so on (checked in code below)

slide 49
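As a quick check of the arithmetic above, a tiny Python sketch using the slide's fitness values:

fitness = [5, 20, 11, 8, 6]
total = sum(fitness)                      # 50
probs = [x / total for x in fitness]      # [0.1, 0.4, 0.22, 0.16, 0.12]
print(total, probs)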
Variations of genetic algorithm
• Parents may survive into the next generation
• Use ranking instead of f(s) in computing the
reproduction probabilities.
• Cross over random bits instead of chunks.
• Optimize over sentences from a programming
language. Genetic programming.
• …

slide 50
Genetic algorithm issues
• State encoding is the real ingenuity, not the decision to use a genetic algorithm.
• Lack of diversity can lead to premature convergence and a non-optimal solution
• Not much to say theoretically
  ▪ Crossover (sexual reproduction) is much more efficient than mutation (asexual reproduction).
• Easy to implement.
• Try hill climbing with random restarts first!

slide 51
