Heuristic Function

The document discusses different types of heuristic functions that can be used to guide search algorithms for solving problems like the 8-puzzle. It explains that admissible heuristics do not overestimate the cost to reach the goal and that more informative heuristics can improve search performance. The document also describes how relaxed problems can be used to define admissible heuristics and covers local search algorithms like hill-climbing, simulated annealing, and genetic algorithms.

Informed search algorithms

Chapter 4
Material
• Chapter 4 Section 1 - 3
• Exclude memory-bounded heuristic search
Heuristic functions
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the number of squares each tile is from its desired location, summed over all tiles)

• h1(S) = ?
• h2(S) = ?

Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the number of squares each tile is from its desired location, summed over all tiles)

• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
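The two heuristics can be computed directly. Below is a minimal sketch, assuming the standard AIMA start state (7 2 4 / 5 0 6 / 8 3 1, with 0 as the blank, which the original slide shows as a figure) and goal 0 1 2 / 3 4 5 / 6 7 8; this is consistent with the answers h1(S) = 8 and h2(S) = 18 above:

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # blank in the top-left corner

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, v in enumerate(state) if v != 0 and v != goal[i])

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    pos = {v: i for i, v in enumerate(goal)}  # goal index of each tile
    total = 0
    for i, v in enumerate(state):
        if v == 0:
            continue
        g = pos[v]
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

# Assumed start state from the slide's figure, row by row.
S = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(S), h2(S))  # 8 18
```

Both are admissible: every misplaced tile needs at least one move (h1), and at least its Manhattan distance in moves (h2).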

Dominance
• If h2(n) ≥ h1(n) for all n (with both admissible),
then h2 dominates h1
• h2 is better for search
• Typical search costs (average number of nodes expanded):
• d=12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
• d=24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
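A related point, added here as a sketch beyond the slide text: given several admissible heuristics, their pointwise maximum is also admissible and dominates each component, so it is at least as good for search:

```python
def h_max(*heuristics):
    """Combine admissible heuristics; the max never overestimates
    if none of the components do, and it dominates each of them."""
    return lambda n: max(h(n) for h in heuristics)

# Toy illustration with two hypothetical heuristics on integer states.
h_a = lambda n: abs(n)       # assumed admissible estimates
h_b = lambda n: abs(n) // 2
h = h_max(h_a, h_b)
print(h(-6))  # 6
```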

Relaxed problems
• A problem with fewer restrictions on the actions
is called a relaxed problem
• The cost of an optimal solution to a relaxed
problem is an admissible heuristic for the
original problem
• If the rules of the 8-puzzle are relaxed so that a
tile can move anywhere, then h1(n) gives the
shortest solution
• If the rules are relaxed so that a tile can move to
any adjacent square, then h2(n) gives the
shortest solution

Local search algorithms
• In many optimization problems, the path to the
goal is irrelevant; the goal state itself is the
solution
• State space = set of "complete" configurations
• Find configuration satisfying constraints, e.g., n-
queens
• In such cases, we can use local search
algorithms
• keep a single "current" state, try to improve it
Example: n-queens
• Put n queens on an n × n board with no
two queens on the same row, column, or
diagonal

Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia"


Hill-climbing search
• Problem: depending on initial state, can
get stuck in local maxima


Hill-climbing search: 8-queens problem

• h = number of pairs of queens that are attacking each other, either directly
or indirectly
• h = 17 for the above state
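A hill climber for 8-queens using this h can be sketched as follows; the state encoding (one queen per column, state[i] = row of the queen in column i) and the steepest-move choice are implementation assumptions:

```python
import random

def attacking_pairs(state):
    """h: number of pairs of queens attacking each other (row or diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    """Repeatedly make the single-queen move that most reduces h;
    stop when no neighbor improves (a local, possibly global, minimum)."""
    while True:
        h = attacking_pairs(state)
        best_h, best_state = h, state
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    neighbor = state[:col] + (row,) + state[col + 1:]
                    nh = attacking_pairs(neighbor)
                    if nh < best_h:
                        best_h, best_state = nh, neighbor
        if best_h == h:          # no neighbor improves
            return state, h
        state = best_state

random.seed(0)
start = tuple(random.randrange(8) for _ in range(8))
solution, h = hill_climb(start)
print(h)  # often stuck at a local minimum with h > 0
```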


Hill-climbing search: 8-queens problem

• A local minimum with h = 1



Simulated annealing search
• Idea: escape local maxima by allowing some
"bad" moves but gradually decrease their
frequency
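A minimal sketch of this idea on 8-queens: improving moves are always accepted, while "bad" moves are accepted with probability e^(ΔE/T), which falls as the temperature T decreases. The exponential cooling schedule and parameter values are assumptions:

```python
import math
import random

def attacking_pairs(state):
    """Energy to minimize: pairs of queens attacking each other."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def simulated_annealing(state, t0=10.0, cooling=0.995, steps=20000):
    h = attacking_pairs(state)
    t = t0
    for _ in range(steps):
        if h == 0:
            break
        col, row = random.randrange(8), random.randrange(8)
        neighbor = state[:col] + (row,) + state[col + 1:]
        nh = attacking_pairs(neighbor)
        delta = h - nh                      # positive = improvement
        # Accept improvements always; bad moves with prob e^(delta/T).
        if delta > 0 or random.random() < math.exp(delta / t):
            state, h = neighbor, nh
        t *= cooling                        # gradually lower the temperature
    return state, h

random.seed(1)
start = tuple(random.randrange(8) for _ in range(8))
best, h = simulated_annealing(start)
print(h)
```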


Properties of simulated annealing search
• One can prove: If T decreases slowly enough,
then simulated annealing search will find a
global optimum with probability approaching 1

• Widely used in VLSI layout, airline scheduling, etc.
Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat
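The steps above can be sketched on 8-queens as follows; the beam width k = 4, the iteration cap, and the use of attacking pairs as the score are assumptions:

```python
import random

def attacking_pairs(state):
    """Score to minimize: pairs of queens attacking each other."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def neighbors(state):
    """All states reachable by moving one queen within its column."""
    for col in range(len(state)):
        for row in range(len(state)):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]

def local_beam_search(k=4, max_iters=100):
    states = [tuple(random.randrange(8) for _ in range(8)) for _ in range(k)]
    for _ in range(max_iters):
        if any(attacking_pairs(s) == 0 for s in states):
            break
        # Pool the successors of all k states; keep the k best overall.
        pool = {n for s in states for n in neighbors(s)}
        states = sorted(pool, key=attacking_pairs)[:k]
    return min(states, key=attacking_pairs)

random.seed(2)
best = local_beam_search()
print(attacking_pairs(best))
```

Unlike k independent restarts, the shared pool lets promising states crowd out stuck ones.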
Genetic algorithms
• A successor state is generated by combining two parent
states
• Start with k randomly generated states (population)
• A state is represented as a string over a finite alphabet
(often a string of 0s and 1s)

• Evaluation function (fitness function): higher values for better states
• Produce the next generation of states by selection,
crossover, and mutation
Genetic algorithms

• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
• Probability of selection is proportional to fitness, e.g., 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
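Putting the pieces together for 8-queens, following the slides (fitness = non-attacking pairs, fitness-proportional selection, crossover, mutation); the population size, mutation rate, and one-point crossover are parameter assumptions:

```python
import random

def fitness(state):
    """Non-attacking pairs of queens; 28 means a solution."""
    n = len(state)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacks

def crossover(a, b):
    """One-point crossover of two parent strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(state, rate=0.1):
    """With probability `rate`, move one random queen to a random row."""
    if random.random() < rate:
        col = random.randrange(len(state))
        state = state[:col] + (random.randrange(8),) + state[col + 1:]
    return state

def genetic_algorithm(pop_size=20, generations=500):
    pop = [tuple(random.randrange(8) for _ in range(8)) for _ in range(pop_size)]
    for _ in range(generations):
        if max(map(fitness, pop)) == 28:
            break
        weights = [fitness(s) for s in pop]   # fitness-proportional selection
        pop = [mutate(crossover(random.choices(pop, weights)[0],
                                random.choices(pop, weights)[0]))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

random.seed(3)
best = genetic_algorithm()
print(fitness(best))
```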
