L04 Search (MoreSearchStrategies)
Informed Search
Best-first search
Special cases:
greedy best-first search
A* search
Greedy best-first search expands the node that appears to be closest to the goal, i.e., it evaluates nodes by f(n) = h(n)
Complete? No: can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...
Time? O(b^m), but a good heuristic can give dramatic improvement
Space? O(b^m): keeps all nodes in memory
Optimal? No
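A minimal sketch of greedy best-first search in Python (the function names and the dict-based graph in the usage below are illustrative, not part of the slides); a visited set guards against the looping problem noted above:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the frontier node with the smallest heuristic value h(n).
    `neighbors(n)` returns the successors of n; `h(n)` estimates the
    distance from n to the goal. Returns a path or None."""
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:          # visited check avoids endless loops
            continue
        visited.add(node)
        for nxt in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```

Usage on a toy graph: with h = {'A': 3, 'B': 1, 'C': 2, 'D': 0}, the search expands A, then B (smallest h), then reaches D, returning ['A', 'B', 'D'].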
A* search
It combines greedy and uniform-cost search to find the (estimated) cheapest path
through the current node
Evaluation function f(n) = g(n) + h(n), the estimated total cost of path through n
to goal
g(n) = cost so far to reach n (path cost up to n)
h(n) = estimated cost from n to goal
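The evaluation function f(n) = g(n) + h(n) can be sketched as a priority-queue search (an illustrative sketch, not a definitive implementation; `neighbors(n)` is assumed to yield (successor, step_cost) pairs):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: expand the frontier node minimizing f(n) = g(n) + h(n).
    `neighbors(n)` yields (successor, step_cost) pairs; h must be
    admissible for the result to be optimal. Returns (cost, path)."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {}                                  # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

On a toy graph where A→B→D costs 6 and A→C→D costs 5, an admissible h steers the search to return (5, ['A', 'C', 'D']).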
Admissible heuristics
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is
optimistic
Example: hSLD(n) (never overestimates the actual road distance)
Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n
be an unexpanded node in the fringe such that n is on a shortest path to an optimal
goal G.
We shall have:
f(G2) = g(G2)      since h(G2) = 0
      > g(G)       since G2 is suboptimal
      = f(G)       since h(G) = 0
For n on a shortest path to the optimal goal G, since h is admissible:
h(n) ≤ h*(n)
g(n) + h(n) ≤ g(n) + h*(n) = g(G)
f(n) ≤ f(G)
Hence f(G2) > f(G) ≥ f(n), and A* will never select G2 for expansion
Consistent heuristics
A heuristic h is consistent if, for every node n and every successor n' of n generated by an action a,
h(n) ≤ c(n, a, n') + h(n')
If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n, a, n') + h(n')
      ≥ g(n) + h(n)
      = f(n)
i.e., f is non-decreasing along any path
Optimality of A* search
A* is optimally efficient
no other optimal algorithm using the same heuristic is guaranteed to expand fewer nodes than A*
Complexity of A*
The number of nodes within the goal contour search space is still exponential
with respect to the length of the solution
better than other algorithms, but still problematic
Properties of A* search
The value of f never decreases along any path starting from the initial node
also known as monotonicity of the function
almost all admissible heuristics show monotonicity
those that don't can be modified through minor changes
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
Optimal? Yes
Admissible heuristics
h1(S) = 8 (h1 = number of misplaced tiles)
h2(S) = 3+1+2+2+2+3+3+2 = 18 (h2 = total Manhattan distance of the tiles)
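Assuming the usual flat encoding of a board state (a tuple of 9 values read row by row, with 0 for the blank), the two heuristics can be sketched as:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        # row distance + column distance on the 3x3 board
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total
```

For the standard example start state (7, 2, 4, 5, 0, 6, 8, 3, 1) with goal (0, 1, 2, 3, 4, 5, 6, 7, 8), these return h1 = 8 and h2 = 18, matching the values above.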
Dominance
Relaxed problems
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then
h1(n) gives the shortest solution
If the rules are relaxed so that a tile can move to any adjacent square, then
h2(n) gives the shortest solution
relaxed problems
fewer restrictions on the successor function (operators)
its exact solution may be a good heuristic for the original problem
8-Puzzle Heuristics
level of difficulty
around 20 steps for a typical solution
branching factor is about 3
exhaustive search would examine 3^20 ≈ 3.5 * 10^9 states
9!/2 = 181,440 different reachable states
distinct arrangements of 9 squares
generation of heuristics
possible from formal specifications
In many optimization problems, the path to the goal is irrelevant; the goal state
itself is the solution
Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or
diagonal
for some problems, the state description provides all the information required for
a solution
Hill-climbing search
general problem: depending on initial state, can get stuck in local maxima
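A minimal steepest-ascent sketch (function names are illustrative): it returns as soon as no neighbor improves on the current value, which is exactly where it can get stuck on a local maximum.

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor; stop when no neighbor is strictly better (a local
    maximum, which need not be the global one)."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state          # local maximum reached
        state = best
```

For example, maximizing value(x) = -(x - 3)^2 over the integers with neighbors x - 1 and x + 1 climbs from 0 up to the peak at 3.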
h = number of pairs of queens that are attacking each other, either directly or
indirectly
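A sketch of this heuristic, assuming the common complete-state encoding in which queens[c] is the row of the queen in column c (so no two queens ever share a column):

```python
def attacking_pairs(queens):
    """h for n-queens: count pairs of queens that share a row or a
    diagonal. `queens[c]` is the row of the queen in column c."""
    n = len(queens)
    h = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = queens[i] == queens[j]
            same_diag = abs(queens[i] - queens[j]) == j - i
            if same_row or same_diag:
                h += 1
    return h
```

A 4-queens solution such as (1, 3, 0, 2) scores h = 0, while all four queens in one row, (0, 0, 0, 0), scores h = 6 (every pair attacks).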
Idea: escape local maxima by allowing some "bad" moves but gradually decrease
their frequency
analogy to annealing
gradual cooling of a liquid until it freezes
will find the global optimum if the temperature is lowered slowly enough
One can prove: If T decreases slowly enough, then simulated annealing search
will find a global optimum with probability approaching 1
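An illustrative sketch with a geometric cooling schedule (the parameter values and names are arbitrary choices, not prescribed by the algorithm): worsening moves are accepted with probability exp(delta / T), so they become rarer as T falls.

```python
import math
import random

def simulated_annealing(state, neighbor, value,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Maximize `value`: always accept improving moves; accept a
    worsening move with probability exp(delta / T). T shrinks by a
    geometric cooling schedule until it drops below t_min."""
    t = t0
    while t > t_min:
        nxt = neighbor(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling
    return state
```

For example, maximizing value(x) = -(x - 5)^2 with a random ±1 step as the neighbor function typically settles at or next to the global maximum x = 5.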
Beam search
Beam search is a heuristic search algorithm that is an optimization of best-first search
that reduces its memory requirement. Best-first search is a graph search which orders all
partial solutions (states) according to some heuristic which attempts to predict how close
a partial solution is to a complete solution (goal state). In beam search, only a
predetermined number of best partial solutions are kept as candidates.
Beam search uses breadth-first search to build its search tree. At each level of the tree, it
generates all successors of the states at the current level, sorting them in order of
increasing heuristic values. However, it only stores a predetermined number of states at
each level (called the beam width). The smaller the beam width, the more states are
pruned. Therefore, with an infinite beam width, no states are pruned and beam search is
identical to breadth-first search. The beam width bounds the memory required to perform
the search, at the expense of risking completeness (the search may never find a goal state) and optimality (it may not find the best solution). The reason for this risk is that the goal state, or the path leading to it, could be pruned.
The beam width can either be fixed or variable. In a fixed beam width, a maximum
number of successor states is kept. In a variable beam width, a threshold is set around the
current best state. All states that fall outside this threshold are discarded. Thus, in places
where the best path is obvious, a minimal number of states is searched. In places where
the best path is ambiguous, many paths will be searched.
At each step: if any successor is a goal state, stop; otherwise select the k best successors from the complete list and repeat.
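A fixed-width sketch of the procedure described above (names illustrative): at each level, all successors of the current beam are generated, goal states are detected, and only the `width` states with the lowest heuristic values survive the pruning step.

```python
import heapq

def beam_search(starts, successors, h, is_goal, width):
    """Beam search with a fixed beam width: keep only the `width`
    lowest-h states at each level. Pruning the rest is what risks
    completeness and optimality."""
    beam = list(starts)
    while beam:
        candidates = []
        for state in beam:
            for nxt in successors(state):
                if is_goal(nxt):
                    return nxt
                candidates.append(nxt)
        beam = heapq.nsmallest(width, candidates, key=h)  # prune to width
    return None
```

As a toy usage: searching for 10 from 0 with successors x + 1 and x + 2, heuristic |10 - x|, and width 2 reaches the goal while never storing more than two states per level.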