
Informed Searches

Dr. Azhar Mahmood


Associate Professor
Email: [email protected]
Informed Search Strategies
A* Search: evaluation function f(n) = g(n) + h(n)

A* (A Star)
• Greedy Search minimizes a heuristic h(n), the estimated cost from a node n to the goal state. Greedy Search is efficient, but it is neither optimal nor complete.
• Uniform Cost Search minimizes the cost g(n) from the initial state to n. UCS is optimal and complete, but not efficient.
• New strategy: combine Greedy Search and UCS to get an efficient algorithm that is complete and optimal.

Uniform-cost orders by path cost, or backward cost: g(n).
Greedy orders by goal proximity, or forward cost: h(n).
A* Search orders by the sum: f(n) = g(n) + h(n).
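The three orderings differ only in the priority key used for the frontier. A minimal sketch: the frontier entries and their numbers below are hypothetical, chosen only so that the three strategies pick different nodes.

```python
import heapq

# Hypothetical frontier entries: (node, g = cost so far, h = estimate to goal).
frontier = [("X", 10, 1), ("Y", 3, 9), ("Z", 5, 4)]

def first_popped(key):
    """Return the node a best-first search would expand next under `key`."""
    heap = [(key(g, h), name) for name, g, h in frontier]
    heapq.heapify(heap)
    return heap[0][1]

ucs    = first_popped(lambda g, h: g)      # backward cost g(n)
greedy = first_popped(lambda g, h: h)      # forward cost h(n)
astar  = first_popped(lambda g, h: g + h)  # f(n) = g(n) + h(n)
print(ucs, greedy, astar)                  # Y X Z
```

UCS picks the cheapest path so far (Y), greedy the node that looks closest to the goal (X), and A* the node with the best combined estimate (Z).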
A* (A Star)
• A* uses an evaluation function that combines g(n) and h(n): f(n) = g(n) + h(n)
• g(n) is the exact cost to reach node n from the initial state.
• h(n) is an estimate of the remaining cost from n to the goal.
A* (A Star)
[Diagram: the path from the start to node n has cost g(n); the estimated remaining cost from n to the goal is h(n); together f(n) = g(n) + h(n).]
Approach 3: f measures the total cost of the solution path (Admissible Heuristic Functions)
• A heuristic function f(n) = g(n) + h(n) is admissible if h(n) never overestimates the cost to reach the goal.
– Admissible heuristics are "optimistic": "the cost is not that much …"
• However, g(n) is the exact cost to reach node n from the initial state.
• Therefore, f(n) never overestimates the true cost to reach the goal state through node n.
• Theorem: A* search is optimal if h(n) is admissible, i.e. the search using h(n) returns an optimal solution.
• Given two admissible heuristics with h2(n) ≥ h1(n) for all n, it is always at least as efficient to use h2(n).
– h2 is more realistic than h1 (more informed), though both are optimistic.
A* Search

Bucharest appears on the fringe, but it is not selected for expansion, since its cost (450) is higher than that of Pitesti (417).

Important to understand for the proof of optimality of A*.
A* search example
Using f(n) = g(n) + h(n): Arad --- Sibiu --- Rimnicu --- Pitesti --- Bucharest
Claim: optimal path found! Note: Bucharest appears twice in the tree.

1) Can it go wrong?
Note: Greedy best-first gives Arad --- Sibiu --- Fagaras --- Bucharest.
2) What's special about "straight-line distance" to the goal?
It underestimates the true path distance!
3) What if all our estimates to the goal are 0?
E.g. h(n) = 0  f(n) = g(n): uniform cost search.
4) What if we overestimate?
5) What if h(n) is the true distance h*(n)? What is f(n)?
The shortest distance through n --- perfect heuristics --- no search.
A* Search
Start state: A. Goal state: I. f(n) = g(n) + h(n); g(n) is the exact cost to reach node n from the initial state.

Edge costs: A–B 75, A–C 118, A–E 140, C–D 111, E–G 80, E–F 99, G–H 97, F–I 211, H–I 101

Heuristic h(n): A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 98, I 0
A* Search: Tree Search
Expansion trace (f-values in brackets):
1. Expand A (start): E [393], C [447], B [449].
2. Expand E [393]: G [413], F [417].
3. Expand G [413]: H [415].
4. Expand H [415]: I [418] (goal generated, not yet selected).
5. Expand F [417]: I [450] (a second copy of the goal, via F).
6. Select I [418]: goal reached via A–E–G–H–I with cost 418.
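The bracketed f-values on these slides can be checked by direct arithmetic, using the edge costs and h-values from the graph slide above:

```python
h = {"A": 366, "B": 374, "C": 329, "E": 253, "F": 178, "G": 193, "H": 98, "I": 0}

# f(n) = g(n) + h(n) along each path generated in the trace:
f_E = 140 + h["E"]                  # A-E
f_B = 75 + h["B"]                   # A-B
f_C = 118 + h["C"]                  # A-C
f_G = 140 + 80 + h["G"]             # A-E-G
f_F = 140 + 99 + h["F"]             # A-E-F
f_H = 140 + 80 + 97 + h["H"]        # A-E-G-H
f_I = 140 + 80 + 97 + 101 + h["I"]  # A-E-G-H-I, the optimal path
f_I2 = 140 + 99 + 211 + h["I"]      # A-E-F-I, the second copy of the goal
print(f_E, f_B, f_C, f_G, f_F, f_H, f_I, f_I2)   # 393 449 447 413 417 415 418 450
```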
A* Algorithm
A* with systematic checking for repeated states:
1. Initialize the search queue Q to empty.
2. Place the start state s in Q with f value h(s).
3. If Q is empty, return failure.
4. Take the node n from Q with the lowest f value.
(Keep Q sorted by f values and pick the first element.)
5. If n is a goal node, stop and return the solution.
6. Generate the successors of node n.
7. For each successor n' of n do:
a) Compute f(n') = g(n) + cost(n,n') + h(n').
b) If n' is new (never generated before), add n' to Q.
c) If n' is already in Q with a higher f value, replace that entry with the current f(n') and place it in sorted order in Q.
End for
8. Go back to step 3.
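A minimal Python sketch of this pseudocode, using a heap for Q. The dict-of-dicts graph encoding is an assumption; note that the goal test happens when a node is taken from Q (step 5), which is what the optimality argument requires.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* with repeated-state checking: Q is ordered by f(n) = g(n) + h(n)."""
    Q = [(h[start], 0, start, [start])]   # step 2: start enters with f = h(s)
    best_f = {start: h[start]}            # lowest f seen per node (steps 7b/7c)
    while Q:                              # step 3: empty Q means failure
        f, g, n, path = heapq.heappop(Q)  # step 4: lowest f first
        if n == goal:                     # step 5
            return path, g
        for succ, cost in graph[n].items():   # steps 6-7
            f2 = g + cost + h[succ]       # 7a: f(n') = g(n) + cost(n,n') + h(n')
            if succ not in best_f or f2 < best_f[succ]:
                best_f[succ] = f2
                heapq.heappush(Q, (f2, g + cost, succ, path + [succ]))
    return None, float("inf")

# The lettered graph from the example (undirected edges):
edges = [("A","B",75), ("A","C",118), ("A","E",140), ("C","D",111),
         ("E","G",80), ("E","F",99), ("G","H",97), ("F","I",211), ("H","I",101)]
graph = {}
for u, v, c in edges:
    graph.setdefault(u, {})[v] = c
    graph.setdefault(v, {})[u] = c
h = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 98, "I": 0}
path, cost = a_star(graph, h, "A", "I")
print(path, cost)   # ['A', 'E', 'G', 'H', 'I'] 418
```

Run on the lettered graph it reproduces the traced result: the optimal path A–E–G–H–I with cost 418.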
A* Search: Analysis
• A* is complete, unless there are infinitely many nodes with f < f(G).
• A* is optimal if the heuristic h is admissible.
• Time complexity depends on the quality of the heuristic, but is still exponential.
• For space complexity, A* keeps all nodes in memory. A* has worst-case O(b^d) space complexity, but an iterative-deepening version is possible (IDA*).
A* properties
• Under some reasonable conditions for the heuristic, we have:
• Complete
– Yes, unless there are infinitely many nodes with f(n) < f(Goal).
• Time
– Sub-exponential growth when |h(n) − h*(n)| ≤ O(log h*(n)).
– So a good heuristic can bring exponential search down significantly!
• Space
– Fringe nodes in memory. Often exponential.
• Optimal
– Yes (under admissible heuristics; discussed next).
– Also, optimal use of heuristic information!

• Widely used, e.g. Google Maps.
• After almost 40 years, still new applications found.
Heuristics: (1) Admissibility
• The heuristic function h(n) is called admissible if h(n) is never larger than h*(n), i.e. h(n) is always less than or equal to the true cheapest cost from n to the goal.
– A heuristic h(n) is admissible if for every node n,
– h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal.
– Example: hSLD(n) (straight-line distance never overestimates the actual road distance).
– Note: it follows that h(goal) = 0.
• A heuristic h(n) is non-admissible if for some node n,
– h(n) > h*(n), where h*(n) is the true cost to reach the goal state from n.
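Since h*(n) is just the shortest-path cost from n to the goal, admissibility can be checked mechanically by running Dijkstra's algorithm from the goal. A sketch on the lettered example graph (the dict encoding is an assumption):

```python
import heapq

def h_star(graph, goal):
    """Dijkstra from the goal: h*(n) = true cheapest cost from n to the goal."""
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist.get(n, float("inf")):
            continue                      # stale queue entry
        for m, c in graph[n].items():
            if d + c < dist.get(m, float("inf")):
                dist[m] = d + c
                heapq.heappush(pq, (d + c, m))
    return dist

def admissible(graph, h, goal):
    """True iff h(n) <= h*(n) for every node with an h value."""
    hs = h_star(graph, goal)
    return all(h[n] <= hs.get(n, float("inf")) for n in h)

edges = [("A","B",75), ("A","C",118), ("A","E",140), ("C","D",111),
         ("E","G",80), ("E","F",99), ("G","H",97), ("F","I",211), ("H","I",101)]
graph = {}
for u, v, c in edges:
    graph.setdefault(u, {})[v] = c
    graph.setdefault(v, {})[u] = c
h = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 98, "I": 0}
ok = admissible(graph, h, "I")                  # the slide's heuristic
bad = admissible(graph, {**h, "H": 138}, "I")   # h(H) raised past h*(H) = 101
print(ok, bad)   # True False
```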
A* Search: h not admissible!
Start state: A. Goal state: I. f(n) = g(n) + h(n); g(n) is the exact cost to reach node n from the initial state.

Same graph as before, but h(H) is raised from 98 to 138, overestimating the true cost of the H–I edge (101).

Heuristic h(n): A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 138, I 0
A* Search: Tree Search
Expansion trace with the non-admissible heuristic (f-values in brackets):
1. Expand A (start): E [393], C [447], B [449].
2. Expand E [393]: G [413], F [417].
3. Expand G [413]: H [455] (h(H) = 138 inflates f).
4. Expand F [417]: I [450 = 140 + 99 + 211] (goal generated via F).
5. Expand C [447]: D [473].
6. Select I [450]: goal reached via A–E–F–I.
But the optimal path A–E–G–H–I costs 418 = 140 + 80 + 97 + 101; it stays hidden behind H [455]. A* is not optimal!
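The suboptimal outcome can be reproduced by running A* with the inflated h(H) = 138. A self-contained sketch (same assumed graph encoding as before):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* ordered by f(n) = g(n) + h(n), with repeated-state checking."""
    Q = [(h[start], 0, start, [start])]
    best_f = {start: h[start]}
    while Q:
        f, g, n, path = heapq.heappop(Q)
        if n == goal:
            return path, g
        for succ, cost in graph[n].items():
            f2 = g + cost + h[succ]
            if succ not in best_f or f2 < best_f[succ]:
                best_f[succ] = f2
                heapq.heappush(Q, (f2, g + cost, succ, path + [succ]))
    return None, float("inf")

edges = [("A","B",75), ("A","C",118), ("A","E",140), ("C","D",111),
         ("E","G",80), ("E","F",99), ("G","H",97), ("F","I",211), ("H","I",101)]
graph = {}
for u, v, c in edges:
    graph.setdefault(u, {})[v] = c
    graph.setdefault(v, {})[u] = c
h = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 138, "I": 0}   # h(H) overestimates h*(H) = 101
path, cost = a_star(graph, h, "A", "I")
print(path, cost)   # the 450 path via F, not the optimal 418 path via H
```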
Heuristics: (2) Consistency
• A heuristic is consistent (or monotone) if for every node n and every successor n' of n generated by any action a,
• h(n) ≤ c(n,a,n') + h(n')
• (a form of the triangle inequality)

• If h is consistent, we have
• f(n') = g(n') + h(n')
• = g(n) + c(n,a,n') + h(n')
• ≥ g(n) + h(n)
• = f(n)
• so f(n') ≥ f(n), i.e., f(n) is non-decreasing along any path.

 the sequence of nodes expanded by A* is in non-decreasing order of f(n)
 the first goal selected for expansion must be an optimal goal.

Note: Monotonicity is a stronger condition than admissibility.
Any consistent heuristic is also admissible.
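Consistency is a purely local, edge-by-edge condition, so it is easy to check. A sketch on the example graph (same assumed dict encoding):

```python
def consistent(graph, h):
    """True iff h(n) <= c(n, n') + h(n') for every edge (triangle inequality)."""
    return all(h[n] <= c + h[m]
               for n, succs in graph.items()
               for m, c in succs.items())

edges = [("A","B",75), ("A","C",118), ("A","E",140), ("C","D",111),
         ("E","G",80), ("E","F",99), ("G","H",97), ("F","I",211), ("H","I",101)]
graph = {}
for u, v, c in edges:
    graph.setdefault(u, {})[v] = c
    graph.setdefault(v, {})[u] = c
h = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 98, "I": 0}
good = consistent(graph, h)                 # the admissible h is also consistent
bad = consistent(graph, {**h, "H": 138})    # 138 > c(H,I) + h(I) = 101
print(good, bad)   # True False
```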
8-Puzzle
• Slide the tiles horizontally or vertically into the empty space until the configuration matches the goal configuration.
• What's the branching factor? (Slide the "empty space".)
About 3, depending on the location of the empty tile:
middle  4; corner  2; edge  3
Admissible heuristics
• E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
• (i.e., number of steps from the desired location of each tile)
• h1(Start) = 8
• h2(Start) = 3+1+2+2+2+3+3+2 = 18
• Why are these heuristics admissible?
• Which is better? (The true cost for this Start is 26.)
• How can we get the optimal heuristic? (Given h_opt(Start) = 26, how would we find the next board on the optimal path to the goal?)

Desired properties of heuristics:
(1) consistent (admissible)
(2) as close to optimal as we can get (sometimes go a bit over…)
(3) easy to compute! We want to explore many nodes.
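Both heuristics are a few lines each. The start board below is an assumption: the slide's image did not survive, so the standard instance from Russell & Norvig is used, which matches the slide's h1(Start) = 8 and h2(Start) = 18.

```python
# Boards as 9-tuples in row-major order; 0 is the blank.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)   # assumed standard AIMA instance
goal  = (0, 1, 2,
         3, 4, 5,
         6, 7, 8)

def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square."""
    where = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    return sum(abs(i // 3 - where[t][0]) + abs(i % 3 - where[t][1])
               for i, t in enumerate(state) if t != 0)

print(h1(start, goal), h2(start, goal))   # 8 18
```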
A* on 8-Puzzle with h(n) = # misplaced tiles
Conclusions
• Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These new techniques are called informed (heuristic) search strategies.
• Even though heuristics improve the performance of informed search algorithms, they are still time-consuming, especially for large instances.
