
Module 2: Informed Search Algorithms


Outline
• Informed search algorithms
– Best-first search
• Greedy best-first search
• A* search
• Local search algorithms
– Hill-climbing search
– Simulated annealing search
Informed Search
• Uninformed search algorithms look through the search space for all possible solutions of the problem without any additional knowledge about the search space.
• Informed search algorithm
– A search using domain-specific knowledge.
– It uses knowledge such as how far a state is from the goal, the path cost, how to reach the goal node, etc.
– This knowledge helps the agent explore less of the search space and find the goal node more efficiently.
• Informed search algorithms are more useful for large search spaces. Because an informed search algorithm uses the idea of a heuristic, it is also called heuristic search.
Heuristic
• A heuristic (from Greek εὑρίσκω, "find, discover") is a technique designed to solve a problem more quickly when classic methods are too slow, or to find an approximate solution when classic methods fail to find any exact solution.
Overview
• Informed search uses problem-specific knowledge.
• In best-first search, a node is selected for expansion based on an evaluation function f(n).
• The evaluation function is a cost estimate; the node with the lowest evaluation is expanded first.
• Heuristic functions are the most common form in which additional knowledge of the problem is passed to the search algorithm.
Best-first search
• The general approach we consider is called best-first search.
• Best-first search is an instance of the general Tree-Search or Graph-Search algorithm in which a node is selected for expansion based on an evaluation function, f(n).
• The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first.
• The implementation of best-first graph search is identical to that for Uniform-Cost Search, except for the use of f instead of g to order the priority queue.
Best-first search
• Idea: use an evaluation function f(n) for each
node.
– f(n) provides an estimate for the total cost
– Expand the node n with smallest f(n).
Best-first search
• Best-First Search algorithms constitute a large family
of algorithms, with different evaluation functions.
– Each has a heuristic function h(n)
• g(n) = cost from the initial state to the current state n.
• h(n) = estimated cost of the cheapest path from node
n to a goal node.
• f(n) = evaluation function used to select a node for expansion (usually the lowest-cost node); a minimal sketch of this idea follows below.
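To make the evaluation-function idea concrete, the following is a minimal best-first graph-search sketch in Python. It is not taken from the slides: the graph representation (an adjacency dictionary of step costs), the heuristic table h, and the names best_first_search, is_goal, and f are illustrative assumptions.

```python
import heapq

def best_first_search(graph, start, is_goal, f, h):
    """Generic best-first graph search (illustrative sketch).
    graph: dict mapping node -> list of (neighbor, step_cost)
    f(g, h_n): evaluation function built from path cost g and heuristic estimate h_n
    h: dict of heuristic estimates, one per node
    Returns (path, path_cost) or None if no solution is found."""
    frontier = [(f(0, h[start]), 0, start, [start])]   # priority queue ordered by f
    best_g = {}                                        # cheapest g found so far per node
    while frontier:
        f_n, g_n, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g_n
        if node in best_g and best_g[node] <= g_n:
            continue                                   # a cheaper copy was already expanded
        best_g[node] = g_n
        for neighbor, step_cost in graph.get(node, []):
            g_new = g_n + step_cost
            heapq.heappush(frontier, (f(g_new, h[neighbor]), g_new, neighbor, path + [neighbor]))
    return None
```

Different choices of f give the special cases discussed below: f = g is uniform-cost search, f = h is greedy best-first search, and f = g + h is A*.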
Example for Best First Search
The generic best-first search algorithm selects a node for expansion according to an evaluation function. It allows revising the decision of the algorithm.
Example for Best First Search
Best-first search would give you S → A → G following the heuristic function. However, if you look at the graph more closely, you would see that the path S → B → C → G has a lower cost of 5 instead of 6.

The green numbers are the actual costs and the red numbers are the heuristic values.
Best-first search
• Idea: use an evaluation function f(n) for each node
– estimate of "desirability"
⇒ Expand the most desirable unexpanded node

• Implementation:
Order the nodes in the fringe in decreasing order of desirability.

• Special cases:
– greedy best-first search
– A* search
Greedy best-first search
• Greedy best-first search always selects the path that appears best at that moment.
• Evaluation function f(n) = h(n) (heuristic) = estimate of the cost from n to the goal
• e.g., hSLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal.
• Greedy best-first search is implemented with a priority queue.
• Greedy best-first search expands nodes with minimal h(n). It is not optimal, but it is often efficient.
• It builds up a solution step by step, always choosing the next step that offers the most obvious and immediate benefit. Problems where locally optimal choices also lead to the globally optimal solution are the best fit for greedy search. In this algorithm, the decisions are final and are not revised (see the sketch below).
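Below is a minimal sketch of greedy best-first search as a special case of the generic routine above, run on a small hypothetical graph chosen to reproduce the behaviour of the earlier S/A/B/C/G example. The edge costs and heuristic values are illustrative assumptions, not read off the slide's figure.

```python
# Hypothetical graph: S->A->G costs 6, while S->B->C->G costs 5 (illustrative values).
graph = {
    'S': [('A', 1), ('B', 1)],
    'A': [('G', 5)],
    'B': [('C', 2)],
    'C': [('G', 2)],
    'G': [],
}
# Hypothetical heuristic values; h(A) < h(B), so greedy prefers A from S.
h = {'S': 3, 'A': 2, 'B': 4, 'C': 2, 'G': 0}

# Greedy best-first search: f(n) = h(n) only.
result = best_first_search(graph, 'S', lambda n: n == 'G', f=lambda g, hn: hn, h=h)
print(result)  # (['S', 'A', 'G'], 6) -- follows the heuristic, misses S->B->C->G of cost 5
```

The returned cost of 6 illustrates the slide's point: decisions driven only by h(n) are never revised, so the cheaper path through B is missed.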
Romania with step costs in km
Greedy best-first search example (step-by-step expansion shown on figure slides)
Properties of greedy best-first search
• Complete? No – it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
(b is the branching factor, m is the maximum depth of the search space)
• Optimal? No
Difference between uniform-cost search and best-first search methods
• Uniform-cost search is uninformed search:
• It doesn't use any domain knowledge. It expands the least-cost node, and it does so in every direction because no information about the goal is provided.
• It can be viewed as using the evaluation function f(n) = g(n), where g(n) is the path cost.
Difference between uniform-cost search and best-first search methods
• Best-first search is informed search: it uses a heuristic function to estimate how close the current state is to the goal.
• Hence the path cost g(n) is combined with h(n), a heuristic estimate of the cost to get from n to the goal, giving the evaluation function f(n) = g(n) + h(n).
• An example of a best-first search algorithm is the A* algorithm.
• Both methods keep a list of expanded nodes, but best-first search tries to minimize the number of expanded nodes by combining the path cost with the heuristic function (see the comparison sketch below).
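The relationship can be stated directly in terms of the generic routine and the illustrative graph and heuristic table introduced earlier; only the evaluation function changes.

```python
# Same generic routine, three evaluation functions (illustrative graph and h from above).
ucs    = best_first_search(graph, 'S', lambda n: n == 'G', f=lambda g, hn: g,      h=h)
greedy = best_first_search(graph, 'S', lambda n: n == 'G', f=lambda g, hn: hn,     h=h)
astar  = best_first_search(graph, 'S', lambda n: n == 'G', f=lambda g, hn: g + hn, h=h)
print(ucs)     # (['S', 'B', 'C', 'G'], 5) -- uniform-cost search: f(n) = g(n)
print(greedy)  # (['S', 'A', 'G'], 6)      -- greedy best-first:   f(n) = h(n)
print(astar)   # (['S', 'B', 'C', 'G'], 5) -- A*:                  f(n) = g(n) + h(n)
```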
A* search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to
goal
Working of A*
1. The algorithm maintains two sets
• OPEN list: The OPEN list keeps track of those nodes
that need to be examined.
• CLOSED list: The CLOSED list keeps track of nodes
that have already been examined.
2. Initially, the OPEN list contains just the
initial node, and the CLOSED list is empty
• g(n) = the cost of getting from the initial node to n
• h(n) = the estimate, according to the heuristic
function, of the cost from n to goal node
• f(n) = g(n)+h(n); intuitively, this is the estimate of the
best solution that goes through n
Working of A*
3. Each node also maintains a pointer to its parent, so that the best solution, if found, can be retrieved.
– The main loop repeatedly takes the node, call it n, with the lowest f(n) value from the OPEN list.
– If n is a goal node, then stop (done); otherwise, n is removed from the OPEN list and added to the CLOSED list.
– Next, all the possible successor nodes of n are generated.
4. For each successor node n, if it is already in the CLOSED list and the copy there has an equal or lower f estimate, then we can safely discard the newly generated n and move on.
– Similarly, if n is already in the OPEN list and the copy there has an equal or lower f estimate, we can discard the newly generated n and move on.
(A sketch of this procedure follows below.)
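A minimal Python sketch of the OPEN/CLOSED-list formulation described above. The function name a_star, the adjacency-dictionary graph representation, and the heuristic table h are illustrative assumptions; the sketch keeps parent pointers for path reconstruction as the slides describe.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search with OPEN and CLOSED lists and parent pointers (illustrative sketch)."""
    open_list = [(h[start], 0, start)]   # entries are (f, g, node)
    best_g = {start: 0}                  # best g(n) found so far
    parent = {start: None}               # parent pointers for retrieving the solution
    closed = set()                       # nodes that have already been examined
    while open_list:
        f_n, g_n, node = heapq.heappop(open_list)
        if node == goal:                 # goal reached: follow parent pointers back
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g_n
        if node in closed:
            continue                     # node already examined (it is in CLOSED)
        closed.add(node)
        for succ, step_cost in graph.get(node, []):
            g_new = g_n + step_cost
            # Discard the successor if a copy with an equal or lower estimate exists.
            if succ in best_g and best_g[succ] <= g_new:
                continue
            best_g[succ] = g_new
            parent[succ] = node
            heapq.heappush(open_list, (g_new + h[succ], g_new, succ))
    return None

print(a_star(graph, 'S', 'G', h))  # (['S', 'B', 'C', 'G'], 5) on the illustrative graph above
```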
Romania with step costs in km
A* search example (step-by-step expansion shown on figure slides)
Example – A* Search
The figure slides show a graph annotated with h(n), g(n), and f(n) = g(n) + h(n).
Source = S; Destination = G
Solution = S → B → E → F → G
A* Properties
• Admissibility
• Optimality
• Consistency
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
• Example: hSLD(n) (never overestimates the actual road distance).
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.
(A simple admissibility check is sketched below.)
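As a sanity check, admissibility can be tested on the small illustrative graph used earlier by computing the true cost-to-goal h*(n) for every node (here via Dijkstra's algorithm on the reversed graph, an implementation choice of this sketch) and comparing it with h(n).

```python
import heapq

def true_costs_to_goal(graph, goal):
    """h*(n): cheapest cost from each node to the goal (Dijkstra on the reversed graph)."""
    reverse = {}
    for u, edges in graph.items():
        for v, cost in edges:
            reverse.setdefault(v, []).append((u, cost))
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float('inf')):
            continue                      # stale queue entry
        for prev, cost in reverse.get(node, []):
            if d + cost < dist.get(prev, float('inf')):
                dist[prev] = d + cost
                heapq.heappush(frontier, (d + cost, prev))
    return dist

h_star = true_costs_to_goal(graph, 'G')
print(all(h[n] <= h_star.get(n, float('inf')) for n in h))  # True: h never overestimates here
```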
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
• f(G2) = g(G2) since h(G2) = 0
• g(G2) > g(G) since G2 is suboptimal
• f(G) = g(G) since h(G) = 0
• f(G2) > f(G) from above
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
• f(G2) > f(G) from above
• h(n) ≤ h*(n) since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G), since n lies on an optimal path to G
• f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion.
Consistent heuristics
• A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n,a,n') + h(n')
• If h is consistent, we have
f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
≥ g(n) + h(n)
= f(n)
• i.e., f(n) is non-decreasing along any path.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
(A one-line consistency check is sketched below.)
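The consistency condition is a per-edge inequality, so it can be checked in one pass over the illustrative graph and heuristic table from the earlier sketches:

```python
# Consistency: h(n) <= c(n, a, n') + h(n') must hold for every edge (n, n').
consistent = all(h[n] <= step_cost + h[succ]
                 for n, edges in graph.items()
                 for succ, step_cost in edges)
print(consistent)  # True for the illustrative values used here
```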


Properties of A*
• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)

• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
(Both values are computed in the sketch below.)

Dominance
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
• h2 is better for search.
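A short sketch computing both heuristics. The start state S is not reproduced in this text version, so the sketch assumes the standard textbook configuration (7 2 4 / 5 _ 6 / 8 3 1, with goal _ 1 2 / 3 4 5 / 6 7 8), which matches the quoted values h1(S) = 8 and h2(S) = 18; 0 denotes the blank.

```python
# Assumed start and goal states (0 is the blank); these match the quoted h1 = 8, h2 = 18.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal  = (0, 1, 2,
         3, 4, 5,
         6, 7, 8)

def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square on the 3x3 board."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    return sum(abs(r - goal_pos[tile][0]) + abs(c - goal_pos[tile][1])
               for i, tile in enumerate(state) if tile != 0
               for r, c in [divmod(i, 3)])

print(h1(start, goal))  # 8
print(h2(start, goal))  # 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
```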
Comparison between Uninformed vs Informed Classical Search Methods
• Uninformed/Informed methods are suitable for observable, deterministic, known environments where the solution is a sequence of actions.
• Uninformed/Informed methods are not suitable for non-observable, non-deterministic, unknown environments.
