Ch-2: Problem Solving
Uploaded by PARTH VYAVHARE

Artificial Intelligence

Chapter 3: Solving Problems by Searching

January 26, 2003 AI: Chapter 3: Solving Problems by Searching
Problem Solving Agents
• Problem solving agent
– A kind of “goal based” agent
– Finds sequences of actions that lead to
desirable states.

• The algorithms in this chapter are uninformed
– No extra information about the problem other
than its definition
• No heuristics (rules of thumb)
Goal Based Agent

[Diagram: architecture of a goal-based agent. Sensors deliver percepts from the
environment; the agent's internal state tracks what the world is like now, how
the world evolves, and what its actions do; it predicts what the world will be
like if it does action A, consults its goals to decide what action it should do
now, and acts through actuators.]
Goal Based Agent
function SIMPLE-PROBLEM-SOLVING-AGENT( percept ) returns an action
  inputs: percept, a percept
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation

  state <- UPDATE-STATE( state, percept )
  if seq is empty then do
    goal <- FORMULATE-GOAL( state )
    problem <- FORMULATE-PROBLEM( state, goal )
    seq <- SEARCH( problem )            # search
  action <- RECOMMENDATION( seq )       # solution
  seq <- REMAINDER( seq )
  return action                         # execution
Goal Based Agents
• Assumes the problem environment is:
– Static
• The plan remains the same
– Observable
• Agent knows the initial state
– Discrete
• Agent can enumerate the choices
– Deterministic
• Each action has exactly one outcome, so the agent can plan a
sequence of actions in advance

• The agent carries out its plan with its eyes closed
– Certain of what’s going on
– An open-loop system

Well Defined Problems and
Solutions
• A problem
– Initial state
– Actions and Successor Function
– Goal test
– Path cost

Example: Water Pouring
• Given a 4-gallon bucket and a 3-gallon
bucket, how can we measure exactly 2
gallons into one bucket?
– There are no markings on the buckets
– A bucket can only be filled completely, emptied
completely, or poured into the other bucket

Example: Water Pouring
• Initial state:
– The buckets are empty
– Represented by the tuple ( 0 0 )

• Goal state:
– One of the buckets has exactly two gallons of water in it
– Represented by either ( 2 y ) or ( x 2 )

• Path cost:
– 1 per unit step
Example: Water Pouring
• Actions and Successor Function
(x is the 4-gallon bucket, y the 3-gallon bucket)
– Fill a bucket
• (x y) -> (4 y)
• (x y) -> (x 3)
– Empty a bucket
• (x y) -> (0 y)
• (x y) -> (x 0)
– Pour the 4-gallon bucket into the 3-gallon bucket
• (x y) -> (0 x+y) if x+y <= 3, else (x+y-3, 3)
– Pour the 3-gallon bucket into the 4-gallon bucket
• (x y) -> (x+y 0) if x+y <= 4, else (4, x+y-4)
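The formulation above can be run directly as a short breadth-first search over (x, y) states. This is an illustrative sketch, not from the slides; the function and variable names are my own, with x the 4-gallon bucket and y the 3-gallon bucket.

```python
from collections import deque

def water_jug_bfs(goal=2, cap=(4, 3)):
    """Breadth-first search over (x, y) states; x = 4-gal bucket, y = 3-gal."""
    start = (0, 0)
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        x, y = state = frontier.popleft()
        if goal in state:                  # goal test: either bucket holds 2
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        successors = [
            (cap[0], y), (x, cap[1]),      # fill a bucket
            (0, y), (x, 0),                # empty a bucket
            (max(0, x + y - cap[1]), min(cap[1], x + y)),  # pour x -> y
            (min(cap[0], x + y), max(0, x + y - cap[0])),  # pour y -> x
        ]
        for s in successors:
            if s not in parent:            # avoid repeated states
                parent[s] = (x, y)
                frontier.append(s)
    return None

print(water_jug_bfs())
```

The shortest solution it finds fills the 3-gallon bucket twice, pouring it into the 4-gallon bucket each time; 2 gallons remain in the 3-gallon bucket.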
Example: Water Pouring

[Figure: part of the state graph, expanded level by level from (0,0):
(0,0)
(4,0) (0,3)
(1,3) (4,3) (3,0)
(1,0) (0,1) (3,3) (4,2)
(4,1) (2,3)
(2,0) (0,2)]
Example: Eight Puzzle
• States:
– The positions of the eight tiles and
the location of the blank tile
• Successor Function:
– Generates the legal states that result
from trying the four actions
{Left, Right, Up, Down}
• Goal Test:
– Checks whether the state matches
the goal configuration
• Path Cost:
– Each step costs 1

Start state:      Goal state:
7 2 4             1 2 3
5   6             4 5 6
8 3 1             7 8
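The successor function above can be sketched in a few lines. This is illustrative code with names of my own; following the usual AIMA convention, the four actions are taken to describe moves of the blank.

```python
# A state is a tuple of 9 tiles in row-major order; 0 marks the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def successors(state):
    """Yield (action, state) pairs reachable by sliding a tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc, move in ((0, -1, 'Left'), (0, 1, 'Right'),
                         (-1, 0, 'Up'), (1, 0, 'Down')):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]        # swap blank with neighbouring tile
            yield move, tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)        # the configuration on the slide
print([m for m, _ in successors(start)])   # blank is central: all four moves
```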
Example: Eight Puzzle
• The eight puzzle is from a family of “sliding-
block puzzles”
– Finding optimal solutions is NP-complete
– The 8-puzzle has 9!/2 = 181,440 states
– The 15-puzzle has approx. 1.3*10^12 states
– The 24-puzzle has approx. 1*10^25 states

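The 9!/2 figure is easy to check: by a parity argument, only half of all tile permutations are reachable from a given configuration.

```python
from math import factorial

# Half of the 9! arrangements of eight tiles plus blank are reachable:
print(factorial(9) // 2)   # 181440
```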
Example: Eight Queens
• Place eight queens on a chess board
such that no queen can attack
another queen

• No path cost because only the final
state counts!

• Incremental formulations
• Complete state formulations

[Board diagram: one of the 8-queens solutions]
Example: Eight Queens
• States:
– Any arrangement of 0 to 8 queens
on the board
• Initial state:
– No queens on the board
• Successor function:
– Add a queen to an empty square
• Goal Test:
– 8 queens on the board and none
are attacked

• 64*63*…*57 = 1.8*10^14
possible sequences
– Ouch!
Example: Eight Queens
• States:
– Arrangements of n non-attacking
queens, one per column in the
leftmost n columns
• Successor function:
– Add a queen to any square in the
leftmost empty column such that it
is not attacked by any other queen

• Only 2,057 sequences to
investigate
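The improved incremental formulation can be sketched as a backtracking depth-first search. Illustrative code; the function names are my own.

```python
def attacks(q1, q2):
    """q = (col, row); same row or same diagonal means attack."""
    (c1, r1), (c2, r2) = q1, q2
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def solve(n=8, placed=()):
    """Add a queen to the leftmost empty column such that it is
    not attacked by any queen already placed; backtrack on dead ends."""
    col = len(placed)
    if col == n:
        return placed                      # goal: n queens, none attacked
    for row in range(n):
        q = (col, row)
        if not any(attacks(q, p) for p in placed):
            result = solve(n, placed + (q,))
            if result:
                return result
    return None                            # dead end: backtrack

print(solve())   # one of the 92 solutions, as (col, row) pairs
```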
Other Toy Examples
• Another Example: Jug Fill
• Another Example: Black White Marbles
• Another Example: Row Boat Problem
• Another Example: Sliding Blocks
• Another Example: Triangle Tee

Example: Map Planning

[Figure: road map of Romania with step costs, used for route finding]
Searching For Solutions
• Initial State
– e.g. “At Arad”
• Successor Function
– A set of action state pairs
– S(Arad) = {(Arad->Zerind, Zerind), …}
• Goal Test
– e.g. x = “at Bucharest”
• Path Cost
– sum of the distances traveled
Searching For Solutions
• Having formulated some problems…how
do we solve them?

• Search through a state space

• Use a search tree that is generated from
the initial state and the successor
function, which together define the state space
Searching For Solutions
• A state is (a representation of) a physical configuration

• A node is a data structure constituting part of a search


tree
– Includes parent, children, depth, path cost

• States do not have children, depth, or path cost

• The EXPAND function creates new nodes, filling in the


various fields and using the SUCCESSOR function of the
problem to create the corresponding states

Searching For Solutions

[Figures: snapshots of a search tree being generated, one expansion at a time]
Uninformed Search Strategies
• Uninformed strategies use only the information
available in the problem definition
– Also known as blind searching

• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Comparing Uninformed Search
Strategies
• Completeness
– Will a solution always be found if one exists?
• Time
– How long does it take to find the solution?
– Often represented as the number of nodes searched
• Space
– How much memory is needed to perform the search?
– Often represented as the maximum number of nodes stored at
once
• Optimal
– Will the optimal (least cost) solution be found?

• Page 81 in AIMA text


Comparing Uninformed Search
Strategies
• Time and space complexity are measured
in
– b – maximum branching factor of the search
tree
– m – maximum depth of the state space
– d – depth of the least cost solution

Breadth-First Search
• Recall from Data Structures the basic
algorithm for a breadth-first search on a
graph or tree

• Expand the shallowest unexpanded node

• Place all new successors at the end of a


FIFO queue
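The two rules above — expand the shallowest node, append successors to a FIFO queue — can be sketched as follows. Illustrative code; the graph and names are my own.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Expand the shallowest unexpanded node first; FIFO frontier."""
    frontier = deque([(start, [start])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()   # shallowest node
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in explored:          # avoid repeated states
                explored.add(s)
                frontier.append((s, path + [s]))   # end of the FIFO queue
    return None

# Tiny hypothetical graph:
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search('A', lambda s: s == 'D', lambda s: graph[s]))
```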
Breadth-First Search

[Figures: breadth-first expansion of a search tree, four snapshots]
Properties of Breadth-First Search
• Complete
– Yes, if b (max branching factor) is finite
• Time
– 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))
– exponential in d
• Space
– O(b^(d+1))
– Keeps every node in memory
– This is the big problem; an agent that generates nodes at 10
MB/sec will produce 860 GB in 24 hours
• Optimal
– Yes (if cost is 1 per step); not optimal in general

Lessons From Breadth First Search
• The memory requirements are a bigger
problem for breadth-first search than is
execution time

• Exponential-complexity search problems
cannot be solved by uninformed methods
for any but the smallest instances

Uniform-Cost Search
• Same idea as the algorithm for breadth-
first search…but…
– Expand the least-cost unexpanded node
– The frontier is a priority queue ordered by path cost
– Equivalent to regular breadth-first search if all
step costs are equal

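Swapping the FIFO queue for a priority queue ordered by path cost g(n) gives uniform-cost search. A minimal sketch, with a made-up weighted graph:

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """Expand the least-cost node first; frontier ordered by path cost g(n)."""
    frontier = [(0, start, [start])]       # (path cost g, state, path)
    best = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)   # least-cost node
        if goal_test(state):
            return g, path
        for s, step in successors(state):  # step = cost of this action
            if s not in best or g + step < best[s]:
                best[s] = g + step
                heapq.heappush(frontier, (g + step, s, path + [s]))
    return None

# Hypothetical weighted graph: successors(s) yields (state, step cost) pairs
g = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search('A', lambda s: s == 'C', lambda s: g[s]))
```

Note that it returns the cost-2 path A-B-C rather than the direct cost-5 edge, which plain breadth-first search would have preferred.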
Uniform-Cost Search
• Complete
– Yes, if every step cost is at least some positive constant ε
• Time
– Complexity cannot be characterized by b and d alone
– Let C* be the cost of the optimal solution
– O(b^⌈C*/ε⌉)
• Space
– O(b^⌈C*/ε⌉)
• Optimal
– Yes, nodes are expanded in increasing order of path cost

Depth-First Search
• Recall from Data Structures the basic
algorithm for a depth-first search on a
graph or tree

• Expand the deepest unexpanded node

• Unexplored successors are placed on a


stack until fully explored
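Replacing the FIFO queue with a LIFO stack turns the same skeleton into depth-first search. Illustrative code with a made-up graph:

```python
def depth_first_search(start, goal_test, successors):
    """Expand the deepest unexpanded node first; LIFO stack as the frontier."""
    stack = [(start, [start])]
    while stack:
        state, path = stack.pop()          # deepest (most recently pushed) node
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in path:              # avoid loops along the current path
                stack.append((s, path + [s]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(depth_first_search('A', lambda s: s == 'D', lambda s: graph[s]))
```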
Depth-First Search

[Figures: depth-first expansion of a search tree, twelve snapshots]
Depth-First Search
• Complete
– No: fails in infinite-depth spaces or spaces with loops
• Modify to avoid repeated states along the path
– Yes: in finite spaces
• Time
– O(b^m)
– Not great if m is much larger than d
– But if the solutions are dense, this may be faster than breadth-
first search
• Space
– O(bm) … linear space
• Optimal
– No
Depth-Limited Search
• A variation of depth-first search that uses
a depth limit
– Alleviates the problem of unbounded trees
– Search to a predetermined depth l (“ell”)
– Nodes at depth l have no successors

• Same as depth-first search if l = ∞


• Can terminate for failure and cutoff
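The "failure vs. cutoff" distinction above can be sketched as a recursive depth-limited search. Illustrative code; the names are my own.

```python
CUTOFF, FAILURE = 'cutoff', None

def depth_limited_search(state, goal_test, successors, limit):
    """Recursive DFS that treats nodes at depth `limit` as having no successors.
    Returns a path, CUTOFF (limit reached), or FAILURE (no solution at all)."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return CUTOFF                      # depth limit reached
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, goal_test, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not FAILURE:
            return [state] + result
    return CUTOFF if cutoff_occurred else FAILURE

tree = {'A': ['B'], 'B': ['C'], 'C': []}
print(depth_limited_search('A', lambda s: s == 'C', lambda s: tree[s], 1))  # cutoff
print(depth_limited_search('A', lambda s: s == 'C', lambda s: tree[s], 2))
```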
Depth-Limited Search

[Figure: depth-limited search example]
Depth-Limited Search
• Complete
– Yes, if l >= d
• Time
– O(b^l)
• Space
– O(bl)
• Optimal
– No (even when l > d)
Iterative Deepening Search
• Iterative deepening depth-first search
– Uses depth-first search
– Finds the best depth limit
• Gradually increases the depth limit; 0, 1, 2, … until
a goal is found

Algorithm
• Explore the nodes in DFS order.
• Set a LIMIT variable with a limit value.
• Search each path up to the limit value, then
increase the limit value and repeat.
• Terminate the search when the goal state is
found.

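The steps above can be sketched as a loop over depth-limited searches. Illustrative code with my own names; for brevity this version does not distinguish cutoff from failure, so it would loop forever on a problem with no solution.

```python
from itertools import count

def iterative_deepening_search(start, goal_test, successors):
    """Repeat depth-limited DFS with limit 0, 1, 2, ... until a goal is found."""
    def dls(state, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return None                    # depth limit reached
        for s in successors(state):
            found = dls(s, limit - 1)
            if found:
                return [state] + found
        return None

    for limit in count():                  # 0, 1, 2, ...
        result = dls(start, limit)
        if result:
            return result

tree = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(iterative_deepening_search('A', lambda s: s == 'D', lambda s: tree[s]))
```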
Iterative Deepening Search

[Figures: iterative deepening search with depth limits 0, 1, 2, 3]
Iterative Deepening Search
• Complete
– Yes
• Time
– O(b^d)
• Space
– O(bd)
• Optimal
– Yes, if step cost = 1
– Can be modified to explore a uniform-cost tree
Lessons From Iterative Deepening
Search
• Faster than BFS even though IDS
generates repeated states
– BFS generates nodes up to level d+1
– IDS only generates nodes up to level d

• In general, iterative deepening search is


the preferred uninformed search method
when there is a large search space and
the depth of the solution is not known
Avoiding Repeated States
• Complication of wasting time by expanding
states that have already been encountered and
expanded before
– Failure to detect repeated states can turn a linear
problem into an exponential one

• Sometimes, repeated states are unavoidable


– Problems where the actions are reversible
• Route finding
• Sliding-block puzzles

Avoiding Repeated States

[Figures: a small state space (left) and the much larger search tree it
generates (right)]
Disadvantages of Iterative deepening search
• The drawback of iterative deepening search is that it may seem
wasteful, because it generates states multiple times.

Note: Generally, iterative deepening search is preferred when the
search space is large and the depth of the solution is unknown.

Bidirectional search
• The strategy behind bidirectional search is to run two searches
simultaneously: one forward from the initial state and the
other backward from the goal, hoping that both searches
will meet in the middle.

• As soon as the two searches intersect one another, the
bidirectional search terminates with the goal node. It is
implemented by replacing the goal test with a check of whether the
frontiers of the two searches intersect; if they do, a solution has been found.

Bidirectional search
• Run two simultaneous searches
– one forward from the initial state another
backward from the goal
– stop when the two searches meet
• However, searching backward is difficult
– There may be a huge number of goal states
– At a goal state, which actions were used to reach it?
– Can the actions be reversed to compute a state's predecessors?

Bidirectional Strategy

• 2 frontier queues: FRINGE1 and FRINGE2
• Time and space complexity = O(b^(d/2)) << O(b^d)

[Figure: the forward search from S and the backward search from G
meeting in the middle]
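The meet-in-the-middle idea can be sketched with two breadth-first frontiers. Illustrative code with my own names; it assumes actions are reversible, so the same `neighbors` function serves both directions, and that states are truthy values.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two BFS frontiers, one from each end; stop when they intersect."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other):
        for _ in range(len(frontier)):     # expand one BFS layer
            node = frontier.popleft()
            for n in neighbors(node):
                if n not in parents:
                    parents[n] = node
                    if n in other:         # the two searches intersect
                        return n
                    frontier.append(n)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            path, n = [], meet
            while n is not None:           # walk back to the start
                path.append(n); n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:           # walk forward to the goal
                path.append(n); n = parents_b[n]
            return path
    return None

adj = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_search('A', 'E', lambda n: adj[n]))
```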
The performance measure of Bidirectional
search
Complete: Bidirectional search is complete (when breadth-first search is
used in both directions).
Optimal: It gives an optimal solution (with uniform step costs and BFS in
both directions).
Time and space complexity: Bidirectional search has O(b^(d/2)).

Disadvantage of Bidirectional Search
It requires a lot of memory space.

Informed Search /Heuristics
search in AI
• An informed search is more efficient than an uninformed search
because, along with the current state
information, some additional (heuristic) information is also present, which
makes it easier to reach the goal state.

Generate and Test
• Generate and Test Search is a heuristic search technique
based on depth-first search with backtracking.

• It is guaranteed to find a solution if one exists and the
search is done systematically.

• In this technique, candidate solutions are generated and tested
until the best solution is found.

• It ensures that the best solution is checked against all
possible generated solutions.

Generate and Test
• It is also known as the British Museum Search Algorithm, as it is like finding
an exhibit by wandering the British Museum at random.
• The evaluation is carried out by the heuristic function, as the solutions are
generated systematically.
• Paths that are most unlikely to lead to the result are not considered.
• The heuristic does this by ranking all the alternatives, and is often effective in
doing so.
• Systematic generate and test may prove to be ineffective while solving
complex problems.
• But it can be improved in complex cases by combining
generate and test with other techniques so as to reduce the search space.

For example, the Artificial Intelligence program
DENDRAL makes use of two techniques: constraint
satisfaction, followed by a generate and test procedure
that works on the reduced search space, i.e., it yields an
effective result by working on the smaller number of
candidate lists generated in the first step.

Algorithm

1. Generate a possible solution. For example, generate a
particular point in the problem space, or generate a path
from the start state.

2. Test to see if this is an actual solution by comparing the
chosen point, or the endpoint of the chosen path, to the set
of acceptable goal states.

3. If a solution is found, quit. Otherwise go to step 1.

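The three steps above reduce to a short loop. A toy sketch with my own names; the candidate generator and test are made up for illustration.

```python
def generate_and_test(candidates, test):
    """Step 1: generate a candidate; step 2: test it; step 3: quit or repeat."""
    for candidate in candidates:
        if test(candidate):
            return candidate
    return None                            # candidates exhausted, no solution

# Toy example: systematically search -10..10 for an x with x*x == 49
print(generate_and_test(range(-10, 11), lambda x: x * x == 49))   # -7
```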
Hill Climbing Algorithm

• Hill climbing is a local search algorithm.

• The purpose of the hill climbing search is to climb a
hill and reach its topmost peak/point.

• It is based on the heuristic search technique, where
the climber estimates the
direction that will lead to the highest peak.

State-space Landscape of
Hill climbing algorithm
 To understand the concept of hill climbing algorithm, consider the below
landscape representing the goal state/peak and the current state of the
climber.
The topographical regions shown in the figure can be defined as:

• Global Maximum: The highest point on the hill; the goal state.

• Local Maximum: A peak higher than its neighboring states but lower than the
global maximum.

• Flat local maximum: A flat area on the hill with no uphill or
downhill; a saturated point of the hill.

• Shoulder: A flat area from which the summit is still reachable.

• Current state: The current position of the climber.


[Figure: state-space landscape showing the global maximum, a local maximum,
a flat local maximum, a shoulder, and the current state]
Types of Hill climbing search
algorithm
There are the following types of hill-climbing search:

1. Simple hill climbing
2. Steepest-ascent hill climbing
3. Stochastic hill climbing
4. Random-restart hill climbing

Simple hill climbing search
• Simple hill climbing is the simplest technique to climb a hill.

• The task is to reach the highest peak of the mountain.

• The climber moves step by step: if the next step is better than the
current one, he moves; otherwise he remains in the same state.

• This search considers only the current and the next step.

Simple hill climbing Algorithm

• Create a CURRENT node, a NEIGHBOUR node, and a GOAL node.

• If the CURRENT node = GOAL node, return GOAL and terminate the search.

• Else, if the NEIGHBOUR node is better than CURRENT, set
CURRENT <- NEIGHBOUR and move ahead.

• Loop until the goal is reached or no better neighbour exists.
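The loop above can be sketched in a few lines. Illustrative code; the 1-D landscape and names are made up.

```python
def simple_hill_climbing(current, value, neighbors):
    """Move to the FIRST neighbour that improves on the current state;
    stop when no neighbour is better (a peak, possibly only a local one)."""
    while True:
        for n in neighbors(current):
            if value(n) > value(current):
                current = n                # take the first uphill move
                break
        else:
            return current                 # no better neighbour: stop

# Hypothetical 1-D landscape: f(x) = -(x - 3)^2, neighbours are x +/- 1
f = lambda x: -(x - 3) ** 2
print(simple_hill_climbing(0, f, lambda x: [x - 1, x + 1]))   # climbs to 3
```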
Steepest-ascent hill climbing
• Unlike simple hill climbing, it considers all the successor nodes,
compares them, and chooses the node closest to the solution.

• Steepest-ascent hill climbing is similar to best-first search in that it
examines every successor instead of just one.

Note: Both simple and steepest-ascent hill climbing fail when
there is no closer node.

Steepest-ascent hill climbing algorithm

• Create a CURRENT node and a GOAL node.

• If the CURRENT node = GOAL node, return GOAL and terminate the
search.

• Loop until no better node can be found to reach the solution.

• If a better successor node is present, expand it.
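The steepest-ascent variant differs from simple hill climbing in one line: it examines all neighbours and takes the best. Same made-up landscape as before; names are my own.

```python
def steepest_ascent(current, value, neighbors):
    """Examine ALL neighbours and move to the best one; stop at a peak."""
    while True:
        best = max(neighbors(current), key=value)   # best of all successors
        if value(best) <= value(current):
            return current                 # no neighbour improves: a (local) peak
        current = best

f = lambda x: -(x - 3) ** 2                # hypothetical 1-D landscape
print(steepest_ascent(0, f, lambda x: [x - 1, x + 1]))   # 3
```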


Stochastic hill climbing
• Stochastic hill climbing does not examine all the neighbours.

• It selects one neighbour at random and decides whether to move to it or
to examine another.

Random-restart hill climbing

• The random-restart algorithm is based on a try-and-try strategy.

• It restarts the search from new initial states, keeping the best result at
each step, until the goal is found.

• Success depends mostly on the shape of the landscape: if there are few
plateaus, local maxima, and ridges, it is easy to reach the
destination.
Limitations of Hill climbing
algorithm
Local Maxima:
A peak that is higher than all its neighboring states but lower than the global
maximum. It is not the goal peak, because there is another peak
higher than it.

Plateau:
A flat surface area with no uphill direction. It becomes difficult for the climber to
decide which direction to move in to reach the goal point; sometimes the
climber gets lost in the flat area.

Ridges: A challenging situation in which the climber repeatedly finds two or more
local maxima of the same height. It becomes difficult to navigate to the right
point, and the climber can get stuck.

Best-first Search (Greedy search)

• A best-first search is a general approach to informed search.

• Here, a node is selected for expansion based on an evaluation function f(n),
which gives a cost estimate for the node.

• The node with the lowest estimated cost is expanded first. A component of
f(n) is h(n), which carries the additional information required for the search
algorithm, i.e.,

h(n) = estimated cost of the cheapest path from the current node n to the goal node.

Best-first Search (Greedy search)

Note: If the current node n is a goal node, the value of h(n) is 0.

• Best-first search is known as greedy search because it always tries to explore
the node that is nearest to the goal and selects the path that promises a
quick solution.

• Thus, it evaluates nodes using only the heuristic function,
i.e., f(n) = h(n).

Best-first search Algorithm
• Set up an OPEN list and a CLOSED list.
The OPEN list contains visited but unexpanded nodes; the CLOSED list
contains visited and expanded nodes.

• Initially, expand the root node, visit its successor nodes, and place
them in the OPEN list in ascending order of their heuristic values.

• Select the first successor node from the OPEN list (the one with the lowest
heuristic value) and expand it.

• Rearrange all the remaining unexpanded nodes in the OPEN list and
repeat the above two steps.

• If the goal node is reached, terminate the search; otherwise keep expanding.

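The OPEN-list discipline above amounts to a priority queue ordered by h(n). An illustrative sketch; the graph and heuristic values are made up.

```python
import heapq

def greedy_best_first(start, goal_test, successors, h):
    """Expand the node with the lowest heuristic value h(n); f(n) = h(n)."""
    open_list = [(h(start), start, [start])]   # OPEN: visited, unexpanded
    closed = set()                             # CLOSED: visited and expanded
    while open_list:
        _, state, path = heapq.heappop(open_list)   # lowest h first
        if goal_test(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for s in successors(state):
            if s not in closed:
                heapq.heappush(open_list, (h(s), s, path + [s]))
    return None

# Hypothetical graph and heuristic values:
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(greedy_best_first('S', lambda s: s == 'G',
                        lambda s: graph[s], lambda s: h[s]))
```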
The performance measure of Best-
first search Algorithm:
• Completeness: Best-first search is incomplete, even in
finite state spaces.

• Optimality: It does not guarantee an optimal solution.

• Time and space complexity: It has O(b^m) worst-case time
and space complexity, where m is the maximum depth
of the search tree. If the quality of the heuristic function
is good, the complexities can be reduced substantially.

Disadvantages of Best-first search

• Best-first search does not guarantee reaching the goal state.

• Since best-first search is a greedy approach, it
does not give an optimized solution.

• It may cover a long distance in some cases.

A* Search Algorithm

• The most widely used informed search algorithm, where a node n is
evaluated by combining the values of the functions g(n) and h(n).

• The function g(n) is the path cost from the start/initial node to
node n, and h(n) is the estimated cost of the cheapest path from
node n to the goal node. Therefore, we have

f(n) = g(n) + h(n)

where f(n) is the estimated cost of the cheapest solution
through n.

• So, to find the cheapest solution, look for the lowest values of f(n).

[Figure: example graph with start node S, goal node G, and successor nodes A and B]

In the example, S is the root node and G is the goal node. Starting from
the root node S, we move toward its successor nodes A and B. To
reach the goal node G, calculate the f(n) values of nodes S, A, and B using the
evaluation equation f(n) = g(n) + h(n).

A* Search Algorithm
Calculation of f(n) for node S:
f(S) = (distance from node S to S) + h(S) = 0 + 10 = 10

Calculation of f(n) for node A:
f(A) = (distance from node S to A) + h(A) = 2 + 12 = 14

Calculation of f(n) for node B:
f(B) = (distance from node S to B) + h(B) = 3 + 14 = 17

Therefore, node A has the lowest f(n) value. Hence, node A is
expanded to its next-level nodes C and D, and again the lowest f(n)
value is computed. The resulting sequence is S -> A -> D -> G with
f(n) = 13 (the lowest value).

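The expansion order above can be reproduced with a small A* sketch. The graph, step costs, and heuristic values below are made up to match the slide's numbers (f(A) = 14, f(B) = 17, final path S->A->D->G with f = 13); the function names are my own.

```python
import heapq

def astar(start, goal_test, successors, h):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]      # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return g, path
        for s, step in successors(state):
            g2 = g + step
            if s not in best_g or g2 < best_g[s]:
                best_g[s] = g2
                heapq.heappush(frontier, (g2 + h(s), g2, s, path + [s]))
    return None

# Hypothetical graph: successors(s) yields (state, step cost) pairs
graph = {'S': [('A', 2), ('B', 3)], 'A': [('C', 5), ('D', 4)],
         'B': [('D', 5)], 'C': [], 'D': [('G', 7)], 'G': []}
h = {'S': 10, 'A': 12, 'B': 14, 'C': 20, 'D': 6, 'G': 0}
print(astar('S', lambda s: s == 'G', lambda s: graph[s], lambda s: h[s]))
```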
How to make A* search admissible to get an
optimized solution?
• A* search finds an optimal solution when its heuristic function
h'(n) is admissible, i.e., it never assumes the cost of solving a problem to be
more than its actual cost.

• A heuristic function can either underestimate or overestimate the cost
required to reach the goal node.

• But an admissible heuristic function never overestimates the cost
required to reach the goal state.

• Underestimating the cost value means the cost we assumed is less
than the actual cost.

• Overestimating the cost value means the cost we assumed is greater than the
actual cost.

Here, h(n) is the actual heuristic cost
value and h'(n) is the estimated heuristic cost
value; admissibility requires h'(n) <= h(n).

Note: An overestimated cost value
may or may not lead to an
optimized solution, but an
underestimated cost value always
leads to an optimized solution.

Consider a search tree in which the starting/initial node is A and the goal node
is E. There are different paths to the goal node E, each with its heuristic
cost h(n) and path cost g(n). The actual heuristic cost is h(n) = 18. Suppose
two different estimated values:

• So, when the cost value is overestimated, the search does not work to find
the best optimal path; it settles for the first path it acquires.

• But if the h(n) value is underestimated, the search keeps trying for the best
value of h(n), which leads to a good optimal solution.

Note: Underestimating h(n) leads to a
better optimal solution than
overestimating the value.

The performance measure
of A* search

• Completeness: A* search is complete; it is
guaranteed to reach the goal node (on finite
graphs with positive step costs).

• Optimality: With an admissible
(underestimating) heuristic, it always gives an
optimal solution.

• Space and time complexity: A* search
has O(b^d) space and time complexity in the
worst case.

Disadvantage of A* search

• A* often runs out of memory on large problems, since it keeps all generated nodes.

AO* search Algorithm

• AO* search is a specialized search over a graph with AND/OR
operations (an AND/OR graph).

• It is a problem-decomposition strategy, where a problem is
decomposed into smaller pieces that are solved separately to get the
solution required to reach the desired goal.

• Although A* search and AO* search both follow the best-first
search order, they are dissimilar from one another.

[Figure: AND/OR graph for the "eat some food" example — an OR branch
(order food) and an AND branch (buy ingredients AND cook)]
AO* search Algorithm

• Here, the destination/goal is to eat some food.

• There are two ways: either order food from a restaurant, or buy
some food ingredients and cook.

• Thus, we can apply either of the two ways; the choice depends on us.

• It is not guaranteed that an order will be delivered on time, or that the food
will be tasty, etc.

• But if we purchase the ingredients and cook, we will be more satisfied.

• Therefore, the AO* search provides two ways to choose: either OR or AND.

• Here it is better to choose AND rather than OR to get a good optimal solution.

Constraint Satisfaction
Constraint satisfaction is a technique where a problem is solved
when its values satisfy certain constraints or rules of the problem.
This type of technique leads to a deeper understanding of the
problem structure as well as its complexity.

Constraint satisfaction depends on three components, namely:

• X: a set of variables.

• D: a set of domains in which the variables reside; there is a
specific domain for each variable.

• C: a set of constraints that the set of
variables must follow.
In constraint satisfaction, domains are the spaces in which the
variables reside, subject to the problem-specific constraints.
These are the three main elements of a constraint
satisfaction technique.
Each constraint consists of a pair {scope, rel}:
the scope is a tuple of the variables that participate in the
constraint, and rel is a relation that lists the values
the variables can take to satisfy the constraints of the
problem.

Solving Constraint Satisfaction Problems
The requirements for solving a constraint satisfaction problem
(CSP) are:

• A state space
• The notion of a solution

A state in the state space is defined by assigning values to
some or all of the variables, such as
{X1=v1, X2=v2, and so on…}.
Values can be assigned to variables in three
ways:

• Consistent or Legal Assignment: an assignment that does not
violate any constraint or rule.

• Complete Assignment: an assignment in which every variable is
assigned a value and the solution to the CSP remains consistent.

• Partial Assignment: an assignment that assigns values to only some of
the variables.

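The three assignment kinds can be seen in a small backtracking solver: it extends a partial assignment one variable at a time, keeps it consistent, and stops at a complete assignment. Illustrative code; the 3-variable map-colouring instance and all names are made up.

```python
def backtracking_search(variables, domains, constraints, assignment=None):
    """Extend a partial assignment one variable at a time, keeping it
    consistent; a complete consistent assignment is a solution."""
    assignment = assignment if assignment is not None else {}
    if len(assignment) == len(variables):
        return assignment                  # complete and consistent
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):   # still legal?
            result = backtracking_search(variables, domains, constraints,
                                         assignment)
            if result:
                return result
        del assignment[var]                # partial assignment failed: backtrack
    return None

# Hypothetical map colouring: X adjacent to Y, Y adjacent to Z
variables = ['X', 'Y', 'Z']
domains = {v: ['red', 'green'] for v in variables}
def differ(a, b):
    # Constraint is vacuously satisfied while either variable is unassigned.
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
constraints = [differ('X', 'Y'), differ('Y', 'Z')]
print(backtracking_search(variables, domains, constraints))
```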
Types of Domains in CSP

There are two types of domains used by
the variables:

• Discrete Domain: the variables take values from a countable set,
which may be finite (e.g., colours in map colouring) or infinite
(e.g., the set of integers).

• Continuous Domain: the variables take real-numbered values from a
continuous range (e.g., start times in scheduling).
