
Artificial Intelligence: A Modern Approach

Chapter 3

Solving Problems By
Searching

Outline

♦ Problem-solving agents
♦ Example problems
♦ Problem formulation
♦ Search algorithms
♦ Uninformed search strategies
♦ Informed (heuristic) search strategies

Problem Solving Agents

• In this chapter we see how an agent can find a sequence of actions that achieves its goals when no single action will do.
• The simplest agents discussed in Chapter 2 were the reflex agents, which base their actions on a direct mapping from states to actions.
• This chapter describes one kind of goal-based agent called a problem-solving agent. Problem-solving agents use atomic representations of states.
Problem-solving Agents

• A problem-solving agent is a goal-based agent.
• It solves problems by finding sequences of actions that lead to goals.
• To solve a problem:
– First, formulate a goal based on the current situation (goal formulation phase):
• The goal is formulated as a set of world states in which the goal is satisfied.
– Then, formulate a problem: the process of deciding what actions and states to consider, given a goal (problem formulation phase).
– Next, look for a sequence of actions that reaches the goal (search phase):
• Actions are the operators causing transitions between world states.
– Finally, once a solution is found, the actions it recommends can be carried out (execution phase).
• Thus, we have a simple "formulate, search, execute" design for the agent.
Artificial Intelligence 4
• Driving from Ajlun → Amman
– Goal: be in Amman
– Problem:
• States: various cities
• Actions: drive between cities
– Solution:
• Sequence of cities: Ajlun → Jarash → Amman
• We assume that the environment is
– Observable, so the agent always knows the current state,
– Discrete, so at any given state there are only finitely many actions to choose from,
– Deterministic, so each action has exactly one outcome.
– Under these assumptions, this is the easiest setting for a problem-solving agent.
Well-defined Problems and Solutions
A problem can be defined formally by five components:
1. The initial state that the agent starts in: In(Ajlun).
2. ACTIONS(s): a description of the possible actions available to the agent in a particular state s:
– If s is In(Ajlun), the applicable actions are {Go(Irbid), Go(Jarash), Go(Salt)}.
3. RESULT(s, a): returns the (successor) state that results from doing action a in state s (the transition model).
– RESULT(In(Ajlun), Go(Jarash)) = In(Jarash).
4. The goal test, which determines whether a given state is a goal state.
– The agent's goal of being in Amman is the set {In(Amman)}.
5. A path cost function that assigns a numeric cost to each path.
– The agent chooses a cost function that reflects its own performance measure.
– For example, the cost of a path might be its length in kilometers.
– The step cost of taking action a in state s to reach state s' is denoted c(s, a, s').
• State Space:
– Together, the initial state, actions, and transition model (1, 2
and 3) define the state space of the problem.
– The state space forms a directed network or graph in which
the nodes are states and the links between nodes are actions.
– A path in the state space is a sequence of states connected by
a sequence of actions.
• The preceding elements define a problem and can be
gathered into a single data structure that is given as input to
a problem-solving algorithm.
• A solution to a problem is an action sequence that leads
from the initial state to a goal state.
• Solution quality is measured by the path cost function, and
an optimal solution has the lowest path cost among all
solutions.
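As the text notes, the five components can be gathered into a single data structure. A minimal sketch for the Ajlun route example follows; the road links and kilometer costs here are illustrative assumptions, not real map data.

```python
# Hypothetical road network: state -> {neighbor city: step cost in km}.
# The distances are made up for illustration.
ROADS = {
    "Ajlun":  {"Irbid": 32, "Jarash": 20, "Salt": 55},
    "Jarash": {"Ajlun": 20, "Amman": 48},
    "Irbid":  {"Ajlun": 32},
    "Salt":   {"Ajlun": 55, "Amman": 30},
    "Amman":  {"Jarash": 48, "Salt": 30},
}

class RouteProblem:
    """The five-component problem definition from the text."""

    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, s):           # 2. ACTIONS(s)
        return list(self.roads[s])

    def result(self, s, a):         # 3. RESULT(s, a): drive to city a
        return a

    def goal_test(self, s):         # 4. is s a goal state?
        return s == self.goal

    def step_cost(self, s, a, s2):  # 5. c(s, a, s')
        return self.roads[s][a]

problem = RouteProblem("Ajlun", "Amman", ROADS)  # 1. initial state
```

This is exactly the input a problem-solving algorithm needs; the search algorithms later in the chapter only ever call these five pieces.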
Uninformed and Informed Search Algorithms

• Uninformed search algorithms are given no information about the problem other than its definition.
• Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.
Goal Formulation and Problem Formulation

• Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
• Problem formulation is the process of deciding what actions and states to consider, given a goal.
Example: Romania (road-map figure omitted)
Problem-solving agents
Restricted form of general agent:

function Simple-Problem-Solving-Agent(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation
  state ← Update-State(state, percept)
  if seq is empty then
    goal ← Formulate-Goal(state)
    problem ← Formulate-Problem(state, goal)
    seq ← Search(problem)
  action ← Recommendation(seq, state)
  seq ← Remainder(seq, state)
  return action

Note: this is offline problem solving; the solution is executed "eyes closed." Online problem solving involves acting without complete knowledge.
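The "formulate, search, execute" loop can be sketched in Python roughly as follows. Update-State, Formulate-Goal, Formulate-Problem and Search are passed in as plain functions; that calling convention is an assumption made for illustration.

```python
def make_agent(update_state, formulate_goal, formulate_problem, search):
    """Sketch of Simple-Problem-Solving-Agent as a closure."""
    state, seq = None, []   # the 'static' variables of the pseudocode

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                          # no plan left: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []      # empty plan if search fails
        # Recommendation + Remainder: pop the next action off the plan
        return seq.pop(0) if seq else None

    return agent
```

Because the whole plan is computed before any action is executed, this is the offline, "eyes closed" version described above.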
Problem Formulation & Solution
Example: Romania

On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.

Formulate goal:
  be in Bucharest
Formulate problem:
  states: various cities
  actions: drive between cities
Find solution:
  sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Problem types

Deterministic, fully observable ⇒ single-state problem
  Agent knows exactly which state it will be in; solution is a sequence.
Non-observable ⇒ conformant problem
  Agent may have no idea where it is; solution (if any) is a sequence.
Nondeterministic and/or partially observable ⇒ contingency problem
  Percepts provide new information about the current state;
  solution is a contingent plan or a policy;
  often interleave search and execution.
Unknown state space ⇒ exploration problem ("online")
Example: vacuum world — problem formulation (figure omitted)
Example: vacuum world state space graph

states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
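The formulation above can be sketched as code. A state here is encoded as (robot location, set of dirty cells) for the two-cell world; the exact encoding is an assumption made to match the 8-state graph on the slide.

```python
# Two-cell vacuum world: cells "A" (left) and "B" (right).
# A state is (robot_location, frozenset_of_dirty_cells).

def result(state, action):
    """Transition model for Left, Right, Suck, NoOp."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})   # current cell becomes clean
    return state                     # NoOp

def goal_test(state):
    return not state[1]              # goal: no dirt anywhere

start = ("A", frozenset({"A", "B"}))  # robot left, both cells dirty
```

Executing Suck, Right, Suck from `start` reaches a goal state, matching the path-cost-3 solution visible in the state space graph.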
Example: The 8-puzzle

Start State:   Goal State:
7 2 4          1 2 3
5   6          4 5 6
8 3 1          7 8

states??: integer locations of tiles (ignore intermediate positions)
actions??: move blank left, right, up, down (ignore unjamming etc.)
goal test??: = goal state (given)
path cost??: 1 per move

[Note: optimal solution of the n-puzzle family is NP-hard]
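One possible encoding of this formulation: a state is a 9-tuple read row by row with 0 standing for the blank. The representation is an assumption for illustration; only the actions/result functions matter to a search algorithm.

```python
# Offsets in the flat 9-tuple for each blank move on the 3x3 board.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    """Legal blank moves: those that keep the blank on the board."""
    i = state.index(0)               # blank position 0..8
    acts = []
    if i % 3 > 0:  acts.append("Left")
    if i % 3 < 2:  acts.append("Right")
    if i >= 3:     acts.append("Up")
    if i <= 5:     acts.append("Down")
    return acts

def result(state, action):
    """Swap the blank with the neighboring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

START = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the slide's start state
GOAL  = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # the slide's goal state
```

Each call to `result` costs 1, matching the path cost of 1 per move.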
Tree search algorithms

Basic idea:
  offline, simulated exploration of state space
  by generating successors of already-explored states
  (a.k.a. expanding states)

function Tree-Search(problem, strategy) returns a solution, or failure
  initialize the search tree using the initial state of problem
  loop do
    if there are no candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the corresponding solution
    else expand the node and add the resulting nodes to the search tree
  end
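The pseudocode can be sketched concretely. Here the frontier holds whole paths, and `pop_index` stands in for the strategy (0 gives FIFO/breadth-first order, -1 gives LIFO/depth-first order); both of these are illustrative assumptions rather than the textbook's node data structure.

```python
def tree_search(initial, goal_test, expand, pop_index=0):
    """Generic tree search. expand(s) returns the successor states of s."""
    frontier = [[initial]]               # frontier of paths to leaf nodes
    while frontier:                      # no candidates left -> failure
        path = frontier.pop(pop_index)   # choose a leaf per the strategy
        s = path[-1]
        if goal_test(s):                 # node contains a goal state
            return path                  # the corresponding solution
        for s2 in expand(s):             # expand: generate successors
            frontier.append(path + [s2])
    return None                          # failure
```

Note that, like the pseudocode, this never checks for repeated states, so on a graph with cycles it can run forever; the later algorithms add strategy-specific fixes.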
Tree Search (animated expansion and sub-tree figures omitted)
Tree search example: Romania (expansion figures omitted)
Search strategies
Searching For Solutions
• A strategy is defined by picking the order of node
expansion
Measuring problem-solving performance
• Strategies are evaluated along the following
dimensions:
– Completeness: does it always find a solution if one exists?
– Optimality: does it always find a least-cost solution?

– Time complexity: number of nodes generated (basic
operation)
– Space complexity: maximum number of nodes in memory
• Time and space complexity are measured in terms
of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
Types of Search Strategies

1. Uninformed (blind) search
2. Informed search
3. Adversarial search (game theory)

 Uninformed search: no information about
• the number of steps
• the path cost from the current state to the goal
– it searches the state space blindly.
 Uninformed strategies use only the information available in the problem definition.

Uninformed (blind) search algorithms:
• Breadth-first search
• Uniform-cost search (UCS)
• Depth-first search
• Depth-limited search
• Iterative deepening search

 Informed search, or heuristic search:
– a cleverer strategy that searches toward the goal,
– based on the information gathered from the current state so far.
Breadth-first search (animated expansion figures omitted)
Properties of breadth-first search
• Complete: finds a solution eventually (if b is finite).
• Optimal: when all step costs are equal, BFS is optimal because it always expands the shallowest unexpanded node.
• Time: the total number of nodes generated is
  1 + b + b² + b³ + … + b^d = O(b^d)
• Space: every node generated remains in memory,
  so the space complexity is also O(b^d).
• The disadvantage:
  the space complexity (the worse problem in practice) and the time complexity are enormous.

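The properties above can be seen in a minimal breadth-first graph search sketch; the explored set, which avoids re-expanding repeated states, is the standard graph-search addition, and the early goal test at generation time is the usual BFS optimization.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS with a FIFO queue of paths and an explored set."""
    if goal_test(start):
        return [start]
    frontier = deque([[start]])          # FIFO: shallowest path first
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for s in successors(path[-1]):
            if s not in explored:
                if goal_test(s):         # goal test when generated
                    return path + [s]
                explored.add(s)
                frontier.append(path + [s])
    return None
```

Every generated node sits either in `explored` or in `frontier`, which is exactly why the O(b^d) space cost is unavoidable for BFS.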
Uniform Cost Search (UCS)

Uniform-cost search is similar to BFS, but with some significant differences:
1. Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n).
– This is done by storing the frontier as a priority queue ordered by g.
– This gives an algorithm that is optimal for any step-cost function (as long as step costs are nonnegative).
2. The goal test is applied to a node when it is selected for expansion rather than when it is first generated.
3. A test is added in case a better path is found to a node currently on the frontier.
Uniform Cost Search (UCS)

Example: the problem is to get from Sibiu to Bucharest.
• The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99, respectively.
• The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 + 97 = 177.
• The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost 99 + 211 = 310.
• Now a goal node has been generated, but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path to Bucharest with cost 80 + 97 + 101 = 278.
• The algorithm then checks whether this new path is better than the old one: it is, so the old one is discarded.
• Bucharest, now with g-cost 278, is selected for expansion and the solution is returned.
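The three UCS differences can be sketched as follows. Better paths to frontier states are handled lazily here (stale, more expensive queue entries are simply skipped when popped), a common simplification of the textbook's explicit frontier update.

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """UCS: successors(s) yields (next_state, step_cost) pairs."""
    frontier = [(0, start, [start])]          # priority queue ordered by g
    explored = set()
    while frontier:
        g, s, path = heapq.heappop(frontier)  # lowest path cost g(n) first
        if s in explored:
            continue                          # stale entry: a cheaper path won
        if goal_test(s):                      # goal test on selection,
            return g, path                    # not on generation
        explored.add(s)
        for s2, c in successors(s):
            if s2 not in explored:
                heapq.heappush(frontier, (g + c, s2, path + [s2]))
    return None
```

Run on the Sibiu-to-Bucharest step costs from the trace above, it returns the 278-km path through Rimnicu Vilcea and Pitesti, not the 310-km path through Fagaras.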
Uniform Cost Search (UCS)
Example 2: a worked UCS trace on a small graph (figures omitted). [x] denotes g(n), the path cost of node n; the trace expands nodes in increasing order of g until the goal state is selected for expansion.
Uniform Cost Search: Example 3 — a worked trace on a graph with nodes A–H (figures omitted).

Instructor: Ahmad M. AL-Smadi
Depth-first search

Implemented with a stack (LIFO frontier). (Animated expansion figures omitted.)
Properties of Depth-first search

• Not complete:
– a path may be infinite or looping; then that path never fails, and the search never goes back to try another option.
• Not optimal (refer to the previous figure):
– depth-first search will explore the entire left subtree even if node 14 is a goal node;
– hence, depth-first search is not optimal.

Properties of Depth-first search

• Time and space complexity:
– The time complexity of depth-first graph search is bounded by the size of the state space (which may be infinite, of course).
– A depth-first tree search, on the other hand, may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node.
– So far, depth-first search seems to have no clear advantage over breadth-first search, so why do we include it?
– The reason is the space complexity:
• A depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path.
• Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored.
• For a state space with branching factor b and maximum depth m, depth-first search requires storage of only O(bm) nodes:
• linear space.
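A depth-first tree search sketch with an explicit stack follows. The `max_nodes` guard is an added assumption so the sketch cannot loop forever on cyclic state spaces, which is exactly the incompleteness discussed above.

```python
def depth_first_search(start, goal_test, successors, max_nodes=10_000):
    """DFS with a LIFO stack of paths; no explored set (tree search)."""
    frontier = [[start]]                     # stack of paths
    while frontier and max_nodes > 0:
        max_nodes -= 1
        path = frontier.pop()                # deepest node first (LIFO)
        if goal_test(path[-1]):
            return path
        # Push children in reverse so the leftmost child is on top.
        for s in reversed(successors(path[-1])):
            frontier.append(path + [s])
    return None                              # failure (or guard exhausted)
```

At any moment the stack holds one root-to-leaf path plus unexpanded siblings, which is the O(bm) linear-space property.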
Depth-Limited Search

 The embarrassing failure of DFS in infinite state spaces can be alleviated by supplying DFS with a predetermined depth limit l.
– That is, nodes at depth l are treated as if they have no successors.
– The depth limit solves the infinite-path problem.
– Unfortunately:
• it also introduces an additional source of incompleteness if we choose l < d, that is, if the shallowest goal is beyond the depth limit;
• depth-limited search will also be non-optimal if we choose l > d.
– Its time complexity is O(b^l) and its space complexity is O(bl).
 DFS can be viewed as a special case of depth-limited search with l = ∞.

Depth-limited search

= depth-first search with depth limit l, i.e., nodes at depth l have no successors. (Animated expansion figures omitted.)
Iterative deepening search

function Iterative-Deepening-Search(problem) returns a solution
  inputs: problem, a problem
  for depth ← 0 to ∞ do
    result ← Depth-Limited-Search(problem, depth)
    if result ≠ cutoff then return result
  end
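Both routines can be sketched together. Returning the string "cutoff" when the depth limit is hit mirrors the pseudocode's test; the `max_depth` bound is an added safety assumption in place of the pseudocode's unbounded loop.

```python
CUTOFF = "cutoff"   # sentinel: the limit was hit somewhere below

def depth_limited_search(state, goal_test, successors, limit):
    """Recursive DFS that stops at the given depth limit."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, goal_test, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result          # solution found below
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a goal is found."""
    for depth in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, successors, depth)
        if result != CUTOFF:
            return result
    return None
```

The distinction between "cutoff" and plain failure matters: failure means no solution exists at any depth, while cutoff means a deeper limit might still succeed.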
Iterative deepening search (animated figures omitted)

Iterative Deepening DFSIDS


• It tries all possible depth limits:
– first 0, then 1, 2, and so on—until a goal is found
– combines the benefits of depth-first and breadth-first search
• Like DFS, its memory requirements are: O(bd) .
• Like BFS,
– it is complete when the branching factor is finite
– and optimal when the path cost is a nondecreasing function
of the depth of the node.
• Time and space complexities
– reasonable
• Suitable for problems
– having a large search space
– and the depth of the solution is not known
Summary

• Breadth-first search expands the shallowest nodes first; it is complete and optimal for unit step costs, but has exponential space complexity.
• Depth-first search expands the deepest unexpanded node first. It is neither complete nor optimal, but has linear space complexity.
• Depth-limited search adds a depth bound.
• Iterative deepening search calls depth-first search with increasing depth limits until a goal is found. It is complete, optimal for unit step costs, has time complexity comparable to breadth-first search, and has linear space complexity.



Informed Search

• Informed search refers to search algorithms that exploit additional information about the goal, beyond the problem definition itself, to guide navigation of large state spaces. They are most widely used on large problems where uninformed search algorithms cannot efficiently find precise results.

Types:
• Greedy best-first search
• A* search
Greedy Best-first search

Evaluation function h(n) (heuristic)
  = estimate of cost from n to the closest goal
E.g., h_SLD(n) = straight-line distance from n to Bucharest
Greedy search expands the node that appears to be closest to the goal.
Greedy Best-first search — worked example (figures omitted):
• Choose the smaller h-value (B).
• Having reached a dead end before the destination (I), go back to the node with the next least value (A), then (C).
• With i < j and I the destination, the final path is S–C–H–I.

Romania with step costs in km (map figure omitted)
Greedy search example (expansion figures omitted)
Greedy Best-first search

• For this particular problem:
– greedy best-first search finds a solution without ever expanding a node that is not on the solution path;
– hence, its search cost is minimal.
• However, it is not optimal:
– the path via Sibiu and Fagaras to Bucharest is 32 kilometers longer than the path through Rimnicu Vilcea and Pitesti.
• It is also incomplete:
– Consider the problem of getting from Iasi to Fagaras.
– The heuristic suggests that Neamt be expanded first because it is closest to Fagaras, but it is a dead end.
• Time: O(b^m), but a good heuristic can give dramatic improvement.
• Space: O(b^m) — keeps all nodes in memory.
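A greedy best-first sketch on the Romania example follows. The straight-line distances are the standard h_SLD values to Bucharest from the Romania map; only the cities needed for this route are included.

```python
import heapq

# Standard straight-line distances to Bucharest (h_SLD).
H_SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Oradea": 380, "Fagaras": 176, "Rimnicu": 193, "Pitesti": 100,
         "Bucharest": 0}

NEIGHBORS = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
             "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu"],
             "Fagaras": ["Sibiu", "Bucharest"],
             "Rimnicu": ["Sibiu", "Pitesti"],
             "Pitesti": ["Rimnicu", "Bucharest"],
             "Timisoara": ["Arad"], "Zerind": ["Arad"],
             "Oradea": ["Sibiu"], "Bucharest": []}

def greedy_best_first(start, goal_test, successors, h):
    """Expand the node with the smallest h(n); ignore path cost g."""
    frontier = [(h(start), start, [start])]
    explored = set()
    while frontier:
        _, s, path = heapq.heappop(frontier)   # smallest h(n) first
        if goal_test(s):
            return path
        if s in explored:
            continue
        explored.add(s)
        for s2 in successors(s):
            if s2 not in explored:
                heapq.heappush(frontier, (h(s2), s2, path + [s2]))
    return None
```

From Arad it returns the Sibiu–Fagaras route, which is exactly the non-optimal solution discussed above (32 km longer than the Rimnicu Vilcea–Pitesti route).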
A* Search

Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n)
  g(n) = cost so far to reach n
  h(n) = estimated cost from n to the goal
  f(n) = estimated total cost of the path through n to the goal
A* search uses an admissible heuristic.

The algorithm is identical to Uniform-Cost-Search except that A* uses g + h instead of g.
A* search is both complete and optimal.
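A minimal A* sketch: it differs from a uniform-cost sketch only in ordering the priority queue by f = g + h. The road costs are those from the UCS Sibiu-to-Bucharest trace earlier; the heuristic values are the standard h_SLD distances for those cities.

```python
import heapq

ROADS = {"Sibiu": [("Rimnicu", 80), ("Fagaras", 99)],
         "Rimnicu": [("Pitesti", 97)],
         "Fagaras": [("Bucharest", 211)],
         "Pitesti": [("Bucharest", 101)],
         "Bucharest": []}
H = {"Sibiu": 253, "Rimnicu": 193, "Fagaras": 176,
     "Pitesti": 100, "Bucharest": 0}

def a_star_search(start, goal_test, successors, h):
    """A*: successors(s) yields (next_state, step_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]    # ordered by f = g + h
    explored = set()
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s in explored:
            continue                              # stale, costlier entry
        if goal_test(s):                          # goal test on selection
            return g, path
        explored.add(s)
        for s2, c in successors(s):
            if s2 not in explored:
                g2 = g + c
                heapq.heappush(frontier, (g2 + h(s2), g2, s2, path + [s2]))
    return None
```

With this admissible heuristic, A* expands Rimnicu Vilcea (f = 273) before Fagaras (f = 275) and returns the optimal 278-km path, unlike greedy best-first search.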
A* Search Example 1 (worked trace figures omitted; at each step the node with the least f-value is expanded)
A* Search Example 2 (worked trace figures omitted)
Summary

A problem consists of five parts: the initial state, a set of actions, a transition model describing the results of those actions, a set of goal states, and an action cost function.

Uninformed search methods have access only to the problem definition. Algorithms build a search tree in an attempt to find a solution.

Informed search methods have access to a heuristic function h(n) that estimates the cost of a solution from n.
