Solving Problems by Searching: Dr. Azhar Mahmood
Searching
Dr. Azhar Mahmood
Associate Professor
Email: [email protected]
Outline
• Problem-solving Agents
• Problem Formulation
• Example Problems
Solving Problems by Searching
• A reflex agent is simple
– it bases its actions on a direct mapping from states to actions
– it has great difficulty learning desired action sequences
– it cannot work well in environments
• in which this mapping would be too large to store
• or would take too long to learn
• A problem-solving agent, by contrast,
– solves problems by finding sequences of actions that lead to desirable states (goals)
– To solve a problem, the first step is goal formulation, based on the current situation
Goal Formulation
• The goal is formulated
– as a set of world states in which the goal is satisfied
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
Search
• Because there are many ways to achieve the same goal
– those ways can be expressed together as a tree
– at a point with multiple options of unknown value,
• the agent can examine the different possible sequences of actions and choose the best
– this process of looking for the best sequence is called search
– the best sequence, a list of actions, is called the solution
Search Algorithm
• Defined as
– taking a problem as input
– and returning a solution
• Design of an agent
– “Formulate, search, execute”
Problem-solving Agents
Well-defined problems and solutions
1. Initial state
2. Actions
3. Transition model (successor function)
4. Goal test
5. Path cost
Well-defined problems and solutions
• A problem is defined by 5 components:
– The initial state
• the state the agent starts in: e.g., the initial state for our agent in Romania might be described as In(Arad).
– The set of possible actions available to the agent
• Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
– The transition model: a description of what each action does
• RESULT(In(Arad), Go(Zerind)) = In(Zerind)
• (successor function): any state reachable from a given state by a single action
– The goal test, which determines whether a given state is a goal state
– The path cost function, which assigns a numeric cost to each path
• Optimal solution
– the solution with the lowest path cost among all solutions
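The five components above can be sketched in Python for the Romania route-finding example. This is a minimal illustration, not a prescribed interface: the class and method names (`RouteProblem`, `step_cost`, etc.) are chosen here for clarity, and only a fragment of the road map is included.

```python
# Illustrative fragment of the Romania road map (distances from the AIMA map).
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # 1. initial state, e.g. In(Arad)
        self.goal = goal

    def actions(self, state):             # 2. ACTIONS(s): actions applicable in s
        return [f"Go({city})" for city in ROADS[state]]

    def result(self, state, action):      # 3. transition model RESULT(s, a)
        return action[3:-1]               # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, state):           # 4. goal test
        return state == self.goal

    def step_cost(self, state, action, next_state):  # 5. path cost, per step
        return ROADS[state][next_state]

problem = RouteProblem("Arad", "Bucharest")
print(sorted(problem.actions("Arad")))    # applicable actions from In(Arad)
```

With this formulation, the path cost of a whole path is just the sum of the step costs along it.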
Single-state Problem Formulation
A problem is defined by four items:
1. initial state e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3. goal test, e.g., x = "at Bucharest"
4. path cost (additive), e.g., sum of distances
• Abstraction
– the process of removing irrelevant information
– leaving only the parts essential to the description of the states
(removing details from the representation)
• states?
• actions?
• goal test?
• path cost?
Vacuum world state space graph
• states? dirt locations and robot location
• actions? Left, Right, Suck
• goal test? no dirt at any location
• path cost? 1 per action
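The vacuum-world answers above can be made concrete with a short Python sketch: two locations A and B, each clean or dirty, and the agent in one of them, giving 2 × 2 × 2 = 8 states. The state encoding and function names here are illustrative choices.

```python
from itertools import product

# State = (agent location, dirt at A?, dirt at B?)  ->  2 * 2 * 2 = 8 states.
states = [(loc, dirt_a, dirt_b)
          for loc, dirt_a, dirt_b in product("AB", [True, False], [True, False])]

actions = ["Left", "Right", "Suck"]

def result(state, action):
    """Transition model: the state reached by applying `action` in `state`."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                  # cleans the current square only
        return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
    raise ValueError(action)

def goal_test(state):                     # goal: no dirt anywhere
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

print(len(states))                        # 8
```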
Example: The 8-puzzle
Searching for Solutions
• Expanding
– Applying successor function to the current state
– Thereby generating a new set of states
• Leaf nodes
– The states having no successors
• Fringe / frontier: the set of search nodes that have not been expanded yet
Tree Search Example
Tree Search Algorithms
• Basic idea:
– offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
Implementation: General Tree Search
Search tree
• A node has five components:
– STATE: the state in the state space that the node corresponds to
– PARENT: the node in the search tree that generated this node
– ACTION: the action that was applied to the parent to generate this node
– PATH-COST: the cost, g(n), of the path from the initial state to the node n
– DEPTH: the number of steps along the path from the initial state
Implementation: States vs. Nodes
• A state is a (representation of) a physical configuration
• A node is a data structure constituting part of a search tree; it includes
state, parent node, action, path cost g(x), and depth
• The Expand function creates new nodes, filling in the various fields
and using the SuccessorFn of the problem to create the
corresponding states.
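The node data structure and the Expand operation can be sketched in Python. The class layout below is one reasonable encoding of the five components, not the book's exact implementation; a unit step cost is assumed in `expand` for simplicity.

```python
class Node:
    """A search-tree node with the five components listed above."""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state                     # STATE in the state space
        self.parent = parent                   # PARENT node in the tree
        self.action = action                   # ACTION applied to the parent
        self.path_cost = (0 if parent is None
                          else parent.path_cost + step_cost)    # g(n)
        self.depth = 0 if parent is None else parent.depth + 1  # DEPTH

def expand(node, problem):
    """Create the child nodes reachable from `node` by a single action."""
    return [Node(problem.result(node.state, a), parent=node, action=a,
                 step_cost=1)                  # assumes unit step cost
            for a in problem.actions(node.state)]

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Go(Sibiu)", step_cost=140)
print(child.depth, child.path_cost)            # 1 140
```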
Search Strategies
• The essence of searching
– choosing one option and keeping the others for later inspection
– in case the first choice turns out not to be correct
• Hence we need a search strategy
– which determines the choice of which state to expand next
– a good choice means less work and a faster search
• Important:
– state space ≠ search tree
• a state space has unique states {A, B}
• while a search tree may have cyclic paths: A-B-A-B-A-B-…
• a good search strategy should avoid such paths
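The cyclic-path problem on the two-state space {A, B} can be avoided by remembering which states have already been expanded (the idea behind graph search). A minimal sketch, with the function name and graph encoding chosen for illustration:

```python
from collections import deque

def reachable(initial, successors):
    """Return all states reachable from `initial`, skipping repeated states."""
    explored, frontier = set(), deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in explored:
            continue                    # already expanded: avoid the cycle
        explored.add(state)
        for s in successors[state]:
            if s not in explored:
                frontier.append(s)
    return explored

# The two-state space {A, B} with a cycle A-B-A-B-…: terminates immediately
# instead of looping, because each state is expanded at most once.
print(sorted(reachable("A", {"A": ["B"], "B": ["A"]})))   # ['A', 'B']
```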
Measuring Problem-Solving Performance
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
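To see why these quantities matter, note that a complete search tree with branching factor b has 1 + b + b² + … + b^d nodes down to depth d. A quick sketch of how fast this grows (the function name is illustrative):

```python
def nodes_to_depth(b, d):
    """Total nodes in a complete b-ary search tree down to depth d."""
    return sum(b ** i for i in range(d + 1))

# With b = 10, the tree already holds over a thousand nodes at depth 3.
print(nodes_to_depth(10, 3))   # 1111
```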
Measuring Problem-Solving Performance
• For the effectiveness of a search algorithm
– we can consider the total cost
– total cost = path cost g of the solution found + search cost
• search cost = the time necessary to find the solution
• Tradeoff:
– (long time, optimal solution with least g)
– vs. (shorter time, solution with slightly larger path cost g)
Reading
• Artificial Intelligence: A Modern Approach, 3rd
edition
– Chapter 3: Solving Problems by Searching
– Sections 3.1-3.4
Next
• Informed/Heuristic Search
– Greedy best-first search
– A* search
– Heuristic functions