
Solving problems by searching

Chapter 3

Outline
- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms

Selecting a state space
- The real world is absurdly complex, so the state space must be abstracted for problem solving.
- (Abstract) state = set of real states
- (Abstract) action = complex combination of real actions
  - e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
  - For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
- (Abstract) solution = set of real paths that are solutions in the real world
- Each abstract action should be "easier" than the original problem.

Vacuum world state space graph

- states?
- actions?
- goal test?
- path cost?

Vacuum world state space graph
- states? integer dirt and robot locations
- actions? Left, Right, Suck
- goal test? no dirt at any location
- path cost? 1 per action

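A minimal sketch of this formulation in Python; the state encoding (robot location plus two dirt flags) and the function names are illustrative assumptions, not taken from the slides.

# Vacuum world: a state is (robot_location, dirt_at_A, dirt_at_B).
# Locations are 'A' and 'B'; the dirt flags are booleans.

def successors(state):
    """Return {action: resulting_state} for the deterministic vacuum world."""
    loc, dirt_a, dirt_b = state
    result = {'Left': ('A', dirt_a, dirt_b), 'Right': ('B', dirt_a, dirt_b)}
    if loc == 'A':
        result['Suck'] = ('A', False, dirt_b)
    else:
        result['Suck'] = ('B', dirt_a, False)
    return result

def is_goal(state):
    """Goal test: no dirt at any location."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

# Robot in A, both squares dirty: three applicable actions, each with path cost 1.
print(successors(('A', True, True)))
print(is_goal(('B', False, False)))   # True
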
Example: The 8-puzzle

- states?
- actions?
- goal test?
- path cost?

Example: The 8-puzzle

 states? locations of tiles


 actions? move blank left, right, up, down
 goal test? = goal state (given)
 path cost? 1 per move

[Note: optimal solution of n-Puzzle family is NP-hard]

12/08/21 7

Example: robotic assembly

- states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled
- actions? continuous motions of robot joints
- goal test? complete assembly
- path cost? time to execute

Example: Romania
- On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
- Formulate goal:
  - be in Bucharest
- Formulate problem:
  - states: various cities
  - actions: drive between cities
- Find solution:
  - sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest


Example: Romania (road map)

Problem-solving agents

Searching
- In Artificial Intelligence, "search" means searching a problem space for a solution to a problem, not searching through data structures.
- Search examines possible sequences of actions.
- Input: problem description, initial state
- Output: solution, as an action sequence
- Search space: the set of all possible action sequences

Searching
- Searching is the process of looking for a sequence of actions that ultimately leads to a solution:
  1. Choose an initial state.
  2. Test whether it is a goal state.
  3. If it is not a goal state, expand it to generate more states.
  4. Repeat the above steps until either a solution is found or all states have been expanded.

Search Strategies
- There are different ways to search.
- A strategy determines which one of the candidate states is to be expanded next.
  1. Uninformed (blind)
  2. Informed (heuristic)

Tree search algorithms
- Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (expanding states).

Tree search example

State Space vs. Search Space
- The state space consists of all states in the problem (e.g., one state for each city in the route-finding problem).
- The search space contains all paths from each node to the goal node (a potentially infinite number of nodes).
- A state is a (representation of a) physical configuration.
- A node is a data structure that includes state, parent node, action, path cost, and depth; a node is part of a search tree.

Node data structure
- State: the state this node represents
- Parent: the node that generated this node
- Action: the action that was applied to the parent to generate this node
- Path cost: g(n), the cost of the path from the initial state to this node
- Depth: the number of steps along the path from the initial state

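A minimal sketch of this node structure in Python; the class and field names mirror the list above, while the solution() helper is an illustrative addition, not from the slides.

class Node:
    """A search-tree node: state, parent, action, path cost g(n), depth."""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent      # node that generated this node
        self.action = action      # action applied to the parent
        self.path_cost = parent.path_cost + step_cost if parent else 0   # g(n)
        self.depth = parent.depth + 1 if parent else 0

    def solution(self):
        """Recover the action sequence by following parent pointers back to the root."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
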
Implementation: states vs. nodes

- The fringe is the collection of nodes that have been generated but not yet expanded.
- The Expand function creates new nodes, filling in the various fields and using the successor function of the problem to create the corresponding states.

Implementation: general tree search

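A minimal sketch of the general tree-search scheme in Python; the successors(state) interface and the pop parameter that selects the search strategy are illustrative assumptions, not part of the slides.

def tree_search(initial_state, is_goal, successors, pop):
    """Generic tree search; `pop` decides which fringe entry is expanded next."""
    fringe = [(initial_state, [])]              # (state, actions taken so far)
    while fringe:
        state, path = pop(fringe)               # strategy chooses the next node
        if is_goal(state):
            return path
        for action, next_state in successors(state):   # expand the node
            fringe.append((next_state, path + [action]))
    return None                                 # the fringe is empty: no solution

# Breadth-first behaviour: treat the fringe as a FIFO queue.
bfs_pop = lambda fringe: fringe.pop(0)
# Depth-first behaviour: treat the fringe as a LIFO stack.
dfs_pop = lambda fringe: fringe.pop()
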
Performance of search strategies
- Search strategies are evaluated along the following dimensions:
  - completeness: does it always find a solution if one exists?
  - time complexity: number of nodes generated
  - space complexity: maximum number of nodes in memory
  - optimality: does it always find a least-cost solution?

- Branching factor: the maximum number of successors of any node.
- Time and space complexity are measured in terms of:
  - b: maximum branching factor of the search tree
  - d: depth of the least-cost solution
  - m: maximum depth of the state space (may be ∞)

Uninformed (Blind) search strategies

- Uninformed search strategies use only the information available in the problem definition:
  1. Breadth-first search
  2. Uniform-cost search
  3. Depth-first search
  4. Depth-limited search
  5. Iterative deepening search

Breadth-first search
- Expand the shallowest unexpanded node.
- Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.

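A minimal breadth-first search sketch in Python; the small example graph and the path-based bookkeeping are illustrative assumptions, not from the slides.

from collections import deque

# Small illustrative graph: node -> list of successors.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['G'], 'F': [], 'G': []}

def breadth_first_search(start, goal):
    fringe = deque([[start]])            # FIFO queue of paths; new successors go at the end
    while fringe:
        path = fringe.popleft()          # the shallowest unexpanded node comes out first
        node = path[-1]
        if node == goal:
            return path
        for successor in graph[node]:
            fringe.append(path + [successor])
    return None

print(breadth_first_search('A', 'G'))    # ['A', 'B', 'E', 'G']
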
Properties of Breadth-First Search
- Complete? Yes (if b is finite)
- Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
- Space? O(b^(d+1)) (keeps every node in memory)
- Optimal? Yes (if cost = 1 per step)
- Space is the bigger problem (more so than time).
- Not suitable except for small problems.

Uniform-cost search
- Expand the least-cost unexpanded node.
- Implementation: fringe = queue ordered by path cost.
- Equivalent to breadth-first search if all step costs are equal.
- Complete? Yes, if step cost ≥ ε
- Time? number of nodes with g ≤ cost of the optimal solution: O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
- Space? number of nodes with g ≤ cost of the optimal solution: O(b^⌈C*/ε⌉)
- Optimal? Yes: nodes are expanded in increasing order of g(n)

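A minimal uniform-cost search sketch in Python, using a priority queue keyed by path cost g(n); the small weighted graph is an illustrative assumption, not from the slides.

import heapq

# Small illustrative weighted graph: node -> list of (successor, step cost).
graph = {'A': [('B', 2), ('C', 5)], 'B': [('C', 1), ('D', 4)], 'C': [('D', 1)], 'D': []}

def uniform_cost_search(start, goal):
    fringe = [(0, start, [start])]              # priority queue ordered by path cost g(n)
    while fringe:
        g, node, path = heapq.heappop(fringe)   # least-cost unexpanded node comes out first
        if node == goal:
            return g, path
        for successor, step in graph[node]:
            heapq.heappush(fringe, (g + step, successor, path + [successor]))
    return None

print(uniform_cost_search('A', 'D'))            # (4, ['A', 'B', 'C', 'D'])
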

Depth-first search
- Expand the deepest unexpanded node.
- Implementation: fringe = LIFO queue (a stack), i.e., put successors at the front.

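A minimal depth-first search sketch in Python with an explicit LIFO stack; the example graph and the child ordering are illustrative assumptions, not from the slides.

# Small illustrative graph: node -> list of successors.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['G'], 'F': [], 'G': []}

def depth_first_search(start, goal):
    fringe = [[start]]                       # LIFO stack of paths
    while fringe:
        path = fringe.pop()                  # the deepest unexpanded node comes out first
        node = path[-1]
        if node == goal:
            return path
        for successor in reversed(graph[node]):   # reversed so the leftmost child is expanded first
            fringe.append(path + [successor])
    return None

print(depth_first_search('A', 'G'))          # ['A', 'B', 'E', 'G']
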
Properties of depth-first search
- Complete? No: fails in infinite-depth spaces and in spaces with loops.
  - Modified to avoid repeated states along the path, it is complete in finite spaces.
- Time? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first search.
- Space? O(bm), i.e., linear space!
- Optimal? No

Depth-limited search
- Depth-limited search = depth-first search with depth limit l, i.e., nodes at depth l have no successors.
- Recursive implementation: see the sketch below.

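A minimal recursive depth-limited search sketch in Python; the example graph and the 'cutoff' return value are illustrative assumptions, not from the slides.

# Small illustrative graph: node -> list of successors.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}

def depth_limited_search(node, goal, limit, path=None):
    """Depth-first search that treats nodes at depth `limit` as having no successors.
    Returns a path, None (failure), or 'cutoff' (the limit was reached)."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for successor in graph[node]:
        result = depth_limited_search(successor, goal, limit - 1, path + [successor])
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

print(depth_limited_search('A', 'E', 1))   # 'cutoff' (E lies at depth 2)
print(depth_limited_search('A', 'E', 2))   # ['A', 'C', 'E']
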
Iterative deepening search

Iterative deepening search with limits l = 0, 1, 2, 3

Iterative deepening search
- Number of nodes generated in a depth-limited search to depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
- Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
- For b = 10, d = 5:
  - N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  - N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
- Overhead = (123,456 - 111,111) / 111,111 ≈ 11%

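A minimal iterative deepening sketch in Python, repeatedly running a depth-limited search with limits 0, 1, 2, …; the example graph and helper names are illustrative assumptions, not from the slides.

# Small illustrative graph: node -> list of successors.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F'], 'F': []}

def depth_limited(node, goal, limit, path):
    """Helper: depth-first search cut off below `limit`; returns a path or None."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for successor in graph[node]:
        found = depth_limited(successor, goal, limit - 1, path + [successor])
        if found is not None:
            return found
    return None

def iterative_deepening_search(start, goal, max_depth=50):
    for limit in range(max_depth + 1):            # limit = 0, 1, 2, ...
        found = depth_limited(start, goal, limit, [start])
        if found is not None:
            return found                          # first hit is at the shallowest possible depth
    return None

print(iterative_deepening_search('A', 'F'))       # ['A', 'C', 'E', 'F']
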

Properties of iterative deepening search
- Complete? Yes
- Time? (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
- Space? O(bd)
- Optimal? Yes, if step cost = 1

Summary of algorithms

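Collecting the properties established on the preceding slides (depth-limited search is omitted because its properties are not listed here):

Criterion   Breadth-first    Uniform-cost     Depth-first   Iterative deepening
Complete?   Yes (b finite)   Yes (cost ≥ ε)   No            Yes
Time        O(b^(d+1))       O(b^⌈C*/ε⌉)      O(b^m)        O(b^d)
Space       O(b^(d+1))       O(b^⌈C*/ε⌉)      O(bm)         O(bd)
Optimal?    Yes (unit cost)  Yes              No            Yes (unit cost)
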
Repeated states
- Failure to detect repeated states can turn a linear problem into an exponential one!
- It can even make a solvable problem unsolvable in practice, because the search loops forever.

Repeated states

- Algorithms that forget their history are doomed to repeat it.
- Closed list: a data structure that stores every expanded node; with it, the algorithm can remember every state it has visited.
- Open list: a data structure that stores only the fringe nodes.
- Graph search: using the closed list, graph search compares every current node with the closed list; if it matches any node in the closed list, it is discarded (not expanded).
  - This eliminates the searching of all duplicate nodes.

Graph search

- In the worst case, time and space complexity are proportional to the size of the state space.

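A minimal graph-search sketch in Python: tree search plus a closed list of expanded states so that repeated states are discarded; the example graph with cycles is an illustrative assumption, not from the slides.

# Small illustrative graph with cycles: node -> list of successors.
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C', 'E'], 'E': []}

def graph_search(start, goal):
    open_list = [[start]]                 # fringe: nodes generated but not yet expanded
    closed_list = set()                   # every state that has already been expanded
    while open_list:
        path = open_list.pop(0)           # FIFO here, i.e., breadth-first graph search
        node = path[-1]
        if node == goal:
            return path
        if node in closed_list:
            continue                      # repeated state: discard instead of re-expanding
        closed_list.add(node)
        for successor in graph[node]:
            open_list.append(path + [successor])
    return None

print(graph_search('A', 'E'))             # ['A', 'B', 'D', 'E']
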
Searching with Partial Information

1. Sensorless problems (conformant problems)
2. Contingency problems
   - Adversarial problems: the uncertainty is caused by the actions of other agents.
3. Exploration problems

Problem types
- Deterministic, fully observable ⇒ single-state problem
  - The agent knows exactly which state it will be in; the solution is a sequence of actions.
- Non-observable ⇒ sensorless problem (conformant problem)
  - The agent may have no idea where it is; the solution is still a sequence.
  - The agent must reason about a set of possible states rather than a single state.
  - Belief state: a set of states the agent might be in. If there are S states in the physical space, there are 2^S possible belief states (e.g., 2^8 = 256 for the 8-state vacuum world).
- Nondeterministic and/or partially observable ⇒ contingency problem
  - Percepts provide new information about the current state.
  - Often interleaves search and execution.
- Unknown state space ⇒ exploration problem

Example: vacuum world
- Single-state, start in #5.
  Solution?

Example: vacuum world
- Single-state, start in #5.
  Solution? [Right, Suck]
- Sensorless, start in {1,2,3,4,5,6,7,8};
  e.g., Right goes to {2,4,6,8}.
  Solution?

Example: vacuum world
- Sensorless, start in {1,2,3,4,5,6,7,8};
  e.g., Right goes to {2,4,6,8}.
  Solution? [Right, Suck, Left, Suck]
- Contingency
  - Nondeterministic: Suck may dirty a clean carpet.
  - Partially observable: percepts give the location and the dirt at the current location.
  - Percept [L, Clean], i.e., start in #5 or #7.
  Solution?

Example: vacuum world
- Sensorless, start in {1,2,3,4,5,6,7,8};
  e.g., Right goes to {2,4,6,8}.
  Solution? [Right, Suck, Left, Suck]
- Contingency
  - Nondeterministic: Suck may dirty a clean carpet.
  - Partially observable: percepts give the location and the dirt at the current location.
  - Percept [L, Clean], i.e., start in #5 or #7.
  Solution? [Right, if dirt then Suck]

Single-state problem formulation
A problem is defined by four items:

1. initial state, e.g., "at Arad"
2. actions, or successor function S(x) = set of action–state pairs
   - e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test, which can be
   - explicit, e.g., x = "at Bucharest"
   - implicit, e.g., Checkmate(x)
4. path cost (additive)
   - e.g., sum of distances, number of actions executed, etc.
   - c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state to a goal state.

Summary
- Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
- There is a variety of uninformed search strategies.
- Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms.
