AI 4
Problem Solving
To Solve a Problem:
Single Agent Pathfinding Problem
• In these problems we have a single problem-solver making the decisions, and the task is to find a sequence of primitive steps that takes us from the initial location to the goal location.
• Famous examples:
– Rubik’s Cube (Erno Rubik, 1975).
– Sliding-Tile puzzle.
– Navigation - Travelling Salesman Problem.
Two Player Games
• In a two-player game, one must consider the moves of
an opponent, and the ultimate goal is a strategy that
will guarantee a win whenever possible.
• Two-player perfect-information games have received the most attention from researchers so far, but researchers are now starting to consider more complex games, many of which involve an element of chance.
• The best Chess, Checkers, and Othello players in the
world are computer programs!
Constraint-Satisfaction Problems
• In these problems we also have a single agent making all the decisions, but here we are not concerned with the sequence of steps required to reach the solution, only with the solution itself.
• The task is to identify a state of the problem, such that all the
constraints of the problem are satisfied.
• Famous Examples:
– Eight Queens Problem.
– Number Partitioning.
The Problem Space
• A problem space consists of a set of states of a
problem and a set of operators that change
the state.
– State: a symbolic structure that represents a single configuration of the problem in sufficient detail to allow problem solving to proceed.
– Operator : a function that takes a state and
maps it to another state.
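To make the state/operator idea concrete, here is a minimal Python sketch (not from the slides) using the route-finding map that appears later in this lecture; the adjacency list is a hypothetical fragment of that map and the function name successors is an illustrative choice.

# A state is simply a location name; the operators are "drive to a neighbouring
# location", represented here by an adjacency list.
ROADS = {   # hypothetical fragment of the map used later in the lecture
    "Railway Station": ["Faqir Jo Pir", "Latifabad", "Gari Khata"],
    "Gari Khata": ["Hussainabad", "Tower Market"],
    "Tower Market": ["City Gate"],
}

def successors(state):
    """Apply every applicable operator to `state` and return the resulting states."""
    return ROADS.get(state, [])

print(successors("Railway Station"))   # ['Faqir Jo Pir', 'Latifabad', 'Gari Khata']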
Representing State
• At any moment, the relevant world is represented as a state
– Initial or Start State: the first state, from which the algorithm starts.
– An action (or operation) changes the current state to another state when it is applied: a state transition.
– An action can be taken (is applicable) only if its precondition is met by the current state.
– For a given state, there may be more than one applicable action.
– Goal state: a state that satisfies the goal description or passes the goal test.
– Dead-end state: a non-goal state to which no action is applicable.
– Expanded node: a node that has been visited.
– Open node: a node that is available to be visited.
– State space: the set of all possible states (nodes) of the problem (the complete map of possible nodes).
Evaluating Search Strategies
• Completeness
– Guarantees finding a solution whenever one exists
• Time Complexity
– How long does it take to find a solution? Usually measured in terms of the
number of nodes expanded
• Space Complexity
– How much space is used by the algorithm? Usually measured in terms of the maximum size that the OPEN list reaches during the search
• Optimality/Admissibility
– If a solution is found, is it guaranteed to be an optimal or highest quality
solution among several different solutions? For example, is it the one with
minimum cost?
[Figure: road map of Hyderabad used as the search space. The Railway Station is the initial state, City Gate is the goal, and the Airport is a dead-end; the other locations are Latifabad, Faqir Jo Pir, Gari Khata, Hussainabad, Channel, Tower Market, Bi-Pass, Rajputana and Qasim Chowk.]
Example: Calculate time complexity and space complexity if the
agent moves from Railway Station to City Gate through Tower
Market.
Solution:
[Figure: the same map annotated with the search from Railway Station to City Gate. Along the route through Tower Market the number of closed nodes is 4; along the Bi-Pass route it is 3.]
Step  Open Nodes                            Closed Nodes
01    Railway Station                       (none)
02    Faqir Jo Pir, Latifabad, Gari Khata   Railway Station
03    Gari Khata                            Hussainabad, Tower Market
04    Tower Market                          City Gate
05    City Gate                             (none)
You may also create the above table in terms of the number of nodes:
Step  Open Nodes  Closed Nodes
01    1           0
02    3           1
03    1           2
04    1           1
05    1           0
Some Example Problems
8-Puzzle
• Path Cost: each step costs 1, so the path cost is just the length
of the path.
A portion of the state space representation of an 8-Puzzle problem
[Figure: the root state (5 4 _ / 6 1 8 / 7 3 2, with the blank in the top-right corner) and the states reached from it by sliding tiles into the blank.]
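As a sketch, the operators of the 8-Puzzle can be written as a successor function in Python; the tuple representation (blank written as 0) and the function name are assumptions made for illustration, with the root state taken from the figure above.

# States are 9-tuples read row by row; the blank is 0 and swaps with an adjacent tile.
def puzzle_successors(state):
    """Return the states reachable by sliding one tile into the blank."""
    children = []
    i = state.index(0)                                   # position of the blank
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):    # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            child = list(state)
            child[i], child[j] = child[j], child[i]      # slide the neighbouring tile
            children.append(tuple(child))
    return children

root = (5, 4, 0,
        6, 1, 8,
        7, 3, 2)
print(len(puzzle_successors(root)))   # 2 moves are possible from this corner position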
The 8 Queens Problem
The goal of the 8-Queens problem is to place eight queens on a chessboard such that no queen attacks any other (a queen attacks any piece in the same row, column or diagonal).
States: any arrangement of 0 to 8 queens on the board.
Operators: add a queen to any square.
Goal Test: 8 queens on the board, none attacked.
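A short Python sketch of the goal test just described; the (row, column) representation of a placement and the helper name is_goal are illustrative assumptions.

def is_goal(queens):
    """True if exactly 8 queens are placed and no two attack each other."""
    if len(queens) != 8:
        return False
    for i, (r1, c1) in enumerate(queens):
        for r2, c2 in queens[i + 1:]:
            if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
                return False                    # same row, column or diagonal
    return True

# One well-known non-attacking placement (row i has its queen in the listed column):
print(is_goal(list(enumerate([0, 4, 7, 5, 2, 6, 1, 3]))))   # True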
Cryptarithmetic
• In crypt-arithmetic problems, letters stand for digits
and the aim is to find a substitution of digits for
letters such that the resulting sum is arithmetically
correct.
• Example:
    F O R T Y        Solution:    2 9 7 8 6    (F=2, O=9, R=7, etc.)
  +     T E N                         8 5 0
  +     T E N                         8 5 0
  -----------                    ----------
    S I X T Y                     3 1 4 8 6
Cryptarithmetic
• States: a cryptarithmetic puzzle with some
letters replaced by digits.
• Operators: replace all occurrences of a letter
with a digit not already appearing in the
puzzle.
• Goal Test: puzzle only contains digits, and
represents a correct sum.
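As an illustration only, a brute-force Python sketch for the FORTY + TEN + TEN = SIXTY puzzle on the previous slide: it simply tries digit assignments until the sum is arithmetically correct (a few million permutations, so it is slow but simple). The function name and the brute-force approach are assumptions, not the method prescribed by the slides.

from itertools import permutations

def solve_forty_ten_ten_sixty():
    letters = "FORTYENSIX"                               # the ten distinct letters in the puzzle
    for digits in permutations(range(10)):
        a = dict(zip(letters, digits))
        if a["F"] == 0 or a["T"] == 0 or a["S"] == 0:    # no leading zeros
            continue
        forty = int("".join(str(a[c]) for c in "FORTY"))
        ten = int("".join(str(a[c]) for c in "TEN"))
        sixty = int("".join(str(a[c]) for c in "SIXTY"))
        if forty + ten + ten == sixty:                   # goal test: a correct sum
            return a
    return None

print(solve_forty_ten_ten_sixty())   # expected to give 29786 + 850 + 850 = 31486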
Missionaries and Cannibals
• Three cannibals and three missionaries come to a crocodile-infested river. There is a boat on their side that can be used by either one or two persons. If the cannibals outnumber the missionaries at any time, the cannibals eat the missionaries. How can they use the boat to cross the river so that all the missionaries survive?
• There are 3 missionaries, 3 cannibals, and 1 boat that can carry up to two people.
• Goal: move all the missionaries and cannibals across the river.
• Operators: move the boat, containing some set of occupants, across the river (in either direction).
[Figure: geometric version of the problem.]
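A minimal Python sketch of one possible formulation: the state triple (missionaries, cannibals, boat) counts what is still on the starting bank. This representation, like the function names, is an assumption rather than something given on the slide.

START, GOAL = (3, 3, 1), (0, 0, 0)
LOADS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]         # boat loads of one or two people

def is_safe(m, c):
    """A bank is safe if it has no missionaries or they are not outnumbered."""
    return m == 0 or m >= c

def successors(state):
    m, c, boat = state
    sign = -1 if boat == 1 else 1                        # boat leaves or returns to the start bank
    result = []
    for dm, dc in LOADS:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and is_safe(nm, nc) and is_safe(3 - nm, 3 - nc):
            result.append((nm, nc, 1 - boat))
    return result

print(successors(START))   # [(3, 2, 0), (3, 1, 0), (2, 2, 0)]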
[Figure: an instance of the travelling salesman problem.]
[Figure: search of the travelling salesperson problem.]
Remove 5 Sticks
Real World Problems:
• Route finding
• VLSI Layout
• Robot Navigation
etc.
• State-space search is the process of searching through a state
space for a solution by making explicit a sufficient portion of an
implicit state-space graph to include a goal node.
– Hence, initially the explicit graph contains only the start node S (V = {S}); when S is expanded, its successors are generated and added to V, and the associated arcs are added to E. This process continues until a goal node is generated (included in V) and identified by the goal test.
• During search, a node can be in one of three categories:
– Not yet generated (it has not been made explicit yet).
– OPEN: generated but not yet expanded.
– CLOSED: expanded.
• Search strategies differ mainly in how they select an OPEN node for expansion at each step of the search.
A General State Space Search Algorithm
• Node n
– state description
– parent (may use a backpointer) (if needed)
– Operator used to generate n (optional)
– Depth of n (optional)
– Path cost from S to n (if available)
• OPEN list
– initialization: {S}
– node insertion/removal depends on specific search strategy
• CLOSED list
– initialization: {}
– organized by backpointers
open := {S}; closed :={ };
repeat
n := select(open); /* select one node from open for expansion */
if n is a goal
then exit with success; /* delayed goal testing */
expand(n)
/* generate all children of n
put these newly generated nodes in open (check duplicates)
put n in closed (check duplicates) */
until open = {};
exit with failure
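A Python rendering of the pseudocode above, offered as a sketch: select decides the strategy (popping from the front behaves like BFS, from the back like DFS), successors is assumed to generate a state's children, and backpointers/path reconstruction are omitted for brevity.

def general_search(start, is_goal, successors, select=lambda open_list: open_list.pop(0)):
    open_list, closed = [start], set()
    while open_list:
        n = select(open_list)                      # select one node from OPEN for expansion
        if is_goal(n):
            return n                               # delayed goal testing
        closed.add(n)                              # put n in CLOSED
        for child in successors(n):                # expand(n)
            if child not in closed and child not in open_list:   # duplicate check
                open_list.append(child)
    return None                                    # OPEN is empty: exit with failure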
Search Strategies
Search Directions
• The objective of a search procedure is to discover a path through the problem space from an initial configuration to a goal state. There are two directions in which a search can proceed:
– Forward, from the start states.
– Backward, from the goal states.
Forward Search: (Data-Directed / Data-Driven Reasoning / Forward Chaining)
• This search starts from the available information and tries to draw conclusions about the situation or the goal attainment. The process continues until (hopefully) it generates a path that satisfies the goal condition.
Backward Search: (Goal-Directed / Goal-Driven Reasoning / Backward Chaining)
• This search starts from expectations of what the goal is or what is to happen (a hypothesis), and then seeks evidence that supports (or contradicts) those expectations.
• The problem solver begins with the goal to be solved, then finds rules or moves that could be used to generate this goal and determines what conditions must be true to use them.
• These conditions become the new goals (sub-goals) for the search. This process continues, working backward through successive sub-goals, until (hopefully) a path is generated that leads back to the facts of the problem.
Goal-driven search uses knowledge of the goal to guide the search. Use goal-driven search if:
Breadth First Search (BFS)
• Breadth First Search explores the state space level by level. Only when there are no more states to be explored at a given level does the algorithm move on to the next level.
[Figure: a tree with root A, children B and C, and grandchildren D, E, F, G, expanded one level at a time.]
BFS
[Figure: nodes numbered in the order BFS expands them, level by level, until the goal is reached.]
BFS
• Time Complexity and Space Complexity:
– Looking at how BFS expands from the root, it first generates a fixed number of nodes, say b (the branching factor).
– On the second level this becomes b^2 nodes.
– On the third level it becomes b^3 nodes.
– And so on, until it reaches b^d for some depth d.
1 + b + b^2 + b^3 + ... + b^d, which is O(b^d)
BFS
As you can see, BFS is:
– very systematic;
– guaranteed to find a solution.
In terms of the four evaluation criteria, this means:
– BFS is complete: if an answer exists, it will be found (provided b is finite).
– BFS is optimal (if the cost is 1 per step): the path it finds from the initial state to a goal state is the shallowest one.
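A sketch of BFS in Python, assuming hashable states, a successors function that generates children, and that we want the path to the first (shallowest) goal found; none of these names come from the slides.

from collections import deque

def breadth_first_search(start, is_goal, successors):
    frontier = deque([[start]])                    # OPEN: a FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()                  # the shallowest node is expanded first
        state = path[-1]
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in visited:               # repeated-state check
                visited.add(child)
                frontier.append(path + [child])
    return None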
Backwards Breadth First Search
[Figure: breadth-first search run backwards from the goal state SG toward the initial state SI, with nodes numbered in order of expansion.]
Bi-Directional BFS
[Figure: two breadth-first searches, one forward from the initial state SI and one backward from the goal state SG, meeting in the middle.]
Uniform Cost Search
Expands the least cost leaf node first. It is
complete, and unlike BFS, is optimal even
when operators have different costs. Its space
and time complexity are the same as for
breadth-first search.
Uniform Cost Search: Example
[Figure: graph with start node S, goal node G, and edge costs S-A = 1, S-B = 5, S-C = 15, A-G = 10, B-G = 5, C-G = 5.]
Solution: expanding S puts A (cost 1), B (cost 5) and C (cost 15) on OPEN. A, the cheapest, is expanded next, generating G via A at cost 11; then B is expanded, generating G via B at cost 10. The cheapest node on OPEN is now G at cost 10, so the search stops.
The solution is S-B-G with cost 10.
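A Python sketch of uniform cost search run on this example; the dictionary encoding of the graph is an assumption taken from the figure's edge costs.

import heapq

GRAPH = {
    "S": [("A", 1), ("B", 5), ("C", 15)],
    "A": [("G", 10)],
    "B": [("G", 5)],
    "C": [("G", 5)],
    "G": [],
}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]                      # OPEN, ordered by path cost
    best = {}
    while frontier:
        cost, state, path = heapq.heappop(frontier)       # expand the least-cost leaf first
        if state == goal:
            return path, cost                             # delayed goal test
        for child, step in GRAPH[state]:
            new_cost = cost + step
            if new_cost < best.get(child, float("inf")):  # keep only the cheapest known path
                best[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None, float("inf")

print(uniform_cost_search("S", "G"))   # (['S', 'B', 'G'], 10)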
Depth First Search
• A depth-first search begins at the root node (i.e. the initial node) and works downward to successively deeper levels.
• An operator is applied to the node to generate
the next deeper node in sequence. This
process continues until a solution is found or
backtracking is forced by reaching a dead end.
Depth-first search
[Figure sequence: step-by-step expansion of a tree by depth-first search, following one branch down to the deepest level before backtracking.]
Depth First Search
[Figure: a binary tree with root A, children B and C, and descendants D through O, visited in depth-first order.]
[Figure: depth-first expansion order from the initial state SI, with nodes numbered in the order they are generated, continuing until the goal is reached.]
Evaluation of DFS
• Since we do not expand all the nodes at a level, the space complexity is modest: for branching factor b and depth d, we only need to store about b·d nodes in memory. This is much better than b^d.
• In some cases DFS can be faster than BFS; however, the worst case is still O(b^d).
• If the search tree is very deep (or infinite, which is quite possible), DFS may run off to infinity and never recover.
• Thus DFS is neither optimal nor complete.
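A recursive Python sketch of DFS; it refuses to revisit states already on the current path, but in an infinite (or very deep) tree it can still fail to return, exactly as noted above. The names are illustrative.

def depth_first_search(state, is_goal, successors, path=None):
    path = (path or []) + [state]
    if is_goal(state):
        return path
    for child in successors(state):
        if child not in path:                       # avoid cycling along the current path
            result = depth_first_search(child, is_goal, successors, path)
            if result is not None:
                return result
    return None                                     # dead end: backtrack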
Depth Limited Search
• Depth-limited search avoids the pitfalls of depth-first search by imposing a cut-off on the maximum depth of a path.
• Depth-limited search is complete (provided the limit is at least the depth of the shallowest goal) but not optimal.
• If we choose a depth limit that is too small, depth-limited search is not even complete.
• The time and space complexity of depth-limited search are similar to those of depth-first search.
Depth First Iterative Deepening (DFID)
• BFS and DFS both have exponential time complexity, O(b^d).
– BFS is complete but has exponential space complexity.
– DFS has linear space complexity but is incomplete.
• Space is often a harder resource constraint than time.
• Can we have an algorithm that
– is complete,
– has linear space complexity, and
– has time complexity of O(b^d)?
• DFID (Korf, 1985).
DFID Algorithm
• It repeatedly carries out DFS on the tree, starting with a DFS limited to depth one, then a DFS of depth two, and so on, until a goal is found.
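A Python sketch of DFID built from a depth-limited DFS, following the description above (it therefore also sketches depth-limited search itself). It is a tree search with no repeated-state checking, and it assumes a goal exists somewhere; the function names are illustrative.

from itertools import count

def depth_limited_search(state, is_goal, successors, limit):
    if is_goal(state):
        return [state]
    if limit == 0:
        return None                                  # cut off: do not go any deeper
    for child in successors(state):
        result = depth_limited_search(child, is_goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(start, is_goal, successors):
    for limit in count(0):                           # limits 0, 1, 2, ... (loops if no goal exists)
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result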
Iterative deepening search
[Figure sequence: depth-limited search repeated with limits l = 0, 1, 2 and 3, each pass re-expanding the shallower levels before going one level deeper.]
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d with branching factor b:
NDLS = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d
For b = 10, d = 5: NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
• Number of nodes generated in an iterative deepening search to depth d:
NIDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + ... + 2·b^(d-1) + 1·b^d
For b = 10, d = 5: NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
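The two totals can be verified with a few lines of Python (a quick check, not part of the slides):

b, d = 10, 5
n_dls = sum(b ** i for i in range(d + 1))                  # 1 + 10 + ... + 10^5
n_ids = sum((d + 1 - i) * b ** i for i in range(d + 1))    # 6*1 + 5*10 + ... + 1*10^5
print(n_dls, n_ids)                                        # 111111 123456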
Repeated states
Failure to detect repeated states can turn a
linear problem into an exponential one!
How to deal with repeated states?
• Do not return to the state you just came from:
– have the expand function refuse to generate any successor that is the same state as the node's parent.
• Do not repeat paths with cycles in them:
– have the expand function refuse to generate any successor of a node that is the same as any of the node's ancestors.
• Do not generate any state that was ever generated before:
– keep every generated state in memory (for example on the OPEN and CLOSED lists) and check each new successor against them.
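A sketch of how the three policies above might be applied when a node is expanded; the Node class, the generated set and the policy parameter are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: str
    parent: Optional["Node"] = None

def ancestors(node):
    """All states on the path from `node` back to the root."""
    states = set()
    while node is not None:
        states.add(node.state)
        node = node.parent
    return states

def filter_successors(node, children, generated, policy="ancestors"):
    """Drop successor states according to one of the three policies above."""
    kept = []
    for child in children:
        if policy == "parent" and node.parent is not None and child == node.parent.state:
            continue                        # 1) do not return to the parent's state
        if policy == "ancestors" and child in ancestors(node):
            continue                        # 2) do not repeat any state on the current path
        if policy == "all" and child in generated:
            continue                        # 3) never regenerate a state generated before
        kept.append(child)
    return kept

# Example: from B (a child of A), the successor A is pruned under the "ancestors" policy.
b = Node("B", parent=Node("A"))
print(filter_successors(b, ["A", "C"], generated={"A", "B"}, policy="ancestors"))   # ['C']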