IntSys Lec 06.2: State Space Search
Dr. Mina Younan
Outline of this lecture
// This lecture is collected and edited from Prof. Moheb Girgis' lectures in AI.
Introduction
• Well-formed predicate calculus expressions provide a means of
describing objects and relations in a problem domain, and
inference rules such as modus ponens allow us to infer new
knowledge from these descriptions.
• These inferences define a space that is searched to find a
problem solution.
• By representing a problem as a state space graph, we can use
graph theory to analyze the structure and complexity of both the
problem and the search procedures that we employ to solve it.
• This lecture introduces the theory of state space search.
Structures for State Space Search
Graph Theory Basics:
A graph G(N, A) consists of two sets:
N: a set of nodes N1, N2, …, Nn, …, which need not be finite;
A: a set of arcs that connect pairs of nodes.
The directed arc connecting nodes N3 and N4 is represented by the ordered pair (N3, N4).
The undirected arc connecting nodes N3 and N4 is represented by the two ordered pairs (N3, N4) and (N4, N3).
Terms used to describe relationships between nodes include parent, child, and sibling. These are used in the usual familial fashion, with the parent preceding its child along a directed arc; children of the same parent are called siblings.
A rooted graph has a unique node, called the root, from which there is a path to every node in the graph.
[Figure: a directed arc from N3 to N4, an undirected arc between N3 and N4, and a parent node N3 with its children N4 and N5]
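To make these definitions concrete, here is a minimal Python sketch (my own, not from the lecture) of a graph G(N, A) stored as a set of nodes and a list of ordered-pair arcs; the node names and the children helper are illustrative only.

# Minimal sketch of a graph G(N, A): nodes plus directed arcs as ordered pairs.
nodes = {"N3", "N4", "N5"}
arcs = [("N3", "N4"), ("N3", "N5")]   # N3 is the parent of N4 and N5

def children(node, arcs):
    """Return the children of a node, i.e. the nodes it points to along directed arcs."""
    return [child for (parent, child) in arcs if parent == node]

print(children("N3", arcs))   # ['N4', 'N5'] -- N4 and N5 are siblings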
Data-driven and Goal-driven Search
A state space may be searched in two directions:
• from the given data of a problem instance toward a goal, or
• from the goal back to the data.
In data-driven search, sometimes called forward chaining:
• The problem solver begins with the given facts of the problem and a set of
legal moves or rules for changing state.
• Search proceeds by applying rules to facts to produce new facts, which are
in turn used by the rules to generate more new facts.
• This process continues until it generates a path that satisfies the goal.
In goal-driven search, sometimes called backward chaining:
• The problem solver takes the goal, and finds the rules or legal moves that
could produce this goal, and determines the conditions that must be true to
use them.
• These conditions become the new goals, or sub-goals, for the search.
• Search continues, working backward through successive sub-goals until it
works back to the facts of the problem.
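As a rough Python illustration of data-driven (forward-chaining) search, here is a sketch of my own with invented facts and rules (none of these names come from the lecture):

facts = {"a", "b"}                       # the given data of the problem
rules = [({"a", "b"}, "c"),              # each rule: (premises, conclusion)
         ({"c"}, "d"),
         ({"d"}, "goal")]
goal = "goal"

changed = True
while goal not in facts and changed:     # keep applying rules until the goal appears
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)        # a rule fires and produces a new fact
            changed = True

print(goal in facts)                     # True: the goal was derived from the data

Goal-driven search would run the same rules in the opposite direction, starting from "goal" and generating "d", "c", and finally "a" and "b" as sub-goals.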
Data-driven and Goal-driven Search
• The preferred strategy is determined by careful analysis of the problem,
considering such issues as:
1. The complexity of the rules.
2. The branching factor of rule application (on average how many new
states are generated by rule application in both directions?), i.e. the shape
of the state space.
3. The availability of data.
4. Ease of determining the potential goals.
• In solving a problem using either goal- or data-driven search, a problem
solver must find a path from a start state to a goal through the state space
graph.
• The sequence of arcs in this path corresponds to the ordered steps of the
solution.
• The problem solver must consider different paths through the space until it
finds a goal.
The Backtrack Search Algorithm
Backtracking is a technique for systematically trying all paths through a
state space: it begins at the start state and pursues a path until it reaches
either a goal or a "dead end".
• If it finds a goal, it quits and returns the solution path.
• If it reaches a dead end, it "backtracks" to the most recent node on the path
having unexamined children and continues down one of these branches.
The algorithm continues until it finds a goal or exhausts the state space.
The Backtrack Search Algorithm
Function backtrack
begin
  SL = [Start]; NSL = [Start]; DE = [ ]; CS = Start;   % initialize
  while NSL ≠ [ ] do
  begin
    if CS = goal (or meets goal description) then
      return SL;   % on success, return the list of states on the path
    if CS has no children (excluding nodes already on DE, SL, and NSL) then
    begin   % backtrack
      while SL is not empty and CS = 1st element of SL do
      begin   % CS and its descendants have been exhausted
        add CS to DE;   % record state as a dead end
        remove 1st element from SL;   % backtrack
        remove 1st element from NSL;
        CS = 1st element of NSL;
      end
      add CS to SL;
    end
    else   % CS has children
    begin
      generate and place children of CS (except nodes already on DE, SL, and NSL) on NSL;
      CS = 1st element of NSL;
      add CS to SL;
    end
  end;
  return FAIL;
end.
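A possible Python rendering of this pseudocode is sketched below. It assumes the graph is supplied as an adjacency dictionary, a representation chosen for this sketch rather than specified in the lecture, and it keeps the same SL, NSL, and DE lists.

def backtrack(graph, start, goal):
    """Sketch of the backtrack algorithm; graph is an adjacency dict {state: [children]}."""
    SL = [start]      # states on the current path
    NSL = [start]     # new state list: states awaiting evaluation
    DE = []           # dead ends
    CS = start        # current state
    while NSL:
        if CS == goal:
            return SL                      # success: path from the goal back to the start
        children = [c for c in graph.get(CS, [])
                    if c not in DE and c not in SL and c not in NSL]
        if not children:
            # backtrack past states whose descendants are exhausted
            while SL and CS == SL[0]:
                DE.insert(0, CS)           # record CS as a dead end
                SL.pop(0)
                NSL.pop(0)
                if not NSL:
                    return None            # FAIL: the space is exhausted
                CS = NSL[0]
            SL.insert(0, CS)
        else:
            NSL = children + NSL           # place the children on NSL
            CS = NSL[0]
            SL.insert(0, CS)
    return None                            # FAIL

For example, backtrack({"A": ["B", "C"], "B": [], "C": ["D"], "D": []}, "A", "D") returns ["D", "C", "A"], the solution path listed from the goal back to the start.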
The Backtrack Search Algorithm
Initialize: SL = [A]; NSL = [A]; DE = [ ]; CS = A

DE       NSL           SL        CS   Iteration #
[ ]      [A]           [A]       A    0
[ ]      [BCDA]        [BA]      B    1
[ ]      [EFBCDA]      [EBA]     E    2
[ ]      [HIEFBCDA]    [HEBA]    H    3
[H]      [IEFBCDA]     [EBA]     I    (backtracking)
[H]      [IEFBCDA]     [IEBA]    I    4
[IH]     [EFBCDA]      [EBA]     E    (backtracking)
[EIH]    [FBCDA]       [BA]      F    (backtracking)
Depth-First and Breadth-First Search
BFS and DFS algorithms use two lists:
• open: lists states that have been generated but whose children
have not been examined. The order in which states are placed
in open determines the order of the search. It is implemented as
a queue in BFS and as a stack in DFS. (open is like NSL in
backtrack).
• closed: records states that have already been examined. (closed
is the union of the DE and SL lists of the backtrack algorithm).
The current state is stored in a variable X.
Child states are generated by inference rules, legal moves of a
game, or other state transition operators. Each iteration
produces all children of the state X and adds them to open.
Breadth-First Search Algorithm
Function BFS
begin
open = [Start]; closed = [ ]; % initialize
while open ≠ [ ] do
begin
remove leftmost state from open, and store it in X;
if X is a goal then
return SUCCESS; % goal found
else
begin
generate children of X;
put X on closed;
discard children of X if already on open or closed;
% loop check
put remaining children on right end of open; % queue
end
end
return FAIL
end.
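The same algorithm might be written in Python roughly as follows; the adjacency-dictionary graph format is an assumption of this sketch, not part of the lecture.

from collections import deque

def bfs(graph, start, goal):
    """Sketch of the BFS pseudocode; graph is an adjacency dict {state: [children]}."""
    open_list = deque([start])   # generated but not yet examined (a queue)
    closed = set()               # already examined
    while open_list:
        X = open_list.popleft()              # remove the leftmost state
        if X == goal:
            return True                      # SUCCESS: goal found
        closed.add(X)
        for child in graph.get(X, []):
            # loop check: discard children already on open or closed
            if child not in closed and child not in open_list:
                open_list.append(child)      # queue: put on the right end
    return False                             # FAIL

For instance, with graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}, bfs(graph, "A", "D") returns True.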
Breadth-First Search Algorithm
A trace of BFS on the previous graph, where the desired goal is G, appears
below:
[Trace table: columns Iteration #, X, open, closed]
Because BFS considers every node at each level of the graph before going deeper into the
space, all states are first reached along the shortest path from the start state. Breadth-first
search is therefore guaranteed to find the shortest path from the start state to the goal.
Depth-First Search Algorithm
Function DFS
begin
open = [Start]; closed = [ ]; % initialize
while open ≠ [ ] do
begin
remove leftmost state from open, and store it in X;
if X is a goal then
return SUCCESS; % goal found
else
begin
generate children of X;
put X on closed;
discard children of X if already on open or closed;
% loop check
put remaining children on left end of open; % stack
end
end
return FAIL
end.
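A matching Python sketch of DFS; the only change from the BFS sketch above is that children are placed on the left end of open, so open behaves as a stack (again assuming an adjacency-dictionary graph).

def dfs(graph, start, goal):
    """Sketch of the DFS pseudocode; open is treated as a stack."""
    open_list = [start]          # generated but not yet examined (a stack)
    closed = set()               # already examined
    while open_list:
        X = open_list.pop(0)                 # remove the leftmost state
        if X == goal:
            return True                      # SUCCESS: goal found
        closed.add(X)
        # loop check: discard children already on open or closed
        new_children = [c for c in graph.get(X, [])
                        if c not in closed and c not in open_list]
        open_list = new_children + open_list  # stack: put on the left end
    return False                             # FAIL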
Depth-First Search Algorithm
A trace of DFS on the previous graph, where the desired goal is G,
appears below:
[Figure: the example graph, rooted at A, used for this trace]

closed          open       X   Iteration #
[ ]             [A]        -   0
[A]             [BCD]      A   1
[BA]            [EFCD]     B   2
[EBA]           [HIFCD]    E   3
[HEBA]          [IFCD]     H   4
[IHEBA]         [FCD]      I   5
[FIHEBA]        [JCD]      F   6
[JFIHEBA]       [CD]       J   7
[CJFIHEBA]      [GD]       C   8
G is the goal              G   9

Unlike BFS, a DFS search is not guaranteed to find the shortest path to a state the first
time that state is encountered. Later in the search, a different path may be found to any
state.
Local Search Algorithms
Travelling Salesman Problem
• Suppose a salesperson has five cities to visit and then must return home.
• The goal of the problem is to find the shortest path for the salesperson to travel,
visiting each city, and then returning to the starting city.
• The following figure gives an instance of this problem.
Local Search Algorithms
Travelling Salesman Problem
• The goal description is a property of the entire path, rather than of a single state.
• Now we present three different search techniques to solve the salesperson problem:
Local Search Algorithms
Travelling Salesman Problem
• The N! complexity of exhaustive search grows so fast that the search soon becomes
intractable. The following two techniques can reduce the search complexity.
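To see how quickly the factorial grows, here is a short Python check (my own illustration, not from the slides):

import math

# The number of candidate tours grows factorially with the number of cities N.
for n in (5, 10, 15, 20):
    print(n, math.factorial(n))
# 5  120
# 10 3628800
# 15 1307674368000
# 20 2432902008176640000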
Local Search Algorithms
Travelling Salesman Problem
(3) Nearest Neighbor Technique
• This technique constructs the path according to the rule "go to the closest
unvisited city".
• The nearest neighbor path through the above graph is [A, E, D, B, C, A], at
a cost of 375 miles.
• This method is highly efficient, as there is only one path to be tried!
• The nearest neighbor heuristic is fallible, as graphs exist for which it does
not find the shortest path, but it is a possible compromise when the time
required makes exhaustive search impractical.
• For example, the algorithm fails to find the shortest path if AC = 300.
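A minimal Python sketch of the nearest neighbor rule, assuming the map is given as a dictionary of symmetric inter-city distances; the data format and the function name are choices made for this sketch, and any test distances would have to come from the figure.

def nearest_neighbor(distances, start):
    """Greedy tour: repeatedly go to the closest unvisited city, then return home.

    distances maps frozenset({city1, city2}) to the distance between them
    (an assumed format, not taken from the lecture)."""
    cities = {c for pair in distances for c in pair}
    tour, cost = [start], 0
    current = start
    unvisited = cities - {start}
    while unvisited:
        # the rule: go to the closest unvisited city
        nxt = min(unvisited, key=lambda c: distances[frozenset({current, c})])
        cost += distances[frozenset({current, nxt})]
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    cost += distances[frozenset({current, start})]   # return to the starting city
    tour.append(start)
    return tour, cost

Run on the distance table from the figure, it should reproduce the [A, E, D, B, C, A] tour described above.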
Local Search Algorithms
Travelling Salesman Problem
(3) Nearest Neighbor Technique
• An instance of the traveling salesperson problem with the nearest neighbor
path in bold.