PAI (21AI54) Module 2 Notes
Uploaded by VIGNESH T V

Principles of Artificial Intelligence (21AI54) Module 2

MODULE 2
Solving Problems by Searching
1. Problem Solving Agents
2. Example problems
3. Searching for Solutions
4. Uninformed Search Strategies

Introduction
➢ An important aspect of intelligence is goal-based problem solving.
➢ The solution of many problems can be described by finding a sequence of actions
that lead to a desirable goal.
➢ Each action changes the state and the aim is to find the sequence of actions and
states that lead from the initial (start) state to a final (goal) state.
➢ A well-defined problem can be described by:
1. Initial state
2. Operator or successor function - for any state x returns s(x), the set of states
reachable from x with one action.
3. State space - all states reachable from initial by any sequence of actions.
4. Path - sequence through state space.
5. Path cost - function that assigns a cost to a path. Cost of a path is the sum
of costs of individual actions along the path.
6. Goal test - test to determine if at goal state.
➢ Search is the systematic examination of states to find a path from the start/root state
to the goal state.

Problem Solving Agents


➢ A Problem-solving agent is a goal-based agent.
➢ It decides what to do by finding a sequence of actions that lead to desirable states.
➢ The agent can adopt a goal and aim at satisfying it.
➢ Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving. The agent’s task is to find out
which sequence of actions will get it to a goal state.
➢ Problem formulation is the process of deciding what actions and states to consider
given a goal.

Department of AI&ML, CIT - Ponnampet Page 1


Well-defined problems and solutions
➢ To illustrate the agent’s behavior, let us take an example (Figure 3.2) where our
agent is in the city of Arad, which is in Romania.
➢ The agent has to adopt a goal of getting to Bucharest.

A problem can be defined formally by five components (the state-space graph, item 4 below, is implicitly defined by the first three):


1. Initial state
The initial state that the agent starts in.
For example, the initial state for our agent in Romania might be described as
In(Arad).
2. Actions
A description of the possible actions available to the agent. Given a particular state
‘s’, ACTIONS(s) returns the set of actions that can be executed in s. We say that
each of these actions is applicable in s.
For example, from the state In(Arad), the applicable actions are {Go(Sibiu),
Go(Timisoara), Go(Zerind)}.
3. Transition Model
A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from
doing action a in state s. We also use the term successor to refer to any state
reachable from a given state by a single action. For example, we have
RESULT(In(Arad),Go(Zerind)) = In(Zerind) .
4. State Space Graph
Together, the initial state, actions, and transition model implicitly define the state
space of the problem—the set of all states reachable from the initial state by any
sequence of actions. The state space forms a directed network or graph in which the
nodes are states and the links between nodes are actions. The map of Romania
shown in Figure 3.2 can be interpreted as a state-space graph.
5. Goal Test
The goal test, which determines whether a given state is a goal state. The agent’s
goal in Romania is the singleton set {In(Bucharest)}.
6. Path Cost
A path cost function that assigns a numeric cost to each path. The problem-solving
agent chooses a cost function that reflects its own performance measure. For the
agent trying to get to Bucharest, time is of the essence, so the cost of a path might
be its length in kilometers.
The preceding elements define a problem and can be gathered into a single data structure
that is given as input to a problem-solving algorithm. A solution to a problem is an action
sequence that leads from the initial state to a goal state. Solution quality is measured by the
path cost function, and an optimal solution has the lowest path cost among all solutions.
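Gathered into code, the five components might look like this minimal Python sketch of the Romania problem. The class name, the partial ROADS map, and the Go(...) action strings are illustrative assumptions of this sketch, not from any standard library:

```python
ROADS = {  # partial map: city -> {neighbor: distance in km}
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "RimnicuVilcea": 80, "Oradea": 151},
}

class RomaniaProblem:
    def __init__(self, initial, goal):
        self.initial = initial          # 1. initial state, e.g. "Arad"
        self.goal = goal                # used by the goal test

    def actions(self, s):               # 2. ACTIONS(s)
        return ["Go(%s)" % city for city in ROADS.get(s, {})]

    def result(self, s, a):             # 3. transition model RESULT(s, a)
        return a[3:-1]                  # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, s):             # 4. goal test
        return s == self.goal

    def step_cost(self, s, a):          # 5. path cost, summed over steps
        return ROADS[s][self.result(s, a)]

p = RomaniaProblem("Arad", "Bucharest")
print(sorted(p.actions("Arad")))       # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(p.result("Arad", "Go(Zerind)"))  # Zerind
```

Such an object is exactly the "single data structure" that a search algorithm takes as input.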

Example Problems
➢ A toy problem is intended to illustrate various problem-solving methods. It can be
easily used by different researchers to compare the performance of algorithms.
➢ A real-world problem is one whose solutions people actually care about.

Toy Problems
1. Vacuum World:
➢ States: The state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or might
not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger
environment with n locations has n · 2^n states.
➢ Initial state: Any state can be designated as the initial state.
➢ Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck. Larger environments might also include Up and Down.
➢ Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost square,
and Sucking in a clean square have no effect. The complete state space is
shown in Figure 3.3.
➢ Goal test: This checks whether all the squares are clean.
➢ Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
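As a check on the counting above, the two-location world can be enumerated and its transition model sketched in Python (the tuple encoding of states is an assumption of this sketch):

```python
from itertools import product

# States: (agent_location, dirt_at_A, dirt_at_B); 2 * 2 * 2 = 8 states.
STATES = list(product(["A", "B"], [True, False], [True, False]))

def result(state, action):
    """Transition model: Left in the leftmost square, Right in the
    rightmost square, and Suck on a clean square have no effect."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    """All squares are clean."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

print(len(STATES))                         # 8
print(result(("A", True, True), "Suck"))   # ('A', False, True)
```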

2. The 8 – Puzzle:
➢ The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3×3
board with eight numbered tiles and a blank space.
➢ A tile adjacent to the blank space can slide into the space.
➢ The objective is to reach a specified goal state, such as the one shown on
the right of the figure.
➢ States: A state description specifies the location of each of the eight tiles
and the blank in one of the nine squares.
➢ Initial state: Any state can be designated as the initial state.
➢ Actions: The simplest formulation defines the actions as movements of the
blank space Left, Right, Up, or Down. Different subsets of these are
possible depending on where the blank is.
➢ Transition model: Given a state and action, this returns the resulting state;
for example, if we apply Left to the start state in Figure 3.4, the resulting
state has the 5 and the blank switched.
➢ Goal test: This checks whether the state matches the goal configuration
shown in Figure 3.4. (Other goal configurations are possible.)
➢ Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
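The blank-moving formulation above can be sketched as follows (the 9-tuple state encoding, with 0 for the blank, and the start state of Figure 3.4 are assumptions of this sketch):

```python
# State: a 9-tuple read row by row, 0 marking the blank.
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Movements of the blank that stay on the 3x3 board."""
    i = state.index(0)
    acts = []
    if i >= 3:        acts.append("Up")
    if i <= 5:        acts.append("Down")
    if i % 3 != 0:    acts.append("Left")
    if i % 3 != 2:    acts.append("Right")
    return acts

def result(state, action):
    """Transition model: swap the blank with the tile it moves onto."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the start state of Figure 3.4
print(actions(start))                  # ['Up', 'Down', 'Left', 'Right']
print(result(start, "Left"))           # the 5 and the blank switch places
```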
3. 8 – Queens Problem:
➢ The goal of the 8-queens problem is to place eight queens on a chessboard
such that no queen attacks any other. (A queen attacks any piece in the same
row, column, or diagonal.)
➢ There are two main kinds of formulation.
➢ An incremental formulation involves operators that augment the
state description, starting with an empty state. For the 8-queens
problem, this means each action adds a queen to the state.
➢ A complete-state formulation starts with all 8 queens on the board
and moves them around.
➢ States: Any arrangement of 0 to 8 queens on the board is a state.
➢ Initial state: No queens on the board.
➢ Actions: Add a queen to any empty square.
➢ Transition model: Returns the board with a queen added to the specified
square.
➢ Goal test: 8 queens are on the board, none attacked.
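The incremental formulation can be tightened into a short backtracking sketch. As a common refinement, each action here places a queen in the next empty row rather than on any empty square; all function names are illustrative:

```python
def attacks(r1, c1, r2, c2):
    """A queen attacks along rows, columns, and diagonals."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Columns where a queen can be added to the next row without
    attacking any queen already placed (state = tuple of columns)."""
    row = len(state)
    return [c for c in range(8)
            if not any(attacks(r, state[r], row, c) for r in range(row))]

def goal_test(state):
    return len(state) == 8   # the placement is legal by construction

def solve(state=()):
    """Depth-first search over partial placements (simple backtracking)."""
    if goal_test(state):
        return state
    for c in actions(state):
        found = solve(state + (c,))
        if found:
            return found
    return None

print(solve())   # one legal placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```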

Solution for 8 - Queens Problem

4. Donald Knuth’s problem:
➢ Knuth conjectured that, starting with the number 4, a sequence of factorial,
square root, and floor operations will reach any desired positive integer.
➢ For example, we can reach 5 from 4 as follows:

The problem definition can be formulated as:


➢ States: Positive numbers.
➢ Initial state: 4.
➢ Actions: Apply factorial, square root, or floor operation (factorial for integers
only).
➢ Transition model: As given by the mathematical definitions of the operations.
➢ Goal test: State is the desired positive integer.
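The missing worked expression can be verified numerically: starting from (4!)! = 24!, five floor-square-root steps reach 5. A sketch using Python's `math.isqrt`, which computes the integer floor of the square root:

```python
import math

# Reaching 5 from 4: take (4!)! = 24!, then apply floor(sqrt(.)) five times.
n = math.factorial(math.factorial(4))   # 24! = 620448401733239439360000
for _ in range(5):
    n = math.isqrt(n)                   # isqrt(n) = floor(sqrt(n))
print(n)                                # 5
```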

Real World Problems


1. Route - Finding Problems:
Consider the airline travel problems that must be solved by a travel-planning Web
site:
➢ States: Each state obviously includes a location (e.g., an airport) and the
current time. Furthermore, because the cost of an action (a flight segment)
may depend on previous segments, their fare bases, and their status as
domestic or international, the state must record extra information about
these “historical” aspects.
➢ Initial state: This is specified by the user’s query.
➢ Actions: Take any flight from the current location, in any seat class, leaving
after the current time, leaving enough time for within-airport transfer if
needed.
➢ Transition model: The state resulting from taking a flight will have the
flight’s destination as the current location and the flight’s arrival time as the
current time.
➢ Goal test: Are we at the final destination specified by the user?
➢ Path cost: This depends on monetary cost, waiting time, flight time,
customs and immigration procedures, seat quality, time of day, type of
airplane, frequent-flyer mileage awards, and so on.
2. Touring Problems:
➢ Closely related to route-finding problems, but with an important difference.
➢ Consider, for example, the problem “Visit every city in Figure 3.2 at least
once, starting and ending in Bucharest.”
➢ As with route finding, the actions correspond to trips between adjacent
cities. The state space, however, is quite different. Each state must include
not just the current location but also the set of cities the agent has visited.
➢ So the initial state would be In(Bucharest), Visited({Bucharest}), a typical
intermediate state would be In(Vaslui), Visited({Bucharest, Urziceni,
Vaslui}), and the goal test would check whether the agent is in Bucharest
and all 20 cities have been visited.
3. Travelling Salesman Problem (TSP):
➢ Is a touring problem in which each city must be visited exactly once.
➢ The aim is to find the shortest tour.
➢ The problem is known to be NP-hard.
➢ Enormous efforts have been expended to improve the capabilities of TSP
algorithms.
➢ These algorithms are also used in tasks such as planning movements of
automatic circuit-board drills and of stocking machines on shop floors.
4. VLSI Layout Problem:
➢ A VLSI layout problem requires positioning millions of components and
connections on a chip to minimize area, minimize circuit delays, minimize
stray capacitances, and maximize manufacturing yield.
➢ The layout problem is split into two parts: cell layout and channel routing.
➢ In cell layout, the primitive components of the circuit are grouped into cells,
each of which performs some recognized function.
➢ Channel routing finds a specific route for each wire through the gaps
between the cells.
5. Robot Navigation:
➢ Robot navigation is a generalization of the route-finding problem.
➢ Rather than a discrete set of routes, a robot can move in a continuous space
with an infinite set of possible actions and states.
➢ For a circular robot moving on a flat surface, the space is essentially two-
dimensional.
➢ When the robot has arms and legs or wheels that also must be controlled,
the search space becomes multi-dimensional.
➢ Advanced techniques are required to make the search space finite.
6. Automatic Assembly Sequencing:
➢ Examples include assembly of intricate objects such as electric motors.
➢ The aim in assembly problems is to find an order in which to assemble the
parts of some object.
➢ If the wrong order is chosen, there will be no way to add some part later
without undoing some work already done.
➢ Another important assembly problem is protein design, in which the goal is
to find a sequence of amino acids that will fold into a three-dimensional
protein with the right properties to cure some disease.
Examples:
Give a complete problem formulation for each of the following.
1. Using only four colors, you have to color a planar map in such a way that no two
adjacent regions have the same color.
States: Any color on any region is a state.
Initial State: All regions are uncolored.
Actions: Assign color to an uncolored area.
Transition Model: The uncolored area is
now colored and cannot be colored again.
Goal Test: All regions have color, and no two adjacent regions have the same color.
Path Cost: Number of coloring actions.

2. You have a program that outputs the message “illegal input record” when fed a certain
file of input records. You know that processing of each record is independent of the
other records. You want to discover what record is illegal.
States: Any combination/order of records.
Initial State: All input records.
Actions: Searching through records for the illegal record.
Transition Model: Dividing records up into parts and searching each part.
Goal Test: Finding the illegal record.
Path Cost: Amount of attempts.

3. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot
ceiling. He would like to get the bananas. The room contains two stackable, movable,
climbable 3-foot-high crates.
States: Any combination of the crates in the
room, with or without the monkey on them.
Initial State: An 8-foot-high room, 2 crates, 1
monkey, bananas.
Actions: Monkey moving and stacking boxes
to reach bananas.
Transition Model: the boxes have either
moved, been stacked, or both.
Goal Test: The monkey gets the bananas.
Path Cost: Number of actions.

4. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet.
You can fill the jugs up or empty them out from one to another or onto the ground. You
need to measure out exactly one gallon.
States: Any combination of filled/non-filled jugs.
Initial State: Jugs are empty.
Actions: Fill jugs up or transfer water between them.
Transition Model: Amount of water in each jug changes.
Goal Test: Is there exactly one gallon?
Path Cost: Number of actions.
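A breadth-first sketch over jug states confirms that one gallon is reachable (the state encoding and function names here are assumptions of this sketch):

```python
from collections import deque

CAPS = (12, 8, 3)   # jug capacities in gallons

def successors(state):
    """Fill a jug, empty it onto the ground, or pour one jug into another."""
    for i in range(3):
        yield state[:i] + (CAPS[i],) + state[i+1:]      # fill jug i
        yield state[:i] + (0,) + state[i+1:]            # empty jug i
        for j in range(3):
            if i != j:
                amount = min(state[i], CAPS[j] - state[j])
                s = list(state)
                s[i] -= amount                          # pour i into j
                s[j] += amount
                yield tuple(s)

def bfs(start=(0, 0, 0)):
    """Breadth-first search for a state with exactly one gallon in a jug."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if 1 in state:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

path = bfs()
print(path[-1])        # a state containing exactly one gallon
print(len(path) - 1)   # number of actions taken
```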

5. The missionaries and cannibals problem is usually stated as follows. Three
missionaries and three cannibals are on one side of a river, along with a boat that can
hold one or two people. Find a way to get everyone to the other side without ever
leaving a group of missionaries in one place outnumbered by the cannibals in that place.
This problem is famous in AI because it was the subject of the first paper that
approached problem formulation from an analytical viewpoint (Amarel, 1968).

States:
➢ Number of missionaries, cannibals and a boat on the banks of the river.
➢ Illegal states: Missionaries outnumbered by cannibals on either bank of the
river.
Initial State: All missionaries, cannibals and a boat on one side of the river.
Actions: Move one or two people across the river by boat; one person must bring
the boat back for the next crossing.
Transition Model: Transport a set of up to two participants to the other bank.
[1 missionary] or [1 cannibal] or [2 missionaries] or [2 cannibals] or [1 missionary and
1 cannibal]
Goal State: All missionaries and cannibals have made it to the other side of the river.
Path Cost: Number of crossings.

Searching for Solutions


Search tree
➢ Having formulated some problems, we now need to solve them.
➢ This is done by a search through the state space.
➢ A search tree is generated by the initial state and the successor function that together
define the state space.
➢ In general, we may have a search graph rather than a search tree, when the same
state can be reached from multiple paths.
Figure 3.6 shows the first few steps in growing the search tree for finding a route from
Arad to Bucharest.
➢ The root node of the tree corresponds to the initial state, In(Arad).
➢ The first step is to test whether this is a goal state.
➢ Then we need to consider taking various actions.
➢ We do this by expanding the current state; that is, applying each legal action to the
current state, thereby generating a new set of states.
➢ In this case, we add three branches from the parent node In(Arad) leading to three
new child nodes: In(Sibiu), In(Timisoara), and In(Zerind).
➢ Now we must choose which of these three possibilities to consider further.
➢ This is the essence of search—following up one option now and putting the others
aside for later, in case the first choice does not lead to a solution.
➢ Suppose we choose Sibiu first.
➢ We check to see whether it is a goal state (it is not) and then expand it to get
In(Arad), In(Fagaras), In(Oradea), and In(RimnicuVilcea).
➢ We can then choose any of these four or go back and choose Timisoara or Zerind.
➢ Each of these six nodes is a leaf node, that is, a node with no children in the tree.


➢ The set of all leaf nodes available for expansion at any given point is called the
frontier.

➢ The process of expanding nodes on the frontier continues until either a solution is
found or there are no more states to expand.
➢ Note that the search tree shown in Figure 3.6 includes the path from Arad to
Sibiu and back to Arad again!
➢ We say that In(Arad) is a repeated state in the search tree, generated in this case
by a loopy path.
➢ Considering such loopy paths means that the complete search tree for Romania is
infinite because there is no limit to how often one can traverse a loop.

Tree Search Algorithm

Infrastructure for Search Algorithms
➢ Search algorithms require a data structure to keep track of the search tree that
is being constructed.
➢ For each node n of the tree, we have a structure that contains four components:
1. n.STATE: the state in the state space to which the node corresponds;
2. n.PARENT: the node in the search tree that generated this node;
3. n.ACTION: the action that was applied to the parent to generate the node.
4. n.PATH-COST: the cost, traditionally denoted by g(n), of the path from
the initial state to the node, as indicated by the parent pointers.
➢ Given the components for a parent node, it is easy to see how to compute the
necessary components for a child node.
➢ The function CHILD-NODE takes a parent node and an action and returns the
resulting child node:

➢ The node data structure is depicted in Figure 3.10.

➢ Notice how the PARENT pointers string the nodes together into a tree structure.
➢ These pointers also allow the solution path to be extracted when a goal node is
found.
➢ The frontier needs to be stored in such a way that the search algorithm can easily
choose the next node to expand according to its preferred strategy.
➢ The appropriate data structure for this is a queue.

➢ The operations on a queue are as follows:
1. EMPTY?(queue) returns true only if there are no more elements in the
queue.
2. POP(queue) removes the first element of the queue and returns it.
3. INSERT(element, queue) inserts an element and returns the resulting
queue.
➢ Queues are characterized by the order in which they store the inserted nodes.
➢ Three common variants are:
1. The first-in, first-out or FIFO queue, which pops the oldest element of the
queue.
2. The last-in, first-out or LIFO queue (also known as a stack), which pops the
newest element of the queue.
3. The priority queue, which pops the element of the queue with the highest
priority according to some ordering function.
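These three variants map directly onto Python's standard library, for instance:

```python
from collections import deque
import heapq

fifo = deque(["a", "b", "c"])
print(fifo.popleft())        # a  (oldest element first)

lifo = ["a", "b", "c"]       # a plain list used as a stack
print(lifo.pop())            # c  (newest element first)

pq = []                      # priority queue ordered by cost
for cost, city in [(99, "Fagaras"), (80, "RimnicuVilcea")]:
    heapq.heappush(pq, (cost, city))
print(heapq.heappop(pq))     # (80, 'RimnicuVilcea')  (lowest cost first)
```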

Measuring problem solving performance


Algorithm’s performance can be evaluated in four ways:
1. Completeness: Is the algorithm guaranteed to find a solution when there is
one?
2. Optimality: Does the strategy find the optimal solution?
3. Time complexity: How long does it take to find a solution?
4. Space complexity: How much memory is needed to perform the search?
Major factors affecting time and space complexity:
The more complex the state-space graph, the more time and space the search requires.
The complexity of the state space is affected by the following factors:
1. Branching factor (b): the maximum number of successors of any node.
2. Depth of goal node (d): the depth of the shallowest goal node. If this value is
small, the goal node is reached faster.
3. Maximum length of the path (m): the maximum length of any path in the state
space. The larger this value, the more complex the state space.
4. Search cost: the major activity in searching is node expansion; the time taken
for this is called the search cost.
5. Path cost: the cost of the path from the initial state to a particular node.
6. Total cost: the sum of the search cost and the path cost of the solution, along
with memory usage.


Uninformed Search Strategies (Blind Search)


➢ Uninformed search is a search technique that has no additional information about
states beyond that provided in the problem definition.
➢ All they can do is generate successors and distinguish a goal state from a non-goal
state.
➢ There are five uninformed search strategies as given below.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search

Breadth First Search (BFS)


➢ Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then their successors,
and so on.
➢ In general, all the nodes are expanded at a given depth in the search tree before any
nodes at the next level are expanded.
➢ Breadth-first search is an instance of the general graph-search algorithm in which
the shallowest unexpanded node is chosen for expansion.
➢ This is achieved very simply by using a FIFO queue for the frontier.
➢ Thus, new nodes (which are always deeper than their parents) go to the back of the
queue, and old nodes, which are shallower than the new nodes, get expanded first.
➢ Below figure shows the progress of the search on a simple binary tree.

➢ BFS is complete—if the shallowest goal node is at some finite depth d, breadth-first
search will eventually find it after generating all shallower nodes (provided the
branching factor b is finite).
➢ Note that as soon as a goal node is generated, we know it is the shallowest goal
node because all shallower nodes must have been generated already and failed the
goal test.

➢ Note that the shallowest goal node is not necessarily the optimal one; technically,
breadth-first search is optimal only if the path cost is a nondecreasing function of the
depth of the node.
➢ Below algorithm demonstrates the BFS.
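A graph-search sketch of BFS in Python, assuming a problem object with `initial`, `actions`, `result`, and `goal_test` as in the formulations earlier (the `GraphProblem` wrapper is illustrative):

```python
from collections import deque

def breadth_first_search(problem):
    """Graph search with a FIFO frontier; goal test applied on generation."""
    node = problem.initial
    if problem.goal_test(node):
        return [node]
    frontier = deque([[node]])          # FIFO queue of paths
    explored = {node}
    while frontier:
        path = frontier.popleft()       # shallowest unexpanded node first
        for action in problem.actions(path[-1]):
            child = problem.result(path[-1], action)
            if child not in explored:
                if problem.goal_test(child):
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])
    return None

class GraphProblem:   # illustrative wrapper over an adjacency dict
    def __init__(self, graph, initial, goal):
        self.graph, self.initial, self.goal = graph, initial, goal
    def actions(self, s): return list(self.graph.get(s, []))
    def result(self, s, a): return a    # an action is "go to neighbor a"
    def goal_test(self, s): return s == self.goal

g = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
print(breadth_first_search(GraphProblem(g, "A", "G")))  # ['A', 'C', 'G']
```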

Time Complexity of BFS


➢ Imagine searching a uniform tree where every state has b successors.
➢ The root of the search tree generates b nodes at the first level, each of which
generates b more nodes, for a total of b^2 at the second level.
➢ Each of these generates b more nodes, yielding b^3 nodes at the third level, and so
on.
➢ Now suppose that the solution is at depth d. In the worst case, it is the last node
generated at that level.
➢ Then the total number of nodes generated is b + b^2 + b^3 + ··· + b^d = O(b^d).

➢ Above table lists various values of the solution depth d, the time and memory
required for a breadth-first search with branching factor b = 10.
➢ The table assumes that 1 million nodes can be generated per second and that a node
requires 1000 bytes of storage.


Uniform Cost Search (UCS)


➢ When all step costs are equal, breadth-first search is optimal because it always
expands the shallowest unexpanded node.
➢ By a simple extension, we can find an algorithm that is optimal with any step-cost
function.
➢ Instead of expanding the shallowest node, uniform-cost search expands the node n
with the lowest path cost g(n).
➢ This is done by storing the frontier as a priority queue ordered by g.
➢ Consider the below example, where the problem is to get from Sibiu to Bucharest.

➢ The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99,
respectively.
➢ The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80
+ 97 = 177.
➢ The least cost node is now Fagaras, so it is expanded, adding Bucharest with cost
99 + 211 = 310.
➢ Now a goal node has been generated, but uniform-cost search keeps going, choosing
Pitesti for expansion and adding a second path to Bucharest with cost 80+ 97+ 101
= 278.
➢ Now the algorithm checks to see if this new path is better than the old one; it is, so
the old one is discarded.
➢ Bucharest, now with g-cost 278, is selected for expansion and the solution is
returned.
➢ It is easy to see that uniform-cost search is optimal in general.
➢ First, we observe that whenever uniform-cost search selects a node n for expansion,
the optimal path to that node has been found.
➢ Then, because step costs are nonnegative, paths never get shorter as nodes are
added.

➢ These two facts together imply that uniform cost search expands nodes in order of
their optimal path cost.
➢ Hence, the first goal node selected for expansion must be the optimal solution.
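The Sibiu-to-Bucharest example can be replayed with a priority-queue sketch (the `GRAPH` dict holds only the step costs used above; the function is a sketch, not a library routine):

```python
import heapq

GRAPH = {   # the subgraph used in the example above (costs in km)
    "Sibiu": {"RimnicuVilcea": 80, "Fagaras": 99},
    "RimnicuVilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
}

def uniform_cost_search(start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]    # priority queue ordered by g
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:               # goal test on expansion, not generation
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in GRAPH.get(state, {}).items():
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

print(uniform_cost_search("Sibiu", "Bucharest"))
# (278, ['Sibiu', 'RimnicuVilcea', 'Pitesti', 'Bucharest'])
```

Note that the cheaper path through Pitesti (278) is popped before the earlier-generated path through Fagaras (310), exactly as in the trace above.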

Depth First Search (DFS)


➢ Depth-first search always expands the deepest node in the current frontier of the
search tree.
➢ The search proceeds immediately to the deepest level of the search tree, where the
nodes have no successors.
➢ As those nodes are expanded, they are dropped from the frontier, so then the search
“backs up” to the next deepest node that still has unexplored successors.
➢ The depth-first search algorithm is an instance of the graph-search algorithm;
whereas breadth-first-search uses a FIFO queue, depth-first search uses a LIFO
queue.
➢ A LIFO queue means that the most recently generated node is chosen for expansion.
➢ This must be the deepest unexpanded node because it is one deeper than its parent
- which, in turn, was the deepest unexpanded node when it was selected.
➢ Depth-first search is shown in below figure.

➢ The drawback of depth-first search is that it can make a wrong choice and get stuck
going down a very long (or even infinite) path when a different choice would lead to
a solution near the root of the search tree.
➢ For example, depth-first-search will explore the entire left subtree even if node C
is a goal node.
➢ A variant of depth-first search called backtracking search uses still less memory.
➢ In backtracking, only one successor is generated at a time rather than all successors;
each partially expanded node remembers which successor to generate next.
➢ In this way, only O(m) memory is needed rather than O(bm).
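A LIFO-frontier sketch of graph-based DFS (the adjacency-dict representation is an assumption of this sketch):

```python
def depth_first_search(graph, start, goal):
    """Graph search with a LIFO frontier (a plain Python list as a stack)."""
    frontier = [[start]]
    explored = set()
    while frontier:
        path = frontier.pop()           # most recently generated path first
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in graph.get(state, []):
            if nxt not in explored:
                frontier.append(path + [nxt])
    return None

g = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
print(depth_first_search(g, "A", "F"))   # ['A', 'C', 'F']
```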

Depth Limited Search (DLS)


➢ The embarrassing failure of depth-first search in infinite state spaces can be
alleviated by supplying depth-first search with a predetermined depth limit ℓ.
➢ That is, nodes at depth ℓ are treated as if they have no successors. This approach is
called depth-limited search.
➢ The depth limit solves the infinite-path problem.
➢ Unfortunately, it also introduces an additional source of incompleteness if we
choose ℓ < d, that is, if the shallowest goal is beyond the depth limit.
➢ Its time complexity is O(b^ℓ) and its space complexity is O(bℓ).
➢ Depth-first search can be viewed as a special case of depth-limited search with ℓ = ∞.
➢ Sometimes a depth limit can be based on knowledge of the problem; on the map of
Romania, for example, any city can be reached from any other city in at most 9 steps.
This number, known as the diameter of the state space, gives us a better depth limit,
which leads to a more efficient depth-limited search.
➢ For most problems, however, we will not know a good depth limit until we have
solved the problem.
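A recursive sketch of depth-limited search, returning a path, None (failure), or the distinguished value "cutoff". This is the tree-search version, so loops are cut off only by the depth limit; names are illustrative:

```python
def depth_limited_search(graph, state, goal, limit, path=None):
    """Recursive DLS: returns a path, None (failure), or 'cutoff'."""
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"                 # depth limit reached
    cutoff = False
    for nxt in graph.get(state, []):
        result = depth_limited_search(graph, nxt, goal, limit - 1, path)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None

g = {"A": ["B"], "B": ["C"], "C": []}
print(depth_limited_search(g, "A", "C", limit=1))   # cutoff
print(depth_limited_search(g, "A", "C", limit=2))   # ['A', 'B', 'C']
```

Distinguishing "cutoff" from None matters: only a cutoff result tells the caller that raising the limit might still find a solution.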

Iterative Deepening Depth-First Search (IDFS)


➢ Iterative deepening search (or iterative deepening depth-first search) is a general
strategy, often used in combination with depth-first tree search, that finds the best
depth limit.
➢ It does this by gradually increasing the limit—first 0, then 1, then 2, and so on—
until a goal is found.
➢ This will occur when the depth limit reaches d, the depth of the shallowest goal
node.
➢ Iterative deepening combines the benefits of depth-first and breadth-first search.
➢ Like depth-first search, its memory requirements are modest: O(bd) to be precise.

➢ Like breadth-first search, it is complete when the branching factor is finite and
optimal when the path cost is a nondecreasing function of the depth of the node.
➢ The below snapshot shows the algorithm of IDFS.
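A sketch of iterative deepening as repeated depth-limited search (the adjacency-dict graph and all names are assumptions of this sketch):

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Try depth limits 0, 1, 2, ... until depth-limited search succeeds."""

    def dls(state, limit, path):
        path = path + [state]
        if state == goal:
            return path
        if limit == 0:
            return "cutoff"
        cutoff = False
        for nxt in graph.get(state, []):
            result = dls(nxt, limit - 1, path)
            if result == "cutoff":
                cutoff = True
            elif result is not None:
                return result
        return "cutoff" if cutoff else None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [])
        if result != "cutoff":          # either a solution or definite failure
            return result
    return None

g = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
     "D": [], "E": [], "F": [], "G": []}
print(iterative_deepening_search(g, "A", "G"))   # ['A', 'C', 'G']
```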

➢ The figure below shows four iterations of iterative deepening search on a binary
tree, where the solution is found on the fourth iteration (Goal Node: M).

➢ Iterative deepening search may seem wasteful because states are generated multiple
times.
➢ It turns out this is not too costly.

➢ The reason is that in a search tree with the same (or nearly the same) branching
factor at each level, most of the nodes are in the bottom level, so it does not matter
much that the upper levels are generated multiple times.
➢ In an iterative deepening search, the nodes on the bottom level (depth d) are
generated once, those on the next-to-bottom level are generated twice, and so on,
up to the children of the root, which are generated d times.
➢ In general, iterative deepening is the preferred uninformed search method when the
search space is large and the depth of the solution is not known.
