Ai 2
2 MARKS QUESTIONS
5. What is a fringe?
The collection of nodes that have been generated but not yet expanded is called the
fringe. Each element of the fringe is a leaf node.
Problem formulation is the process of deciding what actions and states to consider, given
a goal.
Goals help organize behavior by limiting the objectives that the agent is trying to
achieve. Goal formulation, based on the current situation and the agent's
performance measure, is the first step in problem solving.
problem
Initial State:
Successor Function:
State Space
Goal Test:
Solution:
Path:
Optimal Solution
o States: A state description specifies the location of each of the eight tiles and
the blank
o Initial state: Any state can be designated as the initial state. Note that any
given goal can be reached from exactly half of the possible initial states.
o Goal test: This checks whether the state matches the goal configuration
o Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
o States: Arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n
columns, with no queen attacking another.
o Successor function: Add a queen to any square in the leftmost empty column
such that it is not attacked by any other queen.
12.Define following
a. Cell layout: In cell layout, the primitive components of the circuit are grouped into cells, each
of which performs some recognized function.
b. Channel routing: Channel routing finds a specific route for each wire through the gaps
between the cells. These search problems are extremely complex, but definitely worth solving.
o EMPTY?(queue) returns true only if there are no more elements in the queue.
o INSERT(element, queue) inserts an element and returns the resulting queue.
(We will see that different types of queues insert elements in different orders.)
14.Define following
a. Step Cost: The step cost of taking action a to go from state x to state y is denoted by
c(x, a, y).
b. Path Cost: A path cost function assigns a numeric cost to each path, typically the sum
of the step costs along the path.
COMPLETENESS
OPTIMALITY
TIME COMPLEXITY
SPACE COMPLEXITY
Using Knowledge: Informed search uses knowledge for the searching process; uninformed
search doesn't use knowledge for the searching process.
Performance: Informed search finds a solution more quickly; uninformed search finds a
solution slowly by comparison.
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first Search
5. Uniform cost Search
6. Bidirectional Search
Space Complexity: Space complexity of BFS algorithm is given by the memory size of the
frontier, which is O(b^d).
20.What is the difference between Breadth First Search and Uniform Cost Search?
· BFS expands nodes level by level, exploring all nodes at the current level before moving to
nodes at the next level.
· It does not consider the cost of the path from the start node to the current node. Instead, it
focuses solely on the breadth of the search, ensuring that all nodes at the current level are
explored before moving to deeper levels.
· BFS guarantees the shortest path to the goal if the edge costs are uniform (i.e., all edges
have the same cost), as it explores nodes in increasing order of their depth from the start
node.
· UCS expands nodes based on their cumulative path cost from the start node. It always
selects the node with the lowest cumulative cost for expansion.
· Unlike BFS, UCS considers the cost of the path from the start node to the current node. It
prioritizes nodes with lower path costs, ensuring that it explores the cheapest paths first.
· UCS guarantees finding the optimal solution in terms of path cost, regardless of whether the
edge costs are uniform or not, as it systematically explores paths in increasing order of cost.
Time Complexity: The time complexity of DFS is O(b^m), where m is the maximum depth of
any node; this can be much larger than d (the depth of the shallowest solution).
Space Complexity: DFS algorithm needs to store only single path from the root node,
hence space complexity of DFS is equivalent to the size of the fringe set, which is
O(bm).
Time Complexity:
Let C* be the cost of the optimal solution and ε the smallest step cost toward the
goal node. Then the number of steps is 1 + ⌈C*/ε⌉ (we add 1 because we start from
state 0), and the worst-case time complexity is O(b^(1 + ⌈C*/ε⌉)).
Space Complexity:
By the same logic, the worst-case space complexity of Uniform-cost search is
O(b^(1 + ⌈C*/ε⌉)).
25.Give the time and space complexity of Iterative deepening depth first
search.
Time Complexity: Let's suppose b is the branching factor and d is the depth; then
the worst-case time complexity is O(b^d).
26.Give the time and space complexity of Greedy Best First Search.
The worst-case time and space complexity is O(b^m), where m is the maximum depth of
the search space.
Bidirectional search runs two simultaneous searches: one forward from the initial
state, called the forward search, and one backward from the goal node, called the
backward search. Bidirectional search replaces the single search graph with two smaller
subgraphs, one rooted at the initial vertex and the other at the goal vertex. The
search stops when these two graphs intersect each other.
1. Sensorless problems
2. Contingency problems
3. Exploration problems
For example, in Romania, one might estimate the cost of the cheapest path from Arad to
Bucharest via the straight-line distance from Arad to Bucharest.
The heuristic is admissible because the optimal solution in the original problem is, by
definition, also a solution in the relaxed problem and therefore must be at least as expensive as
the optimal solution in the relaxed problem.
Because the derived heuristic is an exact cost for the relaxed problem, it must obey
the triangle inequality and is therefore consistent
A heuristic h(n) is consistent if, for every node n and every successor n' of n generated
by any action a, the estimated cost of reaching the goal from n is no greater than the step cost
of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n').
The simplest way to reduce memory requirements for A* is to adapt the idea of iterative
deepening to the heuristic search context, resulting in the iterative-deepening A* (IDA*)
algorithm. IDA* is practical for many problems with unit step costs and avoids the substantial
overhead associated with keeping a sorted queue of nodes.
35.What is Absolver?
ABSOLVER is a program that generates heuristics automatically from problem definitions,
using the "relaxed problem" method and other techniques (Prieditis, 1993).
37.What is local search?
Local search algorithms operate using a single current state (rather than multiple paths)
and generally move only to neighbors of that state. Typically, the paths followed by the search
are not retained. Although local search algorithms are not systematic, they have two key
advantages: (1) they use very little memory, usually a constant amount; and (2) they can often
find reasonable solutions in large or infinite (continuous) state spaces for which systematic
algorithms are unsuitable.
Initial Population
Fitness Function
Selection
Crossover
Mutation
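The components above can be sketched as a toy genetic algorithm. This is a minimal Python sketch, not a definitive implementation: the problem (maximize the number of 1 bits in a string, "OneMax"), the population size, and the mutation rate are all illustrative assumptions.

```python
import random

def genetic_algorithm(pop_size=20, length=16, generations=100, p_mut=0.05):
    """Toy GA on bit strings: fitness = number of 1s (OneMax, a stand-in problem)."""
    random.seed(0)                                        # deterministic demo
    fitness = sum                                         # fitness function
    pop = [[random.randint(0, 1) for _ in range(length)]  # initial population
           for _ in range(pop_size)]
    for _ in range(generations):
        # selection: sample parents with probability proportional to fitness
        parents = random.choices(pop, weights=[fitness(x) + 1 for x in pop], k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = random.randrange(1, length)             # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                # mutation: flip each bit with small probability
                child = [bit ^ 1 if random.random() < p_mut else bit for bit in child]
                nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))
```

After 100 generations the fittest individual is typically close to the all-ones string.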
A plateau is an area of the state-space landscape where the evaluation function is flat. It
can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which it is
possible to make progress.
A local maximum is a peak that is higher than each of its neighboring states, but lower
than the global maximum.
If elevation corresponds to an objective function, then the aim is to find the highest
peak: a global maximum.
42.Find the two types of heuristic values for the 8 puzzle instance given
below:
Long Answer Questions (3,4,5,6 Marks)
goal ← FORMULATE-GOAL(state)
problem ← FORMULATE-PROBLEM(state, goal)
seq ← SEARCH(problem)
action ← FIRST(seq)
seq ← REST(seq)
return action
A simple problem-solving agent. It first formulates a goal and a problem, searches for a
sequence of actions that would solve the problem, and then executes the actions one at
a time. When this is complete, it formulates another goal and starts over. Note that when
it is executing the sequence it ignores its percepts: it assumes that the solution it has
found will always work.
1. INITIAL STATE: The initial state that the agent starts in. For example, the
initial state for our agent in Romania might be described as In(Arad).
STATE SPACE: Together, the initial state and successor function implicitly define
the state space of the problem-the set of all states reachable from the initial state.
The state space forms a graph in which the nodes are states and the arcs
between nodes are actions
3. GOAL TEST: The goal test, which determines whether a given state is a goal
state. Sometimes there is an explicit set of possible goal states, and the test
simply checks whether the given state is one of them.
4. PATH COST: A path cost function that assigns a numeric cost to each path.
The problem-solving agent chooses a cost function that reflects its own
performance measure.
The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3 x 3 board with
eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into
the
space. The object is to reach a specified goal state, such as the one shown on the right
of the
· States: A state description specifies the location of each of the eight tiles and
the blank in one of the nine squares.
· Initial state: Any state can be designated as the initial state. Note that any
given goal can be reached from exactly half of the possible initial states.
· Successor function: This generates the legal states that result from trying
the four
actions (blank moves Left, Right, Up, or Down).
· Goal test: This checks whether the state matches the goal configuration
shown in Figure 3.4. (Other goal configurations are possible.)
· Path cost: Each step costs 1, so the path cost is the number of steps in
the path
The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test
problems for new search algorithms in AI. The 8-puzzle has 9!/2 = 181,440 reachable
states and is easily solved. (Give one example for the 8-puzzle.)
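The successor function described above can be sketched in Python. This is a minimal sketch under one assumption: states are 9-tuples read row by row, with 0 marking the blank.

```python
def successors(state):
    """Legal moves of the blank (0) in the 3x3 8-puzzle; state is a 9-tuple."""
    i = state.index(0)                     # blank position
    r, c = divmod(i, 3)
    moves = []
    for dr, dc, name in [(-1, 0, 'Up'), (1, 0, 'Down'),
                         (0, -1, 'Left'), (0, 1, 'Right')]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:    # move stays on the board
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]        # slide the adjacent tile into the blank
            moves.append((name, tuple(s)))
    return moves

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)        # blank in the center: all 4 moves legal
print([name for name, _ in successors(start)])   # ['Up', 'Down', 'Left', 'Right']
```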
The goal of the 8-queens problem is to place eight queens on a chessboard such that no
queen attacks any other. (A queen attacks any piece in the same row, column or diagonal.)
Figure 3.5 shows an attempted solution that fails: the queen in the rightmost column is attacked
by the queen at the top left.
There are two main kinds of formulation. An incremental formulation involves operators
that augment the state description, starting with an empty state; for the 8-queens problem, this
means that each action adds a queen to the state. A complete-state formulation starts with all 8
queens on the board and moves them around. In either case, the path cost is of no interest
because
only the final state counts. The first incremental formulation one might try is the following:
A better formulation would prohibit placing a queen in any square that is already attacked:
· States: Arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns,
with no queen attacking another.
· Successor function: Add a queen to any square in the leftmost empty column such that it is
not attacked by any other queen.
This formulation reduces the 8-queens state space from 3 × 10^14 to just 2,057, and solutions are easy to find.
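The improved incremental formulation can be sketched in Python: place queens column by column, only in squares not attacked by the queens already placed. A minimal sketch; the board representation (queens[i] = row of the queen in column i) is an assumption for illustration.

```python
def attacked(queens, col, row):
    """True if a queen at (col, row) would be attacked by an already-placed queen."""
    for c, r in enumerate(queens):
        if r == row or abs(r - row) == abs(c - col):  # same row or same diagonal
            return True
    return False

def count_solutions(n=8, queens=()):
    """Incremental formulation: add a queen to the leftmost empty column,
    only in squares not attacked by queens already on the board."""
    if len(queens) == n:
        return 1
    col = len(queens)                      # leftmost empty column
    return sum(count_solutions(n, queens + (row,))
               for row in range(n) if not attacked(queens, col, row))

print(count_solutions(8))   # 92
```

The 8-queens problem has 92 solutions, which this formulation finds without ever exploring an attacked square.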
Breadth-first Search:
● Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
● BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
● The breadth-first search algorithm is an example of a general-graph search
algorithm.
● Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
● BFS will provide a solution if any solution exists.
● If there is more than one solution for a given problem, BFS will provide the
minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
● It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
● BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversing of the tree using BFS
algorithm from the root node S to goal node K. BFS search algorithm traverse in layers,
so it will follow the path which is shown by the dotted arrow, and the traversed path will
be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of BFS can be obtained by counting the number of
nodes traversed until the shallowest goal node: T(b) = 1 + b + b^2 + ... + b^d = O(b^d),
where d is the depth of the shallowest solution and b is the branching factor.
Space Complexity: Space complexity of BFS algorithm is given by the memory size of the
frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some
finite depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the
node.
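The BFS procedure above can be sketched in Python with a FIFO queue of paths. The graph below is a small hypothetical tree (its node names are illustrative, not a reproduction of the figure).

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand the shallowest unexpanded node first (FIFO fringe)."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for succ in graph.get(path[-1], []):
            if succ not in explored:
                if succ == goal:
                    return path + [succ]
                explored.add(succ)
                frontier.append(path + [succ])
    return None

# hypothetical tree: S is the root, K the goal
tree = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
        'G': ['I'], 'H': ['E', 'F'], 'I': ['K']}
print(bfs(tree, 'S', 'K'))   # ['S', 'B', 'G', 'I', 'K']
```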
Advantages:
● Uniform cost search is optimal because at every state the path with the least cost
is chosen.
Disadvantages:
● It does not care about the number of steps involved in the search and is only
concerned with path cost, due to which this algorithm may get stuck in an
infinite loop.
Example:
Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution and ε the smallest step cost toward the goal
node. Then the number of steps is 1 + ⌈C*/ε⌉ (we add 1 because we start from state 0),
and the worst-case time complexity of uniform-cost search is O(b^(1 + ⌈C*/ε⌉)).
Space Complexity:
By the same logic, the worst-case space complexity of uniform-cost search is
O(b^(1 + ⌈C*/ε⌉)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
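UCS can be sketched in Python with a priority queue ordered by cumulative path cost g(n). The weighted graph below is hypothetical, chosen so the cheapest path is not the one with the fewest edges.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]     # (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                # goal test at expansion => optimal
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                     # stale queue entry
        for succ, step in graph.get(state, {}).items():
            new_g = g + step
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

# hypothetical weighted graph: S->B directly costs 4, but S->A->B costs 3
g = {'S': {'A': 1, 'B': 4}, 'A': {'B': 2, 'G': 6}, 'B': {'G': 1}}
print(uniform_cost_search(g, 'S', 'G'))   # (4, ['S', 'A', 'B', 'G'])
```

Note that the goal test happens when a node is popped for expansion, not when it is generated; this is what guarantees optimality.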
Note: Backtracking is an algorithm technique for finding all possible solutions using
recursion.
Advantage:
● DFS requires very little memory, as it only needs to store a stack of the nodes on
the path from the root node to the current node.
● It takes less time to reach the goal node than the BFS algorithm (if it traverses
the right path).
Disadvantage:
● There is a possibility that many states keep re-occurring, and there is no
guarantee of finding a solution.
● The DFS algorithm goes deep into the search and may get stuck in an
infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow
the order as:
It will start searching from root node S, and traverse A, then B, then D and E, after
traversing E, it will backtrack the tree as E has no other successor and still goal node is
not found. After backtracking it will traverse node C and then G, and here it will
terminate as it found goal node.
Completeness: DFS search algorithm is complete within finite state space as it will
expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm: O(b^m), where m is the maximum depth of any node, which
can be much larger than d (the depth of the shallowest solution).
Space Complexity: DFS algorithm needs to store only single path from the root node,
hence space complexity of DFS is equivalent to the size of the fringe set, which is
O(bm).
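DFS can be sketched in Python with an explicit LIFO stack. The tree below is a small hypothetical one mirroring the order described above (S, then A, B, D, E, backtrack, then C, G); the node names are illustrative.

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit stack (LIFO fringe)."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push successors in reverse so the leftmost child is expanded first
        for succ in reversed(graph.get(node, [])):
            if succ not in visited:
                stack.append(path + [succ])
    return None

# hypothetical tree: DFS visits S, A, B, D, E, backtracks, then C, G
tree = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(tree, 'S', 'G'))   # ['S', 'C', 'G']
```

The `visited` set prevents re-expanding repeated states; without it, DFS on a graph with cycles would loop forever.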
● Standard failure value: It indicates that problem does not have any solution.
● Cutoff failure value: It defines no solution for the problem within a given depth
limit.
Advantages:
● Depth-limited search is memory efficient.
Disadvantages:
● Depth-limited search has the drawback of incompleteness.
● It may not be optimal if the problem has more than one solution.
Example:
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also
not optimal even if ℓ>d.
This Search algorithm combines the benefits of Breadth-first search's fast search and
depth-first search's memory efficiency.
The iterative search algorithm is useful uninformed search when search space is large,
and depth of goal node is unknown.
Advantages:
● It combines the benefits of the BFS and DFS algorithms in terms of fast search
and memory efficiency.
Disadvantages:
● The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
Following tree structure is showing the iterative deepening depth-first search. IDDFS
algorithm performs various iterations until it does not find the goal node. The iteration
performed by the algorithm is given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let's suppose b is the branching factor and d is the depth; then the worst-case time
complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
IDDFS algorithm is optimal if path cost is a non-decreasing function of the depth of the
node.
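IDDFS can be sketched as repeated depth-limited DFS with increasing limits. The tree below is a hypothetical one matching the iteration trace above (goal K is found in the fourth iteration, i.e., at depth limit 3).

```python
def depth_limited(graph, node, goal, limit, path):
    """Recursive DFS that refuses to descend below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        result = depth_limited(graph, succ, goal, limit - 1, path + [succ])
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

# hypothetical tree from the iteration example above
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
print(iddfs(tree, 'A', 'K'))   # ['A', 'C', 'F', 'K']
```

Shallow levels are re-explored on every iteration, which is the repeated-work drawback noted above, but the asymptotic cost is still dominated by the deepest level.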
Bidirectional search runs two simultaneous searches: one forward from the initial
state, called the forward search, and one backward from the goal node, called the
backward search. Bidirectional search replaces the single search graph with two smaller
subgraphs, one rooted at the initial vertex and the other at the goal vertex. The
search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
● Bidirectional search is fast.
● Bidirectional search requires less memory.
Disadvantages:
● Implementation of the bidirectional search tree is difficult.
● In bidirectional search, one should know the goal state in advance.
Example:
In the below search tree, bidirectional search algorithm is applied. This algorithm divides
one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward
direction and starts from goal node 16 in the backward direction.
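The idea can be sketched as two BFS frontiers that alternate layer by layer until they intersect. A minimal sketch under two assumptions: the graph is undirected (edges listed both ways, so the backward search can use the same successor function), and the small numbered graph is illustrative rather than the 16-node figure.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two simultaneous BFS frontiers; stop when they meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}   # for path reconstruction
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parent, other_parent):
        for _ in range(len(frontier)):          # expand one full layer
            node = frontier.popleft()
            for succ in graph.get(node, []):
                if succ not in parent:
                    parent[succ] = node
                    if succ in other_parent:    # the two searches intersect
                        return succ
                    frontier.append(succ)
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parent, bwd_parent) \
               or expand(bwd_frontier, bwd_parent, fwd_parent)
        if meet:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

# hypothetical undirected graph (edges listed both ways)
g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4, 6], 6: [5]}
print(bidirectional_search(g, 1, 6))   # [1, 2, 4, 5, 6]
```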
The traveling salesperson problem (TSP) is a touring problem in which each city must be
visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard,
but an enormous amount of effort has been expended to improve the capabilities of TSP
algorithms. In addition to planning trips for traveling salespersons, these algorithms have been
used for tasks such as planning movements of automatic circuit-board drills and of stocking
machines on shop floors. A VLSI layout problem requires positioning millions of components
and connections on a chip to minimize area, minimize circuit delays, minimize stray
capacitances, and maximize
manufacturing yield. The layout problem comes after the logical design phase, and is usually
split
into two parts: cell layout and channel routing. In cell layout, the primitive components of the
circuit are grouped into cells, each of which performs some recognized function. Each cell
has a fixed footprint (size and shape) and requires a certain number of connections to each of
the other cells. The aim is to place the cells on the chip so that they do not overlap and so that
there is room for the connecting wires to be placed between the cells. Channel routing finds a
specific
route for each wire through the gaps between the cells.
14.Explain the various components of a node.
STATE: the state in the state space to which the node corresponds;
PARENT-NODE: the node in the search tree that generated this node;
ACTION: the action that was applied to the parent to generate the node;
PATH-COST: g(n), the cost of the path from the initial state to the node, as indicated by the
parent pointers; and
DEPTH: the number of steps along the path from the initial state.
It is important to remember the distinction between nodes and states. A node is a
bookkeeping data structure used to represent the search tree. A state corresponds to a
configuration of the world.
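The node components can be sketched as a small Python data structure. The `child_node` helper and its sample route cost are illustrative assumptions, not part of the original text.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Bookkeeping structure for the search tree (distinct from a state)."""
    state: Any                       # the world configuration this node represents
    parent: Optional['Node'] = None  # PARENT-NODE: node that generated this one
    action: Any = None               # ACTION applied to the parent
    path_cost: float = 0.0           # PATH-COST g(n)
    depth: int = 0                   # DEPTH: steps from the initial state

def child_node(parent, action, state, step_cost):
    """Build a successor node, accumulating path cost and depth."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)

root = Node('Arad')
child = child_node(root, 'GoTo(Sibiu)', 'Sibiu', 140)
print(child.path_cost, child.depth)   # 140.0 1
```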
Breadth-first search is a simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and so on. In general,
all the nodes at a given depth in the search tree are expanded before any nodes at the next
level are expanded. Breadth-first search can be implemented by calling TREE-SEARCH with
an empty fringe that is a first-in-first-out (FIFO) queue, assuring that the nodes that are visited
first will be expanded first. In other words, calling TREE-SEARCH(problem, FIFO-QUEUE())
results in a breadth-first search. The FIFO queue puts all newly generated successors at the
end of the queue, which means that shallow nodes are expanded before deeper nodes. Figure
3.10 shows the progress of the search on a simple binary tree.

We will evaluate breadth-first search using the four criteria from the previous section. We can
easily see that it is complete: if the shallowest goal node is at some finite depth d, breadth-first
search will eventually find it after expanding all shallower nodes (provided the branching factor
b is finite). The shallowest goal node is not necessarily the optimal one; technically,
breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the
node (for example, when all actions have the same cost).

So far, the news about breadth-first search has been good. To see why it is not always the
strategy of choice, we have to consider the amount of time and memory it takes to complete a
search. To do this, we consider a hypothetical state space where every state has b
successors. The root of the search tree generates b nodes at the first level, each of which
generates b more nodes, for a total of b^2 at the second level. Each of these generates b
more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is
at depth d. In the worst case, we would expand all but the last node at level d (since the goal
itself is not expanded), generating b^(d+1) - b nodes at level d + 1. Then the total number of
nodes generated is

b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).

Every node that is generated must remain in memory, because it is either part of the fringe or
is an ancestor of a fringe node. The space complexity is, therefore, the same as the time
complexity (plus one node for the root). Those who do complexity analysis are worried (or
excited, if they like a challenge) by exponential complexity bounds such as O(b^(d+1)). Figure
3.11 shows why. It lists the time and memory required for a breadth-first search with branching
factor b = 10, for various values of the solution depth d. The table assumes that 10,000 nodes
can be generated per second and that a node requires 1000 bytes of storage. Many search
problems fit roughly within these assumptions (give or take a factor of 100) when run on a
modern personal computer.

There are two lessons to be learned from Figure 3.11. The first is that the memory
requirements are a bigger problem for breadth-first search than is the execution time. The
second lesson is that the time requirements are still a major factor. If your problem has a
solution at depth 12, then (given our assumptions) it will take 35 years for breadth-first search
(or indeed any uninformed search) to find it. In general, exponential-complexity search
problems cannot be solved by uninformed methods for any but the smallest instances.
Refer question5
17.. Explain how to avoid repeated states. Also write the General Graph Search
Algorithm.
Repeated states Failure to detect repeated states can turn a linear problem into an exponential
one.
If an algorithm remembers every state that it has visited, then it can be viewed as exploring the
state-space graph directly. We can modify the general TREE-SEARCH algorithm to include a
data structure called the closed list, which stores every expanded node. (The fringe of
unexpanded nodes is sometimes called the open list.)
If the current node matches a node on the closed list, it is discarded instead of being expanded.
The new algorithm is called GRAPH-SEARCH (Figure 3.19). On problems with many repeated
states, GRAPH-SEARCH is much more efficient than TREE-SEARCH. Its worst-case time and
space requirements are proportional to the size of the state space. This may be much smaller
than O(b^d).
Graph search
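The closed-list idea can be sketched in Python. A minimal sketch under illustrative assumptions: `problem` is a hypothetical object exposing `initial_state`, `goal_test`, and `successors`, and the fringe is a plain FIFO list (the strategy can vary).

```python
def graph_search(problem, fringe):
    """TREE-SEARCH plus a closed list that stores every expanded state.
    Repeated states on the closed list are discarded instead of expanded."""
    closed = set()
    fringe.append((problem.initial_state, [problem.initial_state]))
    while fringe:
        state, path = fringe.pop(0)        # FIFO here; other strategies differ
        if problem.goal_test(state):
            return path
        if state in closed:                # repeated state: discard
            continue
        closed.add(state)
        for succ in problem.successors(state):
            fringe.append((succ, path + [succ]))
    return None

# hypothetical problem with a cycle (A <-> B) that would trap TREE-SEARCH
class TinyProblem:
    initial_state = 'A'
    def goal_test(self, s):
        return s == 'C'
    def successors(self, s):
        return {'A': ['B', 'A'], 'B': ['C', 'A']}.get(s, [])

print(graph_search(TinyProblem(), []))   # ['A', 'B', 'C']
```

On this cyclic toy problem, plain tree search would regenerate A and B forever; the closed list bounds the work by the size of the state space.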
18.Write a note on conformant problem (Sensorless problems)
Sensorless problems
Suppose that the vacuum agent knows all the effects of its actions, but has no sensors. Then
it knows only that its initial state is one of the set {1,2,3,4,5,6,7,8}. One might suppose
that the agent's predicament is hopeless, but in fact it can do quite well. Because it knows
what its actions do, it can, for example, calculate that the action Right will cause it to be in
one of the states {2,4,6,8}, and the action sequence [Right,Suck] will always end up in one
of the states {4,8}. Finally, the sequence [Right,Suck,Left,Suck] is guaranteed to reach the
goal state 7 no matter what the start state; we say that the agent can coerce the world into
state 7, even when it doesn't know where it started. To summarize: when the world is not
fully observable, the agent must reason about sets of states that it might get to, rather than
single states. We call each such set of states a belief state, representing the agent's current
belief about the possible physical states it might be in. (In a fully observable environment,
each belief state contains exactly one physical state.)
To solve sensorless problems, we search in the space of belief states rather than physical
states. The initial state is a belief state, and each action maps from a belief state to another
belief state. An action is applied to a belief state by unioning the results of applying the
action to each physical state in the belief state. A path now connects several belief states,
and a solution is now a path that leads to a belief state, all of whose members are goal states.
Figure 3.21 shows the reachable belief-state space for the deterministic, sensorless vacuum
world. There are only 12 reachable belief states, but the entire belief-state space contains
every possible set of physical states, i.e., 2^8 = 256 belief states. In general, if the physical
state space has S states, the belief-state space has 2^S belief states.
Our discussion of sensorless problems so far has assumed deterministic actions, but the
analysis is essentially the same if the environment is nondeterministic, that is, if actions
may have several possible outcomes. The reason is that, in the absence of sensors, the agent
has no way to tell which outcome actually occurred, so the various possible outcomes are
just additional physical states in the successor belief state. For example, suppose the
environment obeys Murphy's Law: the so-called Suck action sometimes deposits dirt on the
carpet, but only if there is no dirt there already. Then, if Suck is applied in physical state 4 (see
Figure 3.20), there are two possible outcomes: states 2 and 4. Applied to the initial belief
state {1,2,3,4,5,6,7,8}, Suck now leads to the belief state that is the union of the outcome
sets for the eight physical states. Calculating this, we find that the new belief state is
{1,2,3,4,5,6,7,8}. So, for a sensorless agent in the Murphy's Law world, the Suck action
leaves the belief state unchanged! In fact, the problem is unsolvable. (See Exercise 3.18.)
Intuitively, the reason is that the agent cannot tell whether the current square is dirty and hence
cannot tell whether the Suck action will clean it up or create more dirt.
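Applying an action to a belief state by unioning the per-state outcomes can be sketched in Python. The transition table below is an assumption, encoding only the Right action of the deterministic sensorless vacuum world (odd-numbered states have the agent in the left square, even in the right).

```python
def predict(belief_state, action, transition):
    """Apply an action to a belief state: the union of the outcome sets of
    applying the action in each possible physical state."""
    result = set()
    for s in belief_state:
        result |= transition[(s, action)]   # outcome SET allows nondeterminism
    return result

# hypothetical transition table: Right moves the agent to the right square
T = {(s, 'Right'): {t} for s, t in
     {1: 2, 2: 2, 3: 4, 4: 4, 5: 6, 6: 6, 7: 8, 8: 8}.items()}

print(predict(set(range(1, 9)), 'Right', T))   # {2, 4, 6, 8}
```

Because outcomes are sets, the same function handles the Murphy's Law variant: a nondeterministic Suck simply maps a state to a set with more than one element.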
19.Explain Greedy Best First Search. Consider the map of Romania and the
hSLD value table given below. Use greedy best-first search to find a path
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds
that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the
heuristic function: f(n) = h(n). Let us see how this works for route-finding problems in
Romania, using the straight-line distance heuristic, which we will call hSLD. If the goal is
Bucharest, we will need to know the straight-line distances to Bucharest, which are shown in
Figure 4.1. For example, hSLD(In(Arad)) = 366. Notice that the values of hSLD cannot be
computed from the problem description itself. Moreover, it takes a certain amount of
experience to know that hSLD is correlated with actual road distances and is, therefore, a
useful heuristic.

Figure 4.2 shows the progress of a greedy best-first search using hSLD to find a path from
Arad to Bucharest. The first node to be expanded from Arad will be Sibiu, because it is closer
to Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras,
because it is closest. Fagaras in turn generates Bucharest, which is the goal. For this
particular problem, greedy best-first search using hSLD finds a solution without ever
expanding a node that is not on the solution path; hence, its search cost is minimal. It is not
optimal, however: the path via Sibiu and Fagaras to Bucharest is 32 kilometers longer than the
path through Rimnicu Vilcea and Pitesti. This shows why the algorithm is called "greedy": at
each step it tries to get as close to the goal as it can.

Minimizing h(n) is susceptible to false starts. Consider the problem of getting from Iasi to
Fagaras. The heuristic suggests that Neamt be expanded first, because it is closest to
Fagaras, but it is a dead end. The solution is to go first to Vaslui, a step that is actually farther
from the goal according to the heuristic, and then to continue to Urziceni, Bucharest, and
Fagaras. In this case, then, the heuristic causes unnecessary nodes to be expanded.
Furthermore, if we are not careful to detect repeated states, the solution will never be found:
the search will oscillate between Neamt and Iasi.

Greedy best-first search resembles depth-first search in the way it prefers to follow a single
path all the way to the goal, but will back up when it hits a dead end. It suffers from the same
defects as depth-first search: it is not optimal, and it is incomplete (because it can start down
an infinite path and never return to try other possibilities). The worst-case time and space
complexity is O(b^m), where m is the maximum depth of the search space. With a good
heuristic function, however, the complexity can be reduced substantially. The amount of the
reduction depends on the particular problem and on the quality of the heuristic.
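Greedy best-first search can be sketched in Python with a priority queue ordered by h(n) alone. The map below is only a fragment of the Romania graph, just enough to reproduce the Arad-to-Bucharest example; the hSLD values are the standard straight-line distances to Bucharest.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the lowest heuristic value, i.e. f(n) = h(n)."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in graph.get(node, []):
            if succ not in visited:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# fragment of the Romania map with hSLD values (straight-line distance to Bucharest)
graph = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
         'Sibiu': ['Fagaras', 'Rimnicu Vilcea'],
         'Fagaras': ['Bucharest'], 'Rimnicu Vilcea': ['Pitesti'],
         'Pitesti': ['Bucharest']}
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

print(greedy_best_first(graph, h, 'Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

As in the text, the search returns the Fagaras route, which is 32 km longer than the optimal route through Rimnicu Vilcea and Pitesti: greedy on h(n) is not optimal.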
20.Explain A* algorithm. Consider the map of Romania and the hSLD value
A* search is a popular algorithm used to find the shortest path between two points. It
works by evaluating nodes based on two costs: g(n), the cost to reach the node, and h(n), the
estimated cost to get from the node to the goal. The algorithm chooses the node with the lowest
combined cost, g(n) + h(n), to explore next. This strategy is not only reasonable but also
guarantees the shortest path if the heuristic function h(n) meets certain conditions.
The optimality of A* search relies on the use of an admissible heuristic, which never
overestimates the cost to reach the goal. Admissible heuristics are optimistic, thinking the cost is
less than it actually is. Since g(n) is the exact cost to reach a node, the combined cost f(n) =
g(n) + h(n) never overestimates the true cost of a solution. A simple example of an admissible
heuristic is the straight-line distance between two points, which is always an underestimate.
A* search is used with the TREE-SEARCH algorithm to find the shortest path. In this example,
the algorithm is used to find the shortest path to Bucharest. The values of g(n) are computed
from the step costs, and the values of h(n) are given as the straight-line distance. The algorithm
chooses the node with the lowest combined cost to explore next. In this case, Bucharest
appears on the fringe but is not selected because its cost is higher than that of Pitesti. The
algorithm will not settle for a solution that costs more than the potential solution through Pitesti.
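The same example can be sketched as A* in Python, ordering the frontier by f(n) = g(n) + h(n). The graph is a fragment of the Romania map with the standard step costs in km and hSLD values; it is enough to reproduce the behavior described above, where the Fagaras route to Bucharest (f = 450) is beaten by the route through Pitesti (f = 418).

```python
import heapq

def astar(graph, h, start, goal):
    """Expand the node with the lowest f(n) = g(n) + h(n); optimal when h
    never overestimates the true remaining cost (admissible)."""
    frontier = [(h[start], 0, start, [start])]     # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:                           # goal test at expansion
            return g, path
        for succ, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                heapq.heappush(frontier,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

# fragment of the Romania map (step costs in km) and hSLD to Bucharest
graph = {'Arad': {'Sibiu': 140},
         'Sibiu': {'Fagaras': 99, 'Rimnicu Vilcea': 80},
         'Fagaras': {'Bucharest': 211},
         'Rimnicu Vilcea': {'Pitesti': 97},
         'Pitesti': {'Bucharest': 101}}
h = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
     'Pitesti': 100, 'Bucharest': 0}

print(astar(graph, h, 'Arad', 'Bucharest'))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Bucharest is generated via Fagaras with cost 450 but is not expanded until the cheaper 418 entry via Pitesti reaches the front of the queue, exactly as described above.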
21.Explain Recursive Best First Search algorithm. Consider the map of
Romania and the hSLD value table given. Use the RBFS algorithm to find a
RBFS is a search algorithm that uses a best-first strategy but limits memory usage by using
recursion and backtracking with minimal state information. It keeps track of the best (lowest
cost) option found so far and recursively explores the best paths.
Expand nodes:
1. Choose the node with the lowest estimated cost f(n) for expansion.
2. If the current path's cost exceeds the best alternative cost, backtrack.
Recurse:
1. Recursively apply the process, keeping track of the best alternative path cost
found so far.
2. Prune paths that exceed the best alternative cost.
Terminate:
1. The process terminates when the goal (Bucharest) is reached or no better paths
are available.
Step-by-Step Execution
1. Start at Arad
· f(Arad)=g(Arad)+h(Arad)=0+366=366
2. Expand Arad
Successors of Arad:
3. Expand Sibiu
Successors of Sibiu:
4. Expand Rimnicu Vilcea
Successors of Rimnicu Vilcea:
5. Expand Pitesti
Successors of Pitesti:
Choose Bucharest.
Path Found:
· Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest.
· Total cost: 418.
Components of IDA*:
Algorithm Steps:
1. Initialization:
○ Start with the initial state and set the initial cost bound to the estimated cost
of the initial state (using the heuristic function).
2. Iterative Deepening Loop:
○ Perform a depth-first search, pruning any node whose f-cost (g + h) exceeds
the current bound.
○ If a solution is found within the bound, return it.
○ If the bound is exceeded and no solution is found, increase the bound
(set it to the minimum f-cost encountered that exceeded the old bound).
3. Repeat:
○ Repeat the search with the new depth limit until a solution is found.
4. Termination:
○ When a solution is found, it is guaranteed to be optimal due to the properties of
the depth-first search with iterative deepening and the admissibility of the
heuristic function.
● Advantages:
○ Memory efficient compared to traditional A* because it does not store all visited
states.
○ Guarantees finding an optimal solution when the heuristic function is admissible.
○ Can handle large state spaces efficiently.
● Disadvantages:
○ Can be less efficient than A* in some cases due to repeated states at different
depths.
○ The effectiveness depends heavily on the quality of the heuristic function.
Applications:
IDA* is a powerful algorithm for finding optimal paths in state spaces, combining the benefits of
iterative deepening with A*'s heuristic guidance. Its efficiency and optimality make it a preferred
choice in various domains requiring pathfinding or state space search.
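The iterative deepening loop described above can be sketched as follows. This is a minimal illustrative version; the `successors`, `is_goal` callables and the Romania map excerpt are assumptions for the example:

```python
import math

def ida_star(start, h, successors, is_goal):
    """Iterative deepening A*: repeated depth-first search bounded by f = g + h."""
    bound = h(start)                 # initial bound = heuristic of the start state
    path = [start]

    def dfs(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f                 # exceeded the bound: report f as a candidate
        if is_goal(node):
            return "FOUND"
        minimum = math.inf           # smallest f seen beyond the bound
        for nbr, step in successors(node):
            if nbr in path:          # avoid trivial cycles
                continue
            path.append(nbr)
            t = dfs(nbr, g + step, bound)
            if t == "FOUND":
                return t
            minimum = min(minimum, t)
            path.pop()
        return minimum

    while True:
        t = dfs(start, 0, bound)
        if t == "FOUND":
            return path
        if t == math.inf:
            return None              # no solution exists
        bound = t                    # raise the bound to the smallest overrun

# Excerpt of the Romania map; hsld = straight-line distance to Bucharest.
graph = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
}
hsld = {"Arad": 366, "Sibiu": 253, "Rimnicu Vilcea": 193,
        "Fagaras": 176, "Pitesti": 100, "Bucharest": 0}

route = ida_star("Arad", lambda n: hsld[n],
                 lambda n: graph.get(n, {}).items(),
                 lambda n: n == "Bucharest")
print(route)  # ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest']
```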
24.What are heuristic functions? Explain the two heuristics used in 8 puzzle.
Also compute the same for the 8 puzzle instance given below :
A heuristic function (or simply a heuristic) is a function that ranks alternatives at each
branching step of a search algorithm, based on the available information, in order to decide
which branch to follow during the search.
h1 = the number of misplaced tiles. For Figure 4.7, all of the eight tiles are out of
position, so the start state has h1 = 8. h1 is admissible, because it is clear that any
tile that is out of place must be moved at least once.
h2 = the sum of the distances of the tiles from their goal positions. Because tiles
cannot move along diagonals, the distance we will count is the sum of the horizontal
and vertical distances. This is sometimes called the city block distance or Manhattan
distance. h2 is also admissible, because all any move can do is move one tile one step
closer to the goal. Tiles 1 to 8 in the start state give a Manhattan distance of
h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
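Assuming the standard textbook start state (7 2 4 / 5 _ 6 / 8 3 1) and goal (_ 1 2 / 3 4 5 / 6 7 8), the two heuristics can be computed as a quick check:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, encoded as 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan (city-block) distances of the tiles from their goal squares."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

# Textbook instance, row by row; 0 marks the blank square.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

print(h1(start, goal), h2(start, goal))  # 8 18
```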
To invent admissible heuristics we use Relaxed problems: A problem with fewer restrictions on
the actions. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the
original problem. The optimal solution to the original problem is, by definition also a solution to
the relaxed problem.
the 8-puzzle actions are described formally as
A tile can move from square A to square B if
A is horizontally or vertically adjacent to B and B is blank
We can generate three relaxed problems by removing one or both of the conditions:
(a) A tile can move from square A to square B if A is adjacent to B.
(b) A tile can move from square A to square B if B is blank.
(c) A tile can move from square A to square B.
From (a), we can derive h2 (Manhattan distance), and from (c) we can derive h1 (misplaced tiles).
A program called ABSOLVER can generate heuristics automatically from problem definitions,
using the "relaxed problem" method and various other techniques. ABSOLVER generated a
better new heuristic for the 8-puzzle and found the first useful heuristic for the famous Rubik's
cube puzzle.
If a collection of admissible heuristics h1, ..., hm is available for a problem, and none of them
dominates any of the others, then h(n) = max{h1(n), ..., hm(n)} is itself admissible and
dominates each component heuristic.
Admissible heuristics can also be derived from the solution cost of a subproblem of a given
problem. For example, Figure below shows a subproblem of the 8-puzzle instance
26.Explain Hill Climbing Search.
27.Explain the issues due to which hill climbing gets stuck.
28.Briefly Explain Simulated Annealing algorithm.
Keeping just one node in memory might seem to be an extreme reaction to the problem
of memory limitations. The local beam search algorithm keeps track of k states rather than just
one. It begins with k randomly generated states. At each step, all the successors of all k states
are generated. If any one is a goal, the algorithm halts. Otherwise, it selects the k best
successors from the complete list and repeats. At first sight, a local beam search with k states
might seem to be nothing more than running k random restarts in parallel instead of in
sequence. In fact, the two algorithms are quite different. In a random-restart search, each
search process runs independently of the others. In a local beam search, useful information is
passed among the k parallel search threads. For example, if one state generates several good
successors and the other k - 1 states all generate bad successors, then the effect is that the first
state says to the others, "Come over here, the grass is greener!" The algorithm quickly
abandons unfruitful searches and moves its resources to where the most progress is being
made.
In its simplest form, local beam search can suffer from a lack of diversity among the k
states: they can quickly become concentrated in a small region of the state space, making the
search little more than an expensive version of hill climbing. A variant called stochastic beam
search, analogous to stochastic hill climbing, helps to alleviate this problem. Instead of choosing
the best k from the pool of candidate successors, stochastic beam search chooses k
successors at random, with the probability of choosing a given successor being an increasing
function of its value. Stochastic beam search bears some resemblance to the process of natural
selection, whereby the "successors" (offspring) of a "state" (organism) populate the next
generation according to its "value" (fitness).
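The basic local beam search loop can be sketched on a toy problem. This is a minimal illustrative version; the objective function, successor rule, and parameter values are assumptions for the example:

```python
import random

def local_beam_search(k, initial, successors, value, steps=200):
    """Keep k states; expand all of them and retain the best k of the pool."""
    beam = [initial() for _ in range(k)]           # k random starting states
    for _ in range(steps):
        # Information flows between the k parallel "threads": the best k
        # successors of the WHOLE pool survive, wherever they came from.
        pool = [s for state in beam for s in successors(state)]
        beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)

# Toy problem: maximize value(x) = -(x - 42)^2 over the integers.
random.seed(0)
best = local_beam_search(
    k=4,
    initial=lambda: random.randint(-100, 100),
    successors=lambda x: [x - 1, x, x + 1],        # staying put is allowed
    value=lambda x: -(x - 42) ** 2,
)
print(best)  # 42
```

Because the globally best state's improved successor always survives the selection step, the beam concentrates on the most promising region, which is exactly the "come over here, the grass is greener" effect described above.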