ARTIFICIAL INTELLIGENCE Unit-2
BFS vs DFS
BFS: It uses a queue to store data in memory. The structure of a BFS tree is wide and short.
DFS: It uses a stack to store data in memory. The structure of a DFS tree is narrow and long.
Advantages:
Uniform cost search is optimal because at every state the path with
the least cost is chosen.
Disadvantages:
It does not care about the number of steps involved in searching and is only
concerned about the path cost, due to which this algorithm may get stuck in an
infinite loop.
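The idea can be illustrated with a minimal Python sketch of uniform cost search; the adjacency-dict graph representation and all names below are illustrative assumptions, not part of these notes.

import heapq

def uniform_cost_search(graph, start, goal):
    # Frontier is a priority queue ordered by the path cost g(n).
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                  # goal test on expansion keeps UCS optimal
            return path, cost
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbour, path + [neighbour]))
    return None, float("inf")             # failure: goal not reachable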
Bidirectional Search Algorithm
Bidirectional search algorithm runs two simultaneous searches,
one from the initial state, called forward search, and the other from the goal
node, called backward search, to find the goal node.
Bidirectional search replaces one single search graph with two
small subgraphs, in which one starts the search from the initial
vertex and the other starts from the goal vertex. The search stops when
these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS,
DLS, etc.
Advantages:
Bidirectional search is fast.
Bidirectional search requires less memory.
Disadvantages:
Implementation of the bidirectional search tree is difficult.
In bidirectional search, one should know the goal state
in advance.
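As a rough illustration, here is a hedged Python sketch of bidirectional search using BFS from both ends; the undirected adjacency-dict representation and helper names are assumptions made for the example.

from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    # One parent map and one frontier queue per direction (forward and backward).
    parents = {start: {start: None}, goal: {goal: None}}
    queues = {start: deque([start]), goal: deque([goal])}
    while queues[start] and queues[goal]:
        for side, other in ((start, goal), (goal, start)):
            if not queues[side]:
                continue
            node = queues[side].popleft()
            for neighbour in graph.get(node, []):
                if neighbour not in parents[side]:
                    parents[side][neighbour] = node
                    queues[side].append(neighbour)
                    if neighbour in parents[other]:    # the two searches intersect here
                        return _join_paths(parents, start, goal, neighbour)
    return None                                        # the searches never intersected

def _join_paths(parents, start, goal, meeting):
    # Stitch together the forward path (start..meeting) and backward path (meeting..goal).
    def trace(side, node):
        path = []
        while node is not None:
            path.append(node)
            node = parents[side][node]
        return path
    return trace(start, meeting)[::-1] + trace(goal, meeting)[1:]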
When to use bidirectional approach?
Disadvantages of Greedy Best-First Search:
• It can behave as an unguided depth-first search in the
worst-case scenario.
• It can get stuck in a loop, as DFS can.
Example
In this search example, we use two lists:
the OPEN and CLOSED lists. Following are the iterations
for traversing the example graph.
Expand the nodes of S and put S in the CLOSED list.
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be:
S----> B----->F----> G
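A hedged Python sketch of this greedy best-first procedure is shown below; OPEN is a priority queue ordered by h(n) alone, and the graph and heuristic dictionaries are assumed inputs (the figure's actual edges and h-values are not reproduced here).

import heapq

def greedy_best_first_search(graph, h, start, goal):
    open_list = [(h[start], start, [start])]    # OPEN: ordered by heuristic h(n) only
    closed = set()                              # CLOSED: already expanded nodes
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in closed:
                heapq.heappush(open_list, (h[neighbour], neighbour, path + [neighbour]))
    return None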
A* Search Algorithm
A* search is the most commonly known form of best-first
search.
It uses the heuristic function h(n) and the cost to reach node n
from the start state, g(n).
It has the combined features of UCS and greedy best-first search,
by which it solves the problem efficiently.
A* search algorithm finds the shortest path through the search
space using the heuristic function.
• This search algorithm expands fewer nodes of the search tree and provides
the optimal result faster.
• A* algorithm is similar to UCS except that it uses g(n)+h(n)
instead of g(n).
• In the A* search algorithm, we use the search heuristic as well as the
cost to reach the node.
• Hence we can combine both costs as f(n) = g(n) + h(n), and this sum is
called the fitness number.
Algorithm of A* search
Step1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the
list is empty then return failure and stop.
Step 3: Select the node from the OPEN list which has
the smallest value of the evaluation function (g+h). If
node n is the goal node, then return success and stop;
otherwise go to Step 4.
Step 4: Expand node n, generate all of its successors,
and put n into the CLOSED list. For each successor n', check
whether n' is already in the OPEN or CLOSED list; if not,
then compute the evaluation function for n' and place it into the
OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED,
then attach it to the back pointer which
reflects the lowest g(n') value.
Step 6: Return to Step 2.
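The steps above can be sketched in Python roughly as follows; the priority queue plays the role of the OPEN list, and a best-g map stands in for the back-pointer bookkeeping of Step 5. All names are illustrative.

import heapq

def a_star_search(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]        # entries are (f, g, node, path)
    best_g = {start: 0}                                 # lowest g(n) found so far per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):          # stale entry: a cheaper path was found later
            continue
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g               # keep only the lowest-cost back pointer
                heapq.heappush(open_list,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")                           # OPEN list exhausted: failure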
Advantages:
• A* search algorithm performs better than other search
algorithms.
• A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.
Disadvantages:
• It does not always produce the shortest path, as it is mostly based on
heuristics and approximation.
• A* search algorithm has some complexity issues.
• The main drawback of A* is its memory requirement: it keeps all
generated nodes in memory, so it is not practical for various
large-scale problems.
Example
We will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table, so we
will calculate the f(n) of each state using the formula f(n) = g(n)
+ h(n), where g(n) is the cost to reach any node from the start state.
Here we will use the OPEN and CLOSED lists.
Solution
Initialization: {(S, 5)}
Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G,
10)}
Iteration3: {(S--> A-->C-->G, 6), (S--> A-->C-->D, 11),
(S--> A-->B, 7), (S-->G, 10)}
Iteration 4 will give the final result: S--->A--->C--->G,
which provides the optimal path with cost 6.
Local Search Algorithms
A local search algorithm completes its task by traversing a single current
node rather than multiple paths, generally moving only among the neighbors of that
node.
Although local search algorithms are not systematic, still they have the
following two advantages:
Local search algorithms use very little or a constant amount of memory, as they
operate only on a single path.
Most often, they find a reasonable solution in large or infinite state spaces where
the classical or systematic algorithms do not work.
Local search algorithms work for pure optimization problems. A pure optimization
problem is one where every node can give a solution, but the target is to find
the best state of all according to the objective function. Unfortunately, a
pure optimization formulation may fail to find a high-quality solution for reaching the goal
state from the current state.
Types of local searches
Hill-climbing Search
Simulated Annealing
Local Beam Search
Hill-climbing Search
The topographical regions of the hill-climbing state-space landscape can be defined as:
Global Maximum: It is the highest point on the hill, which is the goal
state.
Local Maximum: It is a peak that is higher than its neighbouring states but lower than
the global maximum.
Flat local maximum: It is a flat area over the hill with no uphill
or downhill direction. It is a saturated point of the hill.
Shoulder: It is also a flat area, but one from which the summit can still be reached.
Current state: It is the current position of the person.
Types of Hill climbing search algorithm
There are following types of hill-climbing search:
Simple hill climbing
Steepest-ascent hill climbing
Stochastic hill climbing
Simple hill climbing search
Simple hill climbing is the simplest technique to
climb a hill. The task is to reach the highest peak of
the mountain. Here, the movement of the climber
depends on his moves/steps. If he finds his next step
better than the previous one, he continues to move;
otherwise, he remains in the same state. This search focuses only
on his previous and next step.
Simple hill climbing Algorithm
Create a CURRENT node, a NEIGHBOUR node, and
a GOAL node.
If the CURRENT node = GOAL node,
return GOAL and terminate the search.
Else, if the NEIGHBOUR node is better than the CURRENT node,
set CURRENT node = NEIGHBOUR node and move ahead.
Loop until the goal is reached or no better point is
found.
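A minimal Python sketch of simple hill climbing, assuming problem-specific neighbours() and objective() helpers (these names are illustrative, not defined in these notes):

def simple_hill_climbing(start, neighbours, objective):
    current = start
    while True:
        improved = False
        for candidate in neighbours(current):
            if objective(candidate) > objective(current):   # first better neighbour wins
                current = candidate
                improved = True
                break
        if not improved:         # no better neighbour: local maximum (or goal) reached
            return current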
Steepest-ascent hill climbing
Steepest-ascent hill climbing is different from simple
hill climbing search. Unlike simple hill climbing
search, it considers all the successor nodes, compares
them, and chooses the node which is closest to the
solution. Steepest-ascent hill climbing is similar
to best-first search because it evaluates all successor nodes
instead of just one.
Steepest-ascent hill climbing algorithm
Create a CURRENT node and a GOAL node.
If the CURRENT node=GOAL node, return GOAL and
terminate the search.
Loop until no better node is found to reach the solution.
If there is any better successor node present, expand it.
When the GOAL is attained, return GOAL and terminate.
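For comparison, a hedged sketch of the steepest-ascent variant, where every successor is evaluated and only the single best one is taken (helper names are assumptions):

def steepest_ascent_hill_climbing(start, neighbours, objective):
    current = start
    while True:
        candidates = list(neighbours(current))
        if not candidates:
            return current
        best = max(candidates, key=objective)           # best of all successors
        if objective(best) <= objective(current):       # no successor improves: stop
            return current
        current = best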
Stochastic hill climbing
Stochastic hill climbing does not examine all the
neighbouring nodes. It selects one node at random and decides
whether to expand it or to search for a
better one.
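A rough sketch of the stochastic variant, with a step budget standing in for a stopping test (an assumption made for the example):

import random

def stochastic_hill_climbing(start, neighbours, objective, max_steps=1000):
    current = start
    for _ in range(max_steps):
        candidate = random.choice(list(neighbours(current)))   # one neighbour at random
        if objective(candidate) > objective(current):          # accept only if it improves
            current = candidate
    return current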
Limitations of Hill climbing algorithm
Ridges: It is a challenging situation where the person commonly finds two or more local
maxima of the same height. It becomes difficult for the person to
navigate to the right point, and he may get stuck at that point itself.
Features of Hill Climbing
Generate and Test variant: Hill climbing is a variant of
the Generate and Test method. The Generate and Test method
produces feedback which helps to decide which direction to
move in the search space.
Greedy approach: Hill-climbing algorithm search moves in
the direction which optimizes the cost.
No backtracking: It does not backtrack the search space, as it
does not remember the previous states.
Local Beam Search
Local beam search is quite different from random-restart
search. It keeps track of k states instead of just one. It
selects k randomly generated states and expands them at each
step. If any state is a goal state, the search stops with success;
else it selects the best k successors from the complete list and
repeats the same process. In random-restart search each
search process runs independently, but in local beam search
the necessary information is shared between the parallel search
processes.
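A hedged Python sketch of local beam search with beam width k; the helper functions are assumed to be supplied by the problem and are not part of these notes.

def local_beam_search(initial_states, neighbours, objective, is_goal, k=3, max_steps=100):
    beam = list(initial_states)[:k]                    # keep track of k states, not one
    for _ in range(max_steps):
        for state in beam:
            if is_goal(state):                         # any goal state stops the search
                return state
        pool = [s for state in beam for s in neighbours(state)]   # shared successor pool
        if not pool:
            break
        beam = sorted(pool, key=objective, reverse=True)[:k]      # best k from the complete list
    return max(beam, key=objective)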
Adversarial Search
Adversarial search is a game-playing technique where the agents are
surrounded by a competitive environment.
A conflicting goal is given to the agents (multiagent).
These agents compete with one another and try to defeat one another in
order to win the game.
Such conflicting goals give rise to adversarial search. Here, game-
playing means discussing those games where human intelligence and logic
are used, excluding other factors such as luck. Tic-tac-toe,
chess, checkers, etc., are the type of games where no luck factor works,
only the mind works.
Techniques required to get the best
optimal solution
Pruning: A technique which allows ignoring the unwanted
portions of a search tree which make no difference in its final
result.
Heuristic Evaluation Function: It allows us to approximate the
cost value at each level of the search tree before reaching the
goal node.
Elements of Game Playing search
S0: It is the initial state from where a game begins.
PLAYER (s): It defines which player is having the current turn
to make a move in the state.
ACTIONS (s): It defines the set of legal moves that can be used in a
state.
RESULT (s, a): It is a transition model which defines the
result of a move.
TERMINAL-TEST (s): It defines whether the game has ended, and
returns true if so.
UTILITY (s, p): It defines the final value with which the game has
ended for player p. This function is also known as the Objective function or Payoff
function. The payoff which the player gets is, e.g.:
(-1): If the PLAYER loses.
(+1): If the PLAYER wins.
(0): If there is a draw between the PLAYERS.
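These elements can be pictured as an abstract game interface; the sketch below is only illustrative (the class and method names are assumptions, not a fixed API from these notes).

class Game:
    def initial_state(self):        # S0: the state a game begins in
        raise NotImplementedError
    def player(self, s):            # PLAYER(s): whose turn it is in state s
        raise NotImplementedError
    def actions(self, s):           # ACTIONS(s): legal moves available in s
        raise NotImplementedError
    def result(self, s, a):         # RESULT(s, a): transition model for move a
        raise NotImplementedError
    def terminal_test(self, s):     # TERMINAL-TEST(s): True when the game has ended
        raise NotImplementedError
    def utility(self, s, p):        # UTILITY(s, p): payoff, e.g. +1 win, -1 loss, 0 draw
        raise NotImplementedError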
Types of algorithms in Adversarial search
The main condition required for alpha-beta pruning is: α >= β.
Key points about alpha-beta pruning:
The Max player will only update the value of alpha.
The Min player will only update the value of beta.
While backtracking the tree, the node values will be passed to
upper nodes instead of the values of alpha and beta.
We will only pass the alpha and beta values to the child nodes.
Pseudo-code for Alpha-beta Pruning
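Below is a hedged Python sketch of minimax with alpha-beta pruning, using a game tree given as nested lists whose leaves are utility values; this tree representation is an assumption made for illustration.

def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):             # leaf: return its utility value
        return node
    if maximizing:                             # MAX player updates only alpha
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                  # pruning condition: alpha >= beta
                break
        return value
    else:                                      # MIN player updates only beta
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Example: a depth-2 tree where MAX chooses between two MIN nodes.
print(alpha_beta([[3, 5, 6], [2, 9]]))         # prints 3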