UNIT-2-AI mca1

The document discusses search algorithms in Artificial Intelligence, highlighting their importance in problem-solving for rational agents. It categorizes search algorithms into uninformed (blind) and informed (heuristic) types, detailing various algorithms such as breadth-first search, depth-first search, and uniform-cost search, along with their properties and complexities. Additionally, it explains the concept of heuristics and their role in informed search strategies to efficiently find solutions in large search spaces.


UNIT-2

Searching Techniques
By: Naveen Kumar
Search Algorithms in Artificial Intelligence
Search algorithms are among the most important areas of
Artificial Intelligence. This topic explains the search
algorithms used in AI.
Problem-solving agents:
In Artificial Intelligence, search techniques are universal
problem-solving methods. Rational agents, or problem-solving
agents, mostly use these search strategies and algorithms to
solve a specific problem and provide the best result.
Problem-solving agents are goal-based agents that use an
atomic representation of states. In this topic, we will learn
various problem-solving search algorithms.
Search Algorithm Terminologies:
•Search: Searching is a step-by-step procedure for solving a
search problem in a given search space. A search problem has
three main factors:
• Search Space: The set of all possible solutions a system
may have.
• Start State: The state from which the agent begins the
search.
• Goal test: A function that observes the current state and
returns whether the goal state has been reached.
•Search tree: A tree representation of the search problem.
The root of the search tree is the root node, which
corresponds to the initial state.
•Actions: A description of all the actions available to the
agent.
•Transition model: A description of what each action does,
represented as a mapping from a state and an action to the
resulting state.
•Path Cost: A function that assigns a numeric cost to each
path.
•Solution: An action sequence that leads from the start node
to the goal node.
•Optimal Solution: A solution with the lowest path cost
among all solutions.
Properties of Search Algorithms:
The following four properties are used to compare the
efficiency of search algorithms:
Completeness: A search algorithm is complete if it is
guaranteed to return a solution whenever at least one
solution exists for any input.
Optimality: If the solution found by an algorithm is
guaranteed to be the best (lowest path cost) among all
solutions, it is called an optimal solution.
Time Complexity: A measure of how long the algorithm takes
to complete its task.
Space Complexity: The maximum storage space required at any
point during the search, as a function of the problem's
complexity.
Types of search algorithms
Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.
Uninformed/Blind Search:
Uninformed search uses no domain knowledge, such as the
closeness or location of the goal. It operates in a brute-
force way, since it only has information about how to
traverse the tree and how to identify leaf and goal nodes.
The search tree is explored without any information about the
search space beyond the initial state, the operators, and the
goal test, which is why it is also called blind search. It
examines nodes of the tree until it reaches a goal node.
It can be divided into five main types:
•Breadth-first search
•Uniform cost search
•Depth-first search
•Iterative deepening depth-first search
•Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an
informed search, problem-specific information is available to
guide the search, so informed strategies can find a solution
more efficiently than uninformed ones. Informed search is
also called heuristic search.
A heuristic is a rule of thumb that is not guaranteed to find
the best solution, but is expected to find a good solution in
reasonable time.
Informed search can solve much more complex problems than
could be solved otherwise. The travelling salesman problem is
a classic example of a problem tackled with informed search.
•Greedy Search
•A* Search
Uninformed Search Algorithms
Uninformed search is a class of general-purpose search
algorithms that operate in a brute-force way. Uninformed
search algorithms have no information about the state or
search space other than how to traverse the tree, so they are
also called blind search.
Following are the various types of uninformed search
algorithms:
•Breadth-first Search
•Depth-first Search
•Depth-limited Search
•Iterative deepening depth-first search
•Uniform cost search
•Bidirectional Search
1. Breadth-first Search:
Breadth-first search is the most common strategy for traversing a tree or graph.
The algorithm searches breadthwise in a tree or graph, hence the name
breadth-first search.
The BFS algorithm starts at the root node and expands all successor nodes at the
current level before moving on to the nodes of the next level.
Breadth-first search is an instance of the general graph-search algorithm and is
implemented with a FIFO queue data structure.
Advantages:
•BFS will find a solution if any solution exists.
•If a problem has more than one solution, BFS returns the one requiring the fewest
steps.
Disadvantages:
•It requires a lot of memory, since each level of the tree must be kept in memory
in order to expand the next level.
•BFS takes a long time if the solution is far from the root node.
Example:
In the tree structure below, we show the traversal of the tree using the BFS
algorithm from root node S to goal node K. BFS traverses in layers, so it follows
the path shown by the dotted arrow, and the traversal order will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
•Time Complexity: The time complexity of BFS is given by the
number of nodes traversed until the shallowest goal node,
where d is the depth of the shallowest solution and b is the
branching factor:
T(b) = 1 + b + b^2 + ... + b^d = O(b^d)
•Space Complexity: The space complexity of BFS is the memory
size of the frontier, which is O(b^d).
•Completeness: BFS is complete: if the shallowest goal node
is at some finite depth, BFS will find a solution.
•Optimality: BFS is optimal if the path cost is a
non-decreasing function of the depth of the node.
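The level-by-level expansion described above can be sketched in a few lines of Python. The adjacency list below is a hypothetical illustrative graph, not the figure from the text.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO queue; returns a shallowest path."""
    frontier = deque([[start]])        # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()      # expand the shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                        # no solution exists

# Hypothetical graph for illustration
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))  # ['S', 'A', 'C', 'G']
```

Because the queue is first-in first-out, every node at depth k is expanded before any node at depth k+1, which is what makes BFS complete and shallowest-first.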
2. Depth-first Search
•Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
•It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
•DFS uses a stack data structure for its implementation.
•The process of the DFS algorithm is similar to the BFS algorithm.

Advantage:
DFS requires much less memory, as it only needs to store the stack of nodes on the
path from the root node to the current node.
It can reach the goal node faster than BFS (if it happens to traverse the right
path).
Disadvantage:
There is a possibility that many states keep re-occurring, and there is no
guarantee of finding a solution.
DFS goes deep down the search tree and may get caught in an infinite loop.
Example:
In the search tree below, we show the flow of depth-first search; it follows the
order:
Root node ---> left node ---> right node.
It starts searching from root node S and traverses A, then B, then D and E. After
traversing E it backtracks, as E has no other successor and the goal node has not
yet been found. After backtracking it traverses node C and then G, where it
terminates because it has found the goal node.
•Completeness: DFS is complete within a finite state space,
as it will expand every node within a bounded search tree.
•Time Complexity: The time complexity of DFS is proportional
to the number of nodes traversed:
T(b) = 1 + b + b^2 + ... + b^m = O(b^m)
where m is the maximum depth of any node; this can be much
larger than d, the depth of the shallowest solution.
•Space Complexity: DFS needs to store only a single path from
the root node, so its space complexity equals the size of the
fringe set, which is O(b·m).
•Optimality: DFS is non-optimal, as it may take a large
number of steps or incur a high cost to reach the goal node.
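The recursive dive-and-backtrack behavior can be sketched as follows. The graph is a hypothetical example in which the first branch is a dead end, forcing a backtrack.

```python
def dfs(graph, node, goal, visited=None):
    """Recursive depth-first search: follow each path to its greatest depth
    before backtracking to try the next one."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            rest = dfs(graph, neighbor, goal, visited)
            if rest:
                return [node] + rest   # prepend current node while unwinding
    return None

# Hypothetical graph: S's first child A leads to a dead end at B
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': [], 'C': ['G']}
print(dfs(graph, 'S', 'G'))  # ['S', 'C', 'G']
```

The call stack plays the role of the explicit stack mentioned in the text: only the nodes on the current root-to-node path (plus the visited set) are in memory.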
3. Depth-Limited Search Algorithm:
A depth-limited search algorithm is depth-first search with a predetermined depth
limit ℓ. Depth-limited search overcomes the infinite-path drawback of depth-first
search: a node at the depth limit is treated as if it has no further successors.

Depth-limited search can terminate with two kinds of failure:

•Standard failure value: indicates that the problem has no solution at all.
•Cutoff failure value: indicates that there is no solution within the given depth
limit.
Advantages:
Depth-limited search is Memory efficient.
Disadvantages:
Depth-limited search also has a disadvantage of incompleteness.
It may not be optimal if the problem has more than one solution.
•Completeness: DLS is complete if the solution lies
within the depth limit.
•Time Complexity: The time complexity of DLS is O(b^ℓ).
•Space Complexity: The space complexity of DLS is O(b·ℓ).
•Optimality: Depth-limited search can be viewed as a
special case of DFS, and it is not optimal even when
ℓ > d.
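The two failure values map naturally onto two distinct return values in code. A minimal sketch, using a hypothetical graph:

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS. Returns a path on success, None on standard failure
    (no solution at all), or 'cutoff' when the depth limit was hit."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                      # node at the limit: treated as a leaf
    cutoff_occurred = False
    for neighbor in graph.get(node, []):
        result = dls(graph, neighbor, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

graph = {'S': ['A'], 'A': ['G'], 'G': []}
print(dls(graph, 'S', 'G', 1))  # 'cutoff' -- G lies below the limit
print(dls(graph, 'S', 'G', 2))  # ['S', 'A', 'G']
```

Distinguishing 'cutoff' from None matters: a cutoff result tells the caller that retrying with a larger ℓ could still succeed, while None means it never will.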
4. Uniform-cost Search Algorithm:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or
graph. This algorithm comes into play when a different cost is available for each
edge. The primary goal of the uniform-cost search is to find a path to the goal
node which has the lowest cumulative cost. Uniform-cost search expands nodes
according to their path costs from the root node. It can be used to solve any
graph/tree where the optimal cost is in demand. A uniform-cost search algorithm
is implemented by the priority queue. It gives maximum priority to the lowest
cumulative cost. Uniform cost search is equivalent to BFS algorithm if the path cost
of all edges is the same.
Advantages:
Uniform cost search is optimal because at every state the path with the least cost
is chosen.
Disadvantages:
It does not care about the number of steps involved in the search and is only
concerned with path cost, so the algorithm may get stuck in an infinite loop
when zero-cost edges are present.
Completeness:
Uniform-cost search is complete, such as if there is a solution, UCS will
find it.
Time Complexity:
Let C* be the cost of the optimal solution and ε the smallest step cost toward
the goal node. Then the number of steps is C*/ε + 1; we add 1 because we start
from state 0 and end at C*/ε.
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
By the same logic, the worst-case space complexity of uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the
lowest path cost.
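The priority queue ordered by cumulative cost can be sketched with Python's heapq. The weighted graph below is a hypothetical example in which the cheapest route is not the one with the fewest edges.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the node with the lowest
    cumulative path cost g(n), using a priority queue."""
    frontier = [(0, start, [start])]   # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # first goal popped is the cheapest
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph: each entry is (neighbor, edge cost)
graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1)], 'B': [('G', 2)]}
print(ucs(graph, 'S', 'G'))  # (4, ['S', 'A', 'B', 'G'])
```

Note that the direct-looking route S→B→G costs 5 + 2 = 7, while UCS finds S→A→B→G at cost 4, because it orders the frontier by cumulative cost rather than by depth.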
5. Iterative deepening depth-first Search:
The iterative deepening algorithm is a combination of the DFS and BFS algorithms.
It finds the best depth limit by gradually increasing the limit until a goal is
found.
The algorithm performs depth-first search up to a certain "depth limit" and keeps
increasing that limit after each iteration until the goal node is found.
It combines the completeness guarantees of breadth-first search with the memory
efficiency of depth-first search.
Iterative deepening is the preferred uninformed search when the search space is
large and the depth of the goal node is unknown.
Advantages:
It combines the benefits of BFS and DFS search algorithm in terms of fast search
and memory efficiency.
Disadvantages:
The main drawback of IDDFS is that it repeats all the work of the previous
phase.
Example:
The following tree structure shows iterative deepening depth-first search. The
IDDFS algorithm performs successive iterations until it finds the goal node. The
iterations performed by the algorithm are:
1st Iteration-----> A
2nd Iteration----> A, B, C
3rd Iteration------>A, B, D, E, C, F, G
4th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
•Completeness:
This algorithm is complete if the branching factor is finite.
•Time Complexity:
If b is the branching factor and d the depth of the
shallowest goal, the worst-case time complexity is O(b^d).
•Space Complexity:
The space complexity of IDDFS is O(b·d).
•Optimality:
IDDFS is optimal if the path cost is a non-decreasing
function of the depth of the node.
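The "increase the limit after each iteration" loop can be sketched by wrapping a depth-limited DFS. The graph below is a hypothetical example whose goal sits at depth 2.

```python
def iddfs(graph, start, goal, max_depth=50):
    """Iterative deepening DFS: run depth-limited DFS with limit 0, 1, 2, ...
    until the goal is found, combining DFS's memory use with BFS's completeness."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for neighbor in graph.get(node, []):
            if neighbor not in path:          # avoid cycles on the current path
                found = dls(neighbor, limit - 1, path + [neighbor])
                if found:
                    return found
        return None

    for depth in range(max_depth + 1):        # gradually increase the depth limit
        found = dls(start, depth, [start])
        if found:
            return found
    return None

# Hypothetical graph; the goal G sits at depth 2
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G']}
print(iddfs(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Each outer iteration repeats the shallower levels (the drawback mentioned above), but since the deepest level dominates the node count, the overhead is modest.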
6. Bidirectional Search Algorithm:
Bidirectional search algorithm runs two simultaneous searches, one form
initial state called as forward-search and other from goal node called as
backward-search, to find the goal node. Bidirectional search replaces one
single search graph with two small sub graphs in which one starts the
search from an initial vertex and other starts from goal vertex. The search
stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
•Bidirectional search is fast.
•Bidirectional search requires less memory
Disadvantages:
•Implementation of the bidirectional search tree is difficult.
In bidirectional search, one should know the goal state in advance.
Example:
In the below search tree, bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and starts from goal node 16 in the backward direction.
The algorithm terminates at node 9 where two searches meet.
•Completeness: Bidirectional search is complete if
BFS is used in both searches.
•Time Complexity: The time complexity of bidirectional
search using BFS is O(b^(d/2)).
•Space Complexity: The space complexity of
bidirectional search is O(b^(d/2)).
•Optimality: Bidirectional search is optimal when both
halves use BFS and step costs are uniform.
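The meet-in-the-middle idea can be sketched with two interleaved BFS frontiers. This is a minimal sketch assuming an undirected graph given as an adjacency dict; the example chain below is hypothetical.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Bidirectional BFS on an undirected graph: expand a forward frontier
    from the start and a backward frontier from the goal; stop where they meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}                 # node -> path from start
    bwd = {goal: [goal]}                   # node -> path from goal
    q_fwd, q_bwd = deque([start]), deque([goal])
    while q_fwd and q_bwd:
        node = q_fwd.popleft()             # one step of the forward search
        for n in graph.get(node, []):
            if n in bwd:                   # frontiers intersect: join the paths
                return fwd[node] + bwd[n][::-1]
            if n not in fwd:
                fwd[n] = fwd[node] + [n]
                q_fwd.append(n)
        node = q_bwd.popleft()             # one step of the backward search
        for n in graph.get(node, []):
            if n in fwd:
                return fwd[n] + bwd[node][::-1]
            if n not in bwd:
                bwd[n] = bwd[node] + [n]
                q_bwd.append(n)
    return None

# Hypothetical undirected chain S - A - B - G
graph = {'S': ['A'], 'A': ['S', 'B'], 'B': ['A', 'G'], 'G': ['B']}
print(bidirectional_search(graph, 'S', 'G'))  # ['S', 'A', 'B', 'G']
```

Each frontier only has to reach depth d/2, which is where the O(b^(d/2)) bound comes from; the requirement to know the goal state in advance is visible in the backward frontier's initialization.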
Informed Search Algorithms
So far we have discussed uninformed search algorithms, which look through the search
space for all possible solutions without any additional knowledge about the space. An
informed search algorithm, by contrast, has access to knowledge such as how far we are
from the goal, the path cost, and how to reach the goal node. This knowledge helps
agents explore less of the search space and find the goal node more efficiently.
Informed search is especially useful for large search spaces. Because it uses the idea
of a heuristic, it is also called heuristic search.
Heuristic function: A heuristic is a function used in informed search to identify the
most promising path. It takes the current state of the agent as input and produces an
estimate of how close the agent is to the goal. The heuristic might not always give the
best solution, but it is expected to find a good solution in reasonable time.
The heuristic function estimates how close a state is to the goal. It is written h(n),
and it estimates the cost of an optimal path between a pair of states. The value of the
heuristic function is always non-negative.
Admissibility of the heuristic function is given as:
h(n) <= h*(n)
Here h(n) is the heuristic estimate and h*(n) is the true cost of an optimal path to
the goal. An admissible heuristic never overestimates: the heuristic cost must be less
than or equal to the actual optimal cost.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic
search algorithm. It expands nodes based on their heuristic
value h(n), maintaining two lists, OPEN and CLOSED. The
CLOSED list holds nodes that have already been expanded, and
the OPEN list holds nodes that have not yet been expanded.
On each iteration, the node n with the lowest heuristic
value is expanded, all of its successors are generated, and
n is placed on the CLOSED list. The algorithm continues
until a goal state is found.
1.) Best-first Search Algorithm (Greedy Search):
The greedy best-first search algorithm always selects the
path that appears best at the moment. It combines aspects of
depth-first and breadth-first search, using a heuristic
function to guide the search, and so lets us take advantage
of both algorithms. At each step, best-first search chooses
the most promising node: it expands the node that appears
closest to the goal, where closeness is estimated by the
heuristic function, i.e.
f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
Greedy best-first search is implemented with a priority
queue.
Best-first search algorithm:
Step 1: Place the starting node in the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove from the OPEN list the node n with the lowest
value of h(n), and place it in the CLOSED list.
Step 4: Expand node n and generate its successors.
Step 5: Check each successor of node n to see whether it is a goal node. If any
successor is a goal node, return success and terminate the search; otherwise
proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function
f(n) and checks whether the node is already in the OPEN or CLOSED list. If it is
in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
Advantages:
•Best-first search can switch between BFS-like and DFS-like
behavior, gaining the advantages of both algorithms.
•It is more efficient than BFS or DFS alone.
Disadvantages:
•In the worst case, it can behave like an unguided
depth-first search.
•Like DFS, it can get stuck in a loop.
•The algorithm is not optimal.
Example:
Consider the below search problem, and we will traverse it using greedy best-first
search. At each iteration, each node is expanded using evaluation function
f(n)=h(n) , which is given in the below table.
In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.
Expand the nodes of S and put in the CLOSED list
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S----> B----->F----> G
Time Complexity: The worst-case time complexity of greedy best-first search is
O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is
also O(b^m), where m is the maximum depth of the search space.
Complete: Greedy best-first search is incomplete in general, even if the given
state space is finite.
Optimal: Greedy best-first search is not optimal.
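The OPEN/CLOSED-list procedure above can be sketched with a heap keyed on h(n). The graph and heuristic values below are hypothetical illustrations, not the figure's table.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest h(n)."""
    open_list = [(h[start], start, [start])]   # priority queue keyed on f(n) = h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        closed.add(node)                       # node expanded: move to CLOSED
        for neighbor in graph.get(node, []):
            if neighbor not in closed:
                heapq.heappush(open_list,
                               (h[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical graph and heuristic values
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['F'], 'F': ['G']}
h = {'S': 7, 'A': 6, 'B': 2, 'C': 5, 'F': 1, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'B', 'F', 'G']
```

Note that only h(n) is consulted, never the accumulated path cost g(n); this is exactly why the algorithm is fast but neither complete nor optimal in general.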
2.) A* Search Algorithm:
A* search is the best-known form of best-first search. It
uses the heuristic function h(n) together with g(n), the cost
to reach node n from the start state. It combines features of
UCS and greedy best-first search, which lets it solve
problems efficiently. A* finds the shortest path through the
search space using the heuristic function, expanding a
smaller search tree and returning an optimal result sooner.
A* is similar to UCS except that it orders nodes by
g(n) + h(n) instead of g(n).
In A*, we use the search heuristic as well as the cost to
reach the node, combining the two costs as follows; the sum
is called the fitness number:
f(n) = g(n) + h(n)
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
Step 3: Select the node from the OPEN list with the smallest value of the
evaluation function (g + h). If node n is the goal node, return success and stop;
otherwise continue.
Step 4: Expand node n, generate all of its successors, and put n in the CLOSED
list. For each successor n', check whether n' is already in the OPEN or CLOSED
list; if not, compute its evaluation function and place it in the OPEN list.
Step 5: If node n' is already in OPEN or CLOSED, attach it to the back pointer
that reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
•A* often outperforms other search algorithms.
•A* is optimal and complete.
•It can solve very complex problems.
Disadvantages:
•It does not always produce the shortest path, since it
relies on heuristics and approximation.
•A* has some complexity issues.
•The main drawback of A* is its memory requirement: it keeps
all generated nodes in memory, so it is impractical for many
large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n)
of each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach
any node from start state.
Here we will use OPEN and CLOSED list.
Solution:
Initialization: {(S, 5)}
Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}
Iteration 4: will give the final result, as S--->A--->C--->G it provides the
optimal path with cost 6.
Points to remember:
•The A* algorithm returns the first path found and does not search all remaining
paths.
•The efficiency of A* depends on the quality of the heuristic.
•A* expands all nodes that satisfy f(n) <= C*, where C* is the cost of the
optimal solution.
Complete: A* is complete as long as:
•the branching factor is finite, and
•every action has a fixed, positive cost.
Optimal: A* is optimal if the following two conditions hold:
Admissible: the first condition for optimality is that h(n) be an admissible
heuristic for A* tree search. An admissible heuristic is optimistic in nature.
Consistency: the second condition, consistency, is required only for A* graph
search. If the heuristic function is admissible, A* tree search will always find
the least-cost path.
Time Complexity: The time complexity of A* depends on the heuristic function; in
the worst case, the number of nodes expanded is exponential in the depth of the
solution d, so the time complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of A* is O(b^d), since it keeps all
generated nodes in memory.
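The f(n) = g(n) + h(n) ordering can be sketched as follows. The weighted graph and heuristic below are hypothetical values chosen to mirror the S→A→C→G, cost-6 result of the worked example above.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest known cost per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path                        # first goal popped is optimal
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g          # found a cheaper route: update
                heapq.heappush(open_list,
                               (new_g + h[neighbor], new_g,
                                neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph and admissible heuristic
graph = {'S': [('A', 1), ('G', 10)], 'A': [('C', 1)], 'C': [('G', 4)]}
h = {'S': 5, 'A': 3, 'C': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (6, ['S', 'A', 'C', 'G'])
```

The direct edge S→G (cost 10) sits in the queue with f = 10, but the detour through A and C reaches the goal with f = 6 first, so A* returns the cheaper path, exactly the behavior the admissibility condition guarantees.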
Hill Climbing Algorithm in Artificial Intelligence:
•The hill climbing algorithm is a local search algorithm that continuously
moves in the direction of increasing elevation/value to find the peak of the
mountain, i.e. the best solution to the problem. It terminates when it reaches a
peak where no neighbor has a higher value.
•Hill climbing is a technique for optimizing mathematical problems. One widely
discussed example is the travelling salesman problem, in which we need to
minimize the distance travelled by the salesman.
•It is also called greedy local search, as it only looks at its immediate
neighbor states and not beyond them.
•A node of the hill climbing algorithm has two components: state and value.
•Hill climbing is mostly used when a good heuristic is available.
•In this algorithm, we do not need to maintain a search tree or graph, as it
keeps only a single current state.
Features of Hill Climbing:
The following are the main features of the hill climbing algorithm:
•Generate and Test variant: Hill climbing is a variant of the
generate-and-test method. The generate-and-test method
produces feedback that helps decide which direction to move
in the search space.
•Greedy approach: The hill-climbing search moves in the
direction that optimizes the cost.
•No backtracking: It does not backtrack through the search
space, as it does not remember previous states.
State-space Diagram for Hill Climbing:
The state-space landscape is a graphical representation of
the hill-climbing algorithm, plotting the states of the
algorithm against the objective function/cost.
On the Y-axis we take the function, which can be an objective
function or a cost function, and on the X-axis the state
space. If the function on the Y-axis is a cost, the goal of
the search is to find the global minimum; if it is an
objective function, the goal is to find the global maximum.
Different regions in the state-space landscape:
Local Maximum: A state that is better than its neighboring
states, although a higher state exists elsewhere in the
landscape.
Global Maximum: The best possible state in the state-space
landscape; it has the highest value of the objective
function.
Current state: The state in the landscape diagram where the
agent is currently present.
Flat local maximum: A flat region of the landscape where all
the neighbors of the current state have the same value.
Shoulder: A plateau region that has an uphill edge.
Types of Hill Climbing Algorithm:
•Simple hill Climbing:
•Steepest-Ascent hill-climbing:
•Stochastic hill Climbing:
1. Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill
climbing algorithm. It only evaluates the neighbor node
state at a time and selects the first one which optimizes
current cost and set it as a current state. It only checks it's
one successor state, and if it finds better than the current
state, then move else be in the same state. This algorithm
has the following features:
•Less time consuming
•Less optimal solution and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
Step 1: Evaluate the initial state, if it is goal state then
return success and Stop.
Step 2: Loop Until a solution is found or there is no new
operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
• If it is goal state, then return success and quit.
• Else if it is better than the current state then assign new
state as a current state.
• Else if not better than the current state, then return to
step2.
Step 5: Exit.
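The loop above can be sketched with a toy objective; the value function and neighborhood below are illustrative assumptions, not part of the original text.

```python
def simple_hill_climbing(neighbors, value, start):
    """Simple hill climbing: move to the first neighbor that improves the
    value; stop when no neighbor is better (a peak)."""
    current = start
    while True:
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):   # first better neighbor wins
                current = candidate
                improved = True
                break
        if not improved:                            # no better neighbor: peak
            return current

# Toy objective: maximize f(x) = -(x - 3)^2 over the integers, stepping by +/- 1
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(neighbors, value, 0))  # 3
```

Because the first improving neighbor is taken immediately (rather than the best one), this matches simple hill climbing; examining all neighbors and taking the best would give the steepest-ascent variant described next.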
2. Steepest-Ascent hill climbing:
The steepest-ascent algorithm is a variation of simple hill climbing. It examines
all the neighboring nodes of the current state and selects the neighbor closest
to the goal state. This algorithm takes more time because it evaluates multiple
neighbors.
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state. If it is a goal state, return success and
stop; otherwise make it the current state.
Step 2: Loop until a solution is found or the current state does not change.
Let SUCC be the best state found so far among the successors of the current
state.
For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is a goal state, return it and quit; otherwise compare it to SUCC.
• If it is better than SUCC, set SUCC to the new state.
• If SUCC is better than the current state, set the current state to SUCC.
Step 3: Exit.
3. Stochastic hill climbing:
Stochastic hill climbing does not examine all of its neighbors before moving.
Instead, it selects one neighbor node at random and decides whether to move to it
or to examine another.
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape that is better
than each of its neighboring states, but another state exists that is higher
still.
Solution: A backtracking technique can escape a local maximum in the state-space
landscape. Keep a list of promising paths so that the algorithm can backtrack and
explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the
neighbors of the current state have the same value, so the algorithm cannot find
any best direction in which to move. A hill-climbing search can get lost on a
plateau.
Solution: Take bigger (or very small) steps while searching, or randomly select a
state far from the current state, so that the algorithm may land in a non-plateau
region.
3. Ridges: A ridge is a special form of local maximum: an area higher than its
surroundings that itself has a slope, and that cannot be climbed in a single
move.
Solution: Using bidirectional search, or moving in several different directions,
can mitigate this problem.
Simulated Annealing:
A hill-climbing algorithm that never moves toward a lower value is guaranteed to
be incomplete, because it can get stuck on a local maximum. If, instead, the
algorithm performs a pure random walk over successors, it may be complete but not
efficient. Simulated annealing is an algorithm that offers both efficiency and
completeness.
In metallurgy, annealing is the process of heating a metal or glass to a high
temperature and then cooling it gradually, allowing the material to settle into a
low-energy crystalline state. Simulated annealing works the same way: the
algorithm picks a random move instead of the best move. If the random move
improves the state, it is accepted; otherwise, the algorithm accepts the move
with some probability less than 1, a probability that decreases as the move gets
worse and as the temperature cools.
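The accept-with-decreasing-probability rule can be sketched as follows. The cooling schedule, objective, and neighborhood here are illustrative assumptions; the standard acceptance probability exp(Δ/T) is used for a worsening move of size Δ at temperature T.

```python
import math
import random

def simulated_annealing(value, neighbor, start, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing: pick a random move; accept improving moves always,
    and worsening moves with probability exp(delta / T), which shrinks as the
    temperature T cools."""
    current = start
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate            # accept improving or (sometimes) worse move
        t = max(t * cooling, 1e-9)         # gradual cooling schedule
    return current

# Toy objective with a small bump at x=0 and the true maximum at x=10
value = lambda x: -abs(x - 10) + (5 if x == 0 else 0)
neighbor = lambda x: x + random.choice([-1, 1])
random.seed(0)
print(simulated_annealing(value, neighbor, 0))
```

Early on, when T is high, downhill moves are accepted often, letting the search escape the bump at x = 0; as T falls, the algorithm behaves more and more like plain hill climbing and settles near a peak.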
Example:
Question. Find the path from S to G using A* search.
Solution. Starting from S, the algorithm computes g(x) + h(x)
for all nodes in the fringe at each step, choosing the node
with the lowest sum. The full working is shown in the table
below.
Note that in the fourth set of iterations we get two paths
with equal summed cost f(x), so we expand both in the next
set. The path whose cost stays lower on further expansion is
the chosen path.
D. A* Graph Search
•A* tree search works well, except that it takes time re-
exploring the branches it has already explored. In other
words, if the same node has expanded twice in different
branches of the search tree, A* search might explore both of
those branches, thus wasting time
•A* Graph Search, or simply Graph Search, removes this
limitation by adding this rule: do not expand the same node
more than once.
•Heuristic. Graph search is optimal only when, for any two successive
nodes A and B, the drop in heuristic value, h(A) - h(B), is less than or
equal to the actual cost of the edge between them, g(A -> B). This
property of a graph-search heuristic is called consistency.
Consistency: h(A) - h(B) <= g(A -> B)
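The condition can be checked edge by edge. The helper below is a hypothetical sketch; the function name, the heuristic-as-dict encoding, and the edge-list format are assumptions:

```python
def is_consistent(h, edges):
    """Return True if h(A) - h(B) <= g(A -> B) for every directed
    edge (A, B, cost), i.e. the heuristic never drops faster than
    the actual edge cost."""
    return all(h[a] - h[b] <= cost for a, b, cost in edges)
```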
Question. Use graph search to find path from S to G in the following graph.
Solution: We solve this question in much the same way as the previous one,
but here we keep track of the nodes already explored so that we do not
re-explore them.

Path: S -> D -> B -> C -> E -> G


Cost: 7
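The procedure can be sketched as follows. Since the original figure is not reproduced here, the graph and heuristic values in the example are hypothetical stand-ins, chosen only so that the result matches the stated path and cost:

```python
import heapq

def a_star_graph_search(start, goal, neighbors, h):
    """A* graph search: repeatedly expand the frontier node with the
    lowest f = g + h, and never expand the same node twice."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()                               # nodes already expanded
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in closed:
            continue                             # the graph-search rule
        closed.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                heapq.heappush(frontier,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph (adjacency lists with edge costs) and heuristic values:
graph = {"S": [("A", 3), ("D", 1)], "A": [("G", 10)], "D": [("B", 1)],
         "B": [("C", 2)], "C": [("E", 1)], "E": [("G", 2)]}
h_vals = {"S": 6, "A": 9, "D": 5, "B": 4, "C": 3, "E": 2, "G": 0}
path, cost = a_star_graph_search("S", "G", lambda n: graph.get(n, []),
                                 lambda n: h_vals[n])
```

On this stand-in graph the search returns the path S -> D -> B -> C -> E -> G with cost 7, matching the stated answer.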
Adversarial Search in Artificial Intelligence:
Adversarial search is a game-playing technique where the agents operate in a
competitive environment. The agents (multi-agent) are given conflicting goals:
they compete with one another and try to defeat each other in order to win the
game. Such conflicting goals give rise to the adversarial search. Here,
game-playing means those games where human intelligence and logic decide the
outcome, excluding factors such as luck. Tic-tac-toe, chess, checkers, etc.,
are games of this type, where no luck factor works, only the mind.
Mathematically, this search is based on the concept of 'Game Theory.'
According to game theory, a game is played between two players; to complete
the game, one has to win and the other automatically loses.
Techniques required to get the best optimal solution
There is always a need to choose algorithms that provide the best optimal
solution in a limited time. The following techniques fulfill this requirement:
Pruning: A technique that ignores the unwanted portions of a search tree
which make no difference to the final result.
Heuristic Evaluation Function: It approximates the cost value at each level
of the search tree, before reaching the goal node.
Elements of Game Playing search
To play a game, we use a game tree to enumerate all the possible choices and
to pick the best one. The elements of game playing are the following:
S0: It is the initial state from where a game begins.
PLAYER (s): It defines which player has the current turn to make a move in
state s.
ACTIONS (s): It defines the set of legal moves to be used in a state.
RESULT (s, a): It is a transition model which defines the result of a move.
TERMINAL-TEST (s): It returns true when the game has ended in state s, and
false otherwise.
UTILITY (s, p): It defines the final numeric value for player p when the game
ends in state s. This function is also known as the Objective function or
Payoff function. The payoff is:
(-1): If the PLAYER loses.
(+1): If the PLAYER wins.
(0): If there is a draw between the PLAYERS.
For example, in chess and tic-tac-toe there are three possible outcomes:
win, lose, or draw, with values +1, -1 or 0.
Let’s understand the working of the elements with the help of a game tree designed
for tic-tac-toe. Here, the node represents the game state and edges represent the
moves taken by the players.
A game-tree for tic-tac-toe

•INITIAL STATE (S0): The top node in the game-tree represents the initial
state and shows all the possible choices from which one is picked.
•PLAYER (s): There are two players, MAX and MIN. MAX begins the
game by picking the best move and placing an X in an empty square.
•ACTIONS (s): Both players make their moves in the empty boxes,
turn by turn.
•RESULT (s, a): The moves made by MIN and MAX decide the
outcome of the game.
•TERMINAL-TEST (s): When all the empty boxes are filled (or a player gets
three in a row), the game reaches a terminating state.
•UTILITY: At the end, we get to know who wins, MAX or MIN, and the
payoff is given accordingly.
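For tic-tac-toe specifically, TERMINAL-TEST and UTILITY can be written down directly. The sketch below is an illustration only; encoding the board as a 9-character string and scoring from X's point of view are assumptions:

```python
def winner(board):
    """Return 'X', 'O', or None for a 3x3 board given as a 9-char string."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def terminal_test(board):
    """TERMINAL-TEST(s): true when someone has won or the board is full."""
    return winner(board) is not None or " " not in board

def utility(board, player="X"):
    """UTILITY(s, p): +1 if p wins, -1 if p loses, 0 for a draw."""
    w = winner(board)
    if w is None:
        return 0
    return 1 if w == player else -1
```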
Types of algorithms in Adversarial search
In a normal search, we follow a sequence of actions to reach the goal or to
finish the game optimally. In an adversarial search, however, the result also
depends on the moves of the other players. The solution found is still an
optimal one, because each player tries to win the game by the shortest path
and under limited time.
The following are the main types of adversarial search:
Min-max Algorithm
Alpha-beta Pruning.
Minimax Strategy
In artificial intelligence, minimax is a decision-making strategy from game
theory, used to minimize the chance of losing a game and to maximize the
chance of winning. This strategy is also known as 'Minmax,' 'MM,' or 'Saddle
point.' Basically, it is a two-player game strategy where if one player wins,
the other loses. It models games that we play in our day-to-day life: if two
people play chess, the result will be in favor of one player and against the
other, and the player who makes the best effort and shows the most cleverness
wins.
We can easily understand this strategy via game tree- where the nodes
represent the states of the game and edges represent the moves made by the
players in the game. Players will be two namely:
MIN: Decrease the chances of MAX to win the game.
MAX: Increases his chances of winning the game.
They both play the game alternatively, i.e., turn by turn and following the
above strategy, i.e., if one wins, the other will definitely lose it. Both players
look at one another as competitors and will try to defeat one-another, giving
their best.
In the minimax strategy, the result of the game, i.e., the utility value, is
generated by a heuristic function and propagated from the leaf nodes up to the
root node. The algorithm follows the backtracking technique and backtracks to
find the best choice: MAX chooses the path that maximizes its utility value,
while MIN chooses the path that minimizes MAX's utility value.
MINIMAX Algorithm
The MINIMAX algorithm is a backtracking algorithm: it backtracks to pick the
best move out of several choices. The MINIMAX strategy follows the DFS (depth-
first search) concept. Here, we have two players MIN and MAX, and the game is
played alternately between them: after MAX makes a move, the next turn is
MIN's. A move once made by MAX is fixed and cannot be changed, just as in DFS
we follow one path and cannot change it in the middle. That is why the MINIMAX
algorithm follows DFS rather than BFS.
1. Keep generating the game tree / search tree until a depth limit d.
2. Compute the values of the leaf nodes using a heuristic function.
3. Propagate the values from the leaf nodes up to the current position,
following the minimax strategy.
4. Make the best move from the choices.
For example, in the above figure, there are two players, MAX and MIN. MAX
starts the game by choosing one path and propagating the values along all the
nodes of that path. MAX then backtracks to the initial node and chooses the
path where its utility value is maximum. After this, it is MIN's turn. MIN
also propagates through a path and backtracks, but MIN chooses the path that
minimizes MAX's winning chances, i.e., the utility value.
So, at a minimizing level, the node accepts the minimum value from its
successor nodes; at a maximizing level, the node accepts the maximum value
from its successors.
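The backed-up value computation described above can be sketched as a recursive depth-first procedure. This is a generic illustration; the state representation and the helper functions passed in are assumptions:

```python
def minimax(state, depth, maximizing, successors, terminal, utility):
    """Return the minimax value of state: MAX levels take the maximum of
    the successors' backed-up values, MIN levels take the minimum."""
    if depth == 0 or terminal(state):
        return utility(state)            # heuristic/utility value at the cut-off
    values = [minimax(s, depth - 1, not maximizing, successors, terminal, utility)
              for s in successors(state)]
    return max(values) if maximizing else min(values)
```

On a toy tree such as [[3, 5], [2, 9]] (leaves are utility values, MAX to move at the root), the backed-up value is max(min(3, 5), min(2, 9)) = 3.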
