
Module 2 – Part II

Informed Search Algorithms


Informed search algorithms
Informed search Algorithms: Best First Search, A*, AO*, Hill
Climbing, Generate & Test, Alpha-Beta pruning, Min-max search.
Introduction
In uninformed search algorithms, agent looks the entire search space for all possible solutions
of the problem without having any additional knowledge about search space. Because of this
it takes time to get the solution.
But informed search algorithm contains an knowledge such as how far we are from the goal,
path cost, how to reach to goal node, etc. This knowledge help agents to explore less to the
search space and find more efficiently the goal node.
Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.
Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.
The heuristic function estimates how close a state is to the goal.
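As a small illustration (not from the original slides), a minimal sketch of a heuristic for a grid world, assuming states and the goal are (row, column) pairs: the Manhattan distance estimates how many horizontal/vertical moves remain.

```python
# Hypothetical grid-world heuristic: states and goal are (row, col) tuples.
def manhattan_h(state, goal):
    """Estimate of how close `state` is to `goal` (smaller = closer)."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

print(manhattan_h((0, 0), (3, 4)))  # 7: at least 7 moves remain
```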
The informed search algorithms covered here are:
✔ Best First Search,
✔ A*,
✔ AO*,
✔ Hill Climbing,
✔ Generate & Test,
✔ Alpha-Beta pruning,
✔ Min-max search.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search algorithm.
It expands nodes based on their heuristic value h(n).
It maintains two lists, OPEN and CLOSED:
the CLOSED list holds the nodes that have already been expanded, and
the OPEN list holds the nodes that have not yet been expanded.
Best-first Search Algorithm (Greedy Search)
The greedy best-first search algorithm always selects the path which appears best at that moment.
It combines the ideas of depth-first search and breadth-first search, using a heuristic function to guide the search; best-first search therefore lets us take advantage of both algorithms.
With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where closeness is estimated by the heuristic function h(n).
Example (the graph figure from the original slides is omitted): heuristic values h(n) for the nodes of a sample search graph.

node   h(n)      node   h(n)      node   h(n)
A      11        E      4         I, J   3
B      5         F      2         S      15
C, D   9         H      7         G      0
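A minimal Python sketch of greedy best-first search using the heuristic values above; since the slide's graph figure is not reproduced here, the adjacency list below is hypothetical. Nodes are expanded in order of smallest h(n), maintaining OPEN and CLOSED lists as described for pure heuristic search.

```python
import heapq

# Heuristic values from the example table above.
h = {'S': 15, 'A': 11, 'B': 5, 'C': 9, 'D': 9, 'E': 4,
     'F': 2, 'H': 7, 'I': 3, 'J': 3, 'G': 0}

# Hypothetical adjacency list; the actual graph is in the omitted figure.
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'],
         'E': ['H'], 'F': ['I', 'J', 'G']}

def greedy_best_first(start, goal):
    open_list = [(h[start], start)]          # OPEN: priority queue ordered by h(n)
    closed, parent = set(), {start: None}    # CLOSED: nodes already expanded
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:                      # goal reached: reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        for nbr in graph.get(node, []):
            if nbr not in closed and nbr not in parent:
                parent[nbr] = node
                heapq.heappush(open_list, (h[nbr], nbr))
    return None

print(greedy_best_first('S', 'G'))  # ['S', 'B', 'F', 'G'] with this hypothetical graph
```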
A* Search Algorithm
• A* Search Algorithm is a simple and efficient search algorithm that can be used to find the optimal path between two nodes in a graph.
• It is used for shortest-path finding.
• It is an extension of Dijkstra's shortest path algorithm (Dijkstra's Algorithm).
• The node it picks at any point in time is determined by the sum of two values.
• At each step, it picks the node with the smallest value of 'f' (the sum of 'g' and 'h') and processes that node/cell.
• 'g' is the cost it takes to get to a certain square on the grid from the starting point, following the path we generated to get there.
• 'h' is the heuristic, which is the estimate of the cost it takes to get from that square on the grid to the finish line.

• Steps:
1. Add the start node to the OPEN list.
2. From the OPEN list, pick the node with the least cost f and make it the current node.
3. Move the current node to the CLOSED list. For each of the (up to 8) nodes adjacent to the current node on the grid: if it is not reachable or is already in the CLOSED list, ignore it; otherwise compute its g, h and f values and add it to (or update it in) the OPEN list.
4. Stop when you reach the destination, or when the OPEN list is empty, in which case it is not possible to reach the destination through any of the possible points.
Worked example (graph figure omitted from these notes): in the slides' example graph, the cost of the edge between D and G is 2.
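A minimal A* sketch in Python, assuming a hypothetical weighted graph (the slide's example graph, including its D–G edge of cost 2, is not reproduced here): at each step it pops the node with the smallest f = g + h.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbour, edge_cost), ...]}, h: heuristic estimates."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g                        # path found and its true cost
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float('inf')):
                best_g[nbr] = new_g
                heapq.heappush(open_list, (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None, float('inf')                     # OPEN list empty: no path exists

# Hypothetical graph with edge costs, plus heuristic estimates for each node.
graph = {'S': [('A', 1), ('D', 4)], 'A': [('B', 2)],
         'D': [('G', 2)], 'B': [('G', 5)]}
h = {'S': 5, 'A': 4, 'B': 3, 'D': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'D', 'G'], 6)
```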
AO* Search Algorithm
• The AO* algorithm is a best-first search algorithm.
• AO* uses the concept of AND-OR graphs to decompose a given complex problem into a smaller set of subproblems, which are then solved.
• AND-OR graphs are specialized graphs used for problems that can be broken down into subproblems: the AND side of the graph represents a set of tasks that must all be done to achieve the main goal, whereas the OR side of the graph represents the different alternative ways of performing a task to achieve the same main goal.
Working of the AO* algorithm:
The AO* algorithm works on the formula given below:
f(n) = g(n) + h(n)
where,
• g(n): the actual cost of traversal from the initial state to the current state.
• h(n): the estimated cost of traversal from the current state to the goal state.
• f(n): the estimated total cost of traversal from the initial state to the goal state through n.
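As a small, hedged illustration of how this formula is applied on an AND-OR graph (the graph and costs below are hypothetical, not from the slides): at each OR choice the cheapest arc is taken, and an AND arc costs the sum of solving all of its required subproblems.

```python
# Hypothetical AND-OR graph: each arc is a group of children that must ALL be
# solved (an AND group); choosing between arcs is the OR decision.
h = {'B': 5, 'C': 3, 'D': 4}                  # heuristic estimates at the leaves
arcs = {'A': [(['B'], 1),                     # solve A via B alone, or ...
              (['C', 'D'], 1)]}               # ... by solving both C and D

def f_value(node):
    """f(n) = g(n) + h(n), taking the cheapest arc at each OR choice."""
    if node not in arcs:
        return h[node]                        # leaf: only the estimate h(n)
    best = float('inf')
    for children, edge_cost in arcs[node]:
        # an AND arc costs the sum over every required child (edge cost + child cost)
        best = min(best, sum(edge_cost + f_value(c) for c in children))
    return best

print(f_value('A'))  # min(1 + 5, (1 + 3) + (1 + 4)) = 6
```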
Hill Climbing Algorithm
• Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best solution
to the problem.
• It terminates when it reaches a peak value where no neighbor
has a higher value.
• It is also called greedy local search, as it only looks at its immediate neighbour states and not beyond them.
• A node of the hill climbing algorithm has two components: state and value.
• In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.
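A minimal hill-climbing sketch over a toy 1-D landscape (the landscape values below are hypothetical, for illustration only): it keeps only the current state, always moves to the best immediate neighbour, and stops at a peak, which may be only a local maximum.

```python
def hill_climbing(start, value, neighbours):
    """Greedy local search: move uphill while some neighbour improves the value."""
    current = start
    while True:
        candidates = neighbours(current)
        best = max(candidates, key=value) if candidates else current
        if value(best) <= value(current):   # no uphill move left: we are at a peak
            return current                  # possibly only a local maximum
        current = best                      # keeps a single current state, no backtracking

# Toy 1-D landscape over integer states 0..10 (hypothetical values):
landscape = [1, 3, 5, 4, 2, 6, 8, 9, 7, 5, 3]
value = lambda x: landscape[x]
neighbours = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]

print(hill_climbing(0, value, neighbours))  # 2: stuck on the local maximum (value 5)
print(hill_climbing(4, value, neighbours))  # 7: reaches the global maximum (value 9)
```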
Features of Hill Climbing:
• Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps decide which direction to move in the search space.
• Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack through the search space, as it does not remember previous states.
State-space Diagram for Hill Climbing:

• The state-space landscape is a graphical representation of the hill-climbing algorithm, showing the relationship between the various states of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis.
• If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (and to avoid getting stuck in a local minimum).
• If the function on the Y-axis is the objective function, then the goal of the search is to find the global maximum (and to avoid getting stuck in a local maximum).
Different regions:
• Local Maximum: Local maximum is a
state which is better than its neighbor
states, but there is also another state which
is higher than it.
• Global Maximum: Global maximum is the
best possible state of state space
landscape. It has the highest value of
objective function.
• Current state: It is a state in a landscape
diagram where an agent is currently
present.
• Flat local maximum: It is a flat space in the landscape where all the neighbour states of the current state have the same value.
• Shoulder: It is a plateau region which has
an uphill edge.
Problems in Hill Climbing Algorithm:
• 1. Local Maximum: A local maximum is a
peak state in the landscape which is better
than each of its neighboring states, but there
is another state also present which is higher
than the local maximum.
• 2. Plateau: A plateau is a flat area of the search space in which all the neighbour states of the current state have the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area.
• 3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas but which itself has a slope, and it cannot be reached in a single move.
Generate and Test Search Algorithm
• Generate and Test Search is a heuristic search technique based on Depth First Search with backtracking, which is guaranteed to find a solution, if one exists, when carried out systematically.
• In this technique, candidate solutions are generated and tested to find the best solution.
• It ensures that the best solution is checked against all possible generated solutions.
• The evaluation is carried out by the heuristic function.
Algorithm steps:

1. Generate a possible solution, for example by generating a particular point in the problem space or generating a path from a start state.
2. Test whether this is an actual solution by comparing the chosen point, or the endpoint of the chosen path, against the set of acceptable goal states.
3. If a solution is found, quit. Otherwise go to step 1.
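A hedged sketch of generate-and-test (the puzzle and generator below are hypothetical): candidates are produced one at a time, each is tested against the goal condition, and the loop stops at the first acceptable solution.

```python
from itertools import permutations

def generate_and_test(generator, is_goal):
    """Step 1: generate a candidate; step 2: test it; step 3: repeat or quit."""
    for candidate in generator:
        if is_goal(candidate):
            return candidate                 # first acceptable solution found
    return None                              # generator exhausted, no solution exists

# Hypothetical puzzle: find distinct digits A, B, C with A + B = 2 * C.
gen = permutations(range(10), 3)             # systematic, complete, non-redundant generator
solution = generate_and_test(gen, lambda t: t[0] + t[1] == 2 * t[2])
print(solution)                              # (0, 2, 1): 0 + 2 == 2 * 1
```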
Properties of Good Generators:
• Good generators need to have the following properties:
❑ Complete: Good generators should be complete, i.e. they should generate all the possible solutions and cover all the possible states. In this way we can guarantee that our algorithm will converge to the correct solution at some point in time.
❑ Non-redundant: Good generators should not yield a duplicate solution at any point in time, since duplicates reduce the efficiency of the algorithm, increasing the search time and making the time complexity exponential.
❑ Informed: Good generators have knowledge about the search space, which they maintain in the form of an array of knowledge. This can be used to estimate how far the agent is from the goal, calculate the path cost and even find a way to reach the goal.
Mini-Max Algorithm
• The mini-max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
• The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
• In this algorithm, two players play the game: one is called MAX and the other is called MIN.
• The players compete such that the opponent gets the minimum benefit while they themselves get the maximum benefit.
• Both players of the game are opponents of each other: MAX selects the maximized value and MIN selects the minimized value.
• The minimax algorithm performs a depth-first search to explore the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then backs the values up to the root.
• The steps for the mini max algorithm can be stated as follows:
1. Create the entire game tree.
2. Evaluate the scores for the leaf nodes based on the evaluation
function.
3. Backtrack from the leaf to the root nodes:
• For Maximizer, choose the node with the maximum score.
• For Minimizer, choose the node with the minimum score.
4. At the root node, choose the node with the maximum value and
select the respective move.
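A minimal recursive minimax sketch over a hypothetical two-ply game tree (the tree shape and leaf scores are assumed for illustration): MAX levels back up the maximum of their children's values, MIN levels the minimum.

```python
def minimax(node, is_max, tree, scores):
    """Return the minimax value of `node`, exploring the full tree depth-first."""
    if node not in tree:                       # terminal node: use its evaluation score
        return scores[node]
    values = [minimax(child, not is_max, tree, scores) for child in tree[node]]
    return max(values) if is_max else min(values)

# Hypothetical two-ply game tree with leaf evaluation scores.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
scores = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(minimax('root', True, tree, scores))     # 3: MAX prefers L, since min(3,5)=3 > min(2,9)=2
```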
Alpha-Beta Pruning
• Alpha-beta pruning is a modified version of the minimax algorithm.
• It is an optimization technique for the minimax algorithm.
• With this technique, we can compute the correct minimax decision without checking every node of the game tree; this is called pruning.
• It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also known as the Alpha-Beta Algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only tree leaves but entire sub-trees.
• The condition required for an alpha-beta cut-off is: α >= β.
• The two-parameter can be defined as:
❑ Alpha: The best (highest-value) choice we have found so far at any
point along the path of Maximizer. The initial value of alpha is -∞.
❑ Beta: The best (lowest-value) choice we have found so far at any point
along the path of Minimizer. The initial value of beta is +∞.
• Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes which do not really affect the final decision and only make the algorithm slow. By pruning these nodes, it makes the algorithm fast.
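A hedged sketch of alpha-beta pruning on the same style of hypothetical game tree used in the minimax sketch above: α tracks the best value found so far for MAX, β the best for MIN, and a branch is cut off as soon as α >= β; the value returned matches plain minimax.

```python
def alphabeta(node, is_max, tree, scores, alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta cut-offs; prunes a branch once alpha >= beta."""
    if node not in tree:
        return scores[node]                    # terminal node: evaluation score
    if is_max:
        value = float('-inf')
        for child in tree[node]:
            value = max(value, alphabeta(child, False, tree, scores, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                  # MIN above will never allow this branch
                break                          # prune the remaining children
        return value
    value = float('inf')
    for child in tree[node]:
        value = min(value, alphabeta(child, True, tree, scores, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                      # MAX above already has something better
            break
    return value

# Same hypothetical tree as the minimax sketch. R2 is never evaluated: after
# seeing R1 = 2, MIN's beta drops to 2, which is <= alpha = 3 from the left branch.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
scores = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(alphabeta('root', True, tree, scores))   # 3, same move value as plain minimax
```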
Numerical Problems on Alpha-Beta Pruning
Detailed steps are available at https://www.javatpoint.com/ai-alpha-beta-pruning
Thank you
