
6. Detailed Notes_Module 2

Informed search algorithms, such as Best First Search and A*, utilize evaluation functions to efficiently navigate search spaces by leveraging problem-specific knowledge. Heuristic functions play a crucial role in these algorithms, providing shortcuts to solutions and optimizing performance in complex problems. Various informed search methods, including Hill Climbing and Generate and Test, offer distinct approaches to problem-solving, each with its own advantages and limitations.

Uploaded by

Lakshmi Kharvi


Informed search algorithm

➢ An uninformed search algorithm blindly traverses to the next node without
considering the cost associated with that step.
➢ An informed search, like Best First Search, on the other hand, would use an
evaluation function to decide which among the various available nodes is the most
promising (or ‘BEST’) before traversing to that node.
➢ Informed search algorithms make use of problem-specific knowledge beyond
problem definition.
➢ This knowledge typically includes how far the current state is from the goal,
the path cost incurred so far, how to reach the goal node, etc.
➢ This knowledge helps the agent explore a smaller search space and find the goal node faster.
➢ This problem-specific knowledge is provided in the form of evaluation functions or
heuristic functions.
➢ An informed search is more efficient than an uninformed search.
➢ Informed search algorithms are also referred to as “Heuristic Search.”
➢ Types of informed search algorithms: Best First Search, A*, AO*, Hill Climbing,
Generate & Test, Mini-Max search, Alpha-Beta pruning.
Heuristic functions
➢ Heuristic functions are the most common form in which additional knowledge of
the problem is imparted to the search algorithm.
➢ Heuristics may trade optimality, accuracy, precision, or completeness for speed.
➢ They are used when it is impractical to solve a problem with an exhaustive
step-by-step algorithm such as BFS or DFS.
➢ Heuristics are shortcuts to getting the solution.
➢ They are often combined with optimization algorithms to improve results.
➢ The solution is obtained more quickly and efficiently than with traditional methods.
➢ Heuristics solve complex problems and help make quick decisions.
➢ Common heuristic methods
o Trial and Error Method
o Guessing
o Process of elimination
o Historical data analysis
➢ Heuristics are performed in two categories.
o Direct heuristic search, E.g., BFS, DFS
o Weak heuristic search, E.g., Best First Search
➢ The evaluation function used by these algorithms is given by,
o f(n) = g(n) + h(n)
o g(n) = actual path cost from the initial node to the current node
o h(n) = estimated cost from the current node to the goal node
➢ The heuristic h(n) is often estimated by the Euclidean (straight-line) distance,
which is the shortest possible distance between two nodes.
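As an illustration, the evaluation function can be sketched in Python. The setup below is hypothetical: nodes are assumed to be (x, y) coordinates, so h(n) is the Euclidean distance.

```python
import math

def h(node, goal):
    # Estimated cost to the goal: Euclidean (straight-line) distance
    return math.dist(node, goal)

def f(node, goal, g):
    # f(n) = g(n) + h(n): path cost so far plus estimated remaining cost
    return g + h(node, goal)

# Example: node at (0, 0), goal at (3, 4), path cost so far g = 2
print(f((0, 0), (3, 4), 2))  # 2 + 5.0 = 7.0
```

Since the straight line is the shortest possible route, h(n) never overestimates the true remaining cost in such a grid.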

Best First Search

Best First Search is a heuristic search algorithm used to find a path between
two points in a graph or grid.
In Best First Search, the algorithm chooses the next node to explore based on a
heuristic evaluation function, h(n), which estimates how close the node is to the goal.

Steps in Best First Search algorithm:


1. The algorithm starts at the start node and creates an OPEN list.
2. The algorithm selects the node from the open list with the lowest h(n) value.
3. The algorithm examines each of the neighboring nodes of the selected node
and calculates their h(n) scores.
4. If a neighbor node is not in the open list, the algorithm adds it to the open list
and calculates its h(n) value. If it is already in the open list, the algorithm updates
its h(n) score if the newly calculated score is better.
5. The algorithm repeats steps 2-4 until the goal node is reached or the open set
is empty.
6. Once the goal node is reached, the algorithm can backtrack to recover the
path from the start node to the goal node.

➢ Best First Search uses the concept of a Priority queue and heuristic search.
➢ It combines the advantages of Breadth First Search and Depth First Search.
➢ It can be represented in the form of a Tree Search or a Graph Search.
➢ To search in the graph space, the method uses two lists for tracking the traversal.
➢ An ‘OPEN’ list that keeps track of the current ‘immediate’ nodes.
➢ A ‘CLOSED’ list that keeps track of the nodes already traversed.
➢ The Best First Search algorithm makes use of the following heuristic function.
o f(n) = h(n)
o Here, h(n) denotes the cost to reach the goal from the current state.
➢ It discards the function g(n), which represents the cost of the path from the initial
node to the current state.
➢ Best First Search is a greedy algorithm because it always chooses the state with
the lowest heuristic value h(n). Hence, it is also called the “Greedy Best First Search” algorithm.
Best First Search Algorithm
1. Create two empty lists: OPEN and CLOSED
2. Start from the initial node and put it in the OPEN list.
3. If the OPEN list is empty, then EXIT the loop, returning ‘False.’
4. Else, Select the first/top node in the OPEN list and move it to the CLOSED list.
5. If the node moved to the CLOSED list is a GOAL node, EXIT the loop returning
‘True.’
6. Else, expand the node moved to the CLOSED list to generate the successors.
7. Add all the successors to the OPEN list.
8. Reorder the nodes in the OPEN list in ascending order according to an evaluation
/ heuristic function f(n)
9. Repeat from step 3 until the goal is reached or the OPEN list is empty.
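The algorithm above can be sketched in Python using a priority queue for the OPEN list; the graph and heuristic values below are hypothetical, for illustration only.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy Best First Search: always expand the node with the lowest h(n).

    graph: dict mapping a node to its neighbours.
    h: dict of heuristic estimates h(n) to the goal.
    Returns the path found, or None if the OPEN list empties first.
    """
    open_list = [(h[start], start, [start])]      # OPEN: priority queue ordered by h(n)
    closed = set()                                # CLOSED: nodes already traversed
    while open_list:
        _, node, path = heapq.heappop(open_list)  # take the lowest-h(n) node
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            return path
        for nbr in graph.get(node, []):           # expand and generate successors
            if nbr not in closed:
                heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
    return None

# Hypothetical graph and heuristic values
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['A']}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))  # ['S', 'A', 'G']
```

Note that only h(n) is consulted; the path cost g(n) is ignored, which is exactly why the result need not be optimal.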

Best First Search Tabulation


OPEN LIST: Start with the initial node. Check whether the OPEN list is empty;
if NO, move the first element to the CLOSED list.
CLOSED LIST: Check whether the node just moved to the CLOSED list is the GOAL
node; if NO, expand it and generate its successors.
SUCCESSORS: Add all the successor nodes to the OPEN list and arrange them in
ascending order based on their heuristics.

A* algorithm
The A* algorithm is a pathfinding algorithm that finds the shortest path between two
points on a graph. It uses a heuristic function to estimate the distance between the
current node and the goal node. A* algorithm is widely used in video games, robotics,
and other applications that require finding the shortest path between two points in a
graph or grid.

Steps in A* algorithm:
1. Initialize starting node and goal node: Set the starting node as the current
node and the goal node as the target node.

2. Create a priority queue: Each node is assigned a priority based on its f(n) value,
which is the sum of the g(n) (the actual distance from the starting node to the
current node) and the h(n) (the estimated distance from the current node to the
goal node).
3. Add the starting node to the priority queue: The priority queue should be
sorted based on the f(n) values, with the lowest f(n) score node at the top of the
queue.
4. While the priority queue is not empty: Dequeue the node with the lowest f(n)
score from the priority queue. If this node is the goal node, then the shortest
path has been found. Otherwise, expand the node by generating its neighbors.
5. Calculate the g(n) for each neighbor: The g(n) score is the actual distance
from the starting node to the neighbor node. Calculate the g(n) score by adding
the distance from the current node to the neighbor node to the g-score of the
current node.
6. Calculate the f(n) for each neighbor: The f(n) is the sum of the g(n) score and
the h(n) score for the neighbor node.
7. Add each neighbor to the priority queue: If the neighbor node has not been
visited, add it to the priority queue and assign it the calculated f(n) score. If the
neighbor node has already been visited, update its f(n) score if the new path
has a lower cost.
8. Continue until the goal node is reached: Repeat steps 4-7 until the goal node
is dequeued from the priority queue.
9. Trace the path: Once the goal node has been reached, trace the path back to
the starting node by following the parent pointers from the goal node to the
starting node.
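The steps above can be sketched as follows; the weighted graph and heuristic values are hypothetical, for illustration only.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).

    graph: dict mapping a node to a list of (neighbour, edge_cost) pairs.
    h: dict of heuristic estimates to the goal.
    Returns (path, cost) or (None, inf) if no path exists.
    """
    open_q = [(h[start], 0, start)]           # priority queue of (f, g, node)
    parent = {start: None}                    # parent pointers for tracing the path
    best_g = {start: 0}
    while open_q:
        f, g, node = heapq.heappop(open_q)    # dequeue the lowest f(n)
        if node == goal:                      # goal dequeued: shortest path found
            path = []
            while node is not None:           # trace parent pointers back to start
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g
        for nbr, cost in graph.get(node, []):
            ng = g + cost                     # g(neighbour) = g(node) + edge cost
            if ng < best_g.get(nbr, float('inf')):   # keep only the cheaper path
                best_g[nbr] = ng
                parent[nbr] = node
                heapq.heappush(open_q, (ng + h[nbr], ng, nbr))
    return None, float('inf')

# Hypothetical weighted graph
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'A', 'B', 'G'], 4)
```

Unlike Greedy Best First Search, the priority includes g(n), so A* finds the cheapest path when h(n) never overestimates.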

AO* algorithm.

AO* is a heuristic search algorithm that, like A*, uses a heuristic function to
estimate the cost of reaching a solution and to guide the search.
However, unlike A*, which searches OR graphs in which each node offers a choice
between alternative moves, AO* is designed for AND-OR graphs, where a problem may
decompose into sets of subproblems that must all be solved (AND arcs) or that can
be solved through any one alternative (OR arcs).
AO* therefore searches for an optimal solution graph rather than a single path,
using the heuristic estimates to expand only the most promising subproblems.

Steps in AO* algorithm:


1. The algorithm starts at the initial node, taking its heuristic estimate h(n)
as its cost.
2. It selects the current best partial solution graph (the one with the lowest
estimated cost) and picks an unexpanded node on it.
3. It expands that node, generating its successors along AND and OR arcs and
assigning each new node its heuristic estimate.
4. It propagates the revised costs bottom-up: the cost of a node is the minimum
over its outgoing arcs, where an arc's cost is the arc cost plus the costs of
all its successor nodes. A node is marked SOLVED when every successor on its
best arc is solved.
5. The algorithm repeats steps 2-4 until the start node is marked SOLVED, at
which point the best arcs trace out an optimal solution graph.

One of the main advantages of the AO* algorithm is that it does not need to
explore the entire AND-OR graph: the heuristic lets it ignore unpromising
subproblems, and it stops as soon as the start node is marked SOLVED.
However, the repeated bottom-up revision of cost estimates adds bookkeeping, so
AO* can still be computationally expensive on large graphs with many interacting
subproblems.
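The bottom-up cost revision at the heart of AO* can be illustrated on a small AND-OR tree. The sketch below computes solution costs exhaustively; real AO* obtains the same values but uses h(n) to expand only promising arcs. The example graph is hypothetical.

```python
def solve_cost(node, graph):
    """Exhaustive cost revision on an AND-OR tree (each edge costs 1).

    graph maps a node to a list of arcs; a single-successor arc is an OR
    choice, a multi-successor arc is an AND decomposition into subproblems.
    """
    arcs = graph.get(node)
    if not arcs:                      # leaf node: solved at zero cost
        return 0
    # OR node: take the cheapest arc; an AND arc sums all of its subproblems
    return min(sum(1 + solve_cost(s, graph) for s in arc) for arc in arcs)

# Hypothetical AND-OR tree: A can be solved via B alone (OR),
# or by solving both C and D (AND)
graph = {'A': [['B'], ['C', 'D']], 'B': [['E', 'F']],
         'C': [], 'D': [], 'E': [], 'F': []}
print(solve_cost('A', graph))  # via B: 1 + 2 = 3; via C AND D: 1 + 1 = 2 -> 2
```

Here the AND decomposition (C and D together) is cheaper than the single-alternative route through B, so it forms the optimal solution graph.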

Generate and Test Algorithm

Generate and Test is a simple algorithm that works by generating all possible solutions
to a problem and testing each solution to see if it meets the requirements of the
problem.

The Generate and Test algorithm is used when the problem space is small enough to
generate all possible solutions and the testing of each solution is not too expensive.

Steps in Generate and Test algorithm:


1. Generate all possible solutions to the problem.
2. Test each solution to see if it satisfies the requirements of the problem.
3. If a solution satisfies the requirements, return it as the solution to the problem.
4. If none of the generated solutions satisfies the requirements, return "No
solution found."
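The four steps can be written almost verbatim in Python; the task-ordering problem below is a hypothetical example.

```python
from itertools import permutations

def generate_and_test(candidates, satisfies):
    """Generate and Test: try every candidate until one passes the test."""
    for candidate in candidates:          # 1. generate a possible solution
        if satisfies(candidate):          # 2. test it against the requirements
            return candidate              # 3. it satisfies them: return it
    return "No solution found."           # 4. the whole space was exhausted

# Hypothetical toy problem: order three tasks so that 'deploy'
# comes after both 'build' and 'test'
def valid(order):
    return (order.index('deploy') > order.index('build')
            and order.index('deploy') > order.index('test'))

print(generate_and_test(permutations(['deploy', 'build', 'test']), valid))
# ('build', 'test', 'deploy')
```

With only 3! = 6 candidates this is instant, which illustrates when the method is appropriate; the same loop over, say, 20 tasks would face 20! orderings.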

One of the main drawbacks of the Generate and Test algorithm is that it can be very
slow and inefficient for large problem spaces, as it generates and tests all possible
solutions. As a result, it is usually not the preferred algorithm for solving complex
problems with large problem spaces.

Hill Climbing Algorithm

Definition

Hill climbing is a local search algorithm that continuously moves in the direction
of increasing elevation (value) to find the peak of the mountain, i.e., the best
solution to the problem. It terminates when it reaches a peak where no neighbor
has a higher value.

➢ Hill climbing is a heuristic search used for mathematical optimization problems.


➢ Given a large set of inputs and a good heuristic function, it tries to find a
sufficiently good solution to the problem.
➢ It starts with an initial state and evaluates the solution.
➢ Later it considers the neighbors of the current state and finds the solution.
➢ It selects the neighbor that results in a solution better than the previous one.
➢ If no better solution is found among the neighbors, the algorithm terminates.
➢ The solution obtained may not be the optimal or best solution for the problem,
but it is a good solution found in a reasonable amount of time.
➢ Hill climbing solves the problems where we need to maximize or minimize a given
real function by choosing values from the given inputs.

State Space Diagram

1. State-space diagram: a graphical representation of the set of states a search
algorithm can reach, plotted against the value of the objective function.
2. X-axis: denotes the state space, i.e., states or configuration our algorithm may
reach.
3. Y-axis: denotes the values of objective function corresponding to a particular
state.
4. Objective Function: The function which needs to be maximized or minimized.
5. Local Maximum: a state that is better than all of its neighboring states, but
worse than some other state elsewhere in the landscape.
At a local maximum, all neighboring states have a value worse than the current
state. Since hill climbing uses a greedy approach, it will not move to a worse
state and terminates. The process ends even though a better solution may exist.
6. Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of the objective function.
7. Current state: A state in a landscape diagram where an agent is currently present.
8. Flat local maximum or plateau: It is a flat space in the landscape where all the
neighbor states of current states have the same value. Hence, it is not possible to
select the best direction.
9. Shoulder: It is a plateau that has an uphill edge.
10. Ridge: A ridge is a special form of the local maximum. It has an area that is higher
than its surrounding areas, but it itself has a slope and cannot be reached in a single
move.
Types of Hill-Climbing Algorithm:

1. Simple hill Climbing


• This is a simple form of hill climbing that evaluates the neighboring solutions.
• This algorithm selects one neighbor at a time.
• If the next neighbor state has a higher value than the current state, the
algorithm will move.
• The neighboring state will then be set as the current one.
• This algorithm consumes less time and requires little computational power.
• However, the solutions produced by the algorithm are sub-optimal.
• In some cases, an optimal solution may not be guaranteed.

2. Steepest-Ascent hill-climbing
• This algorithm is a variation of the simple hill-climbing algorithm.
• It examines all the neighboring nodes of the current state and selects the
neighbor node closest to the goal state.
• This algorithm consumes more time as it searches for multiple neighbors

3. Stochastic hill Climbing


• In this algorithm, the neighboring nodes are selected randomly, and the
selected node is evaluated.
• The algorithm does not go around searching the entire graph for a better node.
• It simply picks a neighbor at random and moves to it if it is a better solution.
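A minimal sketch of this variant in Python; the 1-D objective is hypothetical, with its peak at x = 3, and the random generator is seeded so the run is reproducible.

```python
import random

def stochastic_hill_climb(start, objective, neighbors, max_steps=200, seed=0):
    """Stochastic hill climbing: try one randomly chosen neighbour at a time."""
    rng = random.Random(seed)                       # seeded for reproducibility
    current = start
    for _ in range(max_steps):
        candidate = rng.choice(neighbors(current))  # pick a neighbour at random
        if objective(candidate) > objective(current):
            current = candidate                     # move only if it is better
    return current

objective = lambda x: -(x - 3) ** 2                 # hypothetical peak at x = 3
neighbors = lambda x: [x - 1, x + 1]
print(stochastic_hill_climb(0, objective, neighbors))  # 3
```

Because it only ever moves uphill, the state stays at the peak once it arrives; the randomness is in which neighbour gets tried, not in accepting worse moves.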

Steps in Simple Hill climbing algorithm.

1. Start with an initial solution: The algorithm starts with an initial solution, which
can be randomly generated or selected.
2. Evaluate the current solution: The algorithm evaluates the current solution to
determine its value.
3. Generate neighbors: The algorithm generates a set of neighboring solutions by
making small changes to the current solution.
4. Evaluate the neighbors: The algorithm evaluates the value of each of the
neighboring solutions.
5. Choose the best neighbor: The algorithm selects the neighbor that results in the
biggest improvement and sets it as the current solution.
6. Repeat steps 2 to 5: The algorithm repeats steps 2 to 5 until no further
improvement can be found or a stopping criterion is met.
7. Return the best solution: The algorithm returns the best solution found, which is
either a local maximum or minimum.
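The seven steps can be sketched as follows (since step 5 picks the best neighbour, this is the steepest-ascent flavour); the 1-D objective below is hypothetical, with its maximum at x = 3.

```python
def hill_climb(start, objective, neighbors, max_steps=1000):
    """Hill climbing: move to the best neighbour until none improves."""
    current = start                           # 1. start with an initial solution
    for _ in range(max_steps):
        # 3-4. generate the neighbours and evaluate each of them
        best = max(neighbors(current), key=objective)
        # 6. stop when no neighbour improves on the current solution
        if objective(best) <= objective(current):
            return current                    # 7. a local (maybe not global) maximum
        current = best                        # 5. climb to the best neighbour
    return current

# Hypothetical 1-D objective with a single peak at x = 3
objective = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]          # small changes to the current state
print(hill_climb(0, objective, neighbors))  # 3
```

On this single-peaked objective the local maximum found is also the global one; on a landscape with plateaus or multiple peaks the same loop can stop short, which is exactly the problem discussed next.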
Problems associated with Simple Hill climbing algorithm.

1. Flat local maximum or plateau: It is a flat space in the landscape where all the
neighbor states of current states have the same value. Hence, it is not possible to
select the best direction.

2. Local Maximum: Local maximum is a state which is better than its neighbor states,
but there is also another state which is higher than it. At a local maximum, all
neighboring states have a value that is worse than the current state. Since hill-
climbing uses a greedy approach, it will not move to the worse state and terminate
itself. The process will end even though a better solution may exist.

3. Ridge: A ridge is a special form of the local maximum. It has an area that is higher
than its surrounding areas, but itself has a slope and cannot be reached in a single
move

Applications of Hill-Climbing Algorithm:

The hill-climbing algorithm can be applied in the following areas:

Marketing
➢ A hill-climbing algorithm can help a marketing manager develop the best
marketing plans.
➢ It is widely used in solving the Traveling Salesman problem, where the goal is
to minimize the distance traveled by the salesman; hill climbing finds a good
local minimum of the tour length efficiently.
➢ It can thus help optimize the distance covered and improve the travel time of
sales team members.

Robotics
➢ Hill climbing is useful in the effective operation of robotics.
➢ It enhances the coordination of different systems and components in robots.

Job Scheduling
➢ The hill climbing algorithm has also been applied in job scheduling.
➢ This is a process in which system resources are allocated to different tasks within
a computer system.
➢ Job scheduling is achieved through the migration of jobs from one node to a
neighboring node.
➢ A hill-climbing technique helps establish the right migration route.
Mini-Max Algorithm

Introduction

The Mini-Max algorithm is a recursive algorithm that explores all possible moves and
outcomes of a game to determine the best possible move for a player, assuming the
opponent will play optimally. It is commonly used in games such as chess, checkers,
and tic-tac-toe, and can be implemented in a wide range of programming languages.

Explanation
The Mini-Max algorithm is a decision-making algorithm used in game theory,
particularly in two-player games. It is designed to help a computer determine the best
possible move to make in a game given the current state of the game board. In this
algorithm, the computer assumes that the opponent will play optimally and chooses
the move that maximizes its own score while minimizing the opponent's score. It does
this by recursively evaluating the possible outcomes of each move it could make,
assuming that the opponent will always make the move that minimizes the computer's
score.

Steps in Mini-Max algorithm:

1. The algorithm first looks at the current state of the game board and generates
a list of all possible moves that the computer can make.
2. For each of these moves, the algorithm assumes that the opponent will make
the move that results in the worst possible outcome for the computer (i.e., the
move that minimizes the computer's score).
3. The algorithm then recursively evaluates the resulting game state, assuming
that the opponent will continue to make the move that minimizes the
computer's score, and the computer will make the move that maximizes its
score.
4. This process continues until a certain depth of the game tree is reached or a
terminal state (i.e., win, loss, or draw) is reached.
5. Once the algorithm has evaluated all possible outcomes, it chooses the move
that leads to the best possible outcome for the computer, given the assumption
that the opponent will play optimally.
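The procedure can be sketched on a tiny game tree encoded as nested lists (a hypothetical depth-2 game: the computer, MAX, moves first, then the opponent, MIN, replies).

```python
def minimax(node, maximizing):
    """Minimax on a game tree given as nested lists; leaves are scores."""
    if isinstance(node, int):             # terminal state: return its value
        return node
    # Recursively evaluate every child, alternating between the players
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical game tree: three moves for MAX, two replies each for MIN
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # MIN guarantees 3, 2, 0; MAX picks the best: 3
```

Note how the value 9 is never achievable: assuming optimal play, MIN would answer that move with 2, which is why MAX prefers the branch guaranteeing 3.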
Alpha-beta pruning

Introduction
The Alpha-Beta pruning algorithm is a powerful optimization technique that can
greatly reduce the number of nodes that need to be evaluated in the Minimax
algorithm, making it a more efficient way to find the optimal move in a game.

Explanation
Alpha-beta pruning is an optimization technique used in the Minimax algorithm to
reduce the number of nodes that need to be evaluated in a search tree. The basic idea
behind alpha-beta pruning is to eliminate branches of the search tree that cannot
possibly affect the final result, thereby reducing the overall number of nodes that need
to be evaluated.
In the Minimax algorithm, the search tree represents all possible moves and
countermoves in a game, and the goal is to find the optimal move for the current
player.
Alpha-Beta pruning algorithm searches through the tree by recursively evaluating
nodes, starting at the root node and moving down the tree to the leaves. During this
search, the algorithm keeps track of two values: alpha and beta.
Alpha represents the maximum score that the current player can guarantee, assuming
that the opponent plays optimally.
Beta represents the minimum score that the opponent can guarantee, assuming that
the current player plays optimally.
As the algorithm evaluates nodes, it updates alpha and beta based on the values of its
children nodes.
If the alpha value at a node is greater than or equal to the beta value, the
remaining branches at that node cannot affect the final decision, because the
opposing player already has a better option available elsewhere. The search can
therefore be pruned at that node: any moves beyond it will not change the optimal
outcome of the game.
Steps in Alpha-Beta pruning algorithm:
1. The algorithm starts at the root node of the search tree and evaluates the child
nodes.
2. As it evaluates each node, it keeps track of the alpha and beta values.
3. If the alpha value at a node is greater than or equal to the beta value, the
algorithm prunes the search at that node.
4. The algorithm then continues evaluating the remaining child nodes.
5. As the algorithm progresses, it updates the alpha and beta values based on the
results of its evaluations.
6. The algorithm continues evaluating nodes until it has searched the entire tree
or until it has pruned all the unnecessary branches.
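A sketch of alpha-beta on the same kind of nested-list game tree used in the Minimax example (hypothetical values): it returns the same result as plain Minimax while skipping branches that cannot matter.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning on a nested-list game tree."""
    if isinstance(node, int):             # terminal state: return its value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)     # best score MAX can guarantee so far
            if alpha >= beta:             # MIN would never allow this branch
                break                     # prune the remaining children
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)           # best score MIN can guarantee so far
        if alpha >= beta:                 # MAX already has a better option
            break
        return value
    return value

tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, True))  # 3, same answer as plain Minimax
```

On this tree, after the first branch establishes alpha = 3, the 9 in the second branch and the 7 in the third are never evaluated: their siblings (2 and 0) already prove those branches worse for MAX.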
