Practical No. 1-3

The document outlines the implementation of Depth First Search (DFS) and Breadth First Search (BFS) algorithms using an undirected graph, detailing their theoretical foundations, pseudocode, and complexities. It also introduces the A* search algorithm, explaining its heuristic approach and providing a practical implementation in Python. The document concludes with a comparison of the algorithms and sample outputs for both search methods.


Experiment No: 1

Aim:- Implement Depth First Search algorithm and Breadth First Search algorithm. Use an
undirected graph and develop a recursive algorithm for searching all the vertices of a graph or tree
data structure.

Operating System Recommended:- 64-bit Windows OS and Linux

Theory:-
Depth First Search
The depth first search (DFS) algorithm starts with the initial node of the graph G and goes deeper and deeper until it finds the goal node or a node which has no children. The algorithm then backtracks from the dead end towards the most recent node that is not yet completely explored.
The data structure used in DFS is a stack. In DFS, the edges that lead to an unvisited node are called discovery edges, while the edges that lead to an already visited node are called block edges.
Depth first search, or depth first traversal, is a recursive algorithm for searching all the vertices of a graph or tree data structure. Traversal means visiting all the nodes of a graph.

Depth First Search Algorithm


A standard DFS implementation puts each vertex of the graph into one of two categories:
1. Visited
2. Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding cycles. The DFS
algorithm works as follows:
1. Start by putting any one of the graph's vertices on top of a stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the
top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.
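The stack-based steps above can be sketched directly in Python. This is a minimal illustration; the adjacency-list dictionary and the 5-vertex graph used here are assumed for demonstration, since the figure for the worked example is not reproduced in this document.

```python
def dfs_iterative(graph, start):
    """Iterative DFS using an explicit stack.

    graph: dict mapping each vertex to a list of adjacent vertices.
    Returns the vertices in the order they were visited.
    """
    visited = []
    stack = [start]                    # step 1: put a vertex on the stack
    while stack:                       # step 4: repeat until the stack is empty
        vertex = stack.pop()           # step 2: take the top item of the stack
        if vertex not in visited:
            visited.append(vertex)     # ...and add it to the visited list
            # step 3: push the unvisited adjacent nodes onto the stack
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    stack.append(neighbor)
    return visited

# Hypothetical undirected graph with 5 vertices (0-4)
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2]}
print(dfs_iterative(graph, 0))  # → [0, 3, 2, 4, 1]
```

Note that the iterative version visits neighbours in reverse push order (the last vertex pushed is popped first), so the visit order can differ from the recursive version shown later in the program.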

Depth First Search Example


Let's see how the Depth First Search algorithm works with an example. We use an undirected graph
with 5 vertices.
We start from vertex 0. The DFS algorithm starts by putting it in the Visited list and putting all its adjacent vertices on the stack.

Next, we visit the element at the top of the stack, i.e. 1, and go to its adjacent nodes. Since 0 has already been visited, we visit 2 instead.

Vertex 2 has an unvisited adjacent vertex, 4, so we add that to the top of the stack and visit it.

After we visit the last element, 3, it doesn't have any unvisited adjacent nodes, so we have completed the Depth First Traversal of the graph.

DFS Pseudocode (recursive implementation)


The pseudocode for DFS is shown below. In the init() function, notice that we run the DFS function on every node. This is because the graph might have two or more disconnected parts, so to make sure we cover every vertex, we run the DFS algorithm from every node.

DFS(G, u)
    u.visited = true
    for each v ∈ G.Adj[u]
        if v.visited == false
            DFS(G, v)

init() {
    for each u ∈ G
        u.visited = false
    for each u ∈ G
        DFS(G, u)
}

Complexity of Depth First Search


The time complexity of the DFS algorithm is represented in the form of O(V + E), where V is the
number of nodes and E is the number of edges.
The space complexity of the algorithm is O(V).

Breadth First Search


Traversal means visiting all the nodes of a graph. Breadth First Traversal, or Breadth First Search, is an algorithm for searching all the vertices of a graph or tree data structure; unlike DFS, it is typically implemented iteratively with a queue rather than recursively.

BFS Algorithm
A standard BFS implementation puts each vertex of the graph into one of two categories:
1. Visited
2. Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding cycles.
The algorithm works as follows:
1. Start by putting any one of the graph’s vertices at the back of a queue.
2. Take the front item of the queue and add it to the visited list.
3. Create a list of that vertex’s adjacent nodes. Add the ones which aren’t in the visited list to the
back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.
BFS Example
Let’s see how the Breadth First Search algorithm works with an example. We use an undirected
graph with 5 vertices.

Undirected graph with 5 vertices.


We start from vertex 0. The BFS algorithm starts by putting it in the Visited list and putting all its adjacent vertices in the queue.

Visit start vertex and add its adjacent vertices to queue.


Next, we visit the element at the front of the queue, i.e. 1, and go to its adjacent nodes. Since 0 has already been visited, we visit 2 instead.
Visit the first neighbour of start node 0, which is 1.
Vertex 2 has an unvisited adjacent vertex, 4, so we add that to the back of the queue and visit 3, which is at the front of the queue.

Visit 2 which was added to queue earlier to add its neighbours.

4 remains in the queue.


Only 4 remains in the queue, since the only adjacent node of 3, i.e. 0, is already visited. We visit it. Since the queue is then empty, we have completed the Breadth First Traversal of the graph.

BFS pseudocode
Create a queue Q
mark v as visited and put v into Q
while Q is non-empty
    remove the head u of Q
    mark and enqueue all (unvisited) neighbours of u
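The pseudocode above translates almost line for line into Python using collections.deque. This is a minimal sketch; the adjacency-list dictionary used here is an assumed example graph, not one taken from the figures.

```python
from collections import deque

def bfs(graph, start):
    """Queue-based BFS; returns vertices in visit order."""
    visited = {start}          # mark v as visited
    queue = deque([start])     # put v into Q
    order = []
    while queue:               # while Q is non-empty
        u = queue.popleft()    # remove the head u of Q
        order.append(u)
        for w in graph[u]:     # mark and enqueue all unvisited neighbours of u
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2]}
print(bfs(graph, 0))  # → [0, 1, 2, 3, 4]
```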
BFS Algorithm Complexity
The time complexity of the BFS algorithm is represented in the form of O(V + E), where V is the
number of nodes and E is the number of edges.
The space complexity of the algorithm is O(V).

Conclusion:-
Thus, we have implemented the Depth First Search and Breadth First Search algorithms using an undirected graph, and developed a recursive algorithm for searching all the vertices of a graph.

Sample Expert Viva Voce Questions:-


1. What is the difference between DFS and BFS?
2. Are BFS and DFS complete algorithms? Are BFS and DFS optimal algorithms?
3. Why do we prefer queues instead of other data structures while implementing BFS?
4. Why can we not implement DFS using Queues? Why do we prefer stacks instead of other
data structures?
5. Why can we not use DFS for finding shortest possible path?
Practical No. 1

Implement depth first search algorithm and Breadth First Search algorithm. Use an
undirected graph and develop a recursive algorithm for searching all the vertices of a graph
or tree data structure.

Program:
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        # Default dictionary to store the graph
        self.graph = defaultdict(list)

    def add_edge(self, u, v):
        self.graph[u].append(v)
        self.graph[v].append(u)  # Since it's undirected, add the reverse edge as well.

    def dfs_recursive(self, vertex, visited=None):
        if visited is None:
            visited = set()
        visited.add(vertex)
        print(vertex, end=' ')
        for neighbor in self.graph[vertex]:
            if neighbor not in visited:
                self.dfs_recursive(neighbor, visited)

    def bfs(self, start):
        visited = set()         # To keep track of visited nodes
        queue = deque([start])  # Use deque for an efficient queue implementation
        visited.add(start)

        while queue:
            vertex = queue.popleft()  # Pop the front of the queue
            print(vertex, end=' ')

            # Add all unvisited neighbors to the queue
            for neighbor in self.graph[vertex]:
                if neighbor not in visited:
                    queue.append(neighbor)
                    visited.add(neighbor)

# Example usage with new input data starting from vertex 0:
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 3)
g.add_edge(1, 4)
g.add_edge(2, 5)
g.add_edge(2, 6)
g.add_edge(3, 7)
g.add_edge(4, 8)
g.add_edge(5, 9)
g.add_edge(6, 10)

print("Depth First Search (starting from vertex 0):")
g.dfs_recursive(0)

print("\nBreadth First Search (starting from vertex 0):")
g.bfs(0)

Output:
Depth First Search (starting from vertex 0):
0 1 3 7 4 8 2 5 9 6 10
Breadth First Search (starting from vertex 0):
0 1 2 3 4 5 6 7 8 9 10
Experiment No: 2

Aim:- Implement the A* (star) algorithm for any game search problem.

Operating System Recommended:- 64-bit Windows OS and Linux

Theory:-
A* Search
➢ A* Search is the most commonly known form of best-first search. It uses a heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines the features of UCS and greedy best-first search, which lets it solve the problem efficiently.

➢ The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides an optimal result faster.

➢ The algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

The Basic Concept of A* Algorithm


➢ A heuristic algorithm sacrifices optimality, precision, and accuracy for speed, to solve problems faster and more efficiently.

➢ All graphs have different nodes or points which the algorithm has to pass through to reach the final node. The paths between these nodes all have a numerical value, which is considered the weight of the path. The total of all the paths traversed gives you the cost of that route.

➢ Initially, the algorithm calculates the cost to all its immediate neighbouring nodes n and chooses the one incurring the least cost. This process repeats until no new nodes can be chosen and all paths have been traversed. Then, you should consider the best path among them. If f(n) represents the final cost, it can be denoted as:
f(n) = g(n) + h(n), where:
g(n) = cost of traversing from one node to another; this will vary from node to node.
h(n) = heuristic approximation of the node's value; this is not a real value but an approximate cost.

How Does the A* Algorithm Work?


Figure of Weighted Graph

Consider the weighted graph depicted above, which contains nodes and the distance between them.
Let's say you start from A and have to go to D.
Since the start is at the source A, which has some initial heuristic value, the result is
f(A) = g(A) + h(A)
f(A) = 0 + 6 = 6
Next, take the path to other neighbouring vertices :
f(A-B) = 1 + 4
f(A-C) = 5 + 2
Now take the path to the destination from these nodes, and calculate the weights:
f(A-B-D) = (1+ 7) + 0
f(A-C-D) = (5 + 10) + 0
It is clear that the path through node B gives you the best (lowest) total cost, so that is the node you need to take to reach the destination.
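The arithmetic above can be checked with a few lines of Python. The g and h values are hard-coded from the worked example, since the graph figure itself is not reproduced here.

```python
# g = path cost so far, h = heuristic estimate of distance to goal D
g = {'A': 0, 'A-B': 1, 'A-C': 5, 'A-B-D': 1 + 7, 'A-C-D': 5 + 10}
h = {'A': 6, 'B': 4, 'C': 2, 'D': 0}

f_A = g['A'] + h['A']            # 0 + 6 = 6
f_ABD = g['A-B-D'] + h['D']      # (1 + 7) + 0 = 8
f_ACD = g['A-C-D'] + h['D']      # (5 + 10) + 0 = 15
print(f_A, f_ABD, f_ACD)         # 6 8 15, so the path through B wins
```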

Algorithm of A* Search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
Step 3: Select the node from the OPEN list with the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise:
Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
1. The A* search algorithm generally performs better than other uninformed search algorithms.
2. The A* search algorithm is optimal and complete (provided the heuristic is admissible).
3. This algorithm can solve very complex problems.

Disadvantages:
1. It does not always produce the shortest path, as it relies on heuristics and approximation; optimality holds only for admissible heuristics.
2. The A* search algorithm has some complexity issues.
3. The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems.

Conclusion:-
In this experiment, we were introduced to this powerful search algorithm, learned how it works, and saw the basic concept behind it. We also implemented the algorithm in Python.
Practical No. 2

Implement A star (A*) Algorithm for any game search problem.

Program:
import heapq

class AStar:
    def __init__(self, grid, start, goal):
        self.grid = grid    # 2D grid where 0 = walkable, 1 = blocked
        self.start = start  # Start position (x, y)
        self.goal = goal    # Goal position (x, y)
        self.rows = len(grid)
        self.cols = len(grid[0])

    def heuristic(self, node):
        # Manhattan distance heuristic
        return abs(node[0] - self.goal[0]) + abs(node[1] - self.goal[1])

    def neighbors(self, node):
        # Return valid neighbors (up, down, left, right)
        dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # Directions: right, down, left, up
        result = []
        for d in dirs:
            neighbor = (node[0] + d[0], node[1] + d[1])
            if (0 <= neighbor[0] < self.rows and 0 <= neighbor[1] < self.cols
                    and self.grid[neighbor[0]][neighbor[1]] == 0):
                result.append(neighbor)
        return result

    def a_star_search(self):
        # Priority queue to store (f_score, node)
        open_list = []
        heapq.heappush(open_list, (0, self.start))

        came_from = {}  # For reconstructing the path
        g_score = {self.start: 0}  # Cost from start to each node
        f_score = {self.start: self.heuristic(self.start)}  # Estimated cost from start to goal

        while open_list:
            current = heapq.heappop(open_list)[1]

            # If we reached the goal, reconstruct the path
            if current == self.goal:
                return self.reconstruct_path(came_from, current)
            for neighbor in self.neighbors(current):
                tentative_g_score = g_score[current] + 1  # Distance from current to neighbor is 1
                if neighbor not in g_score or tentative_g_score < g_score[neighbor]:
                    # Update the best path to the neighbor
                    came_from[neighbor] = current
                    g_score[neighbor] = tentative_g_score
                    f_score[neighbor] = tentative_g_score + self.heuristic(neighbor)
                    heapq.heappush(open_list, (f_score[neighbor], neighbor))

        return []  # Return empty path if no solution

    def reconstruct_path(self, came_from, current):
        # Reconstruct the path from the came_from dictionary
        total_path = [current]
        while current in came_from:
            current = came_from[current]
            total_path.append(current)
        return total_path[::-1]  # Reverse path to start from the beginning

# 5x5 Grid (solvable path)
grid = [
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0]
]

start = (0, 2)  # Start position
goal = (4, 0)   # Goal position

a_star = AStar(grid, start, goal)
path = a_star.a_star_search()

print("Path from start to goal:", path)

Output:
Path from start to goal: [(0, 2), (1, 2), (2, 2), (3, 2), (4, 2), (4, 1), (4, 0)]
Experiment No: 3

Aim:- Implement Greedy Search Algorithm for Prim’s Minimal Spanning Tree Algorithm.

Operating System recommended:- 64-bit Windows OS and Linux

Theory:-
Minimum Spanning Tree
A minimum spanning tree can be defined as the spanning tree in which the sum of the weights of the edges is minimum. The weight of the spanning tree is the sum of the weights given to the edges of the spanning tree. In the real world, this weight can represent distance, traffic load, congestion, or any other value.
Example of minimum spanning tree
Let's understand the minimum spanning tree with the help of an example.

The sum of the edges of the above graph is 16. Now, some of the possible spanning trees created
from the above graph are –

So, the minimum spanning tree that is selected from the above spanning trees for the given weighted
graph is –
Applications of Minimum Spanning Tree
The applications of the minimum spanning tree are given as follows -
o Minimum spanning tree can be used to design water-supply networks, telecommunication
networks, and electrical grids.

o It can be used to find paths in the map.

Algorithms for Minimum Spanning tree


A minimum spanning tree can be found from a weighted graph by using the algorithms given below -
o Prim's Algorithm

o Kruskal's Algorithm

Prim’s Algorithm
It is a greedy algorithm that starts with an empty spanning tree and is used to find the minimum spanning tree of a graph. The algorithm finds the subset of edges that includes every vertex of the graph such that the sum of the weights of the edges is minimized.
Prim's algorithm starts with a single node and, at every step, explores all the adjacent nodes with all the connecting edges. The edges with the minimal weights that cause no cycles in the graph get selected.

How does the Prim’s Algorithm work?


Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the edges with the smallest weight until the goal is reached. The steps to implement Prim's algorithm are as follows:
o First, initialize the MST with a randomly chosen vertex.

o Now, find all the edges that connect the tree from the above step with the new vertices. From the edges found, select the minimum-weight edge and add it to the tree.

o Repeat step 2 until the minimum spanning tree is formed.

The applications of Prim’s Algorithm are


o Prim's algorithm can be used in network design.

o It can also be used to lay down electrical wiring and cables.


Example of Prim’s Algorithm
Now, let's see the working of Prim's algorithm using an example; it is easier to understand with one.
Suppose a weighted graph is:

Step 1 – First, we have to choose a vertex from the above graph. Let’s choose B.

Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two edges from
vertex B that are B to C with weight 10 and edge B to D with weight 4. Among the edges, the edge
BD has the minimum weight. So, add it to the MST.

Step 3 - Now, again, choose the edge with the minimum weight among the remaining edges. In this case, the edges DE and CD are the candidates. So, select the edge DE and add it to the MST.

Step 4 - Now, select the edge CD, and add it to the MST.
Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE, as it would create a cycle in the graph. So, choose the edge CA and add it to the MST.

So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of the
MST is given below -
Cost of MST = 4 + 2 + 1 + 3 = 10 units.

Algorithm
Step 1: Select a starting vertex.
Step 2: Repeat Steps 3 and 4 while there are fringe vertices.
Step 3: Select an edge 'e' of minimum weight connecting a tree vertex and a fringe vertex.
Step 4: Add the selected edge and the vertex to the minimum spanning tree T.
[END OF LOOP]
Step 5: EXIT
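This experiment's listing does not include a program, so the steps above can be sketched with a binary heap in Python. This is a minimal sketch; the weighted graph below is a hypothetical example, since the figure from the worked example is not reproduced here.

```python
import heapq

def prims_mst(graph, start):
    """Prim's algorithm with a binary heap: O(|E| log |V|).

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns (total weight, list of chosen (u, v, w) edges).
    """
    visited = {start}                                     # Step 1: starting vertex
    fringe = [(w, start, v) for v, w in graph[start]]     # fringe edges from the tree
    heapq.heapify(fringe)
    mst, total = [], 0
    while fringe and len(visited) < len(graph):           # Step 2: while fringe vertices remain
        w, u, v = heapq.heappop(fringe)                   # Step 3: minimum-weight fringe edge
        if v in visited:
            continue                                      # would create a cycle; skip
        visited.add(v)                                    # Step 4: add edge and vertex to T
        mst.append((u, v, w))
        total += w
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(fringe, (nw, v, nxt))
    return total, mst

# Hypothetical weighted undirected graph for illustration
graph = {
    'A': [('B', 2), ('C', 3)],
    'B': [('A', 2), ('C', 1), ('D', 1)],
    'C': [('A', 3), ('B', 1), ('D', 4)],
    'D': [('B', 1), ('C', 4)],
}
total, mst = prims_mst(graph, 'A')
print(total, mst)  # → 3 edges of total weight 4
```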

Complexity of Prim’s Algorithm


Now, let's look at the time complexity of Prim's algorithm. The running time of Prim's algorithm depends on the data structure used for the graph and for ordering the edges. The table below shows some choices.

Data structure used for the minimum edge weight | Time complexity
Adjacency matrix, linear searching | O(|V|²)
Adjacency list and binary heap | O(|E| log |V|)
Adjacency list and Fibonacci heap | O(|E| + |V| log |V|)

Conclusion:-
Prim's algorithm can be implemented simply by using the adjacency matrix or adjacency list graph representation; adding the edge with the minimum weight then requires linearly searching an array of weights, which takes O(|V|²) running time. It can be improved further by using a heap to find the minimum-weight edges in the inner loop of the algorithm.
Practical No. 3

Implement Alpha-Beta Tree search for any game search problem.

Program:
import math

MAX_PLAYER = 'X'
MIN_PLAYER = 'O'
EMPTY = '_'

class TicTacToe:
    def __init__(self):
        self.board = [
            [EMPTY, EMPTY, EMPTY],
            [EMPTY, EMPTY, EMPTY],
            [EMPTY, EMPTY, EMPTY]
        ]

    def is_moves_left(self, board):
        for row in board:
            if EMPTY in row:
                return True
        return False

    def evaluate(self, board):
        # Check rows for a win
        for row in range(3):
            if board[row][0] == board[row][1] == board[row][2] and board[row][0] != EMPTY:
                return 10 if board[row][0] == MAX_PLAYER else -10

        # Check columns for a win
        for col in range(3):
            if board[0][col] == board[1][col] == board[2][col] and board[0][col] != EMPTY:
                return 10 if board[0][col] == MAX_PLAYER else -10

        # Check diagonals for a win
        if board[0][0] == board[1][1] == board[2][2] and board[0][0] != EMPTY:
            return 10 if board[0][0] == MAX_PLAYER else -10

        if board[0][2] == board[1][1] == board[2][0] and board[0][2] != EMPTY:
            return 10 if board[0][2] == MAX_PLAYER else -10

        return 0

    def minimax(self, board, depth, is_maximizing, alpha, beta):
        score = self.evaluate(board)
        if score == 10:
            return score - depth
        if score == -10:
            return score + depth

        if not self.is_moves_left(board):
            return 0

        if is_maximizing:
            best = -math.inf
            for i in range(3):
                for j in range(3):
                    if board[i][j] == EMPTY:
                        board[i][j] = MAX_PLAYER
                        best = max(best, self.minimax(board, depth + 1, False, alpha, beta))
                        board[i][j] = EMPTY
                        alpha = max(alpha, best)
                        if beta <= alpha:
                            break  # Beta cut-off
            return best
        else:
            best = math.inf
            for i in range(3):
                for j in range(3):
                    if board[i][j] == EMPTY:
                        board[i][j] = MIN_PLAYER
                        best = min(best, self.minimax(board, depth + 1, True, alpha, beta))
                        board[i][j] = EMPTY
                        beta = min(beta, best)
                        if beta <= alpha:
                            break  # Alpha cut-off
            return best

    def find_best_move(self):
        best_val = -math.inf
        best_move = (-1, -1)

        for i in range(3):
            for j in range(3):
                if self.board[i][j] == EMPTY:
                    self.board[i][j] = MAX_PLAYER
                    move_val = self.minimax(self.board, 0, False, -math.inf, math.inf)
                    self.board[i][j] = EMPTY

                    if move_val > best_val:
                        best_move = (i, j)
                        best_val = move_val
        return best_move

    def print_board(self):
        for row in self.board:
            print(" | ".join(row))
            print("-" * 9)

game = TicTacToe()
game.board = [
    ['X', '_', 'O'],
    ['O', 'X', '_'],
    ['_', 'O', '_']
]

print("Current board:")
game.print_board()

best_move = game.find_best_move()
print(f"\nThe best move for 'X' is: {best_move}")

Output:
Current board:
X | _ | O
---------
O | X | _
---------
_ | O | _
---------

The best move for 'X' is: (2, 2)
