UID: 2023300242
Experiment No. 5
AIM: Experiment on Graphs: To understand Graphs and implement Prim’s and Kruskal’s
Algorithms to find Minimum Spanning Trees.
Program 1
Objective: To find the best, worst and average cases of each algorithm, and to determine the
time and space complexity of each.
Prim’s Algorithm:
Prim’s algorithm builds the Minimum Spanning Tree (MST) by starting from any vertex and
greedily adding the smallest edge that connects a new vertex to the growing MST.
Kruskal’s Algorithm:
Kruskal’s algorithm sorts all edges by weight and greedily picks the smallest edge
that does not form a cycle, using the Union-Find data structure to manage
connected components.
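The cycle check at the heart of Kruskal’s can be seen in isolation with a minimal Union-Find
sketch (the names findParent and unite below are illustrative, not taken from the program that
follows): two endpoints whose sets share the same root are already connected, so adding the
edge between them would close a cycle.
#include <stdio.h>
int parent[100];
// Root of the set containing v
int findParent(int v) {
    while (parent[v] != v) v = parent[v];
    return v;
}
// Merge the sets containing a and b
void unite(int a, int b) {
    parent[findParent(a)] = findParent(b);
}
int main(void) {
    for (int v = 0; v < 4; v++) parent[v] = v;   // each vertex starts in its own set
    unite(0, 1);                                 // accept edge 0-1
    unite(1, 2);                                 // accept edge 1-2
    if (findParent(0) == findParent(2))          // edge 0-2 would now close a cycle
        printf("edge 0-2 rejected (cycle)\n");
    return 0;
}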
Prim’s Time and Space Complexity:
Time: O(V²) with an adjacency matrix, or O(E log V) with an adjacency list and a binary heap.
Space: O(V²) for the adjacency matrix representation (O(V + E) with an adjacency list).
Kruskal’s Time and Space Complexity:
Time: O(E log E) for sorting the edge list, plus the Union-Find operations used for cycle checks.
Space: O(V + E) for the parent array and the edge list.
Program: Code for Prim’s Algorithm:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
// Print an adjacency matrix
void print(int V, int graph[V][V]) {
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) printf("%3d ", graph[i][j]);
        printf("\n");
    }
    printf("\n");
}
// Prim's algorithm: grow the MST from vertex 0 by repeatedly adding the
// cheapest edge that joins a visited vertex to an unvisited one.
void prims(int V, int graph[V][V], int span[V][V]) {
    int visited[V], totalWeight = 0;
    for (int i = 0; i < V; i++) visited[i] = 0;
    visited[0] = 1;
    printf("Chosen edges:\n");
    for (int count = 0; count < V - 1; count++) {
        int minWeight = INT_MAX, prev = -1, next = -1;
        for (int j = 0; j < V; j++) {
            if (visited[j]) {                    // only edges leaving the current tree
                for (int k = 0; k < V; k++)
                    if (!visited[k] && graph[j][k] && graph[j][k] < minWeight) {
                        minWeight = graph[j][k];
                        prev = j;
                        next = k;
                    }
            }
        }
        printf("%d - %d (weight %d)\n", prev, next, minWeight);
        span[prev][next] = minWeight;            // record the edge in the MST matrix
        span[next][prev] = minWeight;
        visited[next] = 1;
        totalWeight += minWeight;
    }
    printf("Total weight of MST: %d\n", totalWeight);
}
int main() {
    int V, E;
    printf("Enter number of vertices and edges: ");
    scanf("%d %d", &V, &E);
    int graph[V][V], span[V][V];
    for (int i = 0; i < V; i++)
        for (int j = 0; j < V; j++) graph[i][j] = span[i][j] = 0;
    printf("Enter edges (src dest weight):\n");
    for (int i = 0; i < E; i++) {
        int src, dest, weight;
        scanf("%d %d %d", &src, &dest, &weight);
        graph[src][dest] = weight;
        graph[dest][src] = weight;
    }
    printf("Original Graph:\n");
    print(V, graph);
    prims(V, graph, span);
    return 0;
}
Example 1
Output:
Dry Run:
Observations:
Prim’s Algorithm:
● Best Case:
○ Occurs when the graph is dense (many edges) and a priority queue backed by a
Fibonacci heap is used.
○ Time Complexity: O(E + V log V) (with a Fibonacci heap).
● Worst Case:
○ Happens when the graph is sparse (few edges) and implemented with an adjacency
matrix instead of a priority queue.
○ Every edge lookup takes O(V), leading to inefficient edge selection.
○ Time Complexity: O(V²) (with adjacency matrix); an array-based sketch of this formulation is shown after this list.
● Average Case:
○ When the graph is neither too dense nor too sparse, the priority queue performs well,
balancing edge selection and heap updates.
○ Time Complexity: O(E log V) (using a binary heap).
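As a point of comparison for the O(V²) bound mentioned above, the standard array-based
formulation of Prim’s keeps, for every vertex not yet in the tree, the weight of the cheapest
edge connecting it to the tree (key[]); each of the V - 1 rounds then scans this array once
instead of scanning every pair of vertices. The sketch below only illustrates that idea on a
hard-coded 5-vertex graph; it is not the submitted program.
#include <stdio.h>
#include <limits.h>
#define V 5
int main(void) {
    // Example weighted graph as an adjacency matrix (0 = no edge)
    int graph[V][V] = {
        {0, 2, 0, 6, 0},
        {2, 0, 3, 8, 5},
        {0, 3, 0, 0, 7},
        {6, 8, 0, 0, 9},
        {0, 5, 7, 9, 0}
    };
    int key[V], parent[V], inMST[V];
    for (int v = 0; v < V; v++) { key[v] = INT_MAX; parent[v] = -1; inMST[v] = 0; }
    key[0] = 0;                                  // grow the tree from vertex 0
    for (int count = 0; count < V - 1; count++) {
        int u = -1;
        for (int v = 0; v < V; v++)              // O(V) scan for the cheapest fringe vertex
            if (!inMST[v] && (u == -1 || key[v] < key[u])) u = v;
        inMST[u] = 1;
        for (int v = 0; v < V; v++)              // O(V) update of the cheapest known edges
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v]) {
                key[v] = graph[u][v];
                parent[v] = u;
            }
    }
    for (int v = 1; v < V; v++)                  // every vertex except 0 has an MST parent
        printf("%d - %d (weight %d)\n", parent[v], v, graph[v][parent[v]]);
    return 0;
}
Each of the V - 1 rounds does two O(V) scans, which gives the O(V²) total; replacing the linear
scan with a binary-heap priority queue is what brings the bound down to O(E log V).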
Program 2
Problem: Kruskal’s Algorithm
Program: Code for Kruskal’s Algorithm:
#include <stdio.h>
#define MAX 100
int parent[MAX];
// Find the root of the set containing vertex v (no path compression)
int findParent(int v) {
    while (parent[v] != v) v = parent[v];
    return v;
}
// Swap two edge records (src, dest, weight)
void swapEdges(int a[3], int b[3]) {
    for (int k = 0; k < 3; k++) { int t = a[k]; a[k] = b[k]; b[k] = t; }
}
// Lomuto partition on edge weight; returns the pivot's final position
int partition(int edges[][3], int low, int high) {
    int pivot = edges[high][2], i = low;
    for (int j = low; j < high; j++)
        if (edges[j][2] < pivot) swapEdges(edges[i++], edges[j]);
    swapEdges(edges[i], edges[high]);
    return i;
}
// Quicksort the edge list in non-decreasing order of weight
void recursive_quick_sort(int edges[][3], int low, int high) {
    if (low >= high) return;
    int p = partition(edges, low, high);
    recursive_quick_sort(edges, low, p - 1);
    recursive_quick_sort(edges, p + 1, high);
}
// Kruskal's algorithm: take the lightest edge that does not form a cycle
void kruskals(int edges[][3], int V, int E) {
    int result[MAX][3], totalCost = 0, e = 0;
    for (int i = 0; i < V; i++) parent[i] = i;   // every vertex starts as its own set
    recursive_quick_sort(edges, 0, E - 1);
    for (int i = 0; i < E && e < V - 1; i++) {
        int *edge = edges[i];
        int x = findParent(edge[0]), y = findParent(edge[1]);
        if (x != y) {                            // edge joins two different components
            result[e][0] = edge[0]; result[e][1] = edge[1]; result[e][2] = edge[2];
            totalCost += edge[2]; e++;
            parent[x] = y;                       // union the two components
            printf("%d - %d (weight %d)\n", edge[0], edge[1], edge[2]);
        }
    }
    printf("Total cost of MST: %d\n", totalCost);
}
int main() {
    int V, E, edges[MAX][3];
    printf("Enter number of vertices and edges: ");
    scanf("%d %d", &V, &E);
    printf("Enter edges (src dest weight):\n");
    for (int i = 0; i < E; i++) scanf("%d %d %d", &edges[i][0], &edges[i][1], &edges[i][2]);
    kruskals(edges, V, E);
    return 0;
}
Example 1
Output:
Dry Run:
Observations:
Kruskal’s Algorithm:
● Best Case:
○ Occurs when edges are already sorted by weight. The algorithm directly processes
edges without additional sorting overhead.
○ Time Complexity: O(E log V) (due to Union-Find operations).
● Worst Case:
○ Happens when edges are in random order, requiring O(E log E) sorting.
○ If Union-Find without path compression is used, the merge operations can be slow (a sketch with path compression is shown after this list).
○ Time Complexity: O(E log E) (sorting dominates).
● Average Case:
○ When edges are randomly distributed, the sorting step and Union-Find operations
balance out.
○ Time Complexity: O(E log E) (sorting + Union-Find).
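The remark about path compression above can be made concrete: if find() re-points every node
it visits directly at the root, and unions attach the shallower tree under the deeper one (union
by rank), the trees stay almost flat and each operation runs in nearly constant amortized time
instead of degenerating into a long chain. A minimal sketch (the names find, unite and rank_
are illustrative, not taken from the program above):
#include <stdio.h>
#define MAX 100
int parent[MAX], rank_[MAX];
// Find with path compression: point every visited node directly at the root
int find(int v) {
    if (parent[v] != v)
        parent[v] = find(parent[v]);
    return parent[v];
}
// Union by rank: attach the shallower tree under the deeper one
void unite(int a, int b) {
    int ra = find(a), rb = find(b);
    if (ra == rb) return;
    if (rank_[ra] < rank_[rb]) parent[ra] = rb;
    else if (rank_[ra] > rank_[rb]) parent[rb] = ra;
    else { parent[rb] = ra; rank_[ra]++; }
}
int main(void) {
    for (int v = 0; v < 6; v++) { parent[v] = v; rank_[v] = 0; }
    unite(0, 1); unite(1, 2); unite(3, 4);
    printf("%d\n", find(0) == find(2));          // 1: same component
    printf("%d\n", find(0) == find(4));          // 0: different components
    return 0;
}
Dropping either optimization keeps Kruskal’s correct, but a worst-case chain of unions can then
make a single find cost O(V), which is the slowdown noted in the worst case above.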
Conclusion:
Both Prim’s and Kruskal’s algorithms efficiently construct a Minimum Spanning Tree (MST) by
making locally optimal choices at each step.
From Prim’s algorithm, I learned that by expanding the MST from a starting node and always
selecting the smallest edge connected to the tree, we can efficiently build the MST, especially in
dense graphs.
From Kruskal’s algorithm, I observed that sorting edges by weight and merging components
using the Union-Find technique allows us to construct the MST efficiently, making it ideal for
sparse graphs.
These applications reinforced how the greedy approach ensures an optimal solution in different
scenarios by prioritizing the smallest possible edge at each step, leading to an efficient MST
construction.