Programs of DAA

PROGRAM 1

Implementation of Insertion Sort Algorithm with example.


LOGIC: Insertion Sort is a comparison-based sorting algorithm. It builds the final sorted array
(or list) one item at a time, with the overall process resembling the way you might sort playing
cards in your hands.
1. Time Complexity: O(n) for the best case, O(n²) in the worst/average case.
2. Space Complexity: Insertion Sort is an in-place sorting algorithm, meaning it requires only a
constant amount O(1) of additional memory space.

Algorithm(Pseudo Code):
Here's a pseudocode representation of the Insertion Sort algorithm:
for i = 1 to length(A) - 1
    key = A[i]
    j = i - 1
    while j >= 0 and A[j] > key
        A[j + 1] = A[j]
        j = j - 1
    A[j + 1] = key

Code:
#include <iostream>
#include <vector>
void insertionSort(std::vector<int>& array) {
for (int i = 1; i < array.size(); i++) {
int key = array[i];
int j = i - 1;
// Move elements of array[0..i-1], that are greater than key,
// to one position ahead of their current position
while (j >= 0 && array[j] > key) {
array[j + 1] = array[j];

j = j - 1;
}
array[j + 1] = key;
}
}
int main() {
std::vector<int> array = {5, 2, 9, 1, 5, 6};
std::cout << "Original array: ";
for (int num : array) {
std::cout << num << " ";
}
std::cout << std::endl;
insertionSort(array);
std::cout << "Sorted array: ";
for (int num : array) {
std::cout << num << " ";
}
std::cout << std::endl;
return 0;
}
Output:

Viva Questions:
1. When should you use Insertion Sort over other sorting algorithms?

2. What happens when there’s a duplicate key in a list that needs to be sorted using Insertion
Sort?
3. How many passes are there in Insertion Sort?
PROGRAM 2
Implementation of Quick Sort Algorithm with example.
LOGIC: Quick Sort is an efficient, comparison-based, divide-and-conquer sorting algorithm. It
works by selecting a 'pivot' element from the array and partitioning the other elements into two
sub-arrays according to whether they are less than or greater than the pivot.
 Pivot Selection: Pick an element as the pivot.
 Partition: Rearrange elements around the pivot.
 Recursion: Apply the algorithm to sub-arrays.
1. Time Complexity: O(n log n) on average, O(n²) in the worst case.
2. Space Complexity: O(log n) for the best/average case, O(n) for the worst case.

Algorithm(Pseudo Code):
function quicksort(A, low, high)
    if low < high
        pivotIndex = partition(A, low, high)
        quicksort(A, low, pivotIndex - 1)
        quicksort(A, pivotIndex + 1, high)

function partition(A, low, high)
    pivot = A[high]
    i = low - 1
    for j = low to high - 1
        if A[j] <= pivot
            i = i + 1
            swap A[i] with A[j]
    swap A[i + 1] with A[high]
    return i + 1

Code :

#include <iostream>
using namespace std;
void swap(int* a, int* b) {
int t = *a;
*a = *b;
*b = t;
}
int partition(int arr[], int low, int high) {
int pivot = arr[high];
int i = (low - 1);
for (int j = low; j <= high - 1; j++) {
if (arr[j] < pivot) {
i++;
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i + 1], &arr[high]);
return (i + 1);
}
void quickSort(int arr[], int low, int high) {
if (low < high) {
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
int main() {
int arr[] = {10, 7, 8, 9, 1, 5};

int n = sizeof(arr) / sizeof(arr[0]);
cout << "unsorted array: ";
for (int i = 0; i < n; i++) {
cout << arr[i] << " ";
}
quickSort(arr, 0, n - 1);
cout << "\n";
cout << "Sorted array: ";
for (int i = 0; i < n; i++) {
cout << arr[i] << " ";
}
return 0;
}
Output:

Viva Questions:
1. Is Quick Sort suitable for use on large datasets? Why or why not?
2. What happens if all the items being sorted have the same value?
3. Are there any limitations with Quick Sort? If yes, then what are they?

PROGRAM 3
Implementation of Merge Sort Algorithm with example.
LOGIC: Merge sort is a divide-and-conquer algorithm that divides the input array into
two halves, recursively sorts each half, and then merges the two sorted halves
back together.
Steps of Merge Sort
1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves to produce the sorted array.
1. Time Complexity: O(n log n)
2. Space Complexity: O(n)

Algorithm(Pseudo Code):
Here's a pseudocode representation of the Merge Sort algorithm:
mergeSort(A, left, right)
    if left < right
        mid = (left + right) / 2
        mergeSort(A, left, mid)
        mergeSort(A, mid + 1, right)
        merge(A, left, mid, right)

merge(A, left, mid, right)
    copy A[left..mid] into L and A[mid+1..right] into R
    while both L and R are non-empty
        move the smaller front element of L and R back into A
    move any remaining elements of L, then of R, back into A

Code:
#include <iostream>
#include <vector>

// Merge the two sorted halves array[left..mid] and array[mid+1..right]
void merge(std::vector<int>& array, int left, int mid, int right) {
    std::vector<int> L(array.begin() + left, array.begin() + mid + 1);
    std::vector<int> R(array.begin() + mid + 1, array.begin() + right + 1);
    int i = 0, j = 0, k = left;
    // Repeatedly copy the smaller front element back into the array
    while (i < (int)L.size() && j < (int)R.size()) {
        if (L[i] <= R[j])
            array[k++] = L[i++];
        else
            array[k++] = R[j++];
    }
    // Copy any remaining elements of L, then of R
    while (i < (int)L.size())
        array[k++] = L[i++];
    while (j < (int)R.size())
        array[k++] = R[j++];
}

void mergeSort(std::vector<int>& array, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(array, left, mid);      // Sort the left half
        mergeSort(array, mid + 1, right); // Sort the right half
        merge(array, left, mid, right);   // Merge the two sorted halves
    }
}

int main() {
    std::vector<int> array = {5, 2, 9, 1, 5, 6};
    std::cout << "Original array: ";
    for (int num : array) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    mergeSort(array, 0, array.size() - 1);
    std::cout << "Sorted array: ";
    for (int num : array) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
Output:

Viva Questions:
1. When should you use Merge Sort over other sorting algorithms?
2. What happens when there are duplicate keys in a list sorted using Merge Sort? Is it stable?
3. How many levels of merging are performed by Merge Sort on an array of n elements?

PROGRAM 4
Implementation of Selection Sort Algorithm with example.
LOGIC: Selection Sort is a simple comparison-based sorting algorithm. It works by repeatedly
finding the minimum (or maximum) element from the unsorted portion of the array and moving
it to the sorted portion.
1. Find the minimum element in the unsorted portion of the array.
2. Swap this minimum element with the first element of the unsorted portion.
3. Move the boundary of the sorted portion one element to the right.
4. Repeat steps 1-3 until the entire array is sorted.

Algorithm(Pseudo Code):
Here's a pseudocode representation of the Selection Sort algorithm:
SelectionSort(A, n)
Input:
A: An array of n elements
n: The number of elements in array A
for i = 0 to n - 1 do
minIndex = i
for j = i + 1 to n - 1 do
if A[j] < A[minIndex] then
minIndex = j
end if

end for
if minIndex != i then
swap A[i] with A[minIndex]
end if
end for

Code:
#include <bits/stdc++.h>
using namespace std;

void selectionSort(vector<int> &arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; ++i) {
        // Assume the current position holds the minimum element
        int min_idx = i;
        // Iterate through the unsorted portion to find the actual minimum
        for (int j = i + 1; j < n; ++j) {
            if (arr[j] < arr[min_idx]) {
                // Update min_idx if a smaller element is found
                min_idx = j;
            }
        }
        // If a new minimum is found, swap it with the element at index i
        if (min_idx != i) {
            swap(arr[i], arr[min_idx]);
        }
    }
}

void printArray(vector<int> &arr) {
    for (int &val : arr) {
        cout << val << " ";
    }
    cout << endl;
}

int main() {
    vector<int> arr = {64, 25, 12, 22, 11};

    cout << "Original array: ";
    printArray(arr);

    selectionSort(arr);

    cout << "Sorted array: ";
    printArray(arr);

    return 0;
}

Output:

Viva Questions:
1. What is the Time and Space Complexity of Selection Sort?
The Selection Sort algorithm has a time complexity of O(n²) and a space complexity of
O(1), since it does not require any additional memory space apart from a temporary
variable used for swapping.
2. How is Selection Sort different from other sorting algorithms like Bubble Sort and
Insertion Sort?
Selection Sort continuously selects the smallest (or largest) element and swaps it with the
first unsorted element. Bubble Sort compares adjacent elements and swaps them if they
are in the wrong order, and Insertion Sort builds the final sorted array one element at a
time by repeatedly taking the next element and inserting it into the correct position.
3. Can Selection Sort be considered stable or unstable?

Selection Sort is generally considered unstable because it does not preserve the relative
order of equal elements.
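For example, consider the array [3a, 3b, 1], where 3a and 3b are equal keys distinguished
only by their original positions. The first pass selects 1 as the minimum and swaps it with
3a, giving [1, 3b, 3a], so the relative order of the two equal keys has been reversed.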

PROGRAM 5
Implementation of Binary Search Tree(BST) Algorithm with example.
LOGIC: A Binary Search Tree (BST) is a special type of tree structure used to store data in an
organized way. It makes it easy to search, insert, or delete elements efficiently.

Algorithm(Pseudo Code):
search(t, value):
    if t is empty:
        return "not found"
    if t equals value:
        return "found"
    if value is smaller than t:
        search in the left side (left subtree)
    else:
        search in the right side (right subtree)

Code:
#include <iostream>
using namespace std;

struct Node {

int key;
Node* left;
Node* right;
Node(int item) {
key = item;
left = right = NULL;
}
};

// function to search a key in a BST


Node* search(Node* root, int key) {
    // Base Cases: root is null or key is present at root
    if (root == NULL || root->key == key)
        return root;

    // Key is greater than root's key
    if (root->key < key)
        return search(root->right, key);

    // Key is smaller than root's key
    return search(root->left, key);
}

// Driver Code
int main() {

// Creating a hard coded tree for keeping


// the length of the code small. We need
// to make sure that BST properties are
// maintained if we try some other cases.
Node* root = new Node(50);
root->left = new Node(30);
root->right = new Node(70);
root->left->left = new Node(20);
root->left->right = new Node(40);
root->right->left = new Node(60);
root->right->right = new Node(80);

(search(root, 19) != NULL) ? cout << "Found\n" : cout << "Not Found\n";

(search(root, 80) != NULL) ? cout << "Found\n" : cout << "Not Found\n";

return 0;
}
Output:

Viva Questions:
1. What is a self-balanced tree?
Self-balanced binary search trees automatically keep their height as small as possible
when operations like insertion and deletion take place.
For a BST to be self-balanced, it is important that it consistently follows the rules of a BST,
so that the left subtree has lower-valued keys while the right subtree has higher-valued keys.
This is done using two operations (a sketch of one appears below):
– Left rotation
– Right rotation
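For illustration, a minimal sketch of a right rotation on the Node struct used in the program
above (the function name rightRotate is an illustrative addition; the height or balance-factor
updates that a full AVL tree would also perform are omitted here):

// Hypothetical sketch: right rotation around node y, as used by
// self-balancing BSTs such as AVL trees.
Node* rightRotate(Node* y) {
    Node* x = y->left;   // x becomes the new root of this subtree
    Node* T2 = x->right; // T2 holds the keys between x and y
    x->right = y;        // y moves down to the right of x
    y->left = T2;        // reattach T2 as y's left child
    return x;            // caller links x in place of y
}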
2. What distinguishes binary search trees from binary trees?
A binary search tree (BST) is a particular kind of binary tree in which the left subtree of a
node only contains nodes with keys less than the node’s key, and the right subtree
contains nodes with keys greater than the node’s key. A binary tree is a hierarchical
structure in which each node has at most two children.

3. How do you discover the kth smallest/largest entry in a binary search tree?
Perform an in-order traversal of the binary search tree, counting the items as they are
visited, and stop when you reach the kth element; that is the kth smallest. Similarly, run a
reverse in-order traversal to find the kth largest element.
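A minimal sketch of this idea, reusing the Node struct from the program above (the helper
name kthSmallest and the counter passed by reference are illustrative choices, not part of the
original program):

// Hypothetical helper: in-order traversal that stops after visiting k nodes.
// Returns the kth smallest node, or NULL if the tree has fewer than k nodes.
Node* kthSmallest(Node* root, int& k) {
    if (root == NULL) return NULL;
    Node* left = kthSmallest(root->left, k); // visit smaller keys first
    if (left != NULL) return left;           // answer already found on the left
    if (--k == 0) return root;               // this node is the kth visited
    return kthSmallest(root->right, k);      // otherwise continue with larger keys
}

For the kth largest, the same traversal is run with the right subtree visited before the left.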

PROGRAM 6
Implementation of Greedy Knapsack Algorithm with example.

LOGIC: The Greedy Knapsack Problem aims to maximize the profit of items placed in a
knapsack with a fixed capacity by using a greedy approach. The steps are:

1. Sort items by profit-to-weight ratio in descending order.


2. Select items greedily: Pick items with the highest ratio first, fully if possible, or partially
if not.
3. Maximize profit: Continue selecting until the knapsack is full, summing the profits of
the selected items, including fractions.

Algorithm(Pseudo Code):
Algorithm GreedyKnapsack(m, n)
// P[1:n] and W[1:n] contain the profits and weights of the n objects,
// ordered such that P[i]/W[i] >= P[i+1]/W[i+1].
// m is the knapsack capacity and X[1:n] is the solution vector.
{
    for i := 1 to n do X[i] := 0.0;
    U := m;
    for i := 1 to n do
    {
        if (W[i] > U) then break;
        X[i] := 1.0;
        U := U - W[i];
    }
    if (i <= n) then X[i] := U / W[i];
}

Code:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
// Structure to represent an item
struct Item {
int weight;
int profit;
double ratio;
};
// Comparator to sort items by profit-to-weight ratio
bool compare(Item a, Item b) {
return a.ratio > b.ratio; // Sort in decreasing order
}
// Function to solve the Greedy Knapsack problem
double knapsackGreedy(int capacity, vector<Item>& items, vector<double>& x) {
int n = items.size();
double totalProfit = 0.0;
int remainingCapacity = capacity;

// Initialize solution vector to 0.0


for (int i = 0; i < n; i++) {
x[i] = 0.0;
}

// Sort items by their profit-to-weight ratio


sort(items.begin(), items.end(), compare);

// Iterate over items


for (int i = 0; i < n; i++) {
if (items[i].weight <= remainingCapacity) {
x[i] = 1.0; // Take the entire item

remainingCapacity -= items[i].weight;
totalProfit += items[i].profit;
} else {
x[i] = (double)remainingCapacity / items[i].weight; // Take fraction of the item
totalProfit += x[i] * items[i].profit;
break; // Knapsack is full
}
}

return totalProfit;
}
int main() {
int n; // Number of items
int capacity; // Knapsack capacity
// Input number of items and capacity of knapsack
cout << "Enter the number of items: ";
cin >> n;
cout << "Enter the capacity of the knapsack: ";
cin >> capacity;
vector<Item> items(n);
vector<double> x(n); // Solution vector
// Input the weight and profit of each item
cout << "Enter the weight and profit of each item:" << endl;
for (int i = 0; i < n; i++) {
cout << "Item " << i + 1 << " - Weight: ";
cin >> items[i].weight;
cout << "Profit: ";
cin >> items[i].profit;
items[i].ratio = (double)items[i].profit / items[i].weight; // Calculate profit-to-weight ratio
}
// Call knapsackGreedy function to solve the problem
double maxProfit = knapsackGreedy(capacity, items, x);

// Output the solution vector and the maximum profit


cout << "\nSolution vector (fractions of items taken):" << endl;
for (int i = 0; i < n; i++) {
cout << "Item " << i + 1 << ": " << x[i] << endl;
}
cout << "\nMaximum profit: " << maxProfit << endl;

return 0;
}
Output:

Viva Questions:
1. Why is the Greedy approach suitable for solving the Fractional Knapsack problem but
not the 0/1 Knapsack problem?
The Greedy approach works for the Fractional Knapsack problem because selecting items
based on the highest profit-to-weight ratio leads to an optimal solution. However, for the
0/1 Knapsack problem, this approach does not guarantee an optimal solution because you
cannot take fractional parts of items.
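As a small worked example (illustrative numbers): with capacity 50 and items (profit 60,
weight 10), (profit 100, weight 20), (profit 120, weight 30), the greedy-by-ratio order gives
60 + 100 + (20/30)·120 = 240 in the fractional case, which is optimal. In the 0/1 case the same
greedy order can only take the first two items for a profit of 160, while the optimal 0/1 choice
is the second and third items with a profit of 220.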

2. What is the time complexity of the Greedy Knapsack algorithm?


The time complexity is O(n log n) due to sorting the items based on their profit-to-weight
ratio. After sorting, the algorithm has a linear complexity of O(n) for selecting the items.

3. What are some real-world applications of the Knapsack problem?


The Knapsack problem can be applied in various fields such as resource allocation,
financial budgeting, cargo loading, and optimizing investments.

PROGRAM 7
Implementation of Strassen’s Matrix Algorithm with example.

LOGIC: Strassen's algorithm is an efficient algorithm for matrix multiplication that improves
upon the standard O(n³) time complexity to O(n^(log₂ 7)) ≈ O(n^2.81). It achieves this by breaking
larger matrices into smaller submatrices and computing seven recursive multiplications (instead
of the eight a naive divide-and-conquer would need), combining them with additions and
subtractions to reduce the total number of operations.

Algorithm(Pseudo Code):
function Strassen(A, B, n)
if n == 1 then
return A[0][0] * B[0][0] // Base case for 1x1 matrix

else
// Divide A and B into 4 submatrices of size n/2 x n/2
(A11, A12, A21, A22) = splitMatrix(A, n)
(B11, B12, B21, B22) = splitMatrix(B, n)

// Compute the 7 products


M1 = Strassen(add(A11, A22), add(B11, B22), n/2)
M2 = Strassen(add(A21, A22), B11, n/2)
M3 = Strassen(A11, subtract(B12, B22), n/2)

M4 = Strassen(A22, subtract(B21, B11), n/2)
M5 = Strassen(add(A11, A12), B22, n/2)
M6 = Strassen(subtract(A21, A11), add(B11, B12), n/2)
M7 = Strassen(subtract(A12, A22), add(B21, B22), n/2)

// Combine the products to form submatrices of C


C11 = add(subtract(add(M1, M4), M5), M7)
C12 = add(M3, M5)
C21 = add(M2, M4)
C22 = add(subtract(add(M1, M3), M2), M6)

// Combine C11, C12, C21, C22 into a single matrix C


C = combineMatrices(C11, C12, C21, C22, n)

return C
end if
end function

// Helper functions for matrix addition, subtraction, and combination


function add(Matrix1, Matrix2)
// Returns the element-wise sum of Matrix1 and Matrix2
end function

function subtract(Matrix1, Matrix2)


// Returns the element-wise difference of Matrix1 and Matrix2
end function

function splitMatrix(Matrix, n)
// Splits Matrix into four submatrices and returns (Matrix11, Matrix12, Matrix21, Matrix22)
end function

function combineMatrices(C11, C12, C21, C22, n)


// Combines the four quadrants into a single n x n matrix and returns it
end function

Code:
#include <iostream>
#include <vector>
using namespace std;

// Function to add two matrices
vector<vector<int>> add(const vector<vector<int>>& A, const vector<vector<int>>& B) {
int n = A.size();
vector<vector<int>> result(n, vector<int>(n));
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
result[i][j] = A[i][j] + B[i][j];
return result;
}

// Function to subtract two matrices


vector<vector<int>> subtract(const vector<vector<int>>& A, const vector<vector<int>>& B) {
int n = A.size();
vector<vector<int>> result(n, vector<int>(n));
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
result[i][j] = A[i][j] - B[i][j];
return result;
}

// Strassen's algorithm for matrix multiplication


vector<vector<int>> strassen(const vector<vector<int>>& A, const vector<vector<int>>& B) {
int n = A.size();
if (n == 1) {
return {{A[0][0] * B[0][0]}}; // Base case for 1x1 matrix
}
int newSize = n / 2;
vector<int> inner(newSize);
vector<vector<int>> A11(newSize, inner), A12(newSize, inner), A21(newSize, inner),
A22(newSize, inner);
vector<vector<int>> B11(newSize, inner), B12(newSize, inner), B21(newSize, inner),
B22(newSize, inner);

// Splitting the matrices into 4 submatrices


for (int i = 0; i < newSize; i++) {
for (int j = 0; j < newSize; j++) {
A11[i][j] = A[i][j];
A12[i][j] = A[i][j + newSize];
A21[i][j] = A[i + newSize][j];
A22[i][j] = A[i + newSize][j + newSize];

B11[i][j] = B[i][j];
B12[i][j] = B[i][j + newSize];
B21[i][j] = B[i + newSize][j];
B22[i][j] = B[i + newSize][j + newSize];
}
}

// Calculating the 7 products


auto M1 = strassen(add(A11, A22), add(B11, B22));
auto M2 = strassen(add(A21, A22), B11);
auto M3 = strassen(A11, subtract(B12, B22));
auto M4 = strassen(A22, subtract(B21, B11));
auto M5 = strassen(add(A11, A12), B22);
auto M6 = strassen(subtract(A21, A11), add(B11, B12));
auto M7 = strassen(subtract(A12, A22), add(B21, B22));

// Calculating the resultant submatrices


auto C11 = add(subtract(add(M1, M4), M5), M7);
auto C12 = add(M3, M5);
auto C21 = add(M2, M4);
auto C22 = add(subtract(add(M1, M3), M2), M6);

// Combining the submatrices into the final result


vector<vector<int>> C(n, vector<int>(n));
for (int i = 0; i < newSize; i++) {
for (int j = 0; j < newSize; j++) {
C[i][j] = C11[i][j];
C[i][j + newSize] = C12[i][j];
C[i + newSize][j] = C21[i][j];
C[i + newSize][j + newSize] = C22[i][j];
}
}

return C;
}

int main() {
int n;
cout << "Enter the size of matrix (must be power of 2): ";
cin >> n;

vector<vector<int>> A(n, vector<int>(n)), B(n, vector<int>(n));
cout << "Enter elements of matrix A:" << endl;
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
cin >> A[i][j];

cout << "Enter elements of matrix B:" << endl;


for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
cin >> B[i][j];

vector<vector<int>> C = strassen(A, B);

cout << "Resultant matrix after multiplication:" << endl;


for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++)
cout << C[i][j] << " ";
cout << endl;
}

return 0;
}

Output:

Complexity:
 Worst case time complexity: Θ(n^2.8074)
 Best case time complexity: Θ(n^2.8074) (there is no early exit; the recursion does the same work on every input)
 Space complexity: Θ(n²) for the temporary submatrices created during recursion
Viva Questions:

1. How is Strassen’s method an example of a divide-and-conquer approach?


It recursively splits the matrix into smaller submatrices, solves them independently, and
then combines the solutions, similar to classic divide-and-conquer strategies like merge
sort.

2. Why is Strassen’s algorithm not always used for small matrices?


Due to the overhead from recursive splitting and recombination, Strassen's is more
efficient only for large matrices, typically beyond 32x32 or 64x64.
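A common remedy is a hybrid scheme: keep Strassen's recursion for large matrices and switch
to the classical triple-loop product below a size threshold. A minimal sketch under that
assumption, written against the program above (the helper name naiveMultiply and the
suggested cutoff are illustrative, not part of the original code):

// Hypothetical modification: replace the base case `if (n == 1)` in the
// strassen function above with a cutoff such as `if (n <= 64)` and return
// naiveMultiply(A, B) there, so small blocks avoid the recursion overhead.
vector<vector<int>> naiveMultiply(const vector<vector<int>>& A,
                                  const vector<vector<int>>& B) {
    int n = A.size();
    vector<vector<int>> C(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++)
            for (int j = 0; j < n; j++)
                C[i][j] += A[i][k] * B[k][j]; // standard O(n^3) product
    return C;
}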

3. In which real-world applications is Strassen’s algorithm particularly beneficial?


It's particularly useful in high-dimensional data processing tasks such as neural network
training, large matrix computations in physics simulations, and computer graphics.

PROGRAM 8

Implementation of Dijkstra Algorithm with example.

LOGIC: Dijkstra's Algorithm is a popular algorithm for finding the shortest path from a single
source node to all other nodes in a graph with non-negative edge weights. This algorithm is
particularly useful in network routing and GPS navigation systems to find the quickest route
from one point to another.

Algorithm(Pseudo Code):

function dijkstra(G, S)
for each vertex V in G
distance[V] <- infinite
previous[V] <- NULL
If V != S, add V to Priority Queue Q
distance[S] <- 0
while Q IS NOT EMPTY
U <- Extract MIN from Q
for each unvisited neighbour V of U
tempDistance <- distance[U] + edge_weight(U, V)
if tempDistance < distance[V]
distance[V] <- tempDistance
previous[V] <- U

return distance[], previous[]

Code:
#include <iostream>
#include <vector>
#include <climits>
using namespace std;
int minDistance(const vector<int>& dist, const vector<bool>& visited, int n) {
    int minDist = INT_MAX;
    int minIndex = -1;
    for (int i = 0; i < n; ++i) {
        if (!visited[i] && dist[i] < minDist) {
            minDist = dist[i];
            minIndex = i;
        }
    }
    return minIndex;
}

void dijkstra(const vector<vector<int>>& graph, int source) {
    int n = graph.size();
    vector<int> dist(n, INT_MAX);   // Initialize distances to infinity
    vector<bool> visited(n, false); // Track visited nodes
    dist[source] = 0;               // Distance from source to itself is 0
    for (int count = 0; count < n - 1; ++count) {
        int u = minDistance(dist, visited, n);
        visited[u] = true; // Mark as visited
        for (int v = 0; v < n; ++v) {
            if (!visited[v] && graph[u][v] && dist[u] != INT_MAX &&
                dist[u] + graph[u][v] < dist[v]) {
                dist[v] = dist[u] + graph[u][v];
            }
        }
    }
    cout << "Node\tDistance from Source " << source << "\n";
    for (int i = 0; i < n; ++i) {
        cout << i << "\t" << dist[i] << "\n";
    }
}
int main() {
    vector<vector<int>> graph = {
        {0, 2, 1, 3, 0}, // Connections from node A (0)
        {2, 0, 0, 4, 0}, // Connections from node B (1)
        {1, 0, 0, 1, 0}, // Connections from node C (2)
        {3, 4, 1, 0, 5}, // Connections from node D (3)
        {0, 0, 0, 5, 0}  // Connections from node E (4)
    };
    int source = 0; // Starting from node A (index 0)
    dijkstra(graph, source);
    return 0;
}

Output:

Viva Questions:

1. What does Dijkstra’s algorithm do?


It finds the shortest path from a source node to all other nodes in a graph with non-
negative edge weights.

2. What data structure is commonly used in Dijkstra's algorithm?

A priority queue (min-heap) is commonly used to select the node with the smallest
distance efficiently.
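As an illustration, a minimal sketch of that variant for the same adjacency-matrix
representation used in the program above (the function name dijkstraPQ is an illustrative
addition; it requires C++17 and is not part of the original code):

#include <queue>
#include <functional>

// Hypothetical alternative to minDistance(): pick the closest unvisited
// node with a min-heap of (distance, node) pairs instead of a linear scan.
vector<int> dijkstraPQ(const vector<vector<int>>& graph, int source) {
    int n = graph.size();
    vector<int> dist(n, INT_MAX);
    priority_queue<pair<int, int>, vector<pair<int, int>>,
                   greater<pair<int, int>>> pq; // smallest distance on top
    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue; // stale entry, a shorter path was found later
        for (int v = 0; v < n; ++v) {
            if (graph[u][v] && dist[u] + graph[u][v] < dist[v]) {
                dist[v] = dist[u] + graph[u][v];
                pq.push({dist[v], v}); // push the improved distance
            }
        }
    }
    return dist;
}

With a heap, the algorithm runs in O((V + E) log V) time, which matters most for sparse
graphs stored as adjacency lists.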

3. What happens if a node is unreachable in Dijkstra’s algorithm?

Its distance remains as infinity (∞), indicating that no path exists from the source.

PROGRAM 9

Implementation of Floyd-Warshall Algorithm with example.

LOGIC: The Floyd-Warshall algorithm finds the shortest path between every pair of nodes in a
weighted graph and is particularly useful for dense graphs. It also works with negative edge
weights, provided the graph contains no negative cycles. The steps are outlined in the
pseudocode below.

Algorithm(Pseudo Code):

1. Initialize distances:

 Create a distance matrix `dist` of size N x N where N is the number of nodes.


 Set `dist[i][j]` to the weight of the edge from i to j, or ∞ if there’s no edge.
 Set `dist[i][i]` to 0 for each node i.
2. Update distances:

 For each intermediate node k:


 For each source node i:
 For each destination node j:
 Update dist[i][j] as:
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

3. Output

-The matrix `dist` contains the shortest distances between each pair of nodes.

Code:
#include <iostream>
#include <cstdio> // for printf used in printMatrix
using namespace std;
#define nV 4
#define INF 999
void printMatrix(int matrix[][nV]);
void floydWarshall(int graph[][nV]) {
int matrix[nV][nV], i, j, k;
for (i = 0; i < nV; i++)
for (j = 0; j < nV; j++)
matrix[i][j] = graph[i][j];
for (k = 0; k < nV; k++) {
for (i = 0; i < nV; i++) {
for (j = 0; j < nV; j++) {
if (matrix[i][k] + matrix[k][j] < matrix[i][j])
matrix[i][j] = matrix[i][k] + matrix[k][j];
}
}
}
printMatrix(matrix);
}
void printMatrix(int matrix[][nV]) {
for (int i = 0; i < nV; i++) {
for (int j = 0; j < nV; j++) {
if (matrix[i][j] == INF)
printf("%4s", "INF");
else
printf("%4d", matrix[i][j]); }
printf("\n");
}
}
int main() {
int graph[nV][nV] = {{0, 3, INF, 5},
{2, 0, INF, 4},
{INF, 1, 0, INF},
{INF, INF, 2, 0}};
floydWarshall(graph);
}

Output:

Viva Questions:
1. How can the Floyd-Warshall Algorithm be modified to detect negative weight cycles in a
graph?
The algorithm can be modified to detect negative weight cycles by checking for negative
values on the diagonal of the distance matrix after the algorithm completes. If any
diagonal element is negative, it indicates the presence of a negative weight cycle. This is
because a negative weight cycle would allow for infinitely negative distances. To
implement this, we can add a check inside the innermost loop of the algorithm to detect
negative cycles.
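A minimal sketch of that check, written against the matrix computed inside the floydWarshall
function above (the helper name hasNegativeCycle is an illustrative addition, not part of the
original program):

// Hypothetical helper: after the triple loop has finished, a negative value
// on the diagonal means some node can reach itself along a path of negative
// total weight, i.e. the graph contains a negative weight cycle.
bool hasNegativeCycle(int matrix[][nV]) {
    for (int i = 0; i < nV; i++) {
        if (matrix[i][i] < 0)
            return true;
    }
    return false;
}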

2. What is the key idea behind the Floyd-Warshall algorithm?


The Floyd-Warshall algorithm is based on the idea of dynamic programming. It
iteratively considers each vertex as an intermediate node between all pairs of vertices,
updating the shortest path between them if a shorter path is found through the current
intermediate node. This process continues until all vertices have been considered as
intermediate nodes, resulting in the shortest path between every pair of vertices in the
graph.
