Programs of DAA (6)
PROGRAM 1
Implementation of Insertion Sort Algorithm with example.
LOGIC: Insertion Sort is a simple comparison-based sorting algorithm that builds the sorted array one element at a time by taking the next element and inserting it into its correct position among the elements already sorted.
Algorithm(Pseudo Code):
Here's a pseudocode representation of the Insertion Sort algorithm:
for i = 1 to length(A) - 1
    key = A[i]
    j = i - 1
    while j >= 0 and A[j] > key
        A[j + 1] = A[j]
        j = j - 1
    A[j + 1] = key
Code:
#include <iostream>
#include <vector>
void insertionSort(std::vector<int>& array) {
    for (int i = 1; i < (int)array.size(); i++) {
        int key = array[i];
        int j = i - 1;
        // Move elements of array[0..i-1] that are greater than key
        // one position ahead of their current position
        while (j >= 0 && array[j] > key) {
            array[j + 1] = array[j];
            j = j - 1;
        }
        array[j + 1] = key;
    }
}
int main() {
    std::vector<int> array = {5, 2, 9, 1, 5, 6};
    std::cout << "Original array: ";
    for (int num : array) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    insertionSort(array);
    std::cout << "Sorted array: ";
    for (int num : array) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
Output:
Viva Questions:
1. When should you use Insertion Sort over other sorting algorithms?
2. What happens when there’s a duplicate key in a list that needs to be sorted using Insertion
Sort?
3. How many passes are there in Insertion Sort?
PROGRAM 2
Implementation of Quick Sort Algorithm with example.
LOGIC: Quick Sort is an efficient, comparison-based, divide-and-conquer sorting algorithm. It
works by selecting a 'pivot' element from the array and partitioning the other elements into two
sub-arrays according to whether they are less than or greater than the pivot.
Pivot Selection: Pick an element as the pivot.
Partition: Rearrange elements around the pivot.
Recursion: Apply the algorithm to sub-arrays.
1. Time Complexity: O(n log n) on average, O(n²) in the worst case.
2. Space Complexity: O(log n) for the best/average case, O(n) for the worst case.
Algorithm(Pseudo Code):
function quicksort(A, low, high)
    if low < high
        pivotIndex = partition(A, low, high)
        quicksort(A, low, pivotIndex - 1)
        quicksort(A, pivotIndex + 1, high)

function partition(A, low, high)
    pivot = A[high]
    i = low - 1
    for j = low to high - 1
        if A[j] <= pivot
            i = i + 1
            swap A[i] with A[j]
    swap A[i + 1] with A[high]
    return i + 1
Code:
#include <iostream>
using namespace std;
void swap(int* a, int* b) {
    int t = *a;
    *a = *b;
    *b = t;
}
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j <= high - 1; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
int main() {
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Unsorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    quickSort(arr, 0, n - 1);
    cout << "\n";
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    return 0;
}
Output:
Viva Questions:
1. Is Quick Sort suitable for use on large datasets? Why or why not?
2. What happens if all the items being sorted have the same value?
3. Are there any limitations with Quick Sort? If yes, then what are they?
PROGRAM 3
Implementation of Merge Sort Algorithm with example.
LOGIC: Merge sort is a divide-and-conquer algorithm that divides the input array into
two halves, recursively sorts each half, and then merges the two sorted halves
back together.
Steps of Merge Sort
1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves to produce the sorted array.
1. Time Complexity: O(n log n)
2. Space Complexity: O(n)
Algorithm(Pseudo Code):
Here's a pseudocode representation of the Merge Sort algorithm:
function mergeSort(A, left, right)
    if left < right
        mid = (left + right) / 2
        mergeSort(A, left, mid)
        mergeSort(A, mid + 1, right)
        merge(A, left, mid, right)

function merge(A, left, mid, right)
    copy A[left..mid] into L and A[mid+1..right] into R
    repeatedly move the smaller front element of L and R back into A
    copy any remaining elements of L, then of R, back into A
Code:
#include <iostream>
#include <vector>
// Merge the two sorted halves array[left..mid] and array[mid+1..right]
void merge(std::vector<int>& array, int left, int mid, int right) {
    std::vector<int> L(array.begin() + left, array.begin() + mid + 1);
    std::vector<int> R(array.begin() + mid + 1, array.begin() + right + 1);
    std::size_t i = 0, j = 0;
    int k = left;
    while (i < L.size() && j < R.size())
        array[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < L.size()) array[k++] = L[i++];
    while (j < R.size()) array[k++] = R[j++];
}
void mergeSort(std::vector<int>& array, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(array, left, mid);
        mergeSort(array, mid + 1, right);
        merge(array, left, mid, right);
    }
}
int main() {
    std::vector<int> array = {5, 2, 9, 1, 5, 6};
    std::cout << "Original array: ";
    for (int num : array) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    mergeSort(array, 0, (int)array.size() - 1);
    std::cout << "Sorted array: ";
    for (int num : array) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
Output:
Viva Questions:
1. When should you use Merge Sort over other sorting algorithms?
2. What happens when there’s a duplicate key in a list that needs to be sorted using Merge
Sort?
3. Is Merge Sort a stable sorting algorithm?
PROGRAM 4
Implementation of Selection Sort Algorithm with example.
LOGIC: Selection Sort is a simple comparison-based sorting algorithm. It works by repeatedly
finding the minimum (or maximum) element from the unsorted portion of the array and moving
it to the sorted portion.
1. Find the minimum element in the unsorted portion of the array.
2. Swap this minimum element with the first element of the unsorted portion.
3. Move the boundary of the sorted portion one element to the right.
4. Repeat steps 1-3 until the entire array is sorted.
Algorithm(Pseudo Code):
Here's a pseudocode representation of the Selection Sort algorithm:
SelectionSort(A, n)
Input:
    A: An array of n elements
    n: The number of elements in array A
for i = 0 to n - 1 do
    minIndex = i
    for j = i + 1 to n - 1 do
        if A[j] < A[minIndex] then
            minIndex = j
        end if
    end for
    if minIndex != i then
        swap A[i] with A[minIndex]
    end if
end for
Code:
#include <iostream>
#include <vector>
using namespace std;
void selectionSort(vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex])
                minIndex = j;
        }
        if (minIndex != i)
            swap(arr[i], arr[minIndex]);
    }
    cout << "Sorted array: ";
    for (int &val : arr) {
        cout << val << " ";
    }
    cout << endl;
}
int main() {
    vector<int> arr = {64, 25, 12, 22, 11};
    selectionSort(arr);
    return 0;
}
Output:
Viva Questions:
1. What is the Time and Space Complexity of Selection Sort?
The Selection Sort algorithm has a time complexity of O(n²) and a space complexity of
O(1), since it does not require any additional memory space apart from a temporary
variable used for swapping.
2. How is Selection Sort different from other sorting algorithms like Bubble Sort and
Insertion Sort?
Selection Sort continuously selects the smallest (or largest) element and swaps it with the
first unsorted element. Bubble Sort compares adjacent elements and swaps them if they
are in the wrong order, and Insertion Sort builds the final sorted array one element at a
time by repeatedly taking the next element and inserting it into the correct position.
3. Can Selection Sort be considered stable or unstable?
Selection Sort is generally considered unstable because it does not preserve the relative
order of equal elements.
PROGRAM 5
Implementation of Binary Search Tree(BST) Algorithm with example.
LOGIC: A Binary Search Tree (BST) is a special type of tree structure used to store data in an
organized way. It makes it easy to search, insert, or delete elements efficiently.
Algorithm(Pseudo Code):
search(t, value):
    if t is empty:
        return "not found"
    if t.key equals value:
        return "found"
    else if value < t.key:
        search in the left subtree
    else:
        search in the right subtree
Code:
#include <iostream>
using namespace std;
struct Node {
    int key;
    Node* left;
    Node* right;
    Node(int item) {
        key = item;
        left = right = NULL;
    }
};
// Insert a key and return the (possibly new) subtree root
Node* insert(Node* root, int key) {
    if (root == NULL) return new Node(key);
    if (key < root->key) root->left = insert(root->left, key);
    else if (key > root->key) root->right = insert(root->right, key);
    return root;
}
// Search for a key in the BST
bool search(Node* root, int key) {
    if (root == NULL) return false;
    if (key == root->key) return true;
    return key < root->key ? search(root->left, key) : search(root->right, key);
}
// Driver Code
int main() {
    Node* root = NULL;
    int keys[] = {50, 30, 70, 20, 40, 60, 80};
    for (int k : keys) root = insert(root, k);
    if (search(root, 40)) cout << "Found\n";
    else cout << "Not Found\n";
    return 0;
}
Output:
Viva Questions:
1. What is a self-balanced tree?
Self-balanced binary search trees automatically keep their height as small as possible
when operations like insertion and deletion take place.
For a BST to be self-balanced, it is important that it consistently follows the rules of BST
so that the left subtree has lower-valued keys while the right subtree has high valued
keys.
This is done using two operations:
– Left rotation
– Right rotation
2. What distinguishes binary search trees from binary trees?
A binary search tree (BST) is a particular kind of binary tree in which the left subtree of a
node only contains nodes with keys less than the node’s key, and the right subtree
contains nodes with keys greater than the node’s key. A binary tree is a hierarchical
structure in which each node has at most two children.
3. How do you discover the kth smallest/largest entry in a binary search tree?
In an in-order traversal of a binary search tree, note the items encountered in order to get
the kth smallest element. Stop when you reach the kth element. Similarly, run a reverse
in-order traverse for the kth greatest element.
PROGRAM 6
Implementation of Greedy Knapsack Algorithm with example.
LOGIC: The Greedy Knapsack (Fractional Knapsack) problem aims to maximize the total profit of
items placed in a knapsack of fixed capacity. The greedy approach sorts the items by
profit-to-weight ratio, takes whole items in that order while they fit, and finally takes a fraction
of the next item to fill the remaining capacity.
Algorithm(Pseudo Code):
GreedyKnapsack(m, n)
// P[1:n] and W[1:n] contain the profits and weights of the n objects,
// ordered so that P[i]/W[i] >= P[i+1]/W[i+1].
// m is the knapsack capacity and X[1:n] is the solution vector.
{
    for i := 1 to n do X[i] := 0.0;
    U := m;
    for i := 1 to n do
    {
        if (W[i] > U) then break;
        X[i] := 1.0;
        U := U - W[i];
    }
    if (i <= n) then X[i] := U / W[i];
}
Code:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
// Structure to represent an item
struct Item {
int weight;
int profit;
double ratio;
};
// Comparator to sort items by profit-to-weight ratio
bool compare(Item a, Item b) {
return a.ratio > b.ratio; // Sort in decreasing order
}
// Function to solve the Greedy Knapsack problem
double knapsackGreedy(int capacity, vector<Item>& items, vector<double>& x) {
    int n = items.size();
    // Sort items by decreasing profit-to-weight ratio
    sort(items.begin(), items.end(), compare);
    double totalProfit = 0.0;
    int remainingCapacity = capacity;
    for (int i = 0; i < n; i++) {
        if (items[i].weight <= remainingCapacity) {
            x[i] = 1.0; // Take the whole item
            remainingCapacity -= items[i].weight;
            totalProfit += items[i].profit;
        } else {
            x[i] = (double)remainingCapacity / items[i].weight; // Take fraction of the item
            totalProfit += x[i] * items[i].profit;
            break; // Knapsack is full
        }
    }
    return totalProfit;
}
int main() {
int n; // Number of items
int capacity; // Knapsack capacity
// Input number of items and capacity of knapsack
cout << "Enter the number of items: ";
cin >> n;
cout << "Enter the capacity of the knapsack: ";
cin >> capacity;
vector<Item> items(n);
vector<double> x(n); // Solution vector
// Input the weight and profit of each item
cout << "Enter the weight and profit of each item:" << endl;
for (int i = 0; i < n; i++) {
cout << "Item " << i + 1 << " - Weight: ";
cin >> items[i].weight;
cout << "Profit: ";
cin >> items[i].profit;
items[i].ratio = (double)items[i].profit / items[i].weight; // Calculate profit-to-weight ratio
}
// Call knapsackGreedy function to solve the problem
double maxProfit = knapsackGreedy(capacity, items, x);
cout << "Maximum profit: " << maxProfit << endl;
return 0;
}
Output:
Viva Questions:
1. Why is the Greedy approach suitable for solving the Fractional Knapsack problem but
not the 0/1 Knapsack problem?
The Greedy approach works for the Fractional Knapsack problem because selecting items
based on the highest profit-to-weight ratio leads to an optimal solution. However, for the
0/1 Knapsack problem, this approach does not guarantee an optimal solution because you
cannot take fractional parts of items.
PROGRAM 7
Implementation of Strassen’s Matrix Algorithm with example.
LOGIC: Strassen's algorithm is an efficient algorithm for matrix multiplication that improves
upon the standard O(n³) time complexity to O(n^(log₂ 7)) ≈ O(n^2.81). It achieves this by breaking
larger matrices into smaller submatrices and using seven recursive multiplications, together with
additions and subtractions, to reduce the number of operations.
Algorithm(Pseudo Code):
function Strassen(A, B, n)
    if n == 1 then
        return A[0][0] * B[0][0] // Base case for 1x1 matrix
    else
        // Divide A and B into 4 submatrices of size n/2 x n/2
        (A11, A12, A21, A22) = splitMatrix(A, n)
        (B11, B12, B21, B22) = splitMatrix(B, n)
        M1 = Strassen(add(A11, A22), add(B11, B22), n/2)
        M2 = Strassen(add(A21, A22), B11, n/2)
        M3 = Strassen(A11, subtract(B12, B22), n/2)
        M4 = Strassen(A22, subtract(B21, B11), n/2)
        M5 = Strassen(add(A11, A12), B22, n/2)
        M6 = Strassen(subtract(A21, A11), add(B11, B12), n/2)
        M7 = Strassen(subtract(A12, A22), add(B21, B22), n/2)
        C11 = M1 + M4 - M5 + M7
        C12 = M3 + M5
        C21 = M2 + M4
        C22 = M1 - M2 + M3 + M6
        return C assembled from C11, C12, C21, C22
    end if
end function

function splitMatrix(Matrix, n)
    // Splits Matrix into four submatrices and returns (Matrix11, Matrix12, Matrix21, Matrix22)
end function
Code:
#include <iostream>
#include <vector>
using namespace std;
typedef vector<vector<int>> Matrix;
// Function to add two matrices
Matrix add(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix result(n, vector<int>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            result[i][j] = A[i][j] + B[i][j];
    return result;
}
// Function to subtract two matrices
Matrix subtract(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix result(n, vector<int>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            result[i][j] = A[i][j] - B[i][j];
    return result;
}
Matrix strassen(const Matrix& A, const Matrix& B) {
    int n = A.size();
    if (n == 1)
        return {{A[0][0] * B[0][0]}};
    int k = n / 2;
    Matrix A11(k, vector<int>(k)), A12(k, vector<int>(k)),
           A21(k, vector<int>(k)), A22(k, vector<int>(k)),
           B11(k, vector<int>(k)), B12(k, vector<int>(k)),
           B21(k, vector<int>(k)), B22(k, vector<int>(k));
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++) {
            A11[i][j] = A[i][j];     A12[i][j] = A[i][j + k];
            A21[i][j] = A[i + k][j]; A22[i][j] = A[i + k][j + k];
            B11[i][j] = B[i][j];     B12[i][j] = B[i][j + k];
            B21[i][j] = B[i + k][j]; B22[i][j] = B[i + k][j + k];
        }
    Matrix M1 = strassen(add(A11, A22), add(B11, B22));
    Matrix M2 = strassen(add(A21, A22), B11);
    Matrix M3 = strassen(A11, subtract(B12, B22));
    Matrix M4 = strassen(A22, subtract(B21, B11));
    Matrix M5 = strassen(add(A11, A12), B22);
    Matrix M6 = strassen(subtract(A21, A11), add(B11, B12));
    Matrix M7 = strassen(subtract(A12, A22), add(B21, B22));
    Matrix C(n, vector<int>(n));
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++) {
            C[i][j]         = M1[i][j] + M4[i][j] - M5[i][j] + M7[i][j];
            C[i][j + k]     = M3[i][j] + M5[i][j];
            C[i + k][j]     = M2[i][j] + M4[i][j];
            C[i + k][j + k] = M1[i][j] - M2[i][j] + M3[i][j] + M6[i][j];
        }
    return C;
}
int main() {
    int n;
    cout << "Enter the size of matrix (must be power of 2): ";
    cin >> n;
    Matrix A(n, vector<int>(n)), B(n, vector<int>(n));
    cout << "Enter elements of matrix A:" << endl;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> A[i][j];
    cout << "Enter elements of matrix B:" << endl;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> B[i][j];
    Matrix C = strassen(A, B);
    cout << "Product matrix C = A x B:" << endl;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            cout << C[i][j] << " ";
        cout << endl;
    }
    return 0;
}
Output:
Complexity:
Worst case time complexity: Θ(n^2.8074)
Best case time complexity: Θ(1)
Space complexity: Θ(log n)
Viva Questions:
PROGRAM 8
Implementation of Dijkstra's Algorithm with example.
LOGIC: Dijkstra's Algorithm is a popular algorithm for finding the shortest path from a single
source node to all other nodes in a graph with non-negative edge weights. This algorithm is
particularly useful in network routing and GPS navigation systems to find the quickest route
from one point to another.
Algorithm(Pseudo Code):
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to Priority Queue Q
    distance[S] <- 0
    while Q IS NOT EMPTY
        U <- Extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
Code:
#include <iostream>
#include <vector>
#include <climits>
using namespace std;
// Find the unvisited node with the smallest tentative distance
int minDistance(const vector<int>& dist, const vector<bool>& visited, int n) {
    int minDist = INT_MAX;
    int minIndex = -1;
    for (int i = 0; i < n; ++i) {
        if (!visited[i] && dist[i] < minDist) {
            minDist = dist[i];
            minIndex = i;
        }
    }
    return minIndex;
}
void dijkstra(const vector<vector<int>>& graph, int source) {
    int n = graph.size();
    vector<int> dist(n, INT_MAX);   // Initialize distances to infinity
    vector<bool> visited(n, false); // Track visited nodes
    dist[source] = 0;               // Distance from source to itself is 0
    for (int count = 0; count < n - 1; ++count) {
        int u = minDistance(dist, visited, n);
        visited[u] = true; // Mark as visited
        for (int v = 0; v < n; ++v) {
            if (!visited[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v]) {
                dist[v] = dist[u] + graph[u][v];
            }
        }
    }
    cout << "Node\tDistance from Source " << source << "\n";
    for (int i = 0; i < n; ++i) {
        cout << i << "\t" << dist[i] << "\n";
    }
}
int main() {
    vector<vector<int>> graph = {
        {0, 2, 1, 3, 0}, // Connections from node A (0)
        {2, 0, 0, 4, 0}, // Connections from node B (1)
        {1, 0, 0, 1, 0}, // Connections from node C (2)
        {3, 4, 1, 0, 5}, // Connections from node D (3)
        {0, 0, 0, 5, 0}  // Connections from node E (4)
    };
    int source = 0; // Starting from node A (index 0)
    dijkstra(graph, source);
    return 0;
}
Output:
Viva Questions:
1. Which data structure is best suited for selecting the next node in Dijkstra's Algorithm?
A priority queue (min-heap) is commonly used to select the node with the smallest
distance efficiently.
2. What happens to the distance of a node that is unreachable from the source?
Its distance remains as infinity (∞), indicating that no path exists from the source.
PROGRAM 9
Implementation of Floyd-Warshall Algorithm with example.
LOGIC: The Floyd-Warshall algorithm finds the shortest path between every pair of nodes in a
weighted graph. It is particularly useful for dense graphs and also handles negative edge
weights, provided the graph contains no negative cycles. Below is a step-by-step outline of the
algorithm.
Algorithm(Pseudo Code):
1. Initialize distances:
   - Copy the weight matrix into `dist`: dist[i][j] is the weight of edge (i, j), INF if there is no edge, and 0 when i = j.
2. Relax through intermediate nodes:
   - for k = 0 to n - 1
         for i = 0 to n - 1
             for j = 0 to n - 1
                 if dist[i][k] + dist[k][j] < dist[i][j]
                     dist[i][j] = dist[i][k] + dist[k][j]
3. Output:
   - The matrix `dist` contains the shortest distances between each pair of nodes.
Code:
#include <iostream>
#include <cstdio>
using namespace std;
#define nV 4
#define INF 999
void printMatrix(int matrix[][nV]);
void floydWarshall(int graph[][nV]) {
    int matrix[nV][nV], i, j, k;
    for (i = 0; i < nV; i++)
        for (j = 0; j < nV; j++)
            matrix[i][j] = graph[i][j];
    for (k = 0; k < nV; k++) {
        for (i = 0; i < nV; i++) {
            for (j = 0; j < nV; j++) {
                if (matrix[i][k] + matrix[k][j] < matrix[i][j])
                    matrix[i][j] = matrix[i][k] + matrix[k][j];
            }
        }
    }
    printMatrix(matrix);
}
void printMatrix(int matrix[][nV]) {
    for (int i = 0; i < nV; i++) {
        for (int j = 0; j < nV; j++) {
            if (matrix[i][j] == INF)
                printf("%4s", "INF");
            else
                printf("%4d", matrix[i][j]);
        }
        printf("\n");
    }
}
int main() {
    int graph[nV][nV] = {{0, 3, INF, 5},
                         {2, 0, INF, 4},
                         {INF, 1, 0, INF},
                         {INF, INF, 2, 0}};
    floydWarshall(graph);
    return 0;
}
Output:
Viva Questions:
1. How can the Floyd-Warshall Algorithm be modified to detect negative weight cycles in a
graph?
The algorithm can be modified to detect negative weight cycles by checking for negative
values on the diagonal of the distance matrix after the algorithm completes. If any
diagonal element is negative, it indicates the presence of a negative weight cycle. This is
because a negative weight cycle would allow for infinitely negative distances. To
implement this, we can add a check inside the innermost loop of the algorithm to detect
negative cycles.