
Question 1: Dynamic Programming vs. Greedy Method and Longest Common Subsequence (LCS)
Dynamic Programming:
* Definition: Dynamic programming is a technique for solving problems by breaking them
down into smaller overlapping subproblems, solving each subproblem only once, and storing
the results in a table to avoid redundant computations.
* Characteristics:
* Optimal Substructure: The optimal solution to the problem can be constructed from the
optimal solutions to its subproblems.
* Overlapping Subproblems: The same subproblems are encountered repeatedly during the
recursive solution.
* Example: Fibonacci sequence, Knapsack problem, Longest Common Subsequence (LCS).
Greedy Method:
* Definition: A greedy algorithm makes locally optimal choices at each step in the hope of
finding a global optimum.
* Characteristics:
* Does not always guarantee the optimal solution.
* Often simpler to implement than dynamic programming.
* Example: Activity selection problem, Huffman coding.
LCS Algorithm:
* Goal: Find the longest common subsequence between two strings.
* Dynamic Programming Approach:
* Create a 2D table (DP table):
* The table will have dimensions (m+1) x (n+1), where m is the length of string X and n is
the length of string Y.
* Each cell DP[i][j] represents the length of the LCS for the substrings X[0..i-1] and
Y[0..j-1].
* Initialize:
* Set DP[0][j] = 0 for all j, as the empty string has no common subsequence with any
string.
* Set DP[i][0] = 0 for all i, as the empty string has no common subsequence with any
string.
* Fill the table:
* For i = 1 to m:
* For j = 1 to n:
* If X[i-1] == Y[j-1]:
* DP[i][j] = DP[i-1][j-1] + 1
* Else:
* DP[i][j] = max(DP[i-1][j], DP[i][j-1])
* Retrieve the LCS:
* Starting from DP[m][n], backtrack through the table to find the path that led to the
maximum value.
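A minimal Python sketch of this DP approach (the function name lcs_length is illustrative):
def lcs_length(X, Y):
    """Return the length of the longest common subsequence of X and Y."""
    m, n = len(X), len(Y)
    # dp[i][j] = length of the LCS of X[0..i-1] and Y[0..j-1]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGACGCG", "GAGCC"))  # 4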
Question 2: Strassen's Matrix Multiplication and Optimal Parenthesization
Strassen's Matrix Multiplication:
* Time Complexity:
* Recurrence relation: T(n) = 7T(n/2) + O(n^2)
* Using the Master Theorem, we get T(n) = Θ(n^(log₂ 7)) ≈ Θ(n^2.81)
* Dynamic Programming Approach for Optimal Parenthesization:
* Create a 2D table (DP table):
* The table will have dimensions (n) x (n), where n is the number of matrices.
* Each cell DP[i][j] represents the minimum cost of multiplying matrices A[i] to A[j].
* Initialize:
* Set DP[i][i] = 0 for all i, as the cost of multiplying a single matrix is 0.
* Fill the table:
* For chain length l = 2 to n:
* For i = 1 to n-l+1:
* j = i+l-1
* DP[i][j] = min(DP[i][k] + DP[k+1][j] + cost(A[i]..A[k], A[k+1]..A[j])) for k = i to j-1
* Retrieve the optimal parenthesization:
* Backtrack through the table to find the sequence of multiplications that resulted in the
minimum cost.
Question 3: Spanning Tree, Minimum Cost Spanning Tree, and Kruskal's Algorithm
* Spanning Tree: A spanning tree of a connected graph is a subgraph that connects all the
vertices together without forming any cycles.
* Number of Spanning Trees: The number of spanning trees in a graph with n nodes can be
calculated using Kirchhoff's Matrix Tree Theorem.
* Minimum Cost Spanning Tree (MST): An MST is a spanning tree with the minimum total
weight of its edges.
* Kruskal's Algorithm:
* Sort the edges of the graph in non-decreasing order of their weights.
* Create a forest of n trees, one for each vertex.
* For each edge in the sorted list:
* If the edge connects two vertices in different trees, add it to the MST and merge the two
trees.
* Repeat step 3 until all vertices are connected.
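A minimal Python sketch of these steps, using a simple union-find structure to track the trees (the (weight, u, v) edge format is an assumption):
def kruskal(n, edges):
    """MST of a graph with vertices 0..n-1; edges is a list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        # Walk up to the root, compressing the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # u and v lie in different trees
            parent[ru] = rv            # merge the two trees
            mst.append((u, v, w))
    return mst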
Question 4: Huffman Encoding and Heap Sort
* Huffman Encoding: Huffman encoding is a variable-length encoding technique that assigns shorter codes to more frequent characters and longer codes to less frequent characters (see the sketch after this list). To calculate the percentage of improvement, we compare the average code length of Huffman encoding with that of fixed-length encoding.
* Heap Sort: Heap sort is a sorting algorithm that uses a binary heap data structure. The
algorithm builds a max-heap from the given list and then repeatedly extracts the maximum
element from the heap until the list is empty.
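As a rough illustration of the Huffman construction, the sketch below builds the tree with a priority queue and returns each symbol's code length, from which the average code length can be compared against fixed-length encoding (the function name and input format are assumptions):
import heapq

def huffman_code_lengths(freqs):
    """freqs: dict symbol -> frequency. Returns dict symbol -> code length in bits."""
    # Heap entries: (frequency, tie-breaker, {symbol: depth-so-far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every leaf one level deeper
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]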
Question 5: 0/1 Knapsack Problem and Fractional Knapsack
* 0/1 Knapsack Problem: Given a set of items, each with a weight and a value, and a
knapsack with a maximum weight capacity, the goal is to determine the combination of items
that can be placed into the knapsack without exceeding the weight capacity, while
maximizing the total value.
* Dynamic Programming Solution: Similar to the approach discussed in Question 1 for the
LCS problem, we can use a 2D table to store the maximum value that can be obtained by
considering the first i items and using a knapsack of capacity j.
* Fractional Knapsack: In the fractional knapsack problem, we can take fractions of items.
The optimal solution is obtained by greedily selecting items with the highest value-to-weight
ratio until the knapsack is full.
Question 6: Dijkstra's Algorithm and Master Theorem
* Dijkstra's Algorithm: Dijkstra's algorithm is used to find the shortest paths from a single
source vertex to all other vertices in a weighted graph.
* Master Theorem: The Master Theorem is used to solve recurrences of the form T(n) = aT(n/
b) + f(n).
* Recurrence Relations:
* (i) T(n) = 3T(n/2) + n log n: here a = 3, b = 2, so n^(log_b a) = n^(log₂ 3) ≈ n^1.585; since n log n = O(n^(log₂ 3 − ε)), case 1 applies and T(n) = Θ(n^(log₂ 3))
* (ii) T(n) = 9T(n/3) + n: Using the Master Theorem, T(n) = Θ(n^2)
6. 0/1 Knapsack Problem
Problem Statement:
Given a set of items, each with a weight and a value, and a knapsack with a maximum weight
capacity, the goal is to determine the combination of items that can be placed into the
knapsack without exceeding the weight capacity, while maximizing the total value.
Instance:
* Weights (W): [40, 10, 50, 30, 60]
* Values (P): [80, 40, 20, 90, 110]
* Knapsack Capacity (M): 110
Dynamic Programming Solution:
We'll use a 2D array dp where dp[i][j] represents the maximum value that can be obtained by
considering the first i items and using a knapsack of capacity j.
* Base Case:
* dp[0][j] = 0 for all j, as there are no items to consider.
* Recursive Relation:
* If W[i-1] > j, then dp[i][j] = dp[i-1][j], as the current item cannot be included.
* Otherwise, dp[i][j] = max(dp[i-1][j], dp[i-1][j-W[i-1]] + P[i-1]), considering both including
and excluding the current item.
Python Code:
def knapsack_0_1(weights, values, capacity):
    """
    Solves the 0/1 knapsack problem using dynamic programming.

    Args:
        weights: A list of weights for each item.
        values: A list of values for each item.
        capacity: The maximum weight capacity of the knapsack.

    Returns:
        The maximum value that can be obtained by filling the knapsack.
    """
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i - 1][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][capacity]

# Example usage
weights = [40, 10, 50, 30, 60]
values = [80, 40, 20, 90, 110]
capacity = 110

max_value = knapsack_0_1(weights, values, capacity)
print("Maximum value:", max_value)

Output:
Maximum value: 240

Fractional Knapsack:
In the fractional knapsack problem, we can take fractions of items. This can be solved
greedily by taking items with the highest value-to-weight ratio until the knapsack is full.
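A minimal greedy sketch for the fractional case (the function name is illustrative):
def fractional_knapsack(weights, values, capacity):
    """Greedy: take items in descending order of value-to-weight ratio."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity >= w:                  # the whole item fits
            total += v
            capacity -= w
        else:                              # take only the fraction that fits
            total += v * (capacity / w)
            break
    return total

print(fractional_knapsack([40, 10, 50, 30, 60], [80, 40, 20, 90, 110], 110))  # 265.0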
7. Minimum Spanning Tree and Cut-Edge
Minimum Spanning Tree (MST):
A minimum spanning tree (MST) of a connected, weighted graph is a subgraph that connects
all the vertices together with the minimum possible total weight of edges.
Cut-Edge:
A cut-edge (or bridge) is an edge in a connected graph whose removal would disconnect the
graph.
Prim's Algorithm:
Prim's algorithm is a greedy algorithm for finding an MST. It starts with an arbitrary vertex
and repeatedly adds the minimum-weight edge that connects the tree to a vertex not yet in
the tree.
Applying Prim's Algorithm to the Given Graph:
* Start with vertex a.
* Add edge (a, b) with weight 4.
* Add edge (b, c) with weight 2.
* Add edge (c, d) with weight 2.
* Add edge (d, e) with weight 6.
The resulting MST has a total weight of 4 + 2 + 2 + 6 = 14.
8. Max Heap and Max_Heapify
Max Heap:
A max heap is a complete binary tree where the value of each node is greater than or equal to
the values of its children.
Max_Heapify Procedure:
def max_heapify(arr, i, n):
    """
    Maintains the max heap property at index i.

    Args:
        arr: The array representing the heap.
        i: The index of the node to heapify.
        n: The size of the heap.
    """
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2

    if left < n and arr[left] > arr[largest]:
        largest = left

    if right < n and arr[right] > arr[largest]:
        largest = right

    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        max_heapify(arr, largest, n)

Building a Max Heap:


To build a max heap from an array, we can perform max_heapify on each non-leaf node in
bottom-up order.
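Building on max_heapify above, a short sketch of heap construction and heap sort:
def build_max_heap(arr):
    """Heapify every non-leaf node in bottom-up order."""
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):    # last non-leaf node is at index n//2 - 1
        max_heapify(arr, i, n)

def heap_sort(arr):
    """In-place heap sort: repeatedly move the maximum to the end."""
    build_max_heap(arr)
    for end in range(len(arr) - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]    # extract current maximum
        max_heapify(arr, 0, end)               # restore the heap on arr[0..end-1]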
9. Backtracking Search and N-Queens Problem
Backtracking Search:
Backtracking is a general algorithm for finding solutions to problems that can be expressed
as making a sequence of choices. It explores a search tree by recursively trying all possible
choices at each level, and backtracking when a choice leads to a dead end.
N-Queens Problem:
The N-Queens problem is to place N chess queens on an N×N chessboard so that no two
queens attack each other.
Backtrack Search Tree for 4-Queens Problem:
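The search tree is easiest to see in code: the sketch below explores it row by row, each branch being one column choice (function names are illustrative):
def solve_n_queens(n):
    """Return one solution as a list of column indices, one per row, or None."""
    cols = [-1] * n                     # cols[r] = column of the queen in row r

    def safe(row, col):
        # Conflict check against all queens already placed
        for r in range(row):
            if cols[r] == col or abs(cols[r] - col) == row - r:
                return False
        return True

    def place(row):
        if row == n:                    # all queens placed
            return True
        for col in range(n):            # each child node = one column choice
            if safe(row, col):
                cols[row] = col
                if place(row + 1):
                    return True
                cols[row] = -1          # dead end: backtrack
        return False

    return cols if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]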
Divide and Conquer vs. Dynamic Programming:
* Divide and Conquer: Breaks down a problem into smaller subproblems, solves them
independently, and combines the solutions. Examples: Merge Sort, Quick Sort.
* Dynamic Programming: Solves overlapping subproblems by storing their solutions and
reusing them. Examples: Fibonacci sequence, Knapsack problem.
10. Depth-First Search (DFS) and Breadth-First Search (BFS)
DFS:
DFS explores a graph by going as deep as possible along each branch before backtracking. It
uses a stack to keep track of the vertices to visit.
BFS:
BFS explores a graph level by level. It uses a queue to keep track of the vertices to visit.
DFS Algorithm:
def dfs(graph, start):
    """
    Performs depth-first search on a graph.

    Args:
        graph: The adjacency list representation of the graph.
        start: The starting vertex.
    """
    visited = set()
    stack = [start]

    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            print(vertex, end=' ')
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    stack.append(neighbor)
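For comparison, a BFS sketch over the same adjacency-list representation, using a queue:
from collections import deque

def bfs(graph, start):
    """Performs breadth-first search, visiting vertices level by level."""
    visited = {start}
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        print(vertex, end=' ')
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)      # mark when enqueued to avoid duplicates
                queue.append(neighbor)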

NP-Complete Problem:
A decision problem is called NP-complete if it satisfies two conditions:
* It belongs to the class NP (Nondeterministic Polynomial time), meaning a solution can be
verified in polynomial time.
* Every problem in NP can be reduced to this problem in polynomial time.
Example of NP-Complete Problem:
The decision version of the Traveling Salesman Problem (TSP) is a classic NP-complete problem: given a list of cities, the distances between them, and a bound B, decide whether there is a route of total length at most B that visits each city exactly once and returns to the starting city. (The optimization version, which asks for the shortest such route, is NP-hard.)
1. Discuss the main steps of the divide and conquer technique for solving problems. In light of this, explain the quick sort algorithm. Analyze the time complexity of quick sort (write the recurrence relation and solve it). When does it achieve quadratic complexity?
Divide and Conquer Technique:
The divide and conquer technique is a powerful problem-solving strategy that breaks down a
problem into smaller subproblems, solves them recursively, and then combines the solutions
to solve the original problem. It typically involves three main steps:
* Divide: The problem is divided into smaller subproblems, which are similar in structure to
the original problem but smaller in size.
* Conquer: The subproblems are solved recursively. If the subproblem is small enough, it is
solved directly.
* Combine: The solutions to the subproblems are combined to obtain the solution to the
original problem.
Quick Sort Algorithm:
Quick sort is a classic sorting algorithm that exemplifies the divide and conquer technique.
Here's how it works:
* Divide:
* Select a pivot element from the array.
* Partition the array around the pivot, such that all elements less than the pivot are placed
to its left, and all elements greater than the pivot are placed to its right.
* Conquer:
* Recursively sort the subarray to the left of the pivot.
* Recursively sort the subarray to the right of the pivot.
* Combine:
* The subarrays are already sorted, so no further action is needed.
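A minimal in-place sketch using Lomuto partitioning with the last element as pivot (one of several common pivot choices):
def quick_sort(arr, lo=0, hi=None):
    """Sort arr[lo..hi] in place."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)     # divide around the pivot
        quick_sort(arr, lo, p - 1)     # conquer the left part
        quick_sort(arr, p + 1, hi)     # conquer the right part

def partition(arr, lo, hi):
    pivot = arr[hi]                    # last element as pivot
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:             # smaller elements go to the left side
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # put the pivot in its final place
    return i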
Time Complexity of Quick Sort:
The time complexity of quick sort depends on the choice of the pivot element.
* Best Case: If the pivot is always chosen such that it divides the array into two equal halves,
the time complexity is O(n log n).
* Average Case: On average, quick sort also has a time complexity of O(n log n).
* Worst Case: If the pivot is always the smallest or largest element in the array, the time
complexity degrades to O(n^2).
Recurrence Relation:
The recurrence relation for quick sort is:
T(n) = T(p) + T(n-p-1) + cn

where:
* T(n) is the time complexity of sorting an array of size n.
* p is the position of the pivot element.
* cn represents the time taken for partitioning the array.
Quadratic Complexity:
Quick sort achieves quadratic complexity (O(n^2)) in the worst case when the pivot is always
chosen to be the smallest or largest element in the array. This results in an unbalanced
partitioning, where one subarray is empty and the other subarray contains all but one
element.
2. Given an array of size n devise an algorithm for finding minimum and second minimum
from the array using divide and conquer technique. Analyze the complexity of your algorithm.
Algorithm:
* Base Case: If the array has only one element, return that element as the minimum and +∞ as the second minimum (there is no second element yet).
* Divide: Divide the array into two halves.
* Conquer: Recursively find the minimum and second minimum of each half.
* Combine (see the sketch below):
* Compare the minimums of the two halves. The smaller one is the overall minimum.
* If the overall minimum came from the left half, the overall second minimum is the smaller of the left half's second minimum and the right half's minimum.
* Symmetrically, if the overall minimum came from the right half, the overall second minimum is the smaller of the right half's second minimum and the left half's minimum.
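A sketch of this divide-and-conquer scheme, using +infinity as the "missing" second minimum in the base case:
import math

def min_two(arr, lo, hi):
    """Return (minimum, second minimum) of arr[lo..hi]."""
    if lo == hi:
        return arr[lo], math.inf          # single element: no second minimum yet
    mid = (lo + hi) // 2
    min_l, sec_l = min_two(arr, lo, mid)
    min_r, sec_r = min_two(arr, mid + 1, hi)
    if min_l <= min_r:
        return min_l, min(sec_l, min_r)   # runner-up is left's 2nd min or right's min
    return min_r, min(sec_r, min_l)

print(min_two([7, 2, 9, 4, 1, 8], 0, 5))  # (1, 2)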
Time Complexity:
The recurrence is T(n) = 2T(n/2) + O(1), which solves to O(n): each element of the array is examined a constant number of times.
3. Define big-O and big-theta notation. Give their graphical interpretation. To which complexity class does the following function belong: f(n) = 5n^3 + 6n? When does quick sort achieve its best-case time complexity?
Big-O Notation:
Big-O notation provides an upper bound on the growth rate of a function. It describes the
worst-case scenario for an algorithm's time complexity. We say that f(n) is O(g(n)) if there
exist positive constants c and n0 such that f(n) <= c * g(n) for all n >= n0.
Big-Theta Notation:
Big-Theta notation provides both an upper and a lower bound on the growth rate of a function; it is a tight bound rather than a statement about average-case behavior. We say that f(n) is Theta(g(n)) if there exist positive constants c1, c2, and n0 such that c1 * g(n) <= f(n) <= c2 *
g(n) for all n >= n0.
Graphical Interpretation:
* Big-O: The function f(n) is eventually bounded above by a constant multiple of g(n).
* Big-Theta: The function f(n) is eventually bounded both above and below by constant
multiples of g(n).
Complexity Class of f(n) = 5n^3 + 6n:
The dominant term in f(n) is 5n^3. Therefore, f(n) is in the complexity class O(n^3).
Best Case Time Complexity of Quick Sort:
Quick sort has a best-case time complexity of O(n log n) when the pivot is always chosen
such that it divides the array into two equal halves.
4. Write and explain the recurrence relation for solving matrix chain multiplication problem
using dynamic programming technique. Solve the following matrix chain multiplication
problem using dynamic programming: A1:20x5, A2: 5x50, A3:50x10, A4: 10x30
Recurrence Relation:
The recurrence relation for solving the matrix chain multiplication problem using dynamic
programming is:
M[i, j] = min(M[i, k] + M[k+1, j] + p[i-1] * p[k] * p[j]) for i <= k < j

where:
* M[i, j] is the minimum number of scalar multiplications needed to compute the product of
matrices Ai, Ai+1, ..., Aj.
* p[i] is the number of columns in matrix Ai.
Solving the Matrix Chain Multiplication Problem:
Given the matrices A1: 20x5, A2: 5x50, A3: 50x10, A4: 10x30, the dimension vector is p = [20, 5, 50, 10, 30]. Filling the table by increasing chain length:
* m[1,2] = 20*5*50 = 5000; m[2,3] = 5*50*10 = 2500; m[3,4] = 50*10*30 = 15000
* m[1,3] = min(0 + 2500 + 20*5*10, 5000 + 0 + 20*50*10) = 3500 (split after A1)
* m[2,4] = min(0 + 15000 + 5*50*30, 2500 + 0 + 5*10*30) = 4000 (split after A3)
* m[1,4] = min(0 + 4000 + 20*5*30, 5000 + 15000 + 20*50*30, 3500 + 0 + 20*10*30) = 7000 (split after A1)
The optimal parenthesization is (A1)((A2 A3)(A4)), with a minimum of 7000 scalar multiplications.
5. Consider two sorted arrays A = [3 6 19 30 55 60] and B = [15 18 20 35 40 44]. How do you merge these two using the merging technique? What will be the number of comparisons? Prove that any comparison-based sorting technique has worst-case time complexity Ω(n log n).
Merging the Arrays:
* Create a new array C to store the merged elements.
* Initialize two pointers, i and j, to the beginning of arrays A and B, respectively.
* Compare A[i] and B[j]:
* If A[i] <= B[j], add A[i] to C and increment i.
* Otherwise, add B[j] to C and increment j.
* Repeat step 3 until one of the arrays is exhausted.
* Add the remaining elements from the other array to C.
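A minimal sketch of this procedure that also counts comparisons:
def merge(A, B):
    """Merge two sorted lists, returning the result and the comparison count."""
    C, i, j, comparisons = [], 0, 0, 0
    while i < len(A) and j < len(B):
        comparisons += 1
        if A[i] <= B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    C.extend(A[i:])    # one list is exhausted; append the rest of the other
    C.extend(B[j:])
    return C, comparisons

C, comps = merge([3, 6, 19, 30, 55, 60], [15, 18, 20, 35, 40, 44])
print(comps)  # 10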
Number of Comparisons:
In the worst case, merging two sorted arrays of size n each requires 2n - 1 comparisons. For the given arrays, the merge takes 10 comparisons: after 44 is placed, B is exhausted, and 55 and 60 are appended without further comparisons.
Proof of Lower Bound for Comparison-Based Sorting:
Any comparison-based sorting algorithm can be represented as a decision tree, where each
node represents a comparison between two elements. The leaves of the decision tree
represent the sorted permutations of the input array.
The height of the decision tree determines the worst-case time complexity of the sorting
algorithm. Since there are n! possible permutations of an array of size n, the decision tree
must have at least n! leaves.
The height of a binary tree with n! leaves is at least log2(n!). Using Stirling's approximation, log2(n!) = Θ(n log n). Therefore, any comparison-based sorting algorithm requires Ω(n log n) comparisons in the worst case.
Question 1: Matrix Chain Multiplication
Problem Description:
Given a sequence of matrices, the goal is to find the most efficient way to multiply them
together. The efficiency is measured by the number of scalar multiplications required.
Recurrence Relation:
Let m[i, j] be the minimum number of scalar multiplications needed to compute the product
A[i]A[i+1]...A[j]. Then, we can define the recurrence relation as follows:
m[i, j] = 0 if i == j
m[i, j] = min{ m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j] } for i < j and k = i..j-1

where matrix A[i] has dimensions p[i-1] × p[i]; thus p[i-1]*p[k]*p[j] is the cost of multiplying the two resulting subproducts.
Dynamic Programming Solution:
* Create a 2D table (DP table):
* The table will have dimensions (n) x (n), where n is the number of matrices.
* Each cell m[i, j] will store the minimum number of scalar multiplications needed to
compute the product A[i]A[i+1]...A[j].
* Initialize:
* Set m[i, i] = 0 for all i, as the cost of multiplying a single matrix is 0.
* Fill the table:
* For chain length l = 2 to n:
* For i = 1 to n-l+1:
* j = i+l-1
* m[i, j] = min{ m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j] } for k = i to j-1
* Retrieve the optimal parenthesization:
* Backtrack through the table to find the sequence of multiplications that resulted in the
minimum cost.
Example:
Given matrices A1: 20x5, A2: 5x50, A3: 50x10, A4: 10x30, the above dynamic programming approach gives the optimal parenthesization (A1)((A2 A3)(A4)) with a minimum of 7000 scalar multiplications, as worked out earlier.
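A minimal sketch of the tabulation above, checked against this instance (the function name is illustrative):
import math

def matrix_chain_order(p):
    """p: dimension vector; matrix A_i is p[i-1] x p[i].
    Returns the minimum number of scalar multiplications."""
    n = len(p) - 1                           # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):            # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

print(matrix_chain_order([20, 5, 50, 10, 30]))  # 7000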
Question 2: Time Complexity and Master Theorem
* Time Complexity of f(n) = 5n^3 + 6n:
* The highest order term is n^3, so the time complexity is O(n^3).
* Proof that any polynomial p(n) of degree k is O(n^k):
* A polynomial p(n) of degree k can be expressed as:
p(n) = a0 + a1*n + a2*n^2 + ... + ak*n^k

* For sufficiently large n, the term ak*n^k will dominate the other terms.
* Therefore, p(n) is O(n^k).
* Solving the recurrence relation T(n) = 7T(n/2) + 18n^2 using the Master Theorem:
* The Master Theorem states that for a recurrence of the form T(n) = aT(n/b) + f(n), where a
>= 1 and b > 1, the solution depends on the relationship between f(n) and n^log_b(a).
* In this case, a = 7, b = 2, and f(n) = 18n^2.
* n^log_b(a) = n^log_2(7) ≈ n^2.81
* Since f(n) = 18n^2 is asymptotically smaller than n^log_b(a), the Master Theorem case 1
applies.
* Therefore, the solution to the recurrence relation is T(n) = Θ(n^log_b(a)) = Θ(n^2.81).
Question 3: 0/1 Knapsack Problem and Fractional Knapsack
* 0/1 Knapsack Problem: Given a set of items, each with a weight and a value, and a
knapsack with a maximum weight capacity, the goal is to determine the combination of items
that can be placed into the knapsack without exceeding the weight capacity, while
maximizing the total value.
* Dynamic Programming Solution:
* Similar to the approach discussed in Question 1 for the LCS problem, we can use a 2D
table to store the maximum value that can be obtained by considering the first i items and
using a knapsack of capacity j.
* Fractional Knapsack: In the fractional knapsack problem, we can take fractions of items.
The optimal solution is obtained by greedily selecting items with the highest value-to-weight
ratio until the knapsack is full.
Question 4: Time Complexity of Quick Sort and Divide-and-Conquer
* Time Complexity of Quick Sort:
* Best Case: Θ(n log n) (when the pivot is chosen such that it divides the array into two
equal halves)
* Average Case: Θ(n log n)
* Worst Case: Θ(n^2) (when the pivot is always the smallest or largest element)
* Recurrence Relation: T(n) = T(k) + T(n-k-1) + Θ(n), where k is the position of the pivot.
* Quadratic Complexity: Quick Sort achieves quadratic complexity in the worst case when
the pivot is always the smallest or largest element, leading to unbalanced partitions.
* Divide-and-Conquer Technique:
* Divide-and-Conquer is a general algorithmic paradigm that breaks down a problem into
smaller subproblems, solves them recursively, and then combines the solutions to obtain the
solution to the original problem.
* Generalized Equation:
T(n) = aT(n/b) + f(n)

where:
* a is the number of subproblems.
* n/b is the size of each subproblem.
* f(n) is the time taken to divide the problem into subproblems and combine their solutions.
* Difference between Divide-and-Conquer and Greedy Techniques:
* Divide-and-Conquer breaks down a problem into smaller subproblems and solves them
recursively.
* Greedy techniques make locally optimal choices at each step in the hope of finding a
global optimum.
Question 5: Dijkstra's Algorithm and Shortest Paths
* Single Source Shortest Path Problem: Given a weighted graph and a source vertex, the goal
is to find the shortest paths from the source vertex to all other vertices in the graph.
* Dijkstra's Algorithm: Dijkstra's algorithm is a greedy algorithm that maintains a set of
visited vertices and a priority queue of unvisited vertices. In each iteration, it selects the
unvisited vertex with the shortest distance from the source vertex and adds it to the visited
set. It then updates the distances to the unvisited neighbors of the newly visited vertex.
* Optimal Substructure Property: Dijkstra's algorithm exhibits optimal substructure because
the shortest path to any vertex v passes through the shortest paths to its immediate
predecessors.
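A minimal sketch with a binary heap as the priority queue (the adjacency-list format is an assumption; stale heap entries are skipped rather than decreased):
import heapq

def dijkstra(graph, source):
    """graph: dict vertex -> list of (neighbor, weight). Returns shortest distances."""
    dist = {source: 0}
    pq = [(0, source)]                       # (distance estimate, vertex)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue                         # outdated entry
        visited.add(u)
        for v, w in graph.get(u, []):
            if v not in visited and d + w < dist.get(v, float('inf')):
                dist[v] = d + w              # relax edge (u, v)
                heapq.heappush(pq, (dist[v], v))
    return dist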
Question 6: Minimum Spanning Tree and Prim's Algorithm
* Minimum Spanning Tree (MST): An MST is a spanning tree with the minimum total weight
of its edges.
* Prim's Algorithm: Prim's algorithm is a greedy algorithm that starts with an arbitrary vertex
and repeatedly adds the minimum-weight edge that connects the tree to a vertex not yet in
the tree.
* Greedy Technique: Yes, Prim's Algorithm is a greedy technique because it makes locally
optimal choices at each step by selecting the minimum-weight edge available.
* Greedy Move: The greedy move in Prim's Algorithm is to select the minimum-weight edge
that connects the current tree to a vertex not yet in the tree.
* Difference between Dynamic Programming and Divide-and-Conquer:
* Dynamic Programming solves overlapping subproblems by storing their solutions and
reusing them.
* Divide-and-Conquer breaks down a problem into smaller subproblems, solves them
independently, and combines the solutions.
2. Describe the characteristics of an algorithm.
Characteristics of an Algorithm:
* Finiteness: An algorithm must terminate after a finite number of steps.
* Input: An algorithm has zero or more inputs.
* Output: An algorithm produces at least one output.
* Definiteness: Each step of an algorithm must be precisely defined.
* Effectiveness: Each step of an algorithm must be feasible.
3. State the recurrence relations for the binary search and quick sort algorithm.
Binary Search:
T(n) = T(n/2) + c if n > 1
T(n) = d if n = 1

where 'c' and 'd' are constants.


Quick Sort:
T(n) = T(p) + T(n-p-1) + cn if n > 1
T(n) = d if n = 1

where 'p' is the position of the pivot element, and 'c' and 'd' are constants.
4. A complete binary tree with 'n' non-leaf nodes contains ______ nodes.
Solution:
In a full binary tree (one where every non-leaf node has exactly two children), the number of leaf nodes is one more than the number of non-leaf nodes.
Therefore, total nodes = non-leaf nodes + leaf nodes = n + (n + 1) = 2n + 1
5. Which among the insertion sort, quick sort, merge sort and heap sort perform in least time
in the worst case?
Solution:
* Insertion Sort: Worst-case time complexity is O(n^2)
* Quick Sort: Worst-case time complexity is O(n^2)
* Merge Sort: Worst-case time complexity is O(n log n)
* Heap Sort: Worst-case time complexity is O(n log n)
Therefore, merge sort and heap sort perform in the least time in the worst case.
6. The way a card game player arranges his cards as he picks them up one by one is an example of ______ sort.
Solution:
This is an example of insertion sort.
7. What are the best and worst space complexity of the quick sort algorithm?
Solution:
* Best-case space complexity: O(log n) (when the pivot is always the middle element)
* Worst-case space complexity: O(n) (when the pivot is always the first or last element)
Group-B.
1. Define big-O, big-theta notation. Write the graphical interpretation of it.
Big-O Notation:
* Definition: A function f(n) is said to be O(g(n)) if there exist positive constants c and n₀
such that f(n) ≤ c * g(n) for all n ≥ n₀.
* Interpretation: Big-Oh provides an upper bound on the growth rate of a function. It
describes the worst-case scenario for an algorithm's time complexity.
Big-Theta Notation:
* Definition: A function f(n) is said to be Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that c₁ * g(n) ≤ f(n) ≤ c₂ * g(n) for all n ≥ n₀.
* Interpretation: Big-Theta provides a tight bound: f(n) grows at the same rate as g(n), up to constant factors.
Graphical Interpretation:
* Big-Oh: The function f(n) is eventually bounded above by a constant multiple of g(n).
* Big-Theta: The function f(n) is eventually bounded both above and below by constant multiples of g(n).
2. Suppose you have two sorted arrays of size n. How you get a single sorted array from this
two using merging technique?
Merging Technique:
* Create a new array C to store the merged elements.
* Initialize two pointers, i and j, to the beginning of arrays A and B, respectively.
* Compare A[i] and B[j]:
* If A[i] <= B[j], add A[i] to C and increment i.
* Otherwise, add B[j] to C and increment j.
* Repeat step 3 until one of the arrays is exhausted.
* Add the remaining elements from the other array to C.
3. Describe how to solve Knapsack problem using greedy algorithm
Knapsack Problem:
The Knapsack problem involves selecting items from a set to maximize their total value while
keeping their total weight within a given capacity.
Greedy Approach (Fractional Knapsack):
* Calculate value-to-weight ratio: For each item, calculate its value per unit weight (value/
weight).
* Sort items: Sort the items in descending order of their value-to-weight ratio.
* Fill the knapsack: Take items in the sorted order until the knapsack is full. If the last item
cannot be taken completely, take a fraction of it to fill the remaining capacity.
Note: The greedy approach works only for the fractional knapsack problem. For the 0/1
knapsack problem, where you can only take an item completely or not at all, the greedy
approach does not always give the optimal solution.
4. Describe Breadth First Search and Depth First search of a graph with example.
Breadth First Search (BFS):
* Algorithm:
* Start at a given source vertex.
* Explore all of its neighbors before moving to the next level of neighbors.
* This is typically implemented using a queue.
Example:
Consider the following graph:
      A
     / \
    B   C
   / \   \
  D   E   F

Starting at vertex A, BFS would visit the vertices in the following order: A, B, C, D, E, F.
Depth First Search (DFS):
* Algorithm:
* Start at a given source vertex.
* Explore as far as possible along each branch before backtracking.
* This is typically implemented using a stack.
Example:
Using the same graph, DFS starting at vertex A could visit the vertices in the following order:
A, B, D, E, C, F (or any other order depending on the implementation).
5. Define spanning tree of graph. Describe Kruskal algorithm for obtaining minimum spanning
tree from a graph.
Spanning Tree:
A spanning tree of a graph is a subgraph that includes all the vertices of the original graph
and forms a tree (i.e., it is connected and has no cycles).
Kruskal's Algorithm:
* Sort edges: Sort the edges of the graph in ascending order of their weights.
* Create a forest: Initially, each vertex is in its own separate tree.
* Iterate through edges:
* For each edge (u, v):
* If u and v belong to different trees:
* Add the edge (u, v) to the MST.
* Merge the trees containing u and v into a single tree.
* Repeat step 3: Until all vertices are connected.
6. What is independent set of a graph? How to find an independent set of tree using greedy
technique?
Independent Set:
An independent set of a graph is a subset of vertices such that no two vertices in the subset
are adjacent (connected by an edge).
Greedy Technique for Trees (see the sketch below):
* Start with an empty set.
* Select a leaf: Pick any leaf of the tree and add it to the independent set.
* Remove neighbors: Remove the selected leaf and its neighbor from the tree.
* Repeat: Repeat steps 2 and 3 until no vertices remain.
Choosing a leaf is always safe because some maximum independent set contains it, so this greedy yields a maximum independent set on trees; on general graphs, greedy selection does not always find the maximum independent set.
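A sketch of this leaf-first greedy (the adjacency-dict format is an assumption):
def independent_set_tree(adj):
    """adj: dict vertex -> set of neighbours, representing a tree.
    Returns an independent set built by repeatedly taking a leaf."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    chosen = []
    while adj:
        u = min(adj, key=lambda v: len(adj[v]))   # a leaf (or isolated vertex)
        chosen.append(u)
        for x in {u} | adj[u]:                    # remove u and its neighbour(s)
            for y in adj.pop(x, set()):           # detach x from the tree
                if y in adj:
                    adj[y].discard(x)
    return chosen

print(independent_set_tree({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}))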
Section B
1. State the Big-Oh, Big-Omega notations. Explain their graphical interpretations.
Big-Oh (O):
* Definition: A function f(n) is said to be O(g(n)) if there exist positive constants c and n₀
such that f(n) ≤ c * g(n) for all n ≥ n₀.
* Interpretation: Big-Oh provides an upper bound on the growth rate of a function. It
describes the worst-case scenario for an algorithm's time complexity.
Big-Omega (Ω):
* Definition: A function f(n) is said to be Ω(g(n)) if there exist positive constants c and n₀
such that f(n) ≥ c * g(n) for all n ≥ n₀.
* Interpretation: Big-Omega provides a lower bound on the growth rate of a function. It
describes the best-case scenario for an algorithm's time complexity.
Graphical Interpretation:
* Big-Oh: The function f(n) is eventually bounded above by a constant multiple of g(n).
* Big-Omega: The function f(n) is eventually bounded below by a constant multiple of g(n).
2. Find the longest common subsequences between X = {AGACGCG} and Y = {GAGCC} using
dynamic programming paradigm.
Dynamic Programming Approach:
We can use a 2D table L[i][j] to store the length of the LCS of substrings X[0..i-1] and
Y[0..j-1].
Here's the pseudocode:
for i = 0 to m:
    for j = 0 to n:
        if i == 0 or j == 0:
            L[i][j] = 0
        elif X[i-1] == Y[j-1]:
            L[i][j] = L[i-1][j-1] + 1
        else:
            L[i][j] = max(L[i-1][j], L[i][j-1])

For the given strings:


* X = "AGACGCG"
* Y = "GAGCC"
The table L will look like (rows indexed by X = AGACGCG, columns by Y = GAGCC):
        G  A  G  C  C
     0  0  0  0  0  0
A    0  0  1  1  1  1
G    0  1  1  2  2  2
A    0  1  2  2  2  2
C    0  1  2  2  3  3
G    0  1  2  3  3  3
C    0  1  2  3  4  4
G    0  1  2  3  4  4

Therefore, the length of the LCS is 4 (one such LCS is "GAGC"; "AGCC" and "GACC" are others).


3. Explain Prim's algorithm implemented through min-heap.
Prim's algorithm is a greedy algorithm for finding the minimum spanning tree (MST) of a
weighted graph. Here's the implementation using a min-heap:
* Initialization:
* Create a min-heap to store vertices.
* Insert the starting vertex into the heap with a key of 0.
* Set the key of all other vertices to infinity.
* Iteration:
* While the heap is not empty:
* Extract the vertex with the minimum key from the heap.
* For each neighbor of the extracted vertex:
* If the neighbor is in the heap and its key is greater than the weight of the edge
connecting it to the extracted vertex:
* Update the neighbor's key in the heap.
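A sketch of this procedure using Python's heapq; since heapq has no decrease-key operation, the code pushes a new entry and skips stale ones, which has the same effect (the adjacency-list format is an assumption):
import heapq

def prim_mst(graph, start):
    """graph: dict vertex -> list of (neighbor, weight). Returns total MST weight."""
    visited = set()
    pq = [(0, start)]                 # starting vertex enters with key 0
    total = 0
    while pq:
        key, u = heapq.heappop(pq)    # extract vertex with minimum key
        if u in visited:
            continue                  # stale entry: a smaller key was already used
        visited.add(u)
        total += key
        for v, w in graph[u]:
            if v not in visited:      # lazy "decrease-key": just push the new key
                heapq.heappush(pq, (w, v))
    return total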
4. You have an array of size 15 in which 12 numbers are arranged into a max-heap. If you
want to add a new number into the heap, how to do that? Write the pseudo-code for the
above problem.
Pseudocode:
1. Add the new number to the end of the array.
2. Compare the new number with its parent.
3. If the new number is greater than its parent, swap them.
4. Repeat steps 2 and 3 until the new number is less than or equal to its parent.
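The same steps in Python (0-indexed array, parent of index i at (i - 1) // 2):
def max_heap_insert(arr, key):
    """Append key and sift it up until the max-heap property holds."""
    arr.append(key)                       # step 1: add at the end
    i = len(arr) - 1
    while i > 0:
        parent = (i - 1) // 2
        if arr[i] > arr[parent]:          # steps 2-3: swap with a smaller parent
            arr[i], arr[parent] = arr[parent], arr[i]
            i = parent
        else:
            break                         # step 4: heap property restored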

5. From the recurrence relation, obtain the time complexity of the merge sort algorithm
through the back substitution method.
Recurrence Relation:
T(n) = 2T(n/2) + cn

Back Substitution:
T(n) = 2T(n/2) + cn
= 2(2T(n/4) + c(n/2)) + cn
= 4T(n/4) + cn + cn
= 4(2T(n/8) + c(n/4)) + cn + cn
= 8T(n/8) + cn + cn + cn
= ...
= 2^kT(n/2^k) + kcn
When n/2^k = 1, we get k = log n.
T(n) = 2^log n * T(1) + log n * cn
= n * c' + cn log n
= O(n log n)

Therefore, the time complexity of merge sort is O(n log n).


6. Explain the algorithm for matrix chain multiplication problem using dynamic programming
approach.
Dynamic Programming Approach:
* Create a table: Create a 2D table M[i][j] to store the minimum number of scalar
multiplications needed to compute the product of matrices Ai, Ai+1, ..., Aj.
* Base case: Set M[i][i] = 0 for all i.
* Iterate: For each chain length l = 2 to n:
* For each i = 1 to n-l+1:
* j = i + l - 1
* Set M[i][j] = infinity
* For each k = i to j-1:
* Calculate cost = M[i][k] + M[k+1][j] + p[i-1] * p[k] * p[j]
* If cost < M[i][j]:
* Set M[i][j] = cost
7. Solve the recurrence relation using the recursion tree method: T(n) = 2T(n/2) + n².
Recursion Tree:
The root does n^2 work; each of its two children solves a subproblem of size n/2 and does (n/2)^2 work, and so on:
                n^2
              /     \
        (n/2)^2     (n/2)^2
        /     \     /     \
  (n/4)^2 (n/4)^2 (n/4)^2 (n/4)^2
     ...                    ...

At the k-th level, there are 2^k subproblems, each of size n/2^k, so the total work done at level k is 2^k * (n/2^k)^2 = n^2 / 2^k.
The tree has log n levels, and the per-level work forms a geometric series:
T(n) = n^2 * (1 + 1/2 + 1/4 + ...) <= 2n^2
Hence, the solution to the recurrence relation is T(n) = Θ(n^2).
