
1. Complexity of DFS and BFS Graph Traversal

• Depth-First Search (DFS): Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
  Reason: Each vertex and edge is visited exactly once in the traversal.
• Breadth-First Search (BFS): Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
  Reason: BFS explores each vertex and all its edges systematically.
2. Four Algorithm Design Techniques
a. Divide and Conquer:
• Approach: Divide the problem into smaller subproblems, solve them independently, and combine their results.
• Example: Merge Sort, Quick Sort.
• Steps:
  1. Divide: Break the problem into subproblems.
  2. Conquer: Solve subproblems recursively.
  3. Combine: Merge solutions of subproblems.
b. Dynamic Programming:
• Approach: Break the problem into overlapping subproblems, solve each once, and store their results to avoid redundant computations.
• Example: Fibonacci sequence, Knapsack problem.
• Steps:
  1. Identify overlapping subproblems.
  2. Store intermediate results in a table.
  3. Build the solution iteratively.
c. Greedy Algorithms:
• Approach: Make the locally optimal choice at each step to find the global optimum.
• Example: Prim’s algorithm, Huffman coding.
• Steps:
  1. Select the best option available.
  2. Repeat until the solution is complete.
d. Backtracking:
• Approach: Solve problems incrementally, abandoning a path as soon as it is determined that it cannot yield a feasible solution.
• Example: N-Queens problem, Sudoku solver.
• Steps:
  1. Choose an option.
  2. Explore further.
  3. Backtrack if a solution is not feasible.

3. Comparison Between Divide and Conquer and Dynamic Programming
• Problem Breaking:
  o Divide and Conquer: Breaks a problem into smaller subproblems that are solved independently.
  o Dynamic Programming: Breaks a problem into overlapping subproblems, where subproblems share solutions.
• Overlapping Subproblems:
  o Divide and Conquer: Subproblems are independent and do not overlap.
  o Dynamic Programming: Subproblems often overlap; the same subproblem arises multiple times.
• Memoization:
  o Divide and Conquer: Does not use memoization; solves subproblems from scratch.
  o Dynamic Programming: Uses memoization (top-down) or tabulation (bottom-up) to store and reuse subproblem solutions.
• Optimal Substructure:
  o Divide and Conquer: Does not require optimal substructure in general.
  o Dynamic Programming: Used for problems that exhibit optimal substructure, meaning the solution to a problem can be constructed from the solutions to its subproblems.
• Time Complexity:
  o Divide and Conquer: Efficient for many problems (e.g., Merge Sort and Quick Sort are O(n log n)), but can become exponential when subproblems overlap, as in a naive solution to TSP.
  o Dynamic Programming: Often more efficient than Divide and Conquer on such problems, because storing subproblem results removes redundant work.
• Recursion Depth:
  o Divide and Conquer: Typically involves deep recursion but does not store intermediate results.
  o Dynamic Programming: Also uses recursion but minimizes recomputation by storing results.
• Examples:
  o Divide and Conquer: Merge Sort, Quick Sort, Binary Search.
  o Dynamic Programming: Fibonacci, Knapsack Problem, Longest Common Subsequence.

4. Algorithm for Minimum Spanning Tree
Kruskal’s Algorithm:
1. Sort all edges in non-decreasing order of their weights.
2. Initialize an empty set for the MST.
3. Iterate through the sorted edges:
   - Add the edge to the MST if it doesn’t form a cycle.
   - Use a union-find data structure to check for cycles.
4. Repeat until the MST contains (V − 1) edges, where V is the number of vertices.
Complexity: O(E log E), where E is the number of edges.
Prim’s Algorithm:
1. Start from any vertex and add it to the MST.
2. Pick the smallest edge connecting the MST to an unvisited vertex.
3. Repeat until all vertices are included in the MST.
Complexity: O(E + V log V) with a priority queue.

5. Number of Spanning Trees in a Complete Graph
The number of spanning trees of a complete graph with n vertices is n^(n−2).

6. All Pairs Shortest Path Using Dijkstra’s Algorithm?
No, Dijkstra’s algorithm is not an efficient way to compute all pairs of shortest paths.
• Reason: Dijkstra’s algorithm is designed for single-source shortest paths. To compute all pairs, it would need to be run n times for a graph with n vertices, leading to higher complexity.
• Alternative: Use the Floyd-Warshall or Johnson’s algorithm for better efficiency.

8. Technique to Solve Optimization Problems
• Dynamic Programming for problems with overlapping subproblems and optimal substructure.
• Greedy Algorithms for problems where local choices lead to global optimality.
• Linear Programming for problems representable as linear equations and inequalities.

7. Strassen’s Matrix Multiplication

Approach:
• Divide matrices into four submatrices of equal size.
• Use 7 multiplications (instead of 8) and combine the results.
Algorithm:
1. Divide matrices A and B into submatrices.
2. Compute 7 intermediate matrices.
3. Combine results to get the final product.
Complexity:
• Standard matrix multiplication: O(n^3).
• Strassen’s algorithm: O(n^2.81).
9. Job Scheduling to Maximize Profit
Approach: Greedy Algorithm
Steps:
1. Sort jobs by decreasing order of profit.
2. Iterate over the jobs and assign each to the latest available slot before its deadline.
3. Use a union-find structure or an array to track available slots.
Example Execution:
Jobs: [(c, 1, 40), (d, 1, 30), (a, 4, 20), (b, 1, 10)]
Selected Jobs: [c, a]
Total Profit: 40 + 20 = 60
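A small sketch of this greedy scheme, assuming jobs are given as (name, deadline, profit) tuples and using a plain slot array instead of the union-find optimisation; it reproduces the example above.

def job_scheduling(jobs):
    # jobs: list of (name, deadline, profit); consider jobs in decreasing profit order
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(d for _, d, _ in jobs)
    slot = [None] * (max_deadline + 1)       # slot[t] holds the job done at time t
    total = 0
    for name, deadline, profit in jobs:
        # place the job in the latest free slot on or before its deadline
        for t in range(deadline, 0, -1):
            if slot[t] is None:
                slot[t] = name
                total += profit
                break
    return [j for j in slot if j is not None], total

jobs = [('c', 1, 40), ('d', 1, 30), ('a', 4, 20), ('b', 1, 10)]
print(job_scheduling(jobs))   # (['c', 'a'], 60)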
10. Dijkstra’s Single Source Shortest Path Algorithm
Algorithm Steps:
1. Initialization:
   o Set the distance to the source node as 0 and to all other nodes as ∞.
   o Mark all nodes as unvisited.
2. Relaxation:
   o Pick the unvisited node with the smallest distance (call it u).
   o For each neighbor v of u:
     ▪ Update the distance if a shorter path is found: dist(v) = min(dist(v), dist(u) + weight(u, v)).
3. Mark Node as Visited:
   o Once u is processed, mark it as visited.
   o Repeat until all nodes are visited or the shortest distances are determined.
4. Output:
   o The shortest distance from the source to all nodes.
Complexity Analysis:
• Time Complexity:
   o Using a simple array for the priority queue: O(V^2).
   o Using a binary heap (priority queue): O((V + E) log V), where V is the number of vertices and E is the number of edges.
• Space Complexity:
   o O(V + E), to store the graph representation and the priority queue.
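A compact sketch of the binary-heap variant using Python's heapq. The dictionary graph format and the edge weights are assumptions (the weights are chosen to reproduce the distances worked out in question 13, since the original graph figure is not included).

import heapq

def dijkstra(graph, source):
    # graph: dict {u: {v: weight, ...}}; returns shortest distances from source
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]                     # (distance, vertex) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                      # stale heap entry, skip it
            continue
        for v, w in graph[u].items():        # relax every outgoing edge of u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {'A': {'B': 7, 'C': 12}, 'B': {'C': 2, 'D': 9},
     'C': {'E': 10}, 'D': {'F': 1}, 'E': {}, 'F': {}}
print(dijkstra(g, 'A'))   # {'A': 0, 'B': 7, 'C': 9, 'D': 16, 'E': 19, 'F': 17}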

11. Principle of Optimality
"An optimal solution to a problem contains optimal solutions to its subproblems."
This principle is the foundation of dynamic programming. It implies that we can solve problems recursively by dividing them into overlapping subproblems and solving each only once, storing the result for reuse.
12. Properties of a Heap Tree
(i) Structural Property: A heap must be a complete binary tree, meaning all levels are fully filled except possibly the last level, which is filled from left to right.
(ii) Ordered Property: In a max-heap, the value of each node is greater than or equal to the values of its children. In a min-heap, the value of each node is less than or equal to the values of its children.
13. Dijkstra’s Algorithm for the Given Graph
Steps for Dijkstra’s Algorithm:
1. Initialization:
   o dist(A) = 0, all others = ∞.
   o Unvisited nodes: {A, B, C, D, E, F}.
2. Iterations:
   o Start at A (dist(A) = 0): Update dist(B) = 7, dist(C) = 12. Visited: {A}.
   o Move to B (dist(B) = 7): Update dist(C) = 9, dist(D) = 16. Visited: {A, B}.
   o Move to C (dist(C) = 9): Update dist(E) = 19. Visited: {A, B, C}.
   o Move to D (dist(D) = 16): Update dist(F) = 17. Visited: {A, B, C, D}.
   o Visit remaining nodes F and E (no updates).
3. Final Distances:
   o dist(A) = 0, dist(B) = 7, dist(C) = 9, dist(D) = 16, dist(E) = 19, dist(F) = 17.

14. Properties of a Good Hash Function


1. Uniform Distribution: The hash function should distribute keys uniformly across the hash table to minimize collisions.
2. Efficiency: The hash function should be computationally efficient to calculate.
15. Characteristics of Problems Solved Using Dynamic Programming
1. Optimal Substructure: The problem can be broken into smaller subproblems, and the optimal solution to the entire problem can be built from optimal subproblem solutions.
2. Overlapping Subproblems: Subproblems are solved multiple times, so results are stored to avoid redundant calculations (memoization).
3. Optimal Solution: DP guarantees finding the best solution by combining solutions to subproblems.
4. Recursive Structure: Problems have a recursive relation for solving smaller subproblems.
16. Matrix Multiplication Using Divide and Conquer
Steps:
1. Divide: Split matrices into smaller sub-matrices.
2. Conquer: Perform recursive multiplications on the sub-matrices.
3. Combine: Merge results to form the final matrix.
Time Complexity:
The time complexity of matrix multiplication using divide and conquer is O(n^3).

17. Four Applications of Min/Max Heaps

1. Priority Queue: Min-heaps or max-heaps are used to implement priority queues, where elements are extracted based on priority.
2. Heap Sort: A sorting algorithm that uses a heap to sort elements with O(n log n) time complexity.
3. K-th Largest/Smallest Element: Heaps are used to find the k-th largest or smallest element in an array.
4. Dynamic Median Finding: Two heaps (a min-heap and a max-heap) can be used to find the median in a stream of numbers efficiently.
18. Complexity of the Function
int fun(int n) {
    int j = 0, total = 0;
    for (int i = 1; j <= n; i++) {
        ++total;
        j += 2 * i;
    }
    return total;
}
• After i iterations, j = 2(1 + 2 + 3 + ⋯ + i) = i(i + 1).
• The loop runs while i(i + 1) ≤ n, i.e., until i ≈ √n.
Complexity: O(√n).

19. Travelling Salesman Problem (TSP) using Dynamic Programming (DP)

The Travelling Salesman Problem (TSP) is a classic optimization problem in which a salesman must visit each city exactly once and return to the origin city while minimizing the total distance travelled.
To solve this problem using Dynamic Programming, we use the Held-Karp algorithm, which improves on the brute-force solution by exploiting overlapping subproblems.
DP Approach:
1. State Representation:
   o Let dp[S][i] represent the minimum cost to visit all cities in the set S (a subset of cities), where i is the last city visited in this subset S.
   o S is a bitmask that represents the subset of cities visited so far.
2. Recurrence Relation:
   o For each subset of cities S, we compute the minimum cost of visiting the cities using: dp[S][i] = min over j of (dp[S \ {i}][j] + dist(j, i)), where dist(j, i) is the distance between cities j and i, and j is the city visited immediately before i in the subset S.
3. Base Case:
   o dp[{0}][0] = 0, meaning the salesman starts from the origin city.
4. Final Solution:
   o The final answer is the minimum of dp[AllCities][i] + dist(i, 0) over all cities i, where AllCities is the bitmask representing all cities visited.
Complexity Analysis:
1. Time Complexity:
   o There are 2^n possible subsets of cities, and for each subset we consider n possible last cities; each requires checking up to n predecessor cities, giving O(n^2 · 2^n).
   o Thus, the time complexity of the DP solution is O(n^2 · 2^n).
2. Space Complexity:
   o The DP table stores a cost for each (subset, city) pair, so the space complexity is O(n · 2^n).
Conclusion:
The DP approach to solving the TSP is much more efficient than brute force but still has exponential time complexity, making it feasible only for small numbers of cities (around 20-25).
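A sketch of the Held-Karp bitmask DP described above. The 4-city distance matrix is a standard illustrative instance, not taken from the notes.

def tsp_held_karp(dist):
    # dist: n x n distance matrix; city 0 is the start/end city
    n = len(dist)
    INF = float('inf')
    # dp[S][i] = min cost to start at 0, visit every city in bitmask S, and end at i
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                              # base case: only city 0 visited
    for S in range(1 << n):
        if not (S & 1):                       # every subset must contain the start city
            continue
        for i in range(n):
            if dp[S][i] == INF:
                continue
            for j in range(n):                # extend the tour to an unvisited city j
                if S & (1 << j):
                    continue
                nS = S | (1 << j)
                cost = dp[S][i] + dist[i][j]
                if cost < dp[nS][j]:
                    dp[nS][j] = cost
    full = (1 << n) - 1
    return min(dp[full][i] + dist[i][0] for i in range(1, n))

d = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
print(tsp_held_karp(d))   # 80 for this classic 4-city instance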

20. Minimum Spanning Tree (MST)

Minimum Spanning Tree (MST) Definition
A Minimum Spanning Tree (MST) of a connected, weighted graph is a subgraph that includes all the vertices of the original graph, is a tree (i.e., has no cycles), and has the minimum possible total edge weight compared to all other spanning trees.
Key Characteristics of MST:
1. Spanning Tree: It includes all the vertices of the graph.
2. No Cycles: It is a tree, so it has no cycles.
3. Minimum Total Weight: The sum of the edge weights in the MST is as small as possible.
Applications:
• Network design (e.g., computer networks, electrical grids)
• Cluster analysis
• Approximation algorithms for NP-hard problems
Example:
For a graph with vertices V = {A, B, C, D} and weighted edges, the MST is the subset of edges connecting all vertices with the least total weight and no cycles.
Popular Algorithms for MST:
1. Kruskal's Algorithm – a greedy algorithm that sorts the edges and repeatedly adds the smallest edge that does not form a cycle.
2. Prim's Algorithm – a greedy algorithm that grows the MST from an arbitrary vertex by adding the smallest edge connecting a vertex in the MST to a vertex outside it.
21. Naive String Matching
For a pattern P of length m and text T of length n:
1. Compare P with every substring of T of length m.
2. Complexity: O(m × n).
Example: Match "abc" in "ababc".

28. Minimum Spanning Tree (MST)
An MST is a spanning tree with the minimum total edge weight.
Difference between MST and Spanning Tree:
1. Spanning Tree: Any subgraph that is a tree and spans all vertices.
2. MST: A spanning tree with the minimum total edge weight.
22. Master’s Theorem
Master’s theorem solves divide-and-conquer recurrences of the form T(n) = aT(n/b) + O(n^d), where:
• a: number of subproblems
• b: division factor
• d: exponent of the cost outside the recursion
a) T(n) = 2T(n/2) + n log n
Here a = 2, b = 2, so n^(log_b a) = n; the driving function f(n) = n log n exceeds this by a logarithmic factor.
Solution: T(n) = O(n log^2 n).
b) T(n) = 2nT(n/2) + n^n
Here the Master theorem does not apply directly (the number of subproblems is not a constant); T(n) is dominated by the n^n term.
Solution: T(n) = O(n^n).
23. Define Spanning Tree
A spanning tree of a graph is a subgraph that:
1. Includes all vertices of the original graph.
2. Is a tree (i.e., connected and acyclic).
For a graph with V vertices, a spanning tree has exactly V − 1 edges.
Example:
A complete graph K4 (4 vertices, 6 edges) has multiple spanning trees, each with 3 edges.

24. Depth-First Search (DFS)
Depth-First Search (DFS) - Summary:
DFS explores as far as possible along each path before backtracking. It uses a stack or recursion and is useful for tasks like finding paths or cycles.
Steps for DFS Traversal:
1. Start at the source node.
2. Visit a node, mark it as visited, and explore its neighbors recursively.
3. Backtrack when no unvisited neighbors remain.
DFS Traversal for the Graph:
Starting from node 0:
Traversal Order: 0 → 1 → 3 → 4 → 5 → 2.
25. Merge Sort Algorithm
Merge Sort is a Divide and Conquer algorithm.
Steps:
1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the sorted halves.
Pseudocode:
void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int mid = l + (r - l) / 2;
        mergeSort(arr, l, mid);
        mergeSort(arr, mid + 1, r);
        merge(arr, l, mid, r);
    }
}
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2];
    for (int i = 0; i < n1; i++) L[i] = arr[l + i];
    for (int i = 0; i < n2; i++) R[i] = arr[m + 1 + i];
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) arr[k++] = L[i++];
        else arr[k++] = R[j++];
    }
    while (i < n1) arr[k++] = L[i++];
    while (j < n2) arr[k++] = R[j++];
}
Complexity:
• Time Complexity: O(n log n) (a logarithmic number of recursion levels with linear merging at each level).
• Space Complexity: O(n) (auxiliary arrays for merging).
26. When to Use Backtracking?
Backtracking is used when:
1. You need to explore all possible solutions and choose the optimal one.
2. Problems involve constraints, and solutions are built incrementally.
Examples:
• N-Queens Problem: Placing queens on a chessboard such that no two queens threaten each other.
• Subset Sum Problem: Find subsets of a set that sum to a specific value.
27. Prim’s Algorithm
Prim's algorithm finds the Minimum Spanning Tree (MST) by starting from any vertex and expanding the tree by adding the smallest-weight edge that connects the tree to an unvisited vertex.
Algorithm:
1. Initialize the MST set with one starting vertex.
2. While the MST does not include all vertices:
   o Select the smallest-weight edge connecting a vertex in the MST to a vertex outside it.
   o Add the edge to the MST.
Complexity:
• Using an adjacency matrix: O(V^2).
• Using a priority queue: O(E log V), where E is the number of edges and V is the number of vertices.
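A heap-based sketch of Prim's algorithm, assuming the graph is stored as an adjacency list of (weight, neighbour) pairs; the tiny example graph is illustrative.

import heapq

def prim_mst(graph, start):
    # graph: dict {u: [(weight, v), ...]} for an undirected weighted graph
    visited = {start}
    edges = list(graph[start])
    heapq.heapify(edges)                      # min-heap keyed on edge weight
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, v = heapq.heappop(edges)           # cheapest edge leaving the current tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((w, v))
        total += w
        for edge in graph[v]:                 # expose v's edges to unvisited vertices
            if edge[1] not in visited:
                heapq.heappush(edges, edge)
    return mst, total

g = {'A': [(2, 'B'), (3, 'C')], 'B': [(2, 'A'), (1, 'C')], 'C': [(3, 'A'), (1, 'B')]}
print(prim_mst(g, 'A'))    # ([(2, 'B'), (1, 'C')], 3)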
29. Tower of Hanoi
Disks are moved between three rods following these rules:
1. Only one disk can be moved at a time.
2. A larger disk cannot be placed on top of a smaller disk.
Steps for n Disks:
1. Move n − 1 disks to an auxiliary rod.
2. Move the largest disk to the destination rod.
3. Move the n − 1 disks from the auxiliary rod to the destination.
Greedy (iterative) solution:
• Move the smallest disk at every alternate step.
• Alternate moves between the smallest disk and the only other valid move.
Complexity:
• 2^n − 1 moves.
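A minimal recursive sketch of the three steps above; the rod labels are arbitrary, and the move count confirms 2^n − 1.

def hanoi(n, source, auxiliary, destination, moves):
    # move n disks from source to destination, using auxiliary as the spare rod
    if n == 0:
        return
    hanoi(n - 1, source, destination, auxiliary, moves)   # step 1: n-1 disks to spare
    moves.append((source, destination))                   # step 2: move largest disk
    hanoi(n - 1, auxiliary, source, destination, moves)   # step 3: n-1 disks onto it

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves), moves)   # 7 moves == 2**3 - 1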
30. Naive String Matching
Algorithm:
1. Slide the pattern over the text one character at a time.
2. Compare the pattern with the current substring of the text.
3. If a mismatch occurs, slide the pattern by one position and repeat.
Pseudocode:
void naiveSearch(string text, string pattern) {
    int n = text.length(), m = pattern.length();
    for (int i = 0; i <= n - m; i++) {
        int j;
        for (j = 0; j < m; j++) {
            if (text[i + j] != pattern[j])
                break;
        }
        if (j == m) // Match found
            cout << "Pattern found at index " << i << endl;
    }
}
Complexity:
• Time Complexity: O((n − m + 1) · m).
• Comparison with KMP and Rabin-Karp:
   o Naive: O(n · m).
   o KMP: O(n + m).
   o Rabin-Karp: O(n + m) on average.
32. General Formula for Finding the Number of Spanning Trees
For a graph with |V| vertices and |E| edges, the number of spanning trees can be found using Kirchhoff's Matrix Tree Theorem:
1. Construct the Laplacian matrix L of the graph:
   o L[i][i] = degree of vertex i.
   o L[i][j] = −1 if there is an edge between i and j; otherwise 0.
2. Remove any row and the corresponding column from L.
3. The determinant of the resulting matrix gives the number of spanning trees.
34. What is an Algorithm? Explain its Characteristics
An algorithm is a step-by-step procedure or set of rules for solving a problem in a finite amount of time.
Characteristics of an Algorithm:
1. Input: Takes zero or more inputs.
2. Output: Produces at least one output.
3. Finiteness: Must terminate after a finite number of steps.
4. Definiteness: Each step must be clear and unambiguous.
5. Effectiveness: Must be executable using basic operations.
35. Comparisons to Find Min and Max in an Array
For an array of size n:
1. Using the traditional method:
   o n − 1 comparisons for the minimum.
   o n − 1 comparisons for the maximum.
   o Total: 2(n − 1) = 198 comparisons for 100 elements.
2. Using Divide and Conquer:
   o Complexity: 3n/2 − 2 comparisons.
   o For n = 100: 148 comparisons.
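A sketch of the divide-and-conquer approach (see also question 70), which compares halves recursively and uses roughly 3n/2 − 2 comparisons; the sample array is illustrative.

def min_max(arr, lo, hi):
    # returns (minimum, maximum) of arr[lo..hi] using about 3n/2 - 2 comparisons
    if lo == hi:                       # one element: no comparison needed
        return arr[lo], arr[lo]
    if hi == lo + 1:                   # two elements: a single comparison
        return (arr[lo], arr[hi]) if arr[lo] < arr[hi] else (arr[hi], arr[lo])
    mid = (lo + hi) // 2
    lmin, lmax = min_max(arr, lo, mid)
    rmin, rmax = min_max(arr, mid + 1, hi)
    return min(lmin, rmin), max(lmax, rmax)   # two comparisons to combine halves

a = [7, 2, 9, 4, 1, 8]
print(min_max(a, 0, len(a) - 1))   # (1, 9)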
38. Number of Edges in a Spanning Tree
For a graph with n vertices, a spanning tree has exactly n − 1 edges.
36. Big Theta (Θ) Asymptotic Notation
Big Theta (Θ) represents the tight bound of an algorithm. It provides both an upper and a lower bound.
Definition:
• A function f(n) is Θ(g(n)) if c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0,
• where c1, c2, and n0 are positive constants.
Example: For f(n) = 3n^2 + 2n + 1, f(n) ∈ Θ(n^2).
37. Kruskal’s Algorithm and Its Complexity
Algorithm:
1. Sort all edges by weight.
2. Initialize an empty MST.
3. For each edge:
   o If it doesn’t form a cycle, add it to the MST.
4. Repeat until V − 1 edges are added.
Complexity:
• Sorting edges: O(E log E).
• Cycle detection (using Union-Find): O(E α(V)).
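A compact sketch of Kruskal's algorithm with a simple union-find (path compression only); the edge list and vertex labels are illustrative assumptions.

def kruskal(n, edges):
    # n: number of vertices labelled 0..n-1; edges: list of (weight, u, v)
    parent = list(range(n))

    def find(x):                       # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # consider edges in non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding this edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:
                break
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)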
39. KMP String Matching Algorithm
The Knuth-Morris-Pratt (KMP) algorithm efficiently searches for a pattern in a text by preprocessing the pattern to avoid redundant comparisons.
Steps:
1. Compute the Longest Prefix Suffix (LPS) array for the pattern.
2. Traverse the text using the LPS to skip unnecessary comparisons.
• Match the pattern using the LPS to shift efficiently.
Complexity:
• Preprocessing: O(m), where m is the pattern length.
• Searching: O(n), where n is the text length.
Example:
Text: "ABABDABACD"
Pattern: "ABAB"
• Compute LPS: [0, 0, 1, 2]
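A sketch of the LPS preprocessing and the KMP scan, reproducing the example above; the function names are illustrative.

def build_lps(pattern):
    # lps[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix
    lps = [0] * len(pattern)
    length = 0
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length:
            length = lps[length - 1]   # fall back without advancing i
        else:
            lps[i] = 0
            i += 1
    return lps

def kmp_search(text, pattern):
    lps, matches = build_lps(pattern), []
    j = 0                              # number of pattern characters matched so far
    for i, ch in enumerate(text):
        while j and ch != pattern[j]:
            j = lps[j - 1]             # shift the pattern using the LPS table
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):
            matches.append(i - j + 1)
            j = lps[j - 1]
    return matches

print(build_lps("ABAB"))                 # [0, 0, 1, 2]
print(kmp_search("ABABDABACD", "ABAB"))  # [0]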
40. Asymptotic Notations
Asymptotic notations describe the running time or space of an algorithm in terms of the input size n:
1. Big-O (O): Upper bound (worst-case growth).
2. Big-Omega (Ω): Lower bound (best-case growth).
3. Big-Theta (Θ): Tight bound (the function is bounded both above and below).
41. Properties for Solving with Dynamic Programming
1. Optimal Substructure: The solution to the problem can be constructed using solutions to its subproblems.
2. Overlapping Subproblems: Subproblems are solved multiple times, making memoization or tabulation efficient.
52. Matrix Chain Multiplication (Optimal Parenthesization Algorithm)
The Matrix Chain Multiplication problem aims to determine the most efficient way to multiply a sequence of matrices to minimize scalar multiplications.
42. Recursion Tree Method for T(n) = 3T(n/4) + n^2
Steps:
1. Break down the recurrence into levels.
2. Calculate the cost at each level.
3. Sum the cost of all levels.
Solution:
• At level i, the cost is 3^i · (n/4^i)^2 = (3/16)^i · n^2, a decreasing geometric series dominated by the root level.
• Total cost: Θ(n^2).
44. Time Complexity for the nth Fibonacci Number
1. Recursive Method: Time Complexity: O(2^n) (exponential growth).
2. Dynamic Programming: Time Complexity: O(n) (linear growth).
45. Recursion Tree Method for T(n) = 2T(n/10) + n
Steps:
1. Break down into levels: at level i, the cost is 2^i · n/10^i = (1/5)^i · n.
2. Sum the costs across levels: the series decreases geometrically, so the root level dominates.
3. Total complexity: O(n).
46. Minimum Scalar Multiplications (Matrix Chain Multiplication)
We aim to find the most efficient way to multiply a chain of matrices A1, A2, A3, A4 with dimensions 5×4, 4×6, 6×2, 2×7.
• Problem: Matrix multiplication is associative, so the order of computation affects the total number of scalar multiplications.
• Objective: Minimize scalar multiplications.
Steps:
1. Define m[i][j] as the minimum number of scalar multiplications needed to compute the product Ai...Aj.
2. Use the recurrence relation: m[i][j] = min over i ≤ k < j of { m[i][k] + m[k+1][j] + p[i−1] × p[k] × p[j] }, where p[] is the array of matrix dimensions.
3. Base case: m[i][i] = 0, since a single matrix requires no scalar multiplications.
4. Fill the table iteratively for increasing chain lengths.
Solution: For matrices with p = [5, 4, 6, 2, 7], the optimal parenthesization is ((A1(A2A3))A4), requiring 158 scalar multiplications.
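A bottom-up sketch of the recurrence above; it reproduces the 158-multiplication result for p = [5, 4, 6, 2, 7].

def matrix_chain(p):
    # p: dimension array; matrix Ai has dimensions p[i-1] x p[i]
    n = len(p) - 1                       # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):       # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)     # try every split point k
            )
    return m[1][n]

print(matrix_chain([5, 4, 6, 2, 7]))   # 158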
47. Overlapping Substructure
A problem exhibits overlapping substructure when it involves solving the same subproblem multiple times.
Examples:
• Fibonacci numbers: the subproblems F(n−1) and F(n−2) are solved repeatedly in a naive recursive approach.
• Dynamic programming avoids recomputation by storing solutions in a table.
48. Selection Sort Algorithm
Algorithm:
1. Start from the first element and find the smallest element in the remaining array.
2. Swap it with the current element.
3. Repeat for all elements.
Pseudocode:
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex])
                minIndex = j;
        }
        // Swap minimum element with current element
        swap(arr[minIndex], arr[i]);
    }
}
Complexity:
• Best Case: O(n^2) (still iterates through the entire array).
• Worst Case: O(n^2).
• Space Complexity: O(1) (in-place sorting).
49. Iterative Solution to Recurrence
Given: T(n) = 2 if n = 1, and T(n) = 2T(n/2) + 2n + 3 if n > 1.
Steps:
1. Expand T(n) iteratively: T(n) = 2(2T(n/4) + 2(n/2) + 3) + 2n + 3 = 4T(n/4) + 4n + 9, and continue expanding until the base case T(1) = 2 is reached.
2. General form: after k expansions there are log n levels, each contributing roughly 2n plus a small constant term, giving a total complexity of O(n log n).
50. Optimal Substructure
A problem has optimal substructure if its global optimal solution can be constructed from optimal solutions to its subproblems.
Example:
• Shortest Path (Dijkstra’s Algorithm): the shortest path from a source s to a vertex v is composed of the shortest path from s to an intermediate vertex u followed by the path from u to v.
• Matrix Chain Multiplication: the optimal multiplication of A1A2A3A4 depends on the optimal multiplications of A1A2 and A3A4.
51. Parameters for Algorithm Analysis
Two fundamental parameters used to analyze an algorithm:
1. Time Complexity:
   o Measures the amount of time an algorithm takes as a function of the size of its input.
   o Example: Binary Search has O(log n) time complexity for searching in a sorted array.
2. Space Complexity:
   o Measures the amount of memory required by an algorithm, including input, output, and auxiliary storage.
   o Example: Merge Sort requires O(n) additional space for merging.
53. Disadvantage of Greedy Method
The Greedy Method works by making a locally optimal choice at each step.
Disadvantage: It doesn’t guarantee a globally optimal solution in problems where local choices do not lead to the best global outcome.
Example: In the Knapsack Problem, Greedy fails to find the optimal solution for the 0/1 variant but works for the fractional variant.
59. Time Complexity for MST
1. Kruskal’s Algorithm:
   o Sorting edges: O(E log E).
   o Union-Find operations: O(E α(V)).
   o Total: O(E log E).
2. Prim’s Algorithm:
   o Priority queue implementation: O(E log V).
   o Total: O(E log V).
54. Binary Search Algorithm
Binary Search divides the search space in half after each comparison.
Algorithm:
1. Start with the middle element.
2. If the middle element equals the key, return its index.
3. If the key is smaller, search the left half; otherwise, search the right half.
Pseudocode:
int binarySearch(int arr[], int low, int high, int key) {
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == key) return mid;
        else if (arr[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return -1; // Element not found
}
Complexity:
• Best Case: O(1).
• Worst Case: O(log n).
55. Matrix Chain Multiplication Example
For matrices with dimensions 4×10, 10×3, 3×12, 12×20, 20×7 (p = [4, 10, 3, 12, 20, 7]):
Steps:
1. Create the table m[i][j] and compute its entries for increasing chain lengths.
2. Optimal Parenthesization: ((A1A2)((A3A4)A5)).
3. Minimum Multiplications: 1344.
56. Memoization
Memoization is a top-down approach in Dynamic Programming where results of overlapping subproblems are stored to avoid recomputation.
Example: The Fibonacci sequence with memoization stores F(1), F(2), …, F(n) in an array to avoid redundant calculations.
57. Bubble Sort Algorithm and Its Complexity
Algorithm:
BubbleSort(arr[], n):
    for i = 0 to n-1:
        for j = 0 to n-i-2:
            if arr[j] > arr[j+1]:
                swap(arr[j], arr[j+1])
Complexity:
• Best Case (Already Sorted): O(n) (requires a flag to detect that no swaps occurred in a pass).
• Worst Case (Reverse Sorted): O(n^2).
• Average Case: O(n^2).
58. Fibonacci Complexity as O(2^n)
In the naive recursive method: T(n) = T(n−1) + T(n−2) + O(1) ⟹ O(2^n).
Fib(n):
    if n <= 1:
        return n
    return Fib(n-1) + Fib(n-2)
Each call generates two recursive calls, creating exponential growth in the number of function calls; for input n, the number of calls is on the order of 2^n.

72. Backtracking Concept
Backtracking is a systematic way of exploring solutions by making choices and undoing them when constraints are violated. It is used for constraint-satisfaction problems like N-Queens, Sudoku, and Subset Sum.

60. Merge Two Sorted Arrays


Algorithm:
MergeArrays(arr1[], arr2[], n1, n2):
    i, j, k = 0, 0, 0
    result[]
    while i < n1 and j < n2:
        if arr1[i] <= arr2[j]:
            result[k++] = arr1[i++]
        else:
            result[k++] = arr2[j++]
    while i < n1:
        result[k++] = arr1[i++]
    while j < n2:
        result[k++] = arr2[j++]
Complexity:
Time: O(n1 + n2). Space: O(n1 + n2).

61. Insertion Sort Algorithm and Complexity
Algorithm:
InsertionSort(arr[], n):
    for i = 1 to n-1:
        key = arr[i]
        j = i-1
        while j >= 0 and arr[j] > key:
            arr[j+1] = arr[j]
            j--
        arr[j+1] = key
Complexity:
• Best Case: O(n).
• Worst and Average Case: O(n^2).
62. Time Complexity for Matrix Multiplication
1. Divide and Conquer Method
• In the Divide and Conquer method, the matrix multiplication problem is divided into 8 subproblems of half the size.
• The recurrence relation is: T(n) = 8T(n/2) + O(n^2), where the n^2 term is the time for combining results (adding matrices).
• Using the Master Theorem, the solution is: T(n) = O(n^3).
So the time complexity of matrix multiplication using the Divide and Conquer method is O(n^3).
2. Strassen's Method
• Strassen’s algorithm improves matrix multiplication by reducing the number of subproblems to 7 instead of 8.
• The recurrence relation is: T(n) = 7T(n/2) + O(n^2), where again n^2 is the time for combining results (adding/subtracting matrices).
• Using the Master Theorem, the solution is approximately: T(n) = O(n^(log2 7)) ≈ O(n^2.81).
So the time complexity of matrix multiplication using Strassen's method is approximately O(n^2.81), which is faster than the Divide and Conquer method.
Comparison:
• Divide and Conquer Method: O(n^3)
• Strassen's Method: O(n^2.81)
Strassen’s method is more efficient for large matrices due to its reduced complexity, but it involves larger constant factors and additional overhead, making it less practical for small matrices.
63. Algorithm to Merge ‘n’ Sorted Files
Merging sorted files of different lengths into a single file in minimum time can be achieved using a min-heap. The goal is to minimize the total cost (time taken), which corresponds to the sum of the file sizes at every merge step.
Algorithm:
MergeSortedFiles(F[], n):   # F[] is the array of the n sorted file lengths
1. Create a min-heap and insert all n file lengths into the heap.
2. Initialize total_time = 0.
3. While the size of the heap is greater than 1:
   a. Extract the two smallest file lengths (f1 and f2) from the heap.
   b. Compute the merge cost: merge_cost = f1 + f2.
   c. Add the merge_cost to total_time.
   d. Insert the merge_cost back into the heap.
4. Return total_time.
Time Complexity:
• Building the heap: O(n)
• Merging: O(n log n) (there are n − 1 merges and each heap operation takes O(log n))
Total Complexity: O(n log n)
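A heapq-based sketch of this procedure; the file sizes in the example are illustrative.

import heapq

def min_merge_cost(file_sizes):
    # repeatedly merge the two smallest files; total cost = sum of all merge costs
    heap = list(file_sizes)
    heapq.heapify(heap)                    # O(n) heap construction
    total = 0
    while len(heap) > 1:
        f1 = heapq.heappop(heap)           # two smallest files
        f2 = heapq.heappop(heap)
        cost = f1 + f2
        total += cost
        heapq.heappush(heap, cost)         # the merged file goes back into the heap
    return total

print(min_merge_cost([20, 30, 10, 5]))     # 115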
68. Difference Between Fractional and 0/1 Knapsack
• Item Division:
  o Fractional Knapsack: Items can be divided (fractions allowed).
  o 0/1 Knapsack: Items cannot be divided (all or none).
• Optimal Solution:
  o Fractional Knapsack: A greedy algorithm gives the optimal solution.
  o 0/1 Knapsack: Solved using Dynamic Programming.
• Time Complexity:
  o Fractional Knapsack: O(n log n) (sorting items).
  o 0/1 Knapsack: O(n × W) (dynamic programming).

71. Differences between Kruskal’s and Prim’s Algorithms
• Approach:
  o Kruskal’s Algorithm: Greedy; selects edges in increasing order of weight.
  o Prim’s Algorithm: Greedy; grows the MST from a starting vertex.
• Graph Representation:
  o Kruskal’s Algorithm: Edge list.
  o Prim’s Algorithm: Adjacency matrix or list.
• Time Complexity:
  o Kruskal’s Algorithm: O(E log E).
  o Prim’s Algorithm: O(V^2) with an adjacency matrix, or O(E log V) with a heap.

64. Solving N-Queens Problem with Backtracking

Key Idea:
The N-Queens problem places N queens on an N×N chessboard such that no two queens threaten each other (no two queens share the same row, column, or diagonal). Backtracking explores all possible configurations and backtracks when a conflict arises.
Algorithm:
1. Start with an empty chessboard.
2. Place a queen in a column of the current row if:
   o No other queen is in the same column.
   o No other queen is in the same diagonal.
3. Move to the next row and repeat the process.
4. If all rows are processed, a solution is found.
5. If no valid position exists for a row, backtrack to the previous row and try the next position.
SolveNQueens(board, row, N):
    if row == N:
        PrintSolution(board)
        return True
    for col in range(N):
        if IsSafe(board, row, col, N):
            board[row][col] = 1  # Place queen
            if SolveNQueens(board, row + 1, N):
                return True
            board[row][col] = 0  # Backtrack
    return False

IsSafe(board, row, col, N):
    # Check column and diagonals
    for i in range(row):
        if board[i][col] == 1 or \
           (col - row + i >= 0 and board[i][col - row + i] == 1) or \
           (col + row - i < N and board[i][col + row - i] == 1):
            return False
    return True
Complexity:
• Time Complexity: O(N!) (there are N options for the first row, N − 1 for the second, and so on).
• Space Complexity: O(N^2) (for the chessboard).

70. Comparisons in Min-Max with Divide and Conquer

• Divide the array into two halves recursively.
• At each level, compare elements to find the min and max in each half.
• Total comparisons: for n elements, 3n/2 − 2 comparisons are made.

74. Number of Swaps in Bubble Sort and Selection Sort
Array: [3, 4, 5, 2, 1]
1. Bubble Sort: Total Swaps: 7
2. Selection Sort: Total Swaps: 4
65. Sorting Algorithm for Array A = {2, 4, 6, 13, 23, 20, 22, 32, 37, 40}
Observation: The array is almost sorted except for a small unsorted portion ({23, 20, 22}). This makes Insertion Sort an efficient choice, as it performs well on nearly sorted data.
Why Insertion Sort?
• Insertion Sort has a time complexity of O(n + k), where k is the number of inversions. Since the array is nearly sorted, k is small.
• It sorts in place with low overhead.
Steps:
1. Traverse the array from the beginning.
2. If an element is out of place, shift larger elements to the right and insert the element at its correct position.
3. Repeat for the rest of the array.
Alternatively: if k is very large, consider using Merge Sort with a time complexity of O(n log n).
66. Differences between Backtracking and Recursion
1. Purpose:
   o Backtracking: Used for solving constraint-satisfaction problems by exploring all possible solutions and backtracking when constraints are violated (e.g., N-Queens, Sudoku).
   o Recursion: A programming technique where a function calls itself to solve subproblems (e.g., factorial, Fibonacci sequence).
2. Solution Space Exploration:
   o Backtracking: Explores multiple paths systematically until a solution is found or all possibilities are exhausted.
   o Recursion: Solves subproblems, typically in a divide-and-conquer manner, without considering constraint violations.
3. Termination:
   o Backtracking: Explicitly terminates paths that fail to meet constraints.
   o Recursion: Terminates when the base case is reached.
67. Fractional Knapsack Solution
To solve the Fractional Knapsack Problem, we maximize the total profit by selecting fractions of items so that the total weight does not exceed the knapsack capacity. The optimal strategy is to select items in order of the highest profit-to-weight ratio.
Step-by-step solution:
1. Given Data:
   o Number of items: n = 7
   o Knapsack capacity: m = 15
   Item  Profit  Weight  Profit/Weight Ratio
   1     10      2       10/2 = 5
   2     5       3       5/3 ≈ 1.67
   3     15      5       15/5 = 3
   4     7       7       7/7 = 1
   5     6       1       6/1 = 6
   6     18      4       18/4 = 4.5
   7     3       1       3/1 = 3
2. Sort items by Profit/Weight ratio in descending order:
   Sorted order: Item 5 (6), Item 1 (5), Item 6 (4.5), Item 3 (3), Item 7 (3), Item 2 (1.67), Item 4 (1).
3. Select items for the knapsack, starting with the highest ratio, until the capacity is filled:
   o Item 1: Profit = 10, Weight = 2 → Add full item (remaining capacity = 15 − 2 = 13).
   o Item 5: Profit = 6, Weight = 1 → Add full item (remaining capacity = 13 − 1 = 12).
   o Item 6: Profit = 18, Weight = 4 → Add full item (remaining capacity = 12 − 4 = 8).
   o Item 3: Profit = 15, Weight = 5 → Add full item (remaining capacity = 8 − 5 = 3).
   o Item 7: Profit = 3, Weight = 1 → Add full item (remaining capacity = 3 − 1 = 2).
   o Item 2: Profit = 5, Weight = 3 → Cannot take this item fully since the remaining capacity is 2.
     ▪ Take a fraction of Item 2: 2 of its 3 units, giving a fractional profit of (2/3) × 5 ≈ 3.33 (remaining capacity = 0).
4. Final Solution:
   o Full items selected: Item 1 (10), Item 5 (6), Item 6 (18), Item 3 (15), Item 7 (3).
   o Fractional item selected: 2/3 of Item 2 (≈ 3.33).
Total Profit = 10 + 6 + 18 + 15 + 3 + 3.33 = 55.33
Thus, the optimal solution to the fractional knapsack problem is a total profit of approximately 55.33.
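A greedy sketch of the ratio-based selection described above, assuming items are given as (profit, weight) pairs; it reproduces the 55.33 result.

def fractional_knapsack(items, capacity):
    # items: list of (profit, weight); take items by decreasing profit/weight ratio
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for profit, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)       # full item if it fits, otherwise a fraction
        total += profit * (take / weight)
        capacity -= take
    return total

items = [(10, 2), (5, 3), (15, 5), (7, 7), (6, 1), (18, 4), (3, 1)]
print(round(fractional_knapsack(items, 15), 2))   # 55.33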
69. Fibonacci Complexity Reduction
1. Exponential Time O(2^n):
   o Recursive computation recalculates overlapping subproblems.
2. Dynamic Programming O(n):
   o Stores previous results in a table (memoization/tabulation).
   o Eliminates redundant calculations.
Algorithm:
Fibonacci(n):
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

73. 0/1 Knapsack with Dynamic Programming

Algorithm:
Knapsack(values, weights, W):
    n = len(values)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i - 1][w], values[i - 1] + dp[i - 1][w - weights[i - 1]])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][W]
Complexity:
• Time: O(n × W)
• Space: O(n × W)
82. Subset Sum Problem (A = {4, 16, 5, 23, 12}, sum = 9)
Use Backtracking:
1. Start with an empty subset.
2. Add elements one by one, checking whether the subset sum equals the target.
3. If a subset sum exceeds the target, backtrack.
Example Solution: Subset {4, 5}.
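A backtracking sketch for this instance, assuming non-negative inputs so that exceeding the target can prune a branch.

def subset_sum(nums, target):
    result = []

    def backtrack(i, current, total):
        if total == target:               # constraint satisfied: record the subset
            result.append(list(current))
            return
        if i == len(nums) or total > target:
            return                        # dead end: backtrack
        current.append(nums[i])           # choice 1: include nums[i]
        backtrack(i + 1, current, total + nums[i])
        current.pop()                     # undo the choice
        backtrack(i + 1, current, total)  # choice 2: exclude nums[i]

    backtrack(0, [], 0)
    return result

print(subset_sum([4, 16, 5, 23, 12], 9))   # [[4, 5]]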
75. Big-O (O) Asymptotic Notation
Definition: Big-O notation describes the upper bound of an algorithm's time or space complexity. It represents the worst-case growth rate of a function as the input size n approaches infinity, and it allows algorithms to be analyzed and compared independent of hardware or implementation.
Key Features:
1. Focus on Growth Rate: Big-O considers how the runtime or memory usage grows with increasing input size n, ignoring constant factors and lower-order terms.
2. Worst-Case Performance: It describes the worst-case scenario, ensuring the algorithm's efficiency is acceptable even under the most challenging conditions.
Common Big-O Complexities:
Big-O Notation   Example Algorithms              Description
O(1)             Accessing array elements        Constant time, independent of n.
O(log n)         Binary search                   Logarithmic growth.
O(n)             Linear search                   Linear growth.
O(n log n)       Merge sort, Quick sort (avg)    Linearithmic growth.
O(n^2)           Bubble sort, Selection sort     Quadratic growth.
O(2^n)           Recursive Fibonacci             Exponential growth.
Why Use Big-O?
Big-O notation allows developers and researchers to:
• Compare the efficiency of algorithms.
• Predict how an algorithm scales with input size.
• Choose suitable algorithms for performance-critical applications.

77. Arrange in Increasing Order of Asymptotic Complexity

Given: n log n, n log n, 2^n, n^(3/2)
1. Simplify complexities:
   o n log n and n log n: equivalent.
   o n^(3/2) > n log n: polynomial growth is faster than linearithmic growth.
   o 2^n: exponential growth, dominates all others.
2. Final Order: n log n, n log n, n^(3/2), 2^n
78. Constructing a Min-Heap for A = {3, 14, 32, 18, 42, 12, 20}
1. Initial Array: [3, 14, 32, 18, 42, 12, 20].
2. Build the Min-Heap using the Bottom-Up Approach:
   o Start from the last non-leaf node (index n/2 − 1 = 2) and heapify towards the root.
Step-by-Step Heapify Process:
• Node 32 (index 2): compare with its children 12 and 20. Swap 32 and 12.
   o Heap: [3, 14, 12, 18, 42, 32, 20].
• Node 14 (index 1): compare with 18 and 42. Already a min-heap.
• Node 3 (index 0): compare with 14 and 12. Already a min-heap.
Final Min-Heap: [3, 14, 12, 18, 42, 32, 20].
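A bottom-up build-heap sketch (using a manual sift-down rather than Python's heapq.heapify, to make the heapify order explicit); it reproduces the final heap above.

def sift_down(a, i, n):
    # push a[i] down until the subtree rooted at i satisfies the min-heap property
    while True:
        smallest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[l] < a[smallest]:
            smallest = l
        if r < n and a[r] < a[smallest]:
            smallest = r
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def build_min_heap(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # heapify from the last non-leaf node upwards
        sift_down(a, i, n)
    return a

print(build_min_heap([3, 14, 32, 18, 42, 12, 20]))   # [3, 14, 12, 18, 42, 32, 20]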
79. Solve Using Master’s Theorem
i) T(n) = 2T(n/4) + √n:
• a = 2, b = 4, f(n) = √n = n^(1/2).
• Calculate p = log_b a = log_4 2 = 1/2.
• Comparison: f(n) = n^(1/2) = Θ(n^p), where p = 1/2.
• Result (Case 2 of the Master Theorem): T(n) = O(n^(1/2) log n).
ii) T(n) = 7T(n/2) + n^2:
• a = 7, b = 2, f(n) = n^2.
• Calculate p = log_b a = log_2 7 ≈ 2.81.
• Comparison: f(n) = n^2, and p ≈ 2.81 > 2.
• Result (Case 1 of the Master Theorem): T(n) = O(n^(log_2 7)) ≈ O(n^2.81).

81. Spanning Tree and Total Count for Complete Graph

• Definition: A spanning tree of a graph is a subgraph that includes all the vertices and is a tree (connected and acyclic).
• Number of Spanning Trees for the Complete Graph Kn (Cayley’s Formula): n^(n−2).
  For n = 6: 6^(6−2) = 6^4 = 1296.

83. Space Complexity for Program
int A[n];
for (int i = 1; i <= n; i *= 2) {
    sum = sum + A[i];
}
• Space Complexity:
   o Array A[n]: O(n).
   o Loop variables and sum: O(1).
Total Space Complexity: O(n)

84. Characteristics of Problems Solved with Dynamic Programming
1. Optimal Substructure: The solution to a problem can be constructed from solutions to its subproblems.
2. Overlapping Subproblems: Subproblems are solved multiple times; storing results avoids recomputation.
3. Decision-Making: Each decision contributes to an optimal solution.
Examples: Knapsack, Fibonacci, Matrix Chain Multiplication.

85. Subset Sum with Backtracking
1. Start with an empty subset.
2. Include/exclude the current element.
3. If the subset sum equals the target, save the subset.
4. Backtrack and explore other configurations.
Example: A = {4, 16, 5, 23, 12}, sum = 9. Valid subset: {4, 5}.

80. Making Bubble Sort Adaptive
To make Bubble Sort adaptive, add a flag that tracks whether any swaps occur during a pass. If no swaps occur, the array is already sorted, and the algorithm terminates early.
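A sketch of the adaptive version with the swap flag described above.

def bubble_sort_adaptive(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False                 # flag: did this pass perform any swap?
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                 # already sorted: stop early -> O(n) best case
            break
    return arr

print(bubble_sort_adaptive([1, 2, 3, 4, 5]))   # one pass, no swaps, terminates early
print(bubble_sort_adaptive([3, 4, 5, 2, 1]))   # [1, 2, 3, 4, 5]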
86. What is Hashing? Why Do We Need Hashing?
Definition of Hashing:
Hashing is the process of mapping a large set of keys to a smaller, fixed-size set of hash values using a hash function. The hash value determines where the data is stored in a hash table.
Why Do We Need Hashing?
1. Efficient Search and Retrieval:
   o Hashing provides an average-case time complexity of O(1) for searching, inserting, and deleting elements.
2. Fast Access:
   o It is especially useful in scenarios where fast data retrieval is required, such as databases and caching systems.
3. Collision Handling:
   o Advanced hashing techniques (e.g., chaining, open addressing) ensure efficient resolution of hash collisions.
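A toy separate-chaining hash table sketch illustrating the points above; the class name and bucket count are arbitrary choices, not from the notes.

class ChainedHashTable:
    # a tiny separate-chaining hash table: each bucket is a list of (key, value) pairs
    def __init__(self, size=11):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)   # hash function maps key -> bucket index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)        # update an existing key
                return
        bucket.append((key, value))             # collisions handled by chaining

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.put("apple", 3)
t.put("banana", 5)
print(t.get("apple"), t.get("cherry"))   # 3 None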
87. Big Omega (Ω) Asymptotic Notation
Definition: Big Omega notation represents the lower bound of an algorithm's time or space complexity. It gives the best-case performance of the algorithm, indicating the minimum time required to complete a task.
Key Features:
1. Growth Rate Representation:
   o Describes the smallest possible growth of a function as the input size n increases.
2. Notation:
   o For a function T(n), T(n) = Ω(g(n)) means there exist constants c > 0 and n0 such that T(n) ≥ c · g(n) for all n ≥ n0.
Example: If an algorithm has a time complexity of T(n) = 5n^2 + 3n, a lower bound is Ω(n^2).

88. Divide and Conquer Strategy

Definition: Divide and conquer is an algorithmic paradigm that divides a problem into smaller subproblems, solves each subproblem independently, and then combines their results to solve the original problem.
Steps:
1. Divide: Break the problem into smaller subproblems.
2. Conquer: Solve each subproblem recursively.
3. Combine: Merge the solutions of the subproblems to form the final solution.
Example: Merge Sort
1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves.
Time Complexity: O(n log n).
89. Time Complexity for the Given Program
int A[10];
for (int i = 1; i <= n; i *= 2) {
    sum = sum + A[i];
}
Analysis:
1. The loop runs with i taking the values 1, 2, 4, 8, …, n.
2. Each iteration doubles i, so the loop runs log2(n) times.
Time Complexity: O(log n)

90. Rabin-Karp String Matching Algorithm


Purpose: Rabin-Karp is a string matching algorithm that uses a hashing technique to
efficiently find occurrences of a pattern in a text.
Steps:
1. Compute the hash value of the pattern P and of the first substring of the text T of the same length.
2. Slide the pattern over the text:
o For each new substring, compute its hash value and compare it
with the pattern's hash.
o If hashes match, perform a direct comparison of characters to
verify the match.
3. Continue until the pattern is found or the text ends.
Example: Text: "abcdabc"
Pattern: "abc"
1. Compute the hash of "abc" (pattern).
2. Compute hashes of the successive substrings "abc", "bcd", "cda", "dab", "abc".
3. Compare hashes and perform character checks when a match is found.
Time Complexity:
• Average: O(n + m)
• Worst Case: O(n·m) (with many hash collisions).
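A rolling-hash sketch of Rabin-Karp; the base and modulus values are common illustrative choices, not prescribed by the notes.

def rabin_karp(text, pattern, base=256, mod=101):
    n, m = len(text), len(pattern)
    if m > n:
        return []
    high = pow(base, m - 1, mod)                # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):                          # initial hashes of pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)                   # verify characters to rule out collisions
        if i < n - m:                           # roll the hash to the next window
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return matches

print(rabin_karp("abcdabc", "abc"))   # [0, 4]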
91. Dynamic Programming Technique
Definition:
Dynamic programming (DP) solves problems by breaking them into overlapping
subproblems and solving each subproblem only once, storing its result for future
reference (memoization or tabulation).
Steps:
1. Define the State: Identify the subproblem in terms of states.
2. Recurrence Relation: Express the solution of the problem in terms of its
subproblems.
3. Base Case: Determine the simplest possible subproblem.
4. Optimize and Solve: Use a bottom-up or top-down approach to compute
the solution.
Example: Fibonacci Sequence
Problem: Compute the n-th Fibonacci number.
Recursive Relation:
F(n) = F(n−1) + F(n−2), with F(0) = 0, F(1) = 1
DP Solution (Bottom-Up):
def fibonacci(n):
dp = [0] * (n + 1)
dp[0], dp[1] = 0, 1
for i in range(2, n + 1):
dp[i] = dp[i - 1] + dp[i - 2]
return dp[n]
Time Complexity: O(n)
Space Complexity: O(n) (can be optimized to O(1)).
