3. Comparison Between Divide and Conquer and Dynamic Programming
• Problem Breaking: Divide and Conquer breaks a problem into smaller subproblems that are solved independently. Dynamic Programming breaks a problem into overlapping subproblems, where subproblems share solutions.
• Overlapping Subproblems: In Divide and Conquer, subproblems are independent and do not overlap. In Dynamic Programming, subproblems often overlap, and the same subproblem is solved multiple times.
• Memoization: Divide and Conquer does not use memoization; it solves subproblems from scratch. Dynamic Programming uses memoization (top-down) or tabulation (bottom-up) to store and reuse solutions of subproblems.
• Optimal Substructure: Divide and Conquer does not require optimal substructure in general. Dynamic Programming is used for problems that exhibit optimal substructure, meaning the solution to a problem can be constructed from the solutions to its subproblems.
• Time Complexity: Divide and Conquer can be efficient (e.g., Merge Sort and Quick Sort run in O(n log n)) but can have exponential time complexity for problems like TSP. Dynamic Programming is often more efficient because storing subproblem results reduces redundant work.
• Examples: Divide and Conquer – Merge Sort, Quick Sort, Binary Search. Dynamic Programming – Fibonacci, Knapsack Problem, Longest Common Subsequence.
• Recursion Depth: Divide and Conquer typically involves deep recursion but does not store intermediate results. Dynamic Programming also uses recursion but minimizes recomputation by storing results.

4. Algorithm for Minimum Spanning Tree
Kruskal’s Algorithm:
1. Sort all edges in non-decreasing order of their weights.
2. Initialize an empty set for the MST.
3. Iterate through the sorted edges:
- Add the edge to the MST if it doesn’t form a cycle.
- Use a union-find data structure to check for cycles.
4. Repeat until the MST contains (V - 1) edges, where V is the number of vertices.
Complexity: O(E log E), where E is the number of edges.
Prim’s Algorithm:
1. Start from any vertex and add it to the MST.
2. Pick the smallest edge connecting the MST to an unvisited vertex.
3. Repeat until all vertices are included in the MST.
Complexity: O(E + V log V) with a priority queue.
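A minimal Python sketch of Kruskal’s steps above, using a simple union-find to detect cycles (the example graph, vertex labels, and function name are illustrative, not taken from the notes):

```python
def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v) tuples; returns the list of MST edges."""
    parent = list(range(num_vertices))

    def find(x):                          # find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):         # step 1: process edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # step 3: skip edges that would form a cycle
            parent[ru] = rv               # union the two components
            mst.append((u, v, w))
        if len(mst) == num_vertices - 1:  # step 4: stop at V - 1 edges
            break
    return mst

# Hypothetical example: 4 vertices, edges given as (weight, u, v)
print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]))
```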
5. Number of Spanning Trees in a Complete Graph
The number of spanning trees of a complete graph with n vertices is n^(n-2) (Cayley’s formula). For example, the complete graph on 4 vertices has 4^2 = 16 spanning trees.
6. All Pairs Shortest Path Using Dijkstra’s Algorithm?
No, it is not possible to use Dijkstra’s algorithm for all pairs of shortest paths efficiently.
• Reason: Dijkstra’s algorithm is designed for single-source shortest paths. To compute all pairs, you would need to run it n times for a graph with n vertices, leading to higher complexity.
• Alternative: Use Floyd-Warshall or Johnson’s algorithm for better efficiency (a sketch of Floyd-Warshall follows below).
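A minimal sketch of the Floyd-Warshall alternative mentioned above (the adjacency matrix is a hypothetical example; INF marks missing edges):

```python
INF = float('inf')

def floyd_warshall(dist):
    """dist: n x n matrix of edge weights (INF where no edge, 0 on the diagonal).
    Returns the matrix of all-pairs shortest distances."""
    n = len(dist)
    d = [row[:] for row in dist]          # copy so the input is not modified
    for k in range(n):                    # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Hypothetical 4-vertex example
graph = [[0, 5, INF, 10],
         [INF, 0, 3, INF],
         [INF, INF, 0, 1],
         [INF, INF, INF, 0]]
print(floyd_warshall(graph))   # shortest 0 -> 3 becomes 9 via vertices 1 and 2
```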
8. Technique to Solve Optimization Problems
• Dynamic Programming for problems with overlapping subproblems and optimal substructure.
• Greedy Algorithms for problems where local choices lead to global optimality.
• Linear Programming for problems representable as linear equations.
11. Principle of Optimality
"An optimal solution to a problem contains optimal solutions to its subproblems."
This principle is the foundation of dynamic programming. It implies that we can solve problems recursively by dividing them into overlapping subproblems and solving each only once, storing the result for reuse.
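A minimal sketch of the principle in action: a memoized Fibonacci, where each subproblem is solved once and its stored result reused (the cache decorator is one possible implementation choice):

```python
from functools import lru_cache

@lru_cache(maxsize=None)      # store each subproblem result after its first call
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # built from optimal solutions to two subproblems

print(fib(40))   # 102334155, computed from only ~40 distinct subproblems
```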
12. Properties of a Heap Tree
(i) Structural Property: A heap must be a complete binary tree, meaning all levels are fully filled except possibly the last level, which is filled from left to right.
(ii) Ordered Property: In a max-heap, the value of each node is greater than or equal to the values of its children. In a min-heap, the value of each node is less than or equal to the values of its children.
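A small sketch illustrating both properties for a max-heap stored in an array, where the complete-tree layout puts the children of index i at 2i+1 and 2i+2 (the sample arrays are illustrative):

```python
def is_max_heap(a):
    """Check the ordered property for an array-based max-heap.
    The array layout itself encodes the structural (complete-tree) property."""
    n = len(a)
    for i in range(n):
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[i] < a[left]:
            return False
        if right < n and a[i] < a[right]:
            return False
    return True

print(is_max_heap([50, 30, 40, 10, 20, 35]))   # True: every parent >= its children
print(is_max_heap([50, 60, 40]))               # False: 60 exceeds its parent 50
```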
13. Dijkstra’s Algorithm for the Given Graph
Steps for Dijkstra’s Algorithm:
1. Initialization:
o dist(A) = 0, all others = ∞.
o Unvisited nodes: {A, B, C, D, E, F}.
2. Iterations:
o Start at A (dist(A) = 0): Update dist(B) = 7, dist(C) = 12. Visited: {A}.
o Move to B (dist(B) = 7): Update dist(C) = 9, dist(D) = 16. Visited: {A, B}.
o Move to C (dist(C) = 9): Update dist(E) = 19. Visited: {A, B, C}.
o Move to D (dist(D) = 16): Update dist(F) = 17. Visited: {A, B, C, D}.
o Visit remaining nodes F and E (no updates).
3. Final Distances:
o dist(A) = 0, dist(B) = 7, dist(C) = 9, dist(D) = 16, dist(E) = 19, dist(F) = 17.
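A minimal heapq-based Dijkstra sketch that reproduces the trace above. The original figure is not shown here, so the edge weights below are inferred from the distance updates (A-B = 7, A-C = 12, B-C = 2, B-D = 9, C-E = 10, D-F = 1) and may differ from the actual graph in the question:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                 # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                # stale entry, a shorter path was found already
            continue
        for v, w in graph[u]:          # relax every outgoing edge
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Edge weights inferred from the trace above (undirected graph)
edges = [('A', 'B', 7), ('A', 'C', 12), ('B', 'C', 2),
         ('B', 'D', 9), ('C', 'E', 10), ('D', 'F', 1)]
graph = {v: [] for v in 'ABCDEF'}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 7, 'C': 9, 'D': 16, 'E': 19, 'F': 17}
```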
84. Characteristics of Problems Solved with Dynamic Programming
1. Optimal Substructure: The solution to a problem can be constructed from solutions to its subproblems.
2. Overlapping Subproblems: Subproblems are solved multiple times; storing results avoids recomputation.
3. Decision-Making: Each decision contributes to an optimal solution.
Examples: Knapsack, Fibonacci, Matrix Chain Multiplication.
85. Subset Sum with Backtracking
1. Start with an empty subset.
2. Include or exclude the current element.
3. If the subset sum equals the target, save the subset.
4. Backtrack and explore other configurations.
Example: A = {4, 16, 5, 23, 12}, sum = 9. Valid subset: {4, 5}. (A code sketch follows below.)
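A minimal backtracking sketch for the example above, assuming all elements are positive as in the given set (function and variable names are illustrative):

```python
def subset_sum(nums, target):
    """Return every subset of nums whose elements sum to target."""
    solutions = []

    def backtrack(i, current, total):
        if total == target:                    # found a valid subset; record a copy
            solutions.append(current[:])
            return                             # positive numbers: no need to go deeper
        if i == len(nums) or total > target:   # dead end: prune this branch
            return
        current.append(nums[i])                # choice 1: include nums[i]
        backtrack(i + 1, current, total + nums[i])
        current.pop()                          # undo the choice (backtrack)
        backtrack(i + 1, current, total)       # choice 2: exclude nums[i]

    backtrack(0, [], 0)
    return solutions

print(subset_sum([4, 16, 5, 23, 12], 9))   # [[4, 5]]
```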
80. Making Bubble Sort Adaptive
To make Bubble Sort adaptive, add a flag that tracks whether any swaps occur during a pass. If no swaps occur, the array is already sorted, and the algorithm terminates early.
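A minimal sketch of this adaptive variant (function name and test arrays are illustrative):

```python
def bubble_sort_adaptive(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False                    # flag: did this pass swap anything?
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                    # no swaps: array is already sorted
            break                          # terminate early, O(n) on sorted input
    return a

print(bubble_sort_adaptive([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
print(bubble_sort_adaptive([1, 2, 3, 4, 5]))   # sorted input: stops after one pass
```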
86. What is Hashing? Why Do We Need Hashing?
Definition of Hashing:
Hashing is a process of mapping a large dataset (keys) to a smaller fixed-sized dataset
(hash values) using a hash function. The hash value determines where the data is
stored in a hash table.
Why Do We Need Hashing?
1. Efficient Search and Retrieval:
o Hashing provides an average-case time complexity of O(1) for searching, inserting, and deleting elements.
2. Fast Access:
o It is especially useful in scenarios where fast data retrieval is required, like databases and caching systems.
3. Collision Handling:
o Advanced hashing techniques (e.g., chaining, open addressing) ensure efficient resolution of hash collisions.
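A minimal sketch of a hash table with separate chaining (table size, hash choice, and class name are illustrative; Python’s built-in dict already provides this behaviour):

```python
class ChainedHashTable:
    def __init__(self, size=8):
        self.size = size
        self.buckets = [[] for _ in range(size)]    # one chain (list) per slot

    def _index(self, key):
        return hash(key) % self.size                # hash function maps key -> slot

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                            # key already present: update it
                bucket[i] = (key, value)
                return
        bucket.append((key, value))                 # collision handled by chaining

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.put("apple", 3)
table.put("banana", 5)
print(table.get("apple"))    # 3
print(table.get("cherry"))   # None
```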
87. Big Omega (Ω) Asymptotic Notation
Definition: Big Omega notation represents the lower bound of an algorithm's time or space complexity. It indicates the minimum time required to complete a task for sufficiently large inputs, which is why it is often associated with best-case performance.
Key Features:
1. Growth Rate Representation:
o Describes the smallest possible growth of a function as the input size (n) increases.
2. Notation:
o For a function T(n), T(n) = Ω(g(n)) means there exist constants c > 0 and n0 such that T(n) ≥ c·g(n) for all n ≥ n0.
Example: If an algorithm has a time complexity of T(n) = 5n^2 + 3n, the lower bound is Ω(n^2).
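A quick check of the example against the definition: for all n ≥ 1, T(n) = 5n^2 + 3n ≥ 5n^2, so choosing c = 5 and n0 = 1 gives T(n) ≥ c·n^2 for all n ≥ n0, which is exactly the condition for T(n) = Ω(n^2).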