Experiment No. 2
AIM: To understand and implement the Dynamic Programming technique using the Matrix Chain
Multiplication problem.
Theory: Dynamic Programming (DP) is a powerful technique used to optimize recursive algorithms,
particularly when a problem involves overlapping subproblems. In a basic recursive approach, a
problem is often divided into smaller subproblems, but many of these are solved multiple times,
leading to redundant calculations. This inefficiency results in exponential time complexity,
making the solution impractical for large inputs.
DP overcomes this redundancy by storing the results of subproblems as they are computed.
When the same subproblem arises again, its result is retrieved from storage—often referred to as
a memoization table or cache—rather than being recalculated. By ensuring that each subproblem
is solved only once, DP significantly reduces time complexity from exponential to polynomial.
A classic example of a problem that benefits from DP is Matrix Chain Multiplication.
The objective is to determine the optimal way to parenthesize a sequence of matrices to minimize
the number of scalar multiplications required. Instead of trying all possible ways to multiply the
matrices, DP breaks the problem into smaller subproblems, computing the minimum number of
multiplications for smaller matrix chains first and then using those results to solve larger
subproblems. This systematic approach efficiently finds the most cost-effective order for matrix
multiplication.
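To make the memoization idea described above concrete, the short C++ sketch below computes the matrix-chain cost top-down, caching every subproblem so it is solved only once. The function name, variable names, and sample dimensions are assumed for illustration only; the bottom-up table-filling version actually used in this experiment is given under Algorithm below.

#include <iostream>
#include <vector>
#include <climits>
#include <algorithm>

// Illustrative sketch; names and sample dimensions are assumptions for this write-up.
// memo[i][j] caches the minimum scalar-multiplication cost of the chain M_i..M_j
// (-1 marks an entry that has not been computed yet).
long long mcmCost(int i, int j, const std::vector<int>& dims,
                  std::vector<std::vector<long long>>& memo) {
    if (i == j) return 0;                     // a single matrix costs nothing
    if (memo[i][j] != -1) return memo[i][j];  // reuse a stored subproblem result
    long long best = LLONG_MAX;
    for (int k = i; k < j; ++k) {             // try every split point
        long long c = mcmCost(i, k, dims, memo) + mcmCost(k + 1, j, dims, memo)
                    + 1LL * dims[i - 1] * dims[k] * dims[j];
        best = std::min(best, c);
    }
    return memo[i][j] = best;                 // store the result before returning
}

int main() {
    std::vector<int> dims = {10, 20, 30, 40};  // M1: 10x20, M2: 20x30, M3: 30x40 (example data)
    int n = (int)dims.size() - 1;
    std::vector<std::vector<long long>> memo(n + 1, std::vector<long long>(n + 1, -1));
    std::cout << mcmCost(1, n, dims, memo) << "\n";  // prints 18000 for these dimensions
    return 0;
}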
Algorithm:
MATRIX-CHAIN-ORDER (dims)
1. n = length[dims] - 1
2. for i ← 1 to n
3. do cost[i, i] ← 0
4. for length ← 2 to n // length is the chain length
5. do for i ← 1 to n - length + 1
6. do j ← i + length - 1
7. cost[i, j] ← ∞
8. for split ← i to j - 1
9. do temp ← cost[i, split] + cost[split + 1, j] + dims[i - 1] * dims[split] * dims[j]
10. If temp < cost[i, j]
11. then cost[i, j] ← temp
12. splitPoint[i, j] ← split
13. return cost and splitPoint
PRINT-OPTIMAL-PARENS (splitPoint, i, j)
1. if i = j
2. then print "Mᵢ" // the matrix name together with its index i
3. else
4. print "("
5. PRINT-OPTIMAL-PARENS (splitPoint, i, splitPoint[i, j])
6. PRINT-OPTIMAL-PARENS (splitPoint, splitPoint[i, j] + 1, j)
7. print ")"
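A possible C++ rendering of the two procedures above is sketched below; the identifiers (matrixChainOrder, printOptimalParens) and the sample dimensions in main are illustrative assumptions, not the exact program submitted for this experiment.

#include <iostream>
#include <vector>
#include <climits>
using namespace std;

// Fill cost[i][j] (minimum scalar multiplications for the chain M_i..M_j) and
// splitPoint[i][j] (the k that achieves it), as in MATRIX-CHAIN-ORDER above.
void matrixChainOrder(const vector<int>& dims,
                      vector<vector<long long>>& cost,
                      vector<vector<int>>& splitPoint) {
    int n = (int)dims.size() - 1;
    cost.assign(n + 1, vector<long long>(n + 1, 0));
    splitPoint.assign(n + 1, vector<int>(n + 1, 0));
    for (int len = 2; len <= n; ++len) {             // chain length
        for (int i = 1; i <= n - len + 1; ++i) {
            int j = i + len - 1;
            cost[i][j] = LLONG_MAX;
            for (int k = i; k < j; ++k) {            // try every split point
                long long t = cost[i][k] + cost[k + 1][j]
                            + 1LL * dims[i - 1] * dims[k] * dims[j];
                if (t < cost[i][j]) {
                    cost[i][j] = t;
                    splitPoint[i][j] = k;
                }
            }
        }
    }
}

// Recursively print the optimal parenthesization, as in PRINT-OPTIMAL-PARENS.
void printOptimalParens(const vector<vector<int>>& splitPoint, int i, int j) {
    if (i == j) {
        cout << "M" << i;
    } else {
        cout << "(";
        printOptimalParens(splitPoint, i, splitPoint[i][j]);
        printOptimalParens(splitPoint, splitPoint[i][j] + 1, j);
        cout << ")";
    }
}

int main() {
    vector<int> dims = {10, 20, 30, 40};   // example data: M1 10x20, M2 20x30, M3 30x40
    int n = (int)dims.size() - 1;
    vector<vector<long long>> cost;
    vector<vector<int>> splitPoint;
    matrixChainOrder(dims, cost, splitPoint);
    cout << "Minimum scalar multiplications: " << cost[1][n] << "\n";  // 18000 here
    cout << "Optimal order: ";
    printOptimalParens(splitPoint, 1, n);                              // ((M1M2)M3)
    cout << "\n";
    return 0;
}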
Use Cases:
1. String Processing
2. Graph Algorithms
3. Sequence and Array Optimization
4. Game Theory & Decision Making
5. Matrix & Partitioning Problems
Description:
Quick Sort is a divide and conquer algorithm. It picks an element (called the pivot),
partitions the array into two sub-arrays (one with elements smaller than the pivot and the other
with elements larger than the pivot), and then recursively sorts the two sub-arrays.
Algorithm:
QUICKSORT(arr, start, end):
1. If start < end:
2. p = PARTITION(arr, start, end)
3. QUICKSORT(arr, start, p - 1)
4. QUICKSORT(arr, p + 1, end)
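The pseudocode above does not spell out PARTITION. The C++ sketch below uses the Lomuto partition scheme (one common choice, assumed here for illustration; the original program may partition differently).

#include <iostream>
#include <vector>
#include <utility>
using namespace std;

// Lomuto partition: the last element is taken as the pivot; returns its final index.
int partitionArr(vector<int>& arr, int start, int end) {
    int pivot = arr[end];
    int i = start - 1;                    // boundary of the "less than pivot" region
    for (int j = start; j < end; ++j) {
        if (arr[j] < pivot) {
            ++i;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[end]);           // place the pivot in its final position
    return i + 1;
}

void quickSort(vector<int>& arr, int start, int end) {
    if (start < end) {
        int p = partitionArr(arr, start, end);
        quickSort(arr, start, p - 1);     // sort elements before the pivot
        quickSort(arr, p + 1, end);       // sort elements after the pivot
    }
}

int main() {
    vector<int> a = {9, 3, 7, 1, 8, 2};   // example data
    quickSort(a, 0, (int)a.size() - 1);
    for (int x : a) cout << x << " ";     // prints 1 2 3 7 8 9
    cout << "\n";
    return 0;
}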
Use Cases:
1. When a fast sorting algorithm is required, as Quick Sort often outperforms Merge Sort in
practice due to better cache performance.
2. Suitable for arrays stored in memory, since it does not require extra space for merging.
3. Ideal for datasets that are mostly sorted, and for situations where in-place sorting is a priority.
MATRIX-CHAIN-MULTIPLICATION(p)
1. n = length of p - 1
2. Define m[1…n, 1…n] and s[1…n-1, 2…n]
3. for i = 1 to n:
4. m[i, i] = 0 // Zero cost for single matrix
5. for l = 2 to n: // l represents the chain length
6. for i = 1 to n - l + 1:
7. j = i + l - 1
8. m[i, j] = ∞ // Initialize cost to a large value
9. for k = i to j - 1:
10. q = m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j]
11. if q < m[i, j]:
12. m[i, j] = q
13. s[i, j] = k // Store split point
14. return m and s
Explanation of Pseudocode
1. Initialize Tables:
o m[i, j] stores the minimum cost of multiplying the chain of matrices Mᵢ through Mⱼ.
o s[i, j] keeps track of the split point that gives the optimal parenthesization.
2. Base Case (Single Matrix Multiplication):
o When i == j, the multiplication cost is 0, since a single matrix does not require
multiplication.
3. Iterate Over Different Chain Lengths (l):
o We start by solving subproblems of length 2, then 3, up to n.
4. Compute Minimum Cost Using All Possible Splits (k):
o The for k loop tries every possible split between i and j.
o The recurrence relation (written out after this list) computes the cost of multiplying the
chains from i to k and from k+1 to j, plus the cost of combining the two results.
5. Store and Return the Optimal Solution:
o The m[i, j] table contains the minimum multiplication cost.
o The s[i, j] table is used to reconstruct the optimal multiplication order.
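For reference, the recurrence evaluated in step 4 can be written explicitly as:
m[i, i] = 0
m[i, j] = min over i ≤ k < j of ( m[i, k] + m[k+1, j] + p[i-1] · p[k] · p[j] ), for i < j
The term p[i-1] · p[k] · p[j] is the cost of combining the two sub-products, since the left product is a p[i-1] × p[k] matrix and the right product is a p[k] × p[j] matrix.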
int main() {
    int n;  // number of matrices in the chain
    // The full program reads the n+1 dimensions, fills the cost and split
    // tables, and prints the minimum cost with its optimal parenthesization.
    return 0;
}
Output:
Example 1:
Example 2:
Dry Run:
The DP table must consider every possible way of splitting the matrix chain, so all of its entries
have to be computed.
Example:
However the matrix dimensions are given, the algorithm must evaluate every possible split point
before it can determine the best one.
Each of the O(n²) subproblems is solved exactly once, and each takes up to O(n) work to try all
split points, giving an overall complexity of O(n³).
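As a small illustration (the dimensions here are assumed for the example, not taken from the recorded run), consider three matrices M1 (10×20), M2 (20×30) and M3 (30×40), i.e. p = [10, 20, 30, 40]:
m[1, 2] = 10·20·30 = 6000
m[2, 3] = 20·30·40 = 24000
m[1, 3] = min( m[1, 1] + m[2, 3] + 10·20·40, m[1, 2] + m[3, 3] + 10·30·40 ) = min( 32000, 18000 ) = 18000, with split point s[1, 3] = 2
So the optimal order is ((M1 M2) M3), costing 18000 scalar multiplications.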
Conclusion:
I have learned that matrix chain multiplication involves finding the optimal order to multiply matrices while
minimizing the number of scalar multiplications. The Dynamic Programming (DP) approach helps in solving
this problem efficiently by avoiding redundant calculations using memoization.
How DP Helps in Matrix Chain Multiplication
1. Avoids Redundant Calculations:
o The naive recursive approach recalculates the same subproblems multiple times, leading to
exponential time complexity (O(2ⁿ)).
o DP stores the results of subproblems in a table (m[][]), ensuring each subproblem is solved
only once, reducing complexity to O(n³).
2. Optimizes Parenthesization:
o Since matrix multiplication is associative, different parenthesizations yield different
multiplication costs.
o The DP approach efficiently determines the optimal split points, which are recorded in the s[][] table.
3. Improves Computational Efficiency:
o Instead of brute force searching for the best order, DP breaks the problem into smaller
subproblems and combines their results optimally.
o This makes large-scale matrix multiplications feasible in polynomial time.