UNIT-4
Dynamic Programming
• Principle of Optimality
• 0/1 Knapsack Problem
• Making change Problem
• Chain matrix multiplication
• Longest Common Subsequence
• All pair shortest paths
• Warshall's Algorithm
• Floyd's Algorithm
Dynamic programming
• Dynamic programming is a method for solving complex problems by
breaking them down into smaller, easier sub-problems.
• Instead of solving the same sub-problems over and over, it saves the
results of these sub-problems so they can be reused.
• This helps to solve the original problem more efficiently.
• It is particularly useful for optimization problems, such as finding a
minimum cost or a maximum value.
• Dynamic programming ensures you find the best solution if one
exists.
Principle of Optimality
• The principle of optimality states that the optimal solution to an optimization problem can be found
by combining the optimal solutions to its subproblems.
• Optimality is the property that the algorithm finds the best solution,
provided one exists.
• Sub-Problem Decomposition: Divide the problem into smaller sub-problems, such as finding the
shortest path between two intersections.
• Optimal Substructure: An optimal solution to the problem can be constructed from optimal
solutions to its sub-problems (a short sketch follows this list).
• Optimal Solution: If you find the shortest path for each sub-problem, you can combine them to
get the shortest path for the entire problem.
• Overlapping Sub-Problems: Dynamic programming exploits the fact that many sub-problems are
solved multiple times.
• Memoization: Store the results of sub-problems to avoid redundant calculations.
• Solution Construction: Use the results of sub-problems to build up the solution to the main
problem.
• Efficiency: Dynamic programming reduces computational complexity by avoiding repeated work.
• Optimality Guarantee: The principle guarantees that if sub-problems are solved optimally, the
combined solution will be optimal.
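A minimal sketch of optimal substructure (the grid example and min_path_cost are hypothetical,
not part of the original notes): the cheapest path to each cell is obtained by combining the
optimal answers already computed for the cell above and the cell to the left.

def min_path_cost(grid):
    # dp[i][j] holds the minimum cost of any path from (0, 0) to (i, j)
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 and j == 0:
                dp[i][j] = grid[i][j]  # starting cell
            else:
                # optimal substructure: the best path here extends the best
                # path to one of the two neighbouring sub-problems
                best_prev = min(
                    dp[i - 1][j] if i > 0 else float('inf'),
                    dp[i][j - 1] if j > 0 else float('inf'),
                )
                dp[i][j] = grid[i][j] + best_prev
    return dp[-1][-1]

grid = [[1, 3, 1],
        [1, 5, 1],
        [4, 2, 1]]
print(min_path_cost(grid))  # 7, via the path 1 -> 3 -> 1 -> 1 -> 1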
• Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for
future use so that we do not need to compute a result again.
• The property that subproblems are optimized in order to optimize the overall solution is known as the
optimal substructure property.
• The main use of dynamic programming is to solve optimization problems. Here, an optimization problem
means one in which we are trying to find the minimum or the maximum solution of a problem.
• Dynamic programming guarantees that the optimal solution of a problem will be found if such a solution exists.
• The definition of dynamic programming says that it is a technique for solving a complex problem by first
breaking it into a collection of simpler sub-problems, solving each sub-problem just once, and then storing
their solutions to avoid repetitive computations.
• Consider an example of the Fibonacci series. The following series is the Fibonacci series:
• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
The numbers in the above series are not randomly calculated. Mathematically, we can write each of the
terms using the formula below:

F(n) = F(n-1) + F(n-2),

with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above
relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
• How can we calculate F(20)?
• The F(20) term is calculated by applying the recurrence above. The figure below shows how
F(20) is calculated.
Problem
• As we can observe in the above figure, F(20) is calculated as the sum of F(19) and F(18). In the dynamic
programming approach, we try to divide the problem into similar subproblems.
• We follow this approach in the above case, where F(20) is divided into the similar subproblems F(19) and
F(18).
• If we recap the definition of dynamic programming, it says that a similar subproblem should not be
computed more than once. Still, in the above case, subproblems are calculated twice.
• In the above example, F(18) is calculated two times; similarly, F(17) is also calculated twice. The technique
is useful precisely because it solves similar subproblems, but we must be careful to store each result the
first time it is computed; otherwise the repeated work leads to a wastage of resources.
• In the above example, recalculating F(18) in the right subtree leads to tremendous usage of resources and
decreases the overall performance, as the naive recursion sketched below illustrates.
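A minimal sketch of this naive recursion (illustrative code, not part of the original notes); every
call re-derives the same subproblems, so the number of calls grows exponentially with n.

def fib(n):
    # naive recursion: F(18), F(17), ... are recomputed many times
    if n < 2:
        return n  # base cases F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765, but computed with thousands of redundant calls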
Solution
• The solution to the above problem is to save the computed results in an array. First, we calculate F(16) and
F(17) and save their values in the array.
• F(18) is calculated by summing the values of F(17) and F(16), which are already saved in the array. The
computed value of F(18) is saved in the array.
• The value of F(19) is calculated as the sum of F(18) and F(17), whose values are already saved in the array.
The computed value of F(19) is stored in the array.
• The value of F(20) can be calculated by adding the values of F(19) and F(18), both of which are already
stored in the array.
• The final computed value of F(20) is stored in the array. A bottom-up sketch of this process follows.
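A minimal bottom-up sketch of this array-based solution (illustrative):

def fib_dp(n):
    # f[i] stores F(i); each value is computed exactly once
    f = [0] * (n + 1)
    if n >= 1:
        f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # reuse the two stored results
    return f[n]

print(fib_dp(20))  # 6765, computed in O(n) time and O(n) space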
How does the dynamic programming approach work?
• The following are the steps that dynamic programming follows:
• It breaks down the complex problem into simpler subproblems.
• It finds the optimal solution to these sub-problems.
• It stores the results of the subproblems. The process of storing the results of subproblems is
known as memoization.
• It reuses these results so that the same sub-problem is not calculated more than once.
• Finally, it calculates the result of the complex problem.
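A minimal top-down sketch of these steps (illustrative), using Python's built-in
functools.lru_cache to perform the memoization:

from functools import lru_cache

@lru_cache(maxsize=None)  # stores each result; repeated calls become lookups
def fib_memo(n):
    if n < 2:
        return n  # base cases F(0) = 0, F(1) = 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(20))  # 6765; each F(i) is computed only once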
Properties
• Dynamic programming is applicable to problems that have the following properties:
1. Overlapping subproblems and optimal substructure. Here, optimal substructure means that the
solution of an optimization problem can be obtained by simply combining the optimal
solutions of all the subproblems.
2. In the case of dynamic programming, the space complexity is increased because we store the
intermediate results, but the time complexity is decreased.
Making change Problem
def min_coins(coins, amount):
    # dp[i] holds the minimum number of coins needed to make amount i
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0  # amount 0 needs no coins

    # Compute the minimum coins required for each amount up to the target amount
    for i in range(1, amount + 1):  # amount 0 is already handled; range excludes its upper bound
        for coin in coins:
            if i >= coin:
                dp[i] = min(dp[i], dp[i - coin] + 1)

    # If dp[amount] is still infinity, the amount cannot be made with the given coins
    return dp[amount] if dp[amount] != float('inf') else -1

# Example usage:
coins = [1, 5, 6, 8]
amount = 11
result = min_coins(coins, amount)
print(result)  # 2 (one 5-coin and one 6-coin)
Chain matrix multiplication
• Problem Introduction: The goal is to find the minimum number of
scalar multiplications required to compute the matrix product of a
chain of matrices using dynamic programming.
• The dimensions of the matrices are provided as P0×P1, P1×P2, and so on; matrix Ai has
dimensions Pi-1×Pi. A sketch of the recurrence follows.
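A minimal sketch of the standard dynamic-programming recurrence (illustrative; matrix_chain_cost
and the example dimensions are not part of the original notes):

def matrix_chain_cost(p):
    # p holds the dimensions: matrix A_i has shape p[i-1] x p[i]
    n = len(p) - 1  # number of matrices in the chain
    # m[i][j] = minimum scalar multiplications to compute A_i ... A_j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k and combine the optimal sub-solutions
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
    return m[1][n]

print(matrix_chain_cost([10, 20, 30, 40]))  # 18000, by multiplying (A1 A2) first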
All pair shortest paths
• Example: For five cities, a single-source shortest-path algorithm would have to run five times,
once for each city as the source.
• Goal: The aim is to find the minimum distance between all pairs of vertices in the graph.
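A minimal sketch of the classic Floyd-Warshall dynamic-programming solution to the all-pairs
problem (illustrative; the adjacency-matrix input format is an assumption):

INF = float('inf')

def floyd_warshall(dist):
    # dist[i][j] is the direct edge weight from i to j (INF if no edge, 0 on the diagonal)
    n = len(dist)
    for k in range(n):          # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example usage: 4 vertices
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
for row in floyd_warshall(graph):
    print(row)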
Warshall's Algorithm