
Unit 4

Dynamic Programming

Dr. Meghana Harsh Ghogare


Dynamic Programming

• Principle of Optimality
• 0/1 Knapsack Problem
• Making change Problem
• Chain matrix multiplication
• Longest Common Subsequence
• All-pairs shortest paths
• Warshall's Algorithm
• Floyd's Algorithm
Dynamic programming
• Dynamic programming is a method for solving complex problems by
breaking them down into smaller, easier sub-problems.
• Instead of solving the same sub-problems over and over, it saves the
results of these sub-problems so they can be reused.
• This helps to solve the original problem more efficiently.
• It’s particularly useful for optimization problems, such as finding the
minimum cost or maximum value.
• Dynamic programming ensures you find the best solution if one
exists.
Principle of Optimality
• The optimal solution to a dynamic optimization problem can be found
by combining the optimal solutions to its subproblems.
• Optimality is the property that the algorithm finds the best solution,
provided one exists.

• Sub-Problem Decomposition:
• Technical: Divide the problem into smaller sub-problems, like finding the
shortest path between two intersections.
• Optimal Substructure:
• Technical: An optimal solution to the problem can be constructed from
optimal solutions to its sub-problems.
• Optimal Solution:
• Technical: If you find the shortest path for each sub-problem, you can combine
them to get the shortest path for the entire problem.
• Overlapping Sub-Problems:
• Technical: Dynamic programming exploits the fact that the same sub-problems
recur and would otherwise be solved multiple times.
• Memoization:
• Technical: Store the results of sub-problems to avoid redundant calculations.
• Solution Construction:
• Technical: Use the results of sub-problems to build up the solution to the main
problem.
• Efficiency:
• Technical: Dynamic programming reduces the computational complexity by
avoiding repeated work.
• Optimality Guarantee:
• Technical: The principle guarantees that if sub-problems are solved optimally, the
combined solution will be optimal.
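
To make memoization concrete, here is a minimal top-down sketch in Python for the Fibonacci numbers discussed below (the function name fib_memo and the dictionary cache are illustrative choices, not from the slides):

# Top-down dynamic programming: each sub-problem is solved once
# and its result is stored (memoized) for reuse.
def fib_memo(n, cache=None):
    if cache is None:
        cache = {}
    if n in cache:              # reuse a stored result
        return cache[n]
    if n <= 1:                  # base cases: F(0) = 0, F(1) = 1
        result = n
    else:                       # optimal substructure: combine sub-results
        result = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    cache[n] = result           # store for future reuse
    return result

print(fib_memo(20))  # 6765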
• Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for
future use, so that we do not need to compute the same result again.
• The property that the overall solution can be optimized by combining the optimal solutions of the
subproblems is known as the optimal substructure property.
• The main use of dynamic programming is to solve optimization problems, i.e., problems where we are trying
to find the minimum or the maximum solution.
• Dynamic programming guarantees to find the optimal solution of a problem if such a solution exists.
• The definition of dynamic programming says that it is a technique for solving a complex problem by first
breaking it into a collection of simpler sub-problems, solving each sub-problem just once, and then storing
their solutions to avoid repetitive computations.


• Consider an example of the Fibonacci series. The following series is the Fibonacci series:
• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

The numbers in the above series are not randomly calculated. Mathematically, each of the terms can be
written using the formula:

F(n) = F(n-1) + F(n-2),

with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above relationship.
For example, F(2) is the sum of F(0) and F(1), which is equal to 1.


• How can we calculate F(20)?
• The F(20) term is calculated using the recurrence of the Fibonacci series. The figure below shows how
F(20) is calculated.
Problem
• As we can observe in the recursion tree above, F(20) is calculated as the sum of F(19) and F(18). In the dynamic
programming approach, we try to divide the problem into similar subproblems.
• We follow this approach in the above case by dividing F(20) into the similar subproblems F(19) and F(18).
• The definition of dynamic programming says that a similar subproblem should not be computed more
than once. Yet in the tree above, subproblems are calculated twice: F(18) is calculated two times, and
similarly F(17) is calculated twice.
• The technique is only useful if we store the results: if we are not careful to save each result once it has
been computed, the recomputation leads to a wastage of resources.
• For example, if we recompute F(18) in the right subtree instead of reusing it, this leads to tremendous
usage of resources and decreases the overall performance.
Solution
• The solution to the above problem is to save the computed results in an array. First, we calculate F(16) and
F(17) and save their values in the array.
• F(18) is then calculated by summing the values of F(17) and F(16), which are already saved in the array, and
the computed value of F(18) is saved as well.
• The value of F(19) is calculated as the sum of F(18) and F(17), whose values are already saved in the array,
and the computed value of F(19) is stored.
• Finally, the value of F(20) is calculated by adding the values of F(19) and F(18), both of which are stored in
the array, and the final computed value of F(20) is stored in the array.
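
A bottom-up sketch of this array-based approach in Python (the function name fib_table is illustrative):

# Bottom-up dynamic programming: fill the array from the base cases
# up to F(n), so every value is computed exactly once and then reused.
def fib_table(n):
    if n <= 1:
        return n
    f = [0] * (n + 1)   # f[i] stores the computed value of F(i)
    f[1] = 1            # base cases: f[0] = 0, f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # sum the two saved values
    return f[n]

print(fib_table(20))  # 6765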


How does the dynamic programming approach work?

• The following are the steps that dynamic programming follows:
• It breaks down the complex problem into simpler subproblems.
• It finds the optimal solution to these sub-problems.
• It stores the results of the subproblems; the process of storing the results of subproblems is known as
memoization.
• It reuses them so that the same sub-problem is not calculated more than once.
• Finally, calculate the result of the complex problem.
Properties
• Dynamic programming is applicable to problems that have the following properties:
1. Overlapping subproblems and optimal substructure. Here, optimal substructure means that the
solution of the optimization problem can be obtained by simply combining the optimal solutions
of all the subproblems.
2. In the case of dynamic programming, the space complexity is increased because we store the
intermediate results, but the time complexity is decreased.
Making change Problem

• Dynamic programming is practical and widely used in many real-world
applications, provided that the problem constraints are finite and
manageable. It is a powerful tool for optimization problems,
balancing complexity and efficiency.
• We cannot have a coin of value zero, but the amount can be zero
(which requires zero coins).
• If two different coin combinations both reach the amount with two
coins, it does not matter which one we choose, as long as we
output 2 as the minimum number of coins.

• These problems have a lot of candidate solutions, right?
• A lot of cases that we need to check to find our answer.
• And from all of these possible choices, we are asked to pick
one.
• This is why dynamic programming makes sense in these types
of problems: it allows us to reuse sub-solutions, which lets us
avoid computing them again along a separate path of the DP
problem.
• If you take a coin with a value of 2, you are left with the problem of finding the minimum
number of coins to reach the value 8 (10 - 2).
• This is represented by F(8).
• Adding the one coin you already took (value 2) gives you F(8) + 1.

General formula (S = amount, c = coin value):

F(S) = min over all coins c with c ≤ S of ( F(S − c) + 1 ), with base case F(0) = 0.
def min_coins(coins, amount):
    # dp[i] holds the minimum number of coins needed to make amount i;
    # initialize to infinity (larger than any real answer)
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0  # base case: no coins are needed to make change for amount 0

    # Compute the minimum coins required for each amount up to the target amount
    for i in range(1, amount + 1):  # 0 is already handled; range excludes the upper bound
        for coin in coins:
            if i >= coin:
                dp[i] = min(dp[i], dp[i - coin] + 1)

    # If dp[amount] is still infinity, the amount cannot be made with the given coins
    return dp[amount] if dp[amount] != float('inf') else -1

# Example usage:
coins = [1, 5, 6, 8]
amount = 11
result = min_coins(coins, amount)
print(result)  # 2 (one coin of 5 and one of 6)
Chain matrix multiplication
• Problem Introduction: The goal is to find the minimum number of
scalar multiplications required to compute the matrix product of a
chain of matrices using dynamic programming.
• Dimensions of the matrices are provided as P0 × P1, P1 × P2, and so on.
• Base Cases: Diagonal elements like M[1,1], M[2,2], M[3,3], M[4,4] are initialized
to zero, as multiplying a single matrix requires no operations.
• Algorithm Explanation:
• Table Initialization: Create an n × n table to store the values
of M[i,j] (minimum multiplications) and S[i,j] (to
store split positions).
• Base Case: Set the diagonal elements M[i,i] = 0.
• Chain Length Calculation: For chain lengths from 2 to n−1,
calculate the values using dynamic programming.
• Find Minimum: Iterate over the possible split points k to compute the
minimum scalar multiplications for M[i,j].
• Complexity:
• The algorithm runs in θ(n³), where n is the
number of matrices, due to the three nested loops.
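
A compact Python sketch of this table-filling algorithm, assuming 0-based indexing and a dimension list p where matrix A_i has dimensions p[i] × p[i+1] (the function name matrix_chain_order is illustrative):

import sys

def matrix_chain_order(p):
    n = len(p) - 1                    # number of matrices in the chain
    # M[i][j]: minimum scalar multiplications to compute A_i ... A_j
    # S[i][j]: split point k that achieves that minimum
    M = [[0] * n for _ in range(n)]   # diagonal M[i][i] = 0 is the base case
    S = [[0] * n for _ in range(n)]

    for length in range(2, n + 1):    # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            M[i][j] = sys.maxsize
            for k in range(i, j):     # try every split point
                cost = M[i][k] + M[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if cost < M[i][j]:
                    M[i][j] = cost
                    S[i][j] = k
    return M[0][n - 1], S

# Example: three matrices with dimensions 10x30, 30x5, and 5x60
cost, splits = matrix_chain_order([10, 30, 5, 60])
print(cost)  # 4500, achieved by multiplying (A0 x A1) first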
Longest Common Subsequence
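
The standard dynamic-programming formulation builds a table L[i][j] holding the length of the LCS of the first i characters of X and the first j characters of Y: L[i][j] = L[i-1][j-1] + 1 when the i-th and j-th characters match, and max(L[i-1][j], L[i][j-1]) otherwise. A minimal Python sketch (the function name lcs_length is illustrative):

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # L[i][j]: length of the LCS of X[:i] and Y[:j]; row and column 0 are the base cases
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:     # characters match: extend the LCS
                L[i][j] = L[i - 1][j - 1] + 1
            else:                         # otherwise keep the better sub-solution
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g., "BCBA")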
All Pairs Shortest Paths vs. Single Source Shortest Path
• 1. Single Source Shortest Path: Finds the shortest path from a single
source vertex to all other vertices in a graph.
• 2. All-Pairs Shortest Path: Calculates the shortest paths between
every pair of vertices in a graph.
• 3. Multiple Single-Source Problems: The all-pairs shortest path
problem can be solved by running a single-source shortest path algorithm (like
Dijkstra's or Bellman-Ford) once for each vertex, as sketched below.
• 4. Example: For five cities, the algorithm runs five times, once for
each city as the source.
• 5. Goal: The aim is to find the minimum distance between all pairs of vertices
in the graph.
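
A sketch of point 3 in Python, solving one single-source problem per vertex with a heap-based Dijkstra (the graph representation and names are illustrative; this assumes non-negative edge weights):

import heapq

# graph: adjacency dict {u: {v: weight, ...}}
def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:              # skip stale heap entries
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:      # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def all_pairs_shortest_paths(graph):
    # run the single-source algorithm once for each vertex
    return {u: dijkstra(graph, u) for u in graph}

g = {'A': {'B': 3, 'C': 8}, 'B': {'C': 2}, 'C': {'A': 5}}
print(all_pairs_shortest_paths(g)['A'])  # {'A': 0, 'B': 3, 'C': 5}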
Warshall's Algorithm
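
As a minimal sketch of the standard formulation, Warshall's algorithm computes the transitive closure of a directed graph: after considering each vertex k as a possible intermediate, reach[i][j] is 1 exactly when j is reachable from i (the function name warshall is illustrative):

# adj[i][j] is 1 if there is an edge i -> j, else 0
def warshall(adj):
    n = len(adj)
    reach = [row[:] for row in adj]   # start from the adjacency matrix
    for k in range(n):                # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # i reaches j directly, or via k
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(warshall(adj))  # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]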
