Dynamic Programming
Dynamic programming is a technique for solving optimization problems by breaking them down into
simpler subproblems and exploiting the fact that the optimal solution to the overall problem depends
on the optimal solutions to its subproblems.
The coin change problem is a classic dynamic programming problem. Given a set of coins of various
denominations and a target amount, the goal is to find the minimum number of coins needed to make
up the target amount.
The dynamic programming solution to the coin change problem works by building up a table of solutions
to simpler subproblems. The table has a row for each coin denomination and a column for each amount
from 0 up to the target amount. The entry at row i and column j of the table represents the minimum
number of coins needed to make up the amount j using only coins of denominations up to i.
The table is filled in row by row, starting with the row for the first coin denomination. The entry in
column 0 of every row is set to 0, since no coins are needed to make up an amount of 0. In the first row,
the entry at column j holds j / denominations[0] coins when j is a multiple of the first denomination;
every other entry is set to infinity, since those amounts cannot be made using the first denomination
alone.
For each subsequent row i, the entries in the table are filled in as follows:

if j < denominations[i]:
    table[i][j] = table[i - 1][j]
else:
    table[i][j] = min(table[i - 1][j], table[i][j - denominations[i]] + 1)
This code compares the minimum number of coins needed to make up the amount j using only coins of
denominations up to i - 1 with one plus the minimum number of coins needed to make up the remaining
amount j - denominations[i] using coins of denominations up to i (the extra one accounts for the coin of
denomination i just used). The smaller of these two values is then stored in the table entry at row i and
column j.
Once the table has been filled in, the entry at the last row and last column contains the minimum
number of coins needed to make up the target amount using all denominations of coins.
Here is a Python implementation of the dynamic programming solution to the coin change problem:
Python
def min_coins(denominations, target):
    """Finds the minimum number of coins needed to make up the target amount.

    Args:
        denominations: A list of coin denominations.
        target: The target amount of money.

    Returns:
        The minimum number of coins needed, or infinity if the amount cannot
        be made from the given denominations.
    """
    INF = float('inf')
    table = [[INF] * (target + 1) for _ in denominations]
    for i in range(len(denominations)):
        table[i][0] = 0  # no coins are needed to make an amount of 0
        for j in range(1, target + 1):
            # Best solution that leaves coin i unused.
            skip = table[i - 1][j] if i > 0 else INF
            if j < denominations[i]:
                table[i][j] = skip
            else:
                table[i][j] = min(skip, table[i][j - denominations[i]] + 1)
    return table[-1][-1]
To use the min_coins() function, simply pass in a list of coin denominations and the target amount of
money. The function will return the minimum number of coins needed to make up the target amount.
denominations = [1, 5, 10, 25]
target_amount = 12
print(min_coins(denominations, target_amount))
Output:
3
This example shows that the minimum number of coins needed to make up a target amount of 12 using
the coin denominations 1, 5, 10, and 25 is 3 (one 10 and two 1s).
The principle of optimality is one of the key principles of dynamic programming. It states that the
optimal solution to a problem consists of optimal solutions to its subproblems.
In other words, the best way to solve a problem is to break it down into smaller subproblems, solve each
subproblem optimally, and then combine the optimal solutions to the subproblems to form the optimal
solution to the overall problem.
The principle of optimality is what makes dynamic programming so powerful: it lets us find optimal
solutions to problems that would be intractable to solve by brute-force enumeration.
Here are some examples of problems that satisfy the principle of optimality:
The shortest path problem: Every subpath of a shortest path between two nodes in a graph is itself a
shortest path between its own endpoints, so a shortest path can be assembled from optimal subpaths.
The knapsack problem: The optimal solution for a knapsack of a given capacity and a set of items can be
built from the optimal solutions to smaller subproblems, namely knapsacks of smaller capacity
considered over fewer items.
The coin change problem: The minimum number of coins needed to make up a target amount can be
built from the minimum number of coins needed to make up smaller amounts, as the recurrence
sketched below makes explicit.
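As a concrete illustration (a minimal sketch, not code from the examples elsewhere in this document),
the coin change recurrence states the principle of optimality directly: the optimal answer for an amount
is one coin plus the optimal answer for the smaller remaining amount.
Python
from functools import lru_cache

def min_coins_recursive(denominations, target):
    """Top-down form of the coin change recurrence: the optimal solution
    for an amount j is one coin plus the optimal solution for the smaller
    remaining amount -- the principle of optimality applied directly."""
    @lru_cache(maxsize=None)
    def solve(j):
        if j == 0:
            return 0
        candidates = [solve(j - c) + 1 for c in denominations if c <= j]
        return min(candidates) if candidates else float('inf')
    return solve(target)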
Dynamic programming algorithms typically work by building up a table of solutions to the subproblems
of the overall problem. The table is initialized with the solutions to the simplest subproblems, and then
the solutions to more complex subproblems are computed recursively using the solutions to the simpler
subproblems.
Once the table has been filled in, the optimal solution to the overall problem can be found by simply
looking up the entry in the table that corresponds to the entire problem.
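To make the pattern concrete, here is a minimal sketch (an illustrative example, separate from the
problems discussed in this document) that tabulates Fibonacci numbers: the table starts with the
simplest subproblems, and every later entry is computed from entries already filled in.
Python
def fibonacci(n):
    """Bottom-up tabulation: start from the simplest subproblems (fib(0)
    and fib(1)) and fill each later entry from earlier ones."""
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    # The entry corresponding to the entire problem.
    return table[n]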
Dynamic programming algorithms can be very efficient, but they can also be quite complex to design and
implement. However, the power of dynamic programming makes it a valuable tool for solving a wide
variety of optimization problems.
The knapsack problem is a classic optimization problem in computer science. It is often used to teach
dynamic programming and greedy algorithms.
The problem is as follows: You have a knapsack with a maximum capacity of W, and you have a set of
items, each with a weight and a value. You want to find the subset of items that you can put in the
knapsack such that the total value of the items is maximized, without exceeding the knapsack's capacity.
One way to solve the knapsack problem is using dynamic programming. The dynamic programming
solution works by building up a table of solutions to the subproblems of the overall problem. The table is
initialized with the solutions to the simplest subproblems, and then the solutions to more complex
subproblems are computed recursively using the solutions to the simpler subproblems.
The table is a 2D table, where the rows represent the different knapsack capacities and the columns
represent the different items. The entry at row i and column j of the table represents the maximum
value that can be put in the knapsack with capacity i using only the first j items.
The table is filled in as follows:

for i in range(capacity + 1):
    for j in range(len(items) + 1):
        if i == 0 or j == 0:
            table[i][j] = 0
        elif items[j - 1].weight <= i:
            table[i][j] = max(table[i][j - 1],
                              table[i - items[j - 1].weight][j - 1]
                              + items[j - 1].value)
        else:
            table[i][j] = table[i][j - 1]
This code compares the maximum value that can be put in the knapsack with capacity i using only the
first j - 1 items with the value of item j - 1 plus the maximum value that can be put in the knapsack with
the remaining capacity i - items[j - 1].weight using the first j - 1 items. The greater of these two values is
then stored in the table entry at row i and column j.
Once the table has been filled in, the maximum value that can be put in the knapsack with capacity W
using all of the items is stored in the table entry at row W and column len(items).
The following Python code shows a complete implementation of the dynamic programming solution to
the knapsack problem:
Python
class Item:
    def __init__(self, weight, value):
        self.weight = weight
        self.value = value

def max_knapsack_value(items, capacity):
    """Finds the maximum value that can be put in a knapsack with the given
    capacity using the given items.

    Args:
        items: A list of Item objects.
        capacity: The maximum total weight the knapsack can hold.

    Returns:
        The maximum total value that fits in the knapsack.
    """
    table = [[0] * (len(items) + 1) for _ in range(capacity + 1)]
    for i in range(capacity + 1):
        for j in range(len(items) + 1):
            if i == 0 or j == 0:
                table[i][j] = 0
            elif items[j - 1].weight <= i:
                # Either skip item j - 1 or take it and fill the rest optimally.
                table[i][j] = max(table[i][j - 1],
                                  table[i - items[j - 1].weight][j - 1]
                                  + items[j - 1].value)
            else:
                table[i][j] = table[i][j - 1]
    return table[capacity][len(items)]

if __name__ == '__main__':
    # Example item list (assumed here, since the original list was elided;
    # chosen so the result matches the output shown below).
    items = [Item(10, 60), Item(40, 120), Item(30, 100)]
    capacity = 50
    max_value = max_knapsack_value(items, capacity)
    print(max_value)
Output:
180
This example shows that the maximum value that can be put in a knapsack with capacity 50 using the
items in the items list is 180.
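A natural follow-up question is which items achieve that value. One way to answer it is to backtrack
through the table: the sketch below (an assumed helper, not part of the original code; it uses the same
table layout as max_knapsack_value above) rebuilds the table and walks it backwards.
Python
def knapsack_items(items, capacity):
    """Recovers one optimal item subset by backtracking through the
    dynamic programming table (assumed helper, same layout as above)."""
    table = [[0] * (len(items) + 1) for _ in range(capacity + 1)]
    for i in range(capacity + 1):
        for j in range(1, len(items) + 1):
            if items[j - 1].weight <= i:
                table[i][j] = max(table[i][j - 1],
                                  table[i - items[j - 1].weight][j - 1]
                                  + items[j - 1].value)
            else:
                table[i][j] = table[i][j - 1]
    chosen, i = [], capacity
    for j in range(len(items), 0, -1):
        # If dropping item j - 1 changes the value, the item was taken.
        if table[i][j] != table[i][j - 1]:
            chosen.append(items[j - 1])
            i -= items[j - 1].weight
    return chosen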
Floyd's algorithm (also known as the Floyd-Warshall algorithm) is a dynamic programming algorithm for
finding the shortest paths between all pairs of vertices in a weighted directed graph. Unlike Dijkstra's
algorithm, which finds the shortest paths from a single source vertex to all other vertices, Floyd's
algorithm handles all pairs at once, and it works even when some edge weights are negative, provided
the graph contains no negative cycles.
Floyd's algorithm works by iteratively building up a table of the shortest path lengths between all pairs
of vertices. The table is initialized with the direct edge weights between all pairs of vertices. Then, the
algorithm repeatedly updates the table, admitting one additional vertex at a time as a permitted
intermediate vertex on the paths it considers.
At each iteration, the algorithm considers all possible paths between all pairs of vertices that go through
the current intermediate vertex. For each pair of vertices, the algorithm compares the shortest path
between the two vertices in the current table to the shortest path between the two vertices that goes
through the current intermediate vertex. If the shortest path that goes through the current intermediate
vertex is shorter, then the algorithm updates the table entry for that pair of vertices.
The algorithm terminates after every vertex has been considered as an intermediate vertex, one pass
per vertex. At this point, the table contains the shortest path lengths between all pairs of vertices in the
graph.
Here is a Python implementation of Floyd's algorithm:
Python
def floyd_warshall(graph):
    """Finds the shortest paths between all pairs of vertices in a weighted
    directed graph.

    Args:
        graph: A dictionary mapping each vertex to a dictionary of its
            direct successors and the corresponding edge weights.

    Returns:
        A dictionary of dictionaries, where the keys are the vertex pairs and
        the values are the shortest path lengths between those vertex pairs.
    """
    INF = float('inf')
    table = {}
    # Initialize the table with the direct edge weights.
    for vertex_i in graph:
        table[vertex_i] = {}
        for vertex_j in graph:
            if vertex_i == vertex_j:
                table[vertex_i][vertex_j] = 0
            else:
                table[vertex_i][vertex_j] = graph[vertex_i].get(vertex_j, INF)
    # Admit one intermediate vertex k at a time and relax every pair.
    for k in graph:
        for i in graph:
            for j in graph:
                if table[i][k] + table[k][j] < table[i][j]:
                    table[i][j] = table[i][k] + table[k][j]
    return table

if __name__ == '__main__':
    # Example graph (assumed here, since the original was elided; the edge
    # weights are chosen so the result matches the output shown below).
    graph = {
        'A': {'B': 10, 'C': 3},
        'B': {'D': 2},
        'C': {'B': 8, 'D': 9},
        'D': {}
    }
    # Find the shortest paths between all pairs of vertices in the graph.
    shortest_paths = floyd_warshall(graph)
    print(f"The shortest path from A to B is {shortest_paths['A']['B']}.")
Output:
The shortest path from A to B is 10.
Floyd's algorithm is a powerful algorithm for finding the shortest paths between all pairs of vertices in a
weighted directed graph. It runs in O(V^3) time for a graph with V vertices and is easy to implement.
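The implementation above returns only path lengths. If the actual paths are needed, a common
extension (sketched below; the next_hop table and both helpers are assumptions for illustration, not
part of the implementation above) records, for each pair, the next vertex to step to.
Python
def floyd_warshall_with_paths(graph):
    """Variant that also records a next-hop table so paths can be rebuilt."""
    INF = float('inf')
    dist = {u: {v: 0 if u == v else graph[u].get(v, INF) for v in graph}
            for u in graph}
    next_hop = {u: {v: v if v in graph[u] else None for v in graph}
                for u in graph}
    for k in graph:
        for i in graph:
            for j in graph:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    next_hop[i][j] = next_hop[i][k]
    return dist, next_hop

def reconstruct_path(next_hop, u, v):
    """Follows the next-hop table from u to v (empty list if unreachable)."""
    if u == v:
        return [u]
    if next_hop[u][v] is None:
        return []
    path = [u]
    while u != v:
        u = next_hop[u][v]
        path.append(u)
    return path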
Chained matrix multiplication is the problem of finding the most efficient way to multiply a sequence of
matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence
of the matrix multiplications involved. The problem may be solved using dynamic programming.
There are many options because matrix multiplication is associative: no matter how the product is
parenthesized, the resulting matrix is the same. The cost of computing it, however, can differ
dramatically between parenthesizations. For example, for four matrices A, B, C, and D, there are five
possible parenthesizations (a worked cost comparison follows the list):
(AB)(CD)
(A(BC))D
((AB)C)D
A((BC)D)
A(B(CD))
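To see why the choice matters, suppose A is 10×30, B is 30×5, and C is 5×60 (example dimensions chosen
here for illustration). Multiplying a p×q matrix by a q×r matrix takes p·q·r scalar multiplications, so
computing (AB)C costs 10·30·5 + 10·5·60 = 1,500 + 3,000 = 4,500 multiplications, while computing A(BC)
costs 30·5·60 + 10·30·60 = 9,000 + 18,000 = 27,000 multiplications, six times as many for the same result.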
The dynamic programming algorithm for chained matrix multiplication works by building up a table of
the optimal cost of multiplying every contiguous subsequence of the matrices. The table is initialized
with the cost of "multiplying" each individual matrix, which is zero. Then, the algorithm repeatedly
updates the table by considering all possible ways to split each subsequence into two smaller
subsequences, multiplying those subsequences optimally, and then multiplying the two results together.
The algorithm terminates once the entire chain has been considered. At this point, the table entry for
the full chain contains the optimal cost of multiplying all of the matrices.
Here is a Python implementation of the dynamic programming algorithm for chained matrix
multiplication:
Python
def chained_matrix_multiplication(matrices):
    """Finds the minimum number of scalar multiplications needed to multiply
    a chain of matrices.

    Args:
        matrices: A list of (rows, cols) tuples, one per matrix, where the
            column count of each matrix equals the row count of the next.

    Returns:
        The optimal (minimum) total multiplication cost for the chain.
    """
    n = len(matrices)
    table = {}
    # The cost of "multiplying" a single matrix is zero.
    for i in range(n):
        table[i, i] = 0
    # Update the table of optimal costs, considering all possible ways to
    # split each subsequence into two smaller subsequences and multiplying
    # those subsequences optimally.
    for l in range(1, n):
        for i in range(n - l):
            j = i + l
            table[i, j] = min(
                table[i, k] + table[k + 1, j]
                + matrices[i][0] * matrices[k][1] * matrices[j][1]
                for k in range(i, j)
            )
    return table[0, n - 1]
if __name__ == '__main__':
    # Example dimensions (assumed here, since the original list was elided;
    # chosen so the result matches the output shown below).
    matrices = [(10, 10), (10, 10), (10, 10)]
    optimal_cost = chained_matrix_multiplication(matrices)
    print(optimal_cost)
Output:
2000
This example shows that the optimal cost of multiplying the three matrices in the matrices list is 2000.
The dynamic programming algorithm for chained matrix multiplication is a powerful way to find the
most efficient order in which to multiply a sequence of matrices. It runs in O(n^3) time for a chain of n
matrices and is straightforward to implement.