Session 9 and 10
College of Education
School of Continuing and Distance Education
2020/2021 – 2022/2023
Dynamic Programming
Dynamic Programming is a general algorithm design technique
for solving problems defined by recurrences with overlapping
subproblems
• Main idea:
- set up a recurrence relating a solution to a larger instance to
solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Example 1: Fibonacci numbers
• Recall definition of Fibonacci numbers:
F(n) = F(n-1) + F(n-2) for n > 1
F(0) = 0, F(1) = 1
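A minimal bottom-up sketch in Python (the function name fib_dp is ours, not from the slides): each F(i) is computed once and recorded in a table.

def fib_dp(n):
    # Bottom-up DP: record F(0..n) in a table, each value computed once.
    if n == 0:
        return 0
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(fib_dp(10))   # 55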
Example 2: Coin-row problem
There is a row of n coins whose values are some positive integers c₁, c₂, …, cₙ, not necessarily distinct. Pick up the maximum amount of money subject to the constraint that no two adjacent coins can be picked up.

DP solution to the coin-row problem
Let F(n) be the maximum amount that can be picked up from the first n coins. Then
F(n) = max{cₙ + F(n-2), F(n-1)} for n > 1,
F(0) = 0, F(1) = c₁
index   0    1    2    3    4    5    6
coins   --   5    1    2    10   6    2
F       0    5    5    7    15   15   17

Max amount: F(6) = 17
Coins of optimal solution: c₁ = 5, c₄ = 10, c₆ = 2
Time efficiency: Θ(n)
Space efficiency: Θ(n)
Note: All smaller instances were solved.
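A short Python sketch of this computation on the coin row above (the names coin_row and coins are ours):

def coin_row(c):
    # c[0] is unused; c[1..n] are the coin values.
    n = len(c) - 1
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = c[1]
    for i in range(2, n + 1):
        F[i] = max(c[i] + F[i - 2], F[i - 1])
    return F

coins = [0, 5, 1, 2, 10, 6, 2]
print(coin_row(coins)[-1])   # 17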
Example 3: Path counting
Consider the problem of counting the number of shortest paths from point A to point B in a city with perfectly horizontal streets and vertical avenues.
[Figure: rectangular street grid with point A at one corner and point B at the opposite corner]
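Since every shortest path consists only of moves toward B along streets and avenues, the number of shortest paths to an intersection is the sum of the counts for its two neighbouring intersections closer to A. A small Python sketch, assuming A is at the top-left and B at the bottom-right of a rows-by-cols grid of intersections (the name count_paths is ours):

def count_paths(rows, cols):
    # P[i][j] = number of shortest paths from the top-left corner to (i, j);
    # every shortest path uses only "right" and "down" moves.
    P = [[1] * cols for _ in range(rows)]
    for i in range(1, rows):
        for j in range(1, cols):
            P[i][j] = P[i - 1][j] + P[i][j - 1]
    return P[rows - 1][cols - 1]

print(count_paths(4, 4))   # C(6, 3) = 20 for a 4-by-4 grid of intersections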
Example 4: Coin-collecting by robot
Several coins are placed in cells of an n×m board. A robot, located in
the upper left cell of the board, needs to collect as many of the coins as
possible and bring them to the bottom right cell. On each step, the
robot can move either one cell to the right or one cell down from its
current location.
[Figure: an n×m board with coins in some of its cells]
Solution to the coin-collecting problem
Let F(i,j) be the largest number of coins the robot can collect and
bring to cell (i,j) in the ith row and jth column.
The recurrence:
F(i, j) = max{F(i-1, j), F(i, j-1)} + cij for 1 ≤ i ≤ n, 1 ≤ j ≤ m
where cij = 1 if there is a coin in cell (i,j), and cij = 0 otherwise;
F(0, j) = 0 for 1 ≤ j ≤ m and F(i, 0) = 0 for 1 ≤ i ≤ n
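A Python sketch of this recurrence, assuming the board is given as a 0/1 matrix of coin indicators (the name collect_coins and the sample board are ours):

def collect_coins(C):
    # C[i][j] = 1 if cell (i, j) holds a coin (0-based indexing here).
    n, m = len(C), len(C[0])
    F = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            up = F[i - 1][j] if i > 0 else 0
            left = F[i][j - 1] if j > 0 else 0
            F[i][j] = max(up, left) + C[i][j]
    return F[n - 1][m - 1]

board = [[0, 0, 0, 0, 1, 0],
         [0, 1, 0, 1, 0, 0],
         [0, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1],
         [1, 0, 0, 0, 1, 0]]
print(collect_coins(board))   # 5 coins on this sample board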
Other examples of DP algorithms
• Computing a binomial coefficient (# 9, Exercises 8.1)
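Computing C(n, k) uses the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k) with C(n, 0) = C(n, n) = 1; a small Python sketch (the name binomial is ours):

def binomial(n, k):
    # Fill Pascal's triangle row by row; C[j] holds C(row, j).
    C = [0] * (k + 1)
    C[0] = 1
    for row in range(1, n + 1):
        # Go right to left so each C[j-1] is still the previous row's value.
        for j in range(min(row, k), 0, -1):
            C[j] = C[j] + C[j - 1]
    return C[k]

print(binomial(6, 3))   # 20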
DP for Optimal BST Problem
Problem: given keys a1 < … < an to be searched for with probabilities p1, …, pn, build a BST that minimizes the average number of comparisons in a successful search.

Let C[i,j] be the smallest average number of comparisons in a BST T[i,j] on keys ai, …, aj. Consider each ak (i ≤ k ≤ j) as the root, with an optimal BST for ai, …, ak-1 as its left subtree and an optimal BST for ak+1, …, aj as its right subtree. Then

C[i,j] = min over i ≤ k ≤ j of { pk·1 + ∑s=i..k-1 ps·(level of as in T[i,k-1] + 1)
                                       + ∑s=k+1..j ps·(level of as in T[k+1,j] + 1) }
DP for Optimal BST Problem (cont.)
After simplifications, we obtain the recurrence for C[i,j]:
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + ∑s=i..j ps  for 1 ≤ i ≤ j ≤ n
C[i,i] = pi  for 1 ≤ i ≤ n
Example:  key          A    B    C    D
          probability  0.1  0.2  0.4  0.3

The tables below are filled diagonal by diagonal: the main table is filled using the recurrence
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + ∑s=i..j ps ,  C[i,i] = pi ;
the root table records the values of k giving the minima.

Main table C[i,j]:
        j=0   j=1   j=2   j=3   j=4
i=1      0    .1    .4    1.1   1.7
i=2            0    .2    .8    1.4
i=3                  0    .4    1.0
i=4                        0    .3
i=5                              0

Root table R[i,j]:
        j=0   j=1   j=2   j=3   j=4
i=1            1     2     3     3
i=2                  2     3     3
i=3                        3     3
i=4                              4
i=5

Optimal BST (read off the root table): C is the root (R[1,4] = 3); its left subtree consists of B with left child A, and its right child is D. Its average number of comparisons is C[1,4] = 1.7.
Optimal Binary Search Trees
Analysis of DP for Optimal BST Problem
Time efficiency: Θ(n³), but can be reduced to Θ(n²) by taking advantage of monotonicity of entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j]
Space efficiency: Θ(n²)
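A Python sketch of this Θ(n³) computation (the names optimal_bst and probs are ours); it fills the main and root tables diagonal by diagonal, as in the example above:

def optimal_bst(p):
    # p[1..n] are the search probabilities; index 0 is unused.
    n = len(p) - 1
    INF = float("inf")
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # main table, C[i][j] for 1 <= i <= j <= n
    R = [[0] * (n + 2) for _ in range(n + 2)]     # root table
    for i in range(1, n + 1):
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                 # d = j - i: fill diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = INF, i
            for k in range(i, j + 1):
                # C[i][k-1] and C[k+1][j] are 0 when the corresponding subtree is empty.
                cost = C[i][k - 1] + C[k + 1][j]
                if cost < best:
                    best, best_k = cost, k
            C[i][j] = best + sum(p[i:j + 1])
            R[i][j] = best_k
    return C[1][n], R

probs = [0, 0.1, 0.2, 0.4, 0.3]   # keys A, B, C, D
cost, R = optimal_bst(probs)
print(cost)      # ≈ 1.7 (up to floating-point rounding)
print(R[1][4])   # 3 -> key C is the root of the optimal tree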
Warshall’s Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n), where R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (adjacency matrix), R(n) = T (transitive closure).

Example digraph on vertices 1, 2, 3, 4 (edges 1→3, 2→1, 2→4, 4→2), its adjacency matrix A and transitive closure T:

A = 0 0 1 0        T = 0 0 1 0
    1 0 0 1            1 1 1 1
    0 0 0 0            0 0 0 0
    0 1 0 0            1 1 1 1
[Figure: the digraph and the sequence of matrices R(0) through R(4)]

Recurrence:
R(k)[i,j] = R(k-1)[i,j]                      (a path from i to j using just 1, …, k-1)
            or
            R(k-1)[i,k] and R(k-1)[k,j]      (a path from i to k and a path from k to j, each using just 1, …, k-1)
Warshall’s Algorithm (matrix generation)

R(2) = 0 0 1 0      R(3) = 0 0 1 0      R(4) = 0 0 1 0
       1 0 1 1             1 0 1 1             1 1 1 1
       0 0 0 0             0 0 0 0             0 0 0 0
       1 1 1 1             1 1 1 1             1 1 1 1
Warshall’s Algorithm (pseudocode and analysis)
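A Python sketch of Warshall’s algorithm as described above (the function name warshall is ours); it runs in Θ(n³) time:

def warshall(A):
    # A is the boolean adjacency matrix; returns the transitive closure T.
    n = len(A)
    R = [row[:] for row in A]          # R(0) = A
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)    # rows of T: [0,0,1,0], [1,1,1,1], [0,0,0,0], [1,1,1,1]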
Floyd’s Algorithm (matrix generation)
On the k-th iteration, the algorithm determines shortest paths
between every pair of vertices i, j that use only vertices among 1,
…,k as intermediate
A shortest path from i to j with intermediate vertices among 1, …, k either avoids vertex k or passes through k exactly once (i to k, then k to j), which gives the recurrence
D(k)[i,j] = min{ D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] } for k ≥ 1,
D(0)[i,j] = wij (the weight of edge (i,j), or ∞ if there is no such edge; 0 on the diagonal)
Floyd’s Algorithm (example)

[Figure: weighted digraph on vertices 1–4 with edges 1→3 (weight 3), 2→1 (weight 2), 3→2 (weight 7), 3→4 (weight 1), 4→1 (weight 6)]

D(0) = 0 ∞ 3 ∞      D(1) = 0 ∞ 3 ∞      D(2) = 0 ∞ 3 ∞
       2 0 ∞ ∞             2 0 5 ∞             2 0 5 ∞
       ∞ 7 0 1             ∞ 7 0 1             9 7 0 1
       6 ∞ ∞ 0             6 ∞ 9 0             6 ∞ 9 0

D(3) = 0 10 3 4     D(4) = 0 10 3 4
       2 0  5 6            2 0  5 6
       9 7  0 1            7 7  0 1
       6 16 9 0            6 16 9 0
Floyd’s Algorithm (pseudocode and analysis)
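A Python sketch of Floyd’s algorithm following the recurrence above (the function name floyd is ours); it runs in Θ(n³) time and Θ(n²) space:

INF = float("inf")

def floyd(W):
    # W is the weight matrix; W[i][j] = INF if there is no edge (i, j).
    n = len(W)
    D = [row[:] for row in W]          # D(0) = W
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)    # [0,10,3,4], [2,0,5,6], [7,7,0,1], [6,16,9,0]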