UNIT-IV
Dynamic Programming
II B.TECH II sem
By
Dr. M. Umadevi
VFSTR
Dynamic programming definition
Characteristics
Applications
◦ Matrix Chain Multiplication
◦ 0/1 Knapsack
◦ OBST
◦ TSP
◦ All Pairs Shortest Path
◦ Reliability Design
Dynamic Programming
Applied to optimization problems
Invented by the U.S. mathematician Richard Bellman in the 1950s
Used for solving problems with overlapping subproblems. Ex: the matrix product ABC can be
computed as A(BC) or (AB)C; the result is the same, but the number of scalar multiplications differs
Applied when the solution to a problem can be viewed as the result of a sequence of
decisions
Dynamic Programming steps / design
Characterize the structure of an optimal solution, i.e. develop a mathematical
notation to express the solution to the given problem
Recursively define the value of an optimal solution
Use a bottom-up technique to compute the optimal solution; for this, develop a
recurrence relation
Construct an optimal solution from the computed information
Principle of Optimality
The Principle of Optimality states that in an
optimal sequence of decisions or choices,
each subsequence must also be optimal.
Ex: finding a shortest path
In dynamic programming, the solution of an
algorithm is obtained using the principle of
optimality
Differences between Divide & Conquer and Dynamic Programming
Divide and Conquer: subproblems are independent; they may be solved repeatedly, and solutions are combined top-down.
Dynamic Programming: subproblems overlap; each subproblem is solved only once, its result is stored in a table, and the final solution is built bottom-up using the principle of optimality.
Goal (Matrix Chain Multiplication): find the order of multiplying a chain of matrices that minimizes the total number of scalar multiplications.
Ex: A1 is 5x4, A2 is 4x6, A3 is 6x2
(A1A2)A3: (A1A2) takes 5x4x6 = 120 multiplications, then (A1A2)A3 takes 5x6x2 = 60, total = 120 + 60 = 180
A1(A2A3): (A2A3) takes 4x6x2 = 48 multiplications, then A1(A2A3) takes 5x4x2 = 40, total = 48 + 40 = 88
So A1(A2A3) is the cheaper ordering.
Cost table M[i][j] (filled diagonal by diagonal, d = j - i):
d=0:  M11=0      M22=0      M33=0      M44=0
d=1:  M12=, k=   M23=, k=   M34=, k=
d=2:  M13=, k=   M24=, k=
d=3:  M14=, k=
Matrix Chain Multiplication Problem
A1 is 5x4, A2 is 4x6
A3 is 6x2, A4 is 2x7
d=0:  M11=0          M22=0         M33=0         M44=0
d=1:  M12=120, k=1   M23=48, k=2   M34=84, k=3
d=2:  M13=, k=       M24=, k=
d=3:  M14=, k=
A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7
(dimension vector P1..P5 = 5, 4, 6, 2, 7, so Ai is Pi x Pi+1)
For k=1:
M13 = M11 + M23 + P1 P2 P4 = 0 + 48 + 5x4x2 = 0 + 48 + 40 = 88
For k=2:
M13 = M12 + M33 + P1 P3 P4 = 120 + 0 + 5x6x2 = 120 + 0 + 60 = 180
Min{88, 180} = 88, i.e. k=1
Matrix Chain Multiplication Problem
For k=2:
M24 = M22 + M34 + P2 P3 P5 = 0 + 84 + 4x6x7 = 0 + 84 + 168 = 252
For k=3:
M24 = M23 + M44 + P2 P4 P5 = 48 + 0 + 4x2x7 = 48 + 0 + 56 = 104
Min{252, 104} = 104, i.e. k=3
Matrix Chain Multiplication Problem
For k=3:
M14 = M13 + M44 + P1 P4 P5 = 88 + 0 + 5x2x7 = 88 + 0 + 70 = 158 (the minimum over k = 1, 2, 3)
A1(A2A3)A4 = 158
Solution
The optimal parenthesization is recovered from the stored split points k:
k = 3 for M14 splits the chain into (A1A2A3)(A4); k = 1 for M13 splits A1A2A3 into (A1)(A2A3).
Optimal order: ( A1 (A2 A3) ) A4
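A minimal C sketch of the tabulation above, following the slide notation: M[i][j] is the best cost of multiplying Ai..Aj, K[i][j] the best split point, and Ai has size P[i] x P[i+1] with P1..P5 = 5, 4, 6, 2, 7.

#include <stdio.h>
#include <limits.h>

#define N 4                          /* number of matrices              */
long P[N + 2] = {0, 5, 4, 6, 2, 7};  /* Ai is P[i] x P[i+1] (1-based)   */
long M[N + 1][N + 1];                /* M[i][j] = min cost of Ai..Aj    */
int  K[N + 1][N + 1];                /* K[i][j] = best split point k    */

int main(void) {
    for (int i = 1; i <= N; i++) M[i][i] = 0;        /* diagonal d = 0  */
    for (int d = 1; d < N; d++)                      /* longer chains   */
        for (int i = 1; i + d <= N; i++) {
            int j = i + d;
            M[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {            /* try each split  */
                long cost = M[i][k] + M[k + 1][j] + P[i] * P[k + 1] * P[j + 1];
                if (cost < M[i][j]) { M[i][j] = cost; K[i][j] = k; }
            }
        }
    printf("M14 = %ld, k = %d\n", M[1][N], K[1][N]);  /* 158, k = 3     */
    return 0;
}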
Dynamic programming is used to find an optimal solution.
Dynamic programming is used to solve problems with optimal
substructure.
OBST: worst-case time complexity O(n^3)
0/1 knapsack: arrange all xi in order of descending pi/wi
TRAVELLING SALESPERSON PROBLEM
Let G = (V, E) be a directed graph with edge costs cij.
The cost cij is defined such that cij > 0 for all i and j, and
cij = infinity if <i, j> does not belong to E.
Let |V| = n and assume n > 1.
A tour of G is a directed simple cycle that includes every vertex in
V. The cost of a tour is the sum of the costs of the edges on the tour.
The travelling salesperson problem is to find a tour of minimum
cost. The tour starts and ends at vertex 1.
Let g(i, S) be the length of a shortest path starting at vertex i, going
through all vertices in S, and terminating at vertex 1. The function
g(1, V - {1}) is the length of an optimal salesperson tour. From the
principle of optimality it follows that:
g(1, V - {1}) = min over 2 <= k <= n of { c1k + g(k, V - {1, k}) }
and, in general, g(i, S) = min over j in S of { cij + g(j, S - {j}) }
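A C sketch of this recurrence using a bitmask to represent the subset S (Held-Karp style). The 4-vertex cost matrix below is an illustrative example, not taken from the slides, and vertex 1 of the formula is vertex 0 in the code.

#include <stdio.h>

#define N   4
#define INF 1000000
int c[N][N] = { { 0, 10, 15, 20 },    /* c[i][j] = edge cost (sample data)  */
                { 5,  0,  9, 10 },
                { 6, 13,  0, 12 },
                { 8,  8,  9,  0 } };
/* g[S][i] = length of a shortest path starting at i, going through every
   vertex in the subset S exactly once, and ending at vertex 0             */
int g[1 << N][N];

int main(void) {
    for (int i = 1; i < N; i++) g[0][i] = c[i][0];     /* g(i, {}) = ci1    */
    for (int S = 1; S < (1 << N); S++) {
        if (S & 1) continue;                 /* vertex 0 is never in S      */
        for (int i = 1; i < N; i++) {
            if (S & (1 << i)) continue;      /* i itself must not be in S   */
            g[S][i] = INF;
            for (int j = 1; j < N; j++)      /* g(i,S) = min cij + g(j,S-{j}) */
                if (S & (1 << j)) {
                    int cand = c[i][j] + g[S ^ (1 << j)][j];
                    if (cand < g[S][i]) g[S][i] = cand;
                }
        }
    }
    int full = ((1 << N) - 1) & ~1, best = INF;   /* S = V minus vertex 0   */
    for (int j = 1; j < N; j++) {                 /* g(1, V - {1})          */
        int cand = c[0][j] + g[full ^ (1 << j)][j];
        if (cand < best) best = cand;
    }
    printf("optimal tour length = %d\n", best);   /* 35 for this sample     */
    return 0;
}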
0/1 – KNAPSACK
We are given n objects and a knapsack.
Each object i has a positive weight wi and a positive
value vi.
The knapsack can carry a weight not exceeding W.
Fill the knapsack so that the value of the objects in the
knapsack is maximized.
Solution steps using Dynamic Programming
1. Let fi(y) be the maximum value obtainable using objects 1..i with capacity y. Initially S0 = {(0, 0)}.
We use the ordered set Si = {(fi(yj), yj) | 1 <= j <= k}
to represent fi(y).
Each member of Si is a pair (P, W), where P = fi(yj) and W = yj.
We can compute Si+1 from Si by first computing
Si_1 = {(P, W) | (P - pi+1, W - wi+1) belongs to Si}
Si+1 can then be computed by merging the pairs in Si and Si_1
together.
2. A solution to the knapsack problem can be obtained by
making a sequence of decisions on the variables x1, x2, ..., xn.
A decision on variable xi involves determining which of the
values 0 or 1 is to be assigned to it.
Purging rule: given two pairs (Pj, Wj) and (Pk, Wk) with
the property that Pj <= Pk and Wj >= Wk, the pair
(Pj, Wj) can be discarded. Discarding or purging rules
such as this one are also known as dominance rules.
Dominated tuples get purged; in the above, (Pk, Wk)
dominates (Pj, Wj).
0/1 knapsack
Example: n = 3 objects, with
pi: 1, 2, 5
wi: 2, 3, 4
S0   = {(0, 0)}
S0_1 = {(1, 2)}
S1   = {(0, 0), (1, 2)}
S1_1 = {(2, 3), (3, 5)}
S2   = {(0, 0), (1, 2), (2, 3), (3, 5)}
S2_1 = {(5, 4), (6, 6), (7, 7), (8, 9)}
S3   = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6), (7, 7), (8, 9)}   ((3, 5) is purged: (5, 4) dominates it)
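A C sketch of the pair-set method on this example: each Si is kept ordered by weight, Si_1 is Si shifted by (pi+1, wi+1), and dominated pairs are purged during the merge. The Pair type and the merge helper are names chosen here, not from the slides.

#include <stdio.h>

#define NOBJ 3
int p[NOBJ + 1] = {0, 1, 2, 5};   /* profits, 1-based as in the example */
int w[NOBJ + 1] = {0, 2, 3, 4};   /* weights                            */

typedef struct { int P, W; } Pair;

/* Merge S with S shifted by (pi, wi); keep the result ordered by W and
   apply the purging (dominance) rule while merging.                     */
int merge(Pair *S, int n, int pi, int wi, Pair *out) {
    int a = 0, b = 0, cnt = 0;
    while (a < n || b < n) {
        Pair cand;
        if (b >= n || (a < n && S[a].W <= S[b].W + wi))
            cand = S[a++];                                  /* plain pair */
        else { cand = S[b]; cand.P += pi; cand.W += wi; b++; } /* shifted */
        if (cnt > 0 && out[cnt - 1].P >= cand.P)
            continue;                            /* cand is dominated     */
        if (cnt > 0 && out[cnt - 1].W == cand.W)
            cnt--;                               /* previous is dominated */
        out[cnt++] = cand;
    }
    return cnt;
}

int main(void) {
    Pair S[1 << NOBJ] = {{0, 0}}, T[1 << NOBJ];
    int n = 1;                                   /* S0 = {(0, 0)}         */
    for (int i = 1; i <= NOBJ; i++) {
        n = merge(S, n, p[i], w[i], T);
        for (int k = 0; k < n; k++) S[k] = T[k];
        printf("S%d:", i);
        for (int k = 0; k < n; k++) printf(" (%d,%d)", S[k].P, S[k].W);
        printf("\n");
    }
    return 0;
}

Running this prints S1, S2 and S3 exactly as listed above, with (3, 5) purged from S3 because (5, 4) dominates it.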
Optimal Binary Search Tree
In computer science, an
optimal binary search tree
(optimal BST), sometimes
called a weight-balanced
binary tree, is a binary
search tree which provides
the smallest possible expected
search time for a given
sequence of accesses (or
access probabilities).
Dynamic programming method to solve OBST
STEP 1:
Let Tij = OBST(ai+1, ..., aj)
Cij = cost of the tree Tij
STEP 2:
The OBST can be built using the principle of optimality.
Let T0n be an OBST for elements a1 < a2 < ... < an.
Let L and R be its left and right subtrees, and suppose the root is ak.
L is the left subtree containing a1, a2, ..., ak-1
R is the right subtree containing ak+1, ak+2, ..., an
C(T0n) = C(L) + C(R) + W
where W = p1 + p2 + ... + pn + q0 + q1 + ... + qn
Suppose L is not an optimal subtree; then we can find another tree L'
for the same elements with the property C(L') < C(L).
C(T') = C(L') + C(R) + W
C(T') < C(T0n)
This contradicts the fact that T0n is optimal.
Therefore L must be optimal, and the same argument holds for R.
STEP 3:
C(i, j) = min over i < k <= j of { C(i, k-1) + C(k, j) + W(i, j) }
W(i, j) = W(i, j-1) + p[j] + q[j]
r(i, j) = k
• C(i, j) is the cost of the optimal binary search tree Tij, and R(i, j) is the root
of Tij; an optimal binary search tree can then be constructed from these
R(i, j). R(i, j) is the value of k that minimizes the equation above.
We solve the problem by first computing W(i, i+1), C(i, i+1) and R(i, i+1) for
0 <= i < 4; then W(i, i+2), C(i, i+2) and R(i, i+2) for 0 <= i < 3 (here n = 4); and
repeating until W(0, n), C(0, n) and R(0, n) are obtained.
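A C sketch of STEP 3. The tables W, C and R follow the notation above; the frequency arrays p[] and q[] are assumed sample values, since the slides give only the identifier set.

#include <stdio.h>

#define N 4                      /* number of keys a1..aN                  */
int p[N + 1] = {0, 3, 3, 1, 1};  /* p[k]: weight of a successful search ak */
int q[N + 1] = {2, 3, 1, 1, 1};  /* q[k]: weight of failures between keys  */

int W[N + 1][N + 1], C[N + 1][N + 1], R[N + 1][N + 1];

int main(void) {
    for (int i = 0; i <= N; i++) {           /* empty trees Tii             */
        W[i][i] = q[i];
        C[i][i] = 0;
        R[i][i] = 0;
    }
    for (int d = 1; d <= N; d++)             /* trees with d = j - i keys   */
        for (int i = 0; i + d <= N; i++) {
            int j = i + d;
            W[i][j] = W[i][j - 1] + p[j] + q[j];
            C[i][j] = -1;
            for (int k = i + 1; k <= j; k++) {            /* candidate roots */
                int cost = C[i][k - 1] + C[k][j] + W[i][j];
                if (C[i][j] < 0 || cost < C[i][j]) { C[i][j] = cost; R[i][j] = k; }
            }
        }
    printf("C(0,%d) = %d, root R(0,%d) = a%d\n", N, C[0][N], N, R[0][N]);
    return 0;
}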
OBST example (figure): identifier set (if, do, int, while).
Reliability Design
The problem is to design a system that is composed of several
devices connected in series.
Let ri be the reliability of device Di, i.e.
ri is the probability that device i will function properly;
then the reliability of the entire system is Π ri.
Even if the individual devices are very reliable (the ri's are very
close to one), the reliability of the system may not be very good.
For example, if n = 10 and ri = 0.99, 1 <= i <= 10, then Π ri = 0.904.
Hence, it is desirable to duplicate devices. Multiple copies of the
same device type are connected in parallel.
Reliability Design
If stage i contains mi copies of device Di, then the
probability that all mi copies have a
malfunction is (1 - ri)^mi. Hence the reliability of stage i
becomes 1 - (1 - ri)^mi.
In general, the reliability of stage i is given by a function Φi(mi).
Our problem is to use device duplication to maximize the system
reliability Π Φi(mi). This maximization
is to be carried out under a cost constraint: let ci be the cost
of each unit of device i and let c be the maximum allowable
cost of the system being designed.
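A C sketch of this cost-constrained maximization: f[x] holds the best reliability of the stages processed so far using cost at most x, and each stage tries every feasible number of copies m with Φi(m) = 1 - (1 - ri)^m. The device costs, reliabilities and budget are assumed sample values, not taken from the slides.

#include <stdio.h>
#include <math.h>

#define NSTAGE 3
#define BUDGET 105
int    cst[NSTAGE] = {30, 15, 20};     /* ci: cost per copy (sample data)   */
double rel[NSTAGE] = {0.9, 0.8, 0.5};  /* ri: reliability of a single copy  */

int main(void) {
    double f[BUDGET + 1], g[BUDGET + 1];
    for (int x = 0; x <= BUDGET; x++) f[x] = 1.0;     /* no stage placed yet */

    for (int i = 0; i < NSTAGE; i++) {
        for (int x = 0; x <= BUDGET; x++) g[x] = 0.0;
        for (int m = 1; m * cst[i] <= BUDGET; m++) {  /* m copies of Di      */
            double phi = 1.0 - pow(1.0 - rel[i], m);  /* stage reliability   */
            for (int x = m * cst[i]; x <= BUDGET; x++) {
                double cand = f[x - m * cst[i]] * phi;
                if (cand > g[x]) g[x] = cand;
            }
        }
        for (int x = 0; x <= BUDGET; x++) f[x] = g[x];
    }
    printf("maximum system reliability = %.4f\n", f[BUDGET]);
    return 0;
}

For these sample numbers the best design is m = (1, 2, 2) with reliability 0.9 x 0.96 x 0.75 = 0.648.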
All pairs shortest paths
In the all pairs shortest path problem, we are to find a shortest path
between every pair of vertices in a directed graph G. That is, for every pair
of vertices (i, j), we are to find a shortest path from i to j as well as one
from j to i. These two paths are the same when G is undirected
The shortest i to j path in G, i ≠ j originates at vertex i and goes through
some intermediate vertices (possibly none) and terminates at vertex j.
If k is an intermediate vertex on this shortest path, then the subpaths from
i to k and from k to j must be shortest paths from i to k and k to j,
respectively.
Let Ak(i, j) represent the length of a shortest path from i to j going
through no vertex of index greater than k. We obtain:
A0(i, j) = cost(i, j)
Ak(i, j) = min { Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) },  k >= 1
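A C sketch of this recurrence (the Floyd-Warshall algorithm), updating A in place so that after iteration k the matrix holds Ak; the 3-vertex cost matrix is an illustrative example, not taken from the slides.

#include <stdio.h>

#define N   3
#define INF 1000000                  /* stands in for "no edge" (infinity) */
int A[N][N] = { {  0,   4, 11 },     /* A0(i, j) = cost(i, j) (sample)     */
                {  6,   0,  2 },
                {  3, INF,  0 } };

int main(void) {
    /* Ak(i, j) = min( Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) )               */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (A[i][k] + A[k][j] < A[i][j])
                    A[i][j] = A[i][k] + A[k][j];

    for (int i = 0; i < N; i++) {            /* print the final matrix An  */
        for (int j = 0; j < N; j++) printf("%8d", A[i][j]);
        printf("\n");
    }
    return 0;
}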
Longest Common Subsequence (LCS) Problem Statement
Given two sequences, find the length of longest subsequence present in both of
them.
A subsequence is a sequence that appears in the same relative order, but is not
necessarily contiguous.
For example, "abc", "abg", "bdf", "aeg", "acefg", etc. are subsequences of
"abcdefg".
The number of subsequences with one element is nC1, with two elements nC2, and so on;
in total, a string of length n has 2^n - 1 different possible subsequences (we do not consider the
empty subsequence). A brute-force solution checks each subsequence of one string to see
if it is also a subsequence of the other, which takes exponential time. This time complexity can be improved using dynamic
programming.
It is a classic computer science problem, the basis of diff (a file comparison
program that outputs the differences between two files), and has
applications in bioinformatics.
The naive solution is to generate all subsequences of both sequences and find the longest
matching subsequence.
Optimal substructure: let the input sequences be X[0..m-1] and Y[0..n-1], of lengths m and n
respectively.
And let L(X[0..m-1], Y[0..n-1]) be the length of LCS of the two sequences X and Y.
If the last characters of both sequences match (or X[m-1] == Y[n-1]) then
L(X[0..m-1], Y[0..n-1]) = 1 + L(X[0..m-2], Y[0..n-2])
If the last characters of both sequences do not match (or X[m-1] != Y[n-1]) then
L(X[0..m-1], Y[0..n-1]) = MAX( L(X[0..m-2], Y[0..n-1]), L(X[0..m-1], Y[0..n-2]) )
Consider the input strings "AGGTAB" and "GXTXAYB". The last characters match,
so the length of the LCS can be written as:
L("AGGTAB", "GXTXAYB") = 1 + L("AGGTA", "GXTXAY")
Consider the input strings "ABCDGH" and "AEDFHR".
The last characters do not match, so the length of the LCS can be written
as:
L("ABCDGH", "AEDFHR") = MAX( L("ABCDG", "AEDFHR"), L("ABCDGH", "AEDFH") )
So the LCS problem has the optimal substructure property: the main problem
can be solved using solutions to its subproblems.
Following is a simple recursive implementation of the LCS problem. The implementation simply follows
the recursive structure mentioned above (max is a small helper returning the larger of two ints):

int max(int a, int b) { return a > b ? a : b; }
/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs(char *X, char *Y, int m, int n) {
    if (m == 0 || n == 0) return 0;
    if (X[m-1] == Y[n-1]) return 1 + lcs(X, Y, m-1, n-1);
    return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}
/* driver: int m = strlen(X), n = strlen(Y); lcs(X, Y, m, n); */
Considering the above implementation, the following is a partial recursion tree for the input strings "AXYT"
and "AYZX":

                          lcs("AXYT", "AYZX")
                          /                \
           lcs("AXY", "AYZX")            lcs("AXYT", "AYZ")
              /         \                   /           \
 lcs("AX","AYZX")  lcs("AXY","AYZ")   lcs("AXY","AYZ")  lcs("AXYT","AY")

Already in this partial tree, lcs("AXY", "AYZ") is solved twice; the complete recursion tree contains many such repeated (overlapping) subproblems.
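Because subproblems such as lcs("AXY", "AYZ") repeat, a tabulated version that fills an (m+1) x (n+1) table computes each L(i, j) exactly once in O(mn) time. A minimal C sketch (the function name lcs_dp is chosen here):

#include <stdio.h>
#include <string.h>

/* L[i][j] = length of the LCS of X[0..i-1] and Y[0..j-1] */
int lcs_dp(const char *X, const char *Y) {
    int m = (int)strlen(X), n = (int)strlen(Y);
    int L[m + 1][n + 1];
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;                          /* empty prefix      */
            else if (X[i - 1] == Y[j - 1])
                L[i][j] = 1 + L[i - 1][j - 1];        /* last chars match  */
            else
                L[i][j] = L[i - 1][j] > L[i][j - 1] ? L[i - 1][j] : L[i][j - 1];
        }
    return L[m][n];
}

int main(void) {
    printf("LCS length = %d\n", lcs_dp("AGGTAB", "GXTXAYB"));  /* 4 ("GTAB") */
    return 0;
}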