
Design and Analysis of Algorithm

UNIT-IV
Dynamic Programming

II B.TECH II sem
By
Dr. M. Umadevi
VFSTR
Dynamic programming definition
Characteristics
Applications
◦ Matrix Chain Multiplication
◦ 0/1 Knapsack
◦ OBST
◦ TSP
◦ All Pairs Shortest Path
◦ Reliability Design
Dynamic Programming
 Applied to optimization problems.
 Invented by the U.S. mathematician Richard Bellman in the 1950s.
 Used for solving problems with overlapping subproblems. Ex: the matrix
product ABC can be computed as either A(BC) or (AB)C.
 Applied when the solution to a problem can be viewed as the result of a
sequence of decisions.
Dynamic Programming steps / design
 Characterize the structure of an optimal solution, i.e. develop a mathematical
notation to express a solution to the given problem.
 Recursively define the value of an optimal solution.
 Use a bottom-up technique to compute the optimal solution; for that, develop a
recurrence relation.
 Construct the optimum solution from the computed information.
Principle of Optimality
The Principle of Optimality states that in an
optimal sequence of decisions or choices,
each subsequence must also be optimal.
Ex: finding a shortest path.
In Dynamic Programming, the solution of an
algorithm is obtained using the principle of
optimality.
Differences between Divide & Conquer and Dynamic Programming

Divide and Conquer:
 The problem is divided into subproblems, which are solved
independently; finally all solutions are merged to get the solution
to the original problem.
 Duplication among sub-solutions is ignored: overlapping
subproblems are recomputed.
 Less efficient, because of the rework on subproblems.
 Top-down approach.
 Recursive method.
 Splits the input at a specific, deterministic point (e.g. the middle).

Dynamic Programming:
 Many decision sequences are generated, and all the overlapping
subinstances are considered (each is solved only once).
 Duplication in solutions is avoided.
 More efficient than divide and conquer.
 Bottom-up approach.
 Iterative method.
 Splits the input at every possible point and determines which split
is optimal.
Differences between the Greedy method and Dynamic Programming

Greedy method:
 Obtains a feasible (often optimal) solution.
 A set of feasible solutions is generated and the best one is picked.
 The optimum selection is made without revising previously
generated solutions.
 No guarantee of obtaining an optimum solution.

Dynamic Programming:
 Obtains an optimum solution.
 No special set of feasible solutions is generated in this method.
 Considers all possible decision sequences in order to obtain the
optimum solution.
 Guaranteed to generate an optimal solution, using the principle of
optimality.
Matrix Chain Multiplication
Input:
 n matrices A1, A2, …, An with dimension vector [P1, P2, …, Pn+1],
i.e. Ai has dimension Pi x Pi+1.
 Goal:
To compute the matrix product A1A2…An with the minimum
number of scalar multiplications.

Ex: A1, A2, A3 are matrices with dimensions [5 4 6 2]
A1 is 5x4, A2 is 4x6, A3 is 6x2

 (A1A2): No. of mult. = 5x4x6, and (A1A2) is 5x6
 (A1A2)A3: No. of mult. = 5x6x2
 (A1A2)A3 = 5x4x6 + 5x6x2 = 180

 (A2A3): No. of mult. = 4x6x2, and (A2A3) is 4x2
 A1(A2A3): No. of mult. = 5x4x2
 A1(A2A3) = 4x6x2 + 5x4x2 = 88

 A1(A2A3) is optimal, with cost 88.
Steps to solve matrix chain multiplication
 Step 1: Let M(i,j) be the cost of multiplying Ai……Aj.
M(i,i) = 0 for all i = 1 to n:
M(1,1) = 0, M(2,2) = 0, …, M(n,n) = 0
 Step 2: The sequence of decisions can be built based on the principle of
optimality.
Process of multiplication:
Let T be the tree denoting the optimal way of multiplying Ai……Aj, with
L, the left subtree, computing Ai……Ak, and
R, the right subtree, computing Ak+1……Aj, for some i ≤ k ≤ j-1.

          Ai……Aj
   Left: Ai……Ak     Right: Ak+1……Aj

 Step 3: Apply the formula for computing each sequence:
M(i,j) = min { M(i,k) + M(k+1,j) + Pi·Pk+1·Pj+1 : i ≤ k ≤ j-1 }
Matrix Chain Multiplication Problem
 A1 A2 A3 A4 are matrices with dimensions [5 4 6 2 7]
[P1 P2 P3 P4 P5] = [5 4 6 2 7]
A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7

d        1             2             3             4
0     M11=0         M22=0         M33=0         M44=0
1     M12=?, K=?    M23=?, K=?    M34=?, K=?
2     M13=?, K=?    M24=?, K=?
3     M14=?, K=?

(d is the diagonal index j − i; the entries are filled in on the following slides.)
Matrix Chain Multiplication Problem
 A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7

To calculate M12: i=1, j=2, and k ranges from i to j-1, so k=1.

M(i,j) = min { M(i,k) + M(k+1,j) + Pi·Pk+1·Pj+1 : i ≤ k ≤ j-1 }

M12 = { M11 + M22 + P1 P2 P3 } for k=1
    = 0 + 0 + 5x4x6 = 120
M23 = { M22 + M33 + P2 P3 P4 } for k=2
    = 0 + 0 + 4x6x2 = 48
M34 = { M33 + M44 + P3 P4 P5 } for k=3
    = 0 + 0 + 6x2x7 = 84
Matrix Chain Multiplication Problem
 A1 A2 A3 A4 are matrices with dimensions [5 4 6 2 7]
[P1 P2 P3 P4 P5] = [5 4 6 2 7]
A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7

d        1             2             3             4
0     M11=0         M22=0         M33=0         M44=0
1     M12=120, K=1  M23=48, K=2   M34=84, K=3
2     M13=?, K=?    M24=?, K=?
3     M14=?, K=?

 Multiplying A2A3 is the cheapest on this diagonal. Now we have to compute
M13 and M24, i.e. the costs of A1(A2A3) and (A2A3)A4.
Matrix Chain Multiplication Problem
 A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7

To calculate M13: i=1, j=3, and k is between i and j-1, so k=1 or k=2.

M(i,j) = min { M(i,k) + M(k+1,j) + Pi·Pk+1·Pj+1 : i ≤ k ≤ j-1 }

For k=1:
M13 = { M11 + M23 + P1 P2 P4 }
    = 0 + 48 + 5x4x2
    = 48 + 40 = 88
For k=2:
M13 = { M12 + M33 + P1 P3 P4 }
    = 120 + 0 + 5x6x2 = 180
Min{88, 180} = 88, i.e. k=1
Matrix Chain Multiplication Problem
 A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7
[P1 P2 P3 P4 P5] = [5 4 6 2 7]

To calculate M24: i=2, j=4, so k=2 or k=3.

M(i,j) = min { M(i,k) + M(k+1,j) + Pi·Pk+1·Pj+1 : i ≤ k ≤ j-1 }

For k=2:
M24 = { M22 + M34 + P2 P3 P5 }
    = 0 + 84 + 4x6x7 = 252
For k=3:
M24 = { M23 + M44 + P2 P4 P5 }
    = 48 + 0 + 4x2x7 = 104
Min{252, 104} = 104, i.e. k=3
Matrix Chain Multiplication Problem
 A1 A2 A3 A4 are matrices with dimensions [5 4 6 2 7]
[P1 P2 P3 P4 P5] = [5 4 6 2 7]
A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7

d        1             2             3             4
0     M11=0         M22=0         M33=0         M44=0
1     M12=120, K=1  M23=48, K=2   M34=84, K=3
2     M13=88, K=1   M24=104, K=3
3     M14=158, K=3

 M13 = 88 is the smaller of M13 and M24, i.e. A1(A2A3) is the optimal way
to compute A1A2A3.
Matrix Chain Multiplication Problem
 A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7
[P1 P2 P3 P4 P5] = [5 4 6 2 7]

To calculate M14: i=1, j=4, so k = 1, 2, or 3.

M(i,j) = min { M(i,k) + M(k+1,j) + Pi·Pk+1·Pj+1 : i ≤ k ≤ j-1 }

For k=1:
M14 = { M11 + M24 + P1 P2 P5 }
    = 0 + 104 + 5x4x7 = 244
For k=2:
M14 = { M12 + M34 + P1 P3 P5 }
    = 120 + 84 + 5x6x7 = 414
For k=3:
M14 = { M13 + M44 + P1 P4 P5 }
    = 88 + 0 + 5x2x7 = 158
Min{244, 414, 158} = 158, i.e. k=3
(A1(A2A3))A4 = 158
Solution
(A1(A2A3))A4

        A1(A2A3)A4
        /        \
   A1(A2A3)      A4
    /     \
  A1     (A2A3)
          /   \
        A2     A3
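The tabulation just traced by hand can be sketched in C++ (an illustrative sketch, not code from the slides; the function name matrixChainOrder and the 0-based dimension vector are my conventions, so Ai is p[i-1] x p[i]):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of scalar multiplications needed to compute A1A2...An,
// where Ai has dimensions p[i-1] x p[i] (so p holds n+1 numbers).
int matrixChainOrder(const vector<int>& p) {
    int n = (int)p.size() - 1;                      // number of matrices
    // M[i][j] = cheapest cost of computing Ai...Aj (1-based, M[i][i] = 0)
    vector<vector<int>> M(n + 1, vector<int>(n + 1, 0));
    for (int len = 2; len <= n; ++len)              // chain length
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            M[i][j] = INT_MAX;
            for (int k = i; k < j; ++k)             // split (Ai..Ak)(Ak+1..Aj)
                M[i][j] = min(M[i][j],
                              M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]);
        }
    return M[1][n];
}
```

For the dimension vector [5 4 6 2 7] of the worked example this returns 158, agreeing with M14; additionally recording the minimizing k of each cell would recover the parenthesization (A1(A2A3))A4.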
 Dynamic programming is used to find an optimal solution.
 Dynamic programming is used to solve problems with optimal
substructure.
 OBST: worst-case time complexity O(n^3).
 0/1 knapsack: arrange all xi in order of descending pi/wi.
TRAVELLING SALESPERSON PROBLEM
 Let G = (V, E) be a directed graph with edge costs cij.
 The variable cij is defined such that cij > 0 for all i and j, and
 cij = ∞ if <i, j> does not belong to E.
 Let |V| = n and assume n > 1.
 A tour of G is a directed simple cycle that includes every vertex in
V. The cost of a tour is the sum of the costs of the edges on the tour.
 The traveling salesperson problem is to find a tour of minimum
cost. The tour is a simple path that starts and ends at vertex 1.
 Let g(i, S) be the length of a shortest path starting at vertex i, going
through all vertices in S, and terminating at vertex 1. The function
g(1, V − {1}) is the length of an optimal salesperson tour. From the
principle of optimality it follows that:

g(i, S) = min over j ∈ S of { cij + g(j, S − {j}) }
g(i, ∅) = ci1
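The recurrence can be evaluated bottom-up over subsets represented as bitmasks (the Held–Karp scheme). The sketch below is my own illustration, not code from the slides: vertex 1 of the slides is index 0 here, and g[mask][i] holds the length of the cheapest path that starts at vertex 0, visits exactly the vertices in mask, and ends at vertex i:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = 1000000000;   // stands in for "no edge" / unreachable

// Cost of a minimum tour that starts and ends at vertex 0.
int tsp(const vector<vector<int>>& c) {
    int n = (int)c.size(), full = 1 << (n - 1);     // masks over vertices 1..n-1
    vector<vector<int>> g(full, vector<int>(n, INF));
    for (int i = 1; i < n; ++i)
        g[1 << (i - 1)][i] = c[0][i];               // base: go directly 0 -> i
    for (int mask = 1; mask < full; ++mask)
        for (int i = 1; i < n; ++i) {
            if (!(mask & (1 << (i - 1))) || g[mask][i] == INF) continue;
            for (int j = 1; j < n; ++j)             // extend the path by vertex j
                if (!(mask & (1 << (j - 1))))
                    g[mask | (1 << (j - 1))][j] =
                        min(g[mask | (1 << (j - 1))][j], g[mask][i] + c[i][j]);
        }
    int best = INF;
    for (int i = 1; i < n; ++i)                     // close the tour back to 0
        best = min(best, g[full - 1][i] + c[i][0]);
    return best;
}
```

The table has 2^(n-1) · n entries and each is extended n ways, giving O(n^2 · 2^n) time, far better than the (n-1)! tours of brute force.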
0/1 – KNAPSACK

 We are given n objects and a knapsack.
 Each object i has a positive weight wi and a positive
value Vi.
 The knapsack can carry a weight not exceeding W.
 Fill the knapsack so that the value of the objects in the
knapsack is maximized.
 Solution steps using Dynamic Programming
1. Let fi(y) be the maximum value obtainable using objects 1..i with
capacity y. Initially S0 = {(0, 0)}.
We use the ordered set Si = {(fi(yj), yj) | 1 ≤ j ≤ k}
to represent fi(y).
Each member of Si is a pair (P, W), where P = fi(yj) and W = yj.
We can compute Si+1 from Si by first computing:
Si1 = {(P, W) | (P − pi+1, W − wi+1) ∈ Si}
Si+1 can then be computed by merging the pairs in Si and Si1
together.
2. A solution to the knapsack problem can be obtained by
making a sequence of decisions on the variables x1, x2, . . .Xn.
A decision on variable xi involves determining which of the
values 0 or 1 is to be assigned to it.

 Let us assume that decisions on the xi are made in the order
xn, xn−1, …, x1.
Following a decision on xn, we may be in one of two possible
states. The decisions on xn−1, …, x1 must be optimal with respect
to the problem state resulting from the decision on xn;
otherwise xn, …, x1 will not be optimal.
Hence, the principle of optimality holds.
3. fn(m) = max {fn−1(m), fn−1(m − wn) + pn}   -- (1)
 For arbitrary fi(y), i > 0, this equation generalizes to:
 fi(y) = max {fi−1(y), fi−1(y − wi) + pi}   -- (2)
 Time complexity: Θ(mn), where n is the number of objects
and m is the knapsack capacity.

 Purging rule: if there are two pairs (Pj, Wj) and (Pk, Wk) with
the property that Pj ≤ Pk and Wj ≥ Wk, then the pair
(Pj, Wj) can be discarded. Discarding or purging rules
such as this one are also known as dominance rules.
Dominated tuples get purged: in the above, (Pk, Wk)
dominates (Pj, Wj).
0/1 knapsack example

Object:  1  2  3
pi:      1  2  5
wi:      2  3  4

S0  = {(0,0)}
S01 = {(1,2)}
S1  = {(0,0), (1,2)}
S11 = {(2,3), (3,5)}
S2  = {(0,0), (1,2), (2,3), (3,5)}
S21 = {(5,4), (6,6), (7,7), (8,9)}
S3  = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)}
((3,5) is purged from S3: (5,4) dominates it.)
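The set construction above can be sketched as follows (illustrative code, not from the slides; nextSet and knapsack are my names). Each Si is kept sorted by weight, and the merge drops pairs over capacity and purges dominated pairs by keeping only entries whose profit strictly increases:

```cpp
#include <bits/stdc++.h>
using namespace std;

using Pair = pair<int, int>;   // (profit P, weight W)

// Build S(i+1) from Si for an object of profit p and weight w:
// merge Si with Si1 = {(P+p, W+w)}, discard over-capacity pairs, purge dominated ones.
vector<Pair> nextSet(const vector<Pair>& S, int p, int w, int capacity) {
    vector<Pair> merged = S;
    for (auto [P, W] : S)
        if (W + w <= capacity) merged.push_back({P + p, W + w});
    // sort by weight; for equal weight keep the higher profit first
    sort(merged.begin(), merged.end(), [](Pair a, Pair b) {
        return a.second < b.second || (a.second == b.second && a.first > b.first);
    });
    vector<Pair> out;                    // keep a pair only if its profit strictly rises
    for (auto pr : merged)
        if (out.empty() || pr.first > out.back().first) out.push_back(pr);
    return out;
}

int knapsack(const vector<int>& p, const vector<int>& w, int capacity) {
    vector<Pair> S = {{0, 0}};           // S0 = {(0,0)}
    for (size_t i = 0; i < p.size(); ++i)
        S = nextSet(S, p[i], w[i], capacity);
    return S.back().first;               // maximum profit within capacity
}
```

For the example above (p = (1,2,5), w = (2,3,4)) with capacity 6, the final set is {(0,0), (1,2), (2,3), (5,4), (6,6)} and the answer is profit 6.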
Optimal Binary Search Tree

 In computer science, an
optimal binary search tree
(Optimal BST), sometimes
called a weight-balanced
binary tree.
 It is a binary search tree
which provides the
smallest possible search
time for a given sequence
of accesses (or access
probabilities).
Dynamic programming method to solve OBST

STEP 1:
Let Tij = OBST(ai+1, …, aj)
Cij = cost of the tree Tij
Wij = weight of Tij
T0n = final tree obtained
T00 = empty tree
Ti,i+1 = single-node tree containing the element ai+1

STEP 2:
The OBST can be built using the principle of optimality.
Let T0n be an OBST for elements a1 < a2 < … < an.
Let L and R be its left and right subtrees, and suppose the root is ak:
L is the left subtree with a1, a2, …, ak−1
R is the right subtree with ak+1, ak+2, …, an

C(T0n) = C(L) + C(R) + W,
where W = p1 + p2 + … + pn + q0 + q1 + … + qn
 Suppose L is not an optimal subtree; then we can find another tree L’
 for the same elements with the property C(L’) < C(L).
 C(T’) = C(L’) + C(R) + W
 C(T’) < C(T)
 This contradicts the fact that T0n is optimal.
 Therefore L must be optimal; the same holds for R.

STEP 3:
C(i,j) = min { C(i,k−1) + C(k,j) } + W(i,j)   -- (1)
         i<k≤j

W(i,j) = W(i,j−1) + p(j) + q(j)

r(i,j) = k
• C(i,j) is the cost of the optimal binary search tree Tij, and R(i,j) is the
root of the tree Tij. An optimal binary search tree may then be
constructed from these R(i,j). R(i,j) is the value of k that minimizes
equation (1).

We solve the problem by first finding W(i, i+1), C(i, i+1) and R(i, i+1) for
0 ≤ i < n; then W(i, i+2), C(i, i+2) and R(i, i+2) for 0 ≤ i < n−1; and so on,
repeating until W(0, n), C(0, n) and R(0, n) are obtained.
OBST

Example: binary search trees over the identifier set (do, if, int, while).
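STEP 3 can be coded directly. The sketch below is illustrative, not from the slides (obstCost is my name); it takes integer probabilities p[1..n] and q[0..n] scaled by a common denominator, as the textbook examples do, and returns C(0,n):

```cpp
#include <bits/stdc++.h>
using namespace std;

// C(i,j) = min over i<k<=j of { C(i,k-1) + C(k,j) } + W(i,j),
// W(i,j) = W(i,j-1) + p(j) + q(j), with C(i,i) = 0 and W(i,i) = q(i).
// p has n+1 entries (p[0] unused), q has n+1 entries.
int obstCost(const vector<int>& p, const vector<int>& q) {
    int n = (int)p.size() - 1;
    vector<vector<int>> W(n + 1, vector<int>(n + 1, 0));
    vector<vector<int>> C(n + 1, vector<int>(n + 1, 0));
    for (int i = 0; i <= n; ++i) W[i][i] = q[i];
    for (int len = 1; len <= n; ++len)          // solve diagonals of growing size
        for (int i = 0; i + len <= n; ++i) {
            int j = i + len;
            W[i][j] = W[i][j - 1] + p[j] + q[j];
            C[i][j] = INT_MAX;
            for (int k = i + 1; k <= j; ++k)    // try each a_k as root
                C[i][j] = min(C[i][j], C[i][k - 1] + C[k][j]);
            C[i][j] += W[i][j];
        }
    return C[0][n];
}
```

For a classic textbook instance with the four keys (do, if, int, while), p = (3, 3, 1, 1) and q = (2, 3, 1, 1, 1) (all over 16), this returns C(0,4) = 32, i.e. an expected cost of 32/16. Recording the minimizing k in an r table would let the tree itself be reconstructed.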
Reliability Design
 The problem is to design a system that is composed of several
devices connected in series.
 Let ri be the reliability of device Di, i.e.
 ri is the probability that device i will function properly;
 then the reliability of the entire system is ∏ ri.
 Even if the individual devices are very reliable (the ri's are very
close to one), the reliability of the system may not be very good.
 For example, if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then ∏ ri ≈ 0.904.
 Hence, it is desirable to duplicate devices. Multiple copies of the
same device type are connected in parallel.
Reliability Design
 If stage i contains mi copies of device Di, then the
probability that all mi copies malfunction is (1 − ri)^mi. Hence the
reliability of stage i becomes 1 − (1 − ri)^mi.
 The reliability of stage i is given by a function Øi(mi).
 Our problem is to maximize the system reliability by device
duplication. This maximization is to be carried out under a cost
constraint. Let ci be the cost of each unit of device i and let c be
the maximum allowable cost of the system being designed.
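So the problem is: maximize ∏i (1 − (1 − ri)^mi) subject to ∑i ci·mi ≤ c and mi ≥ 1. The sketch below solves it by plain exhaustive recursion over the copy counts rather than the textbook's tabulated DP (my own illustrative code; bestReliability is my name, and it assumes every ci > 0):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Best achievable reliability for stages `stage`..end with `budget` money left.
// Returns 0 when the remaining stages cannot each get at least one copy.
double bestReliability(const vector<double>& r, const vector<double>& c,
                       double budget, size_t stage = 0) {
    if (stage == r.size()) return 1.0;               // every stage configured
    double best = 0.0;
    for (int m = 1; m * c[stage] <= budget; ++m) {   // try m copies of device D_stage
        double stageRel = 1.0 - pow(1.0 - r[stage], m);  // 1 - (1 - ri)^mi
        double rest = bestReliability(r, c, budget - m * c[stage], stage + 1);
        if (rest > 0.0) best = max(best, stageRel * rest);
    }
    return best;
}
```

For instance, with three devices, r = (0.9, 0.8, 0.5), c = (30, 15, 20) and budget 105, the best choice is m = (1, 2, 2), giving reliability 0.9 · 0.96 · 0.75 = 0.648.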
All pairs shortest paths

 In the all pairs shortest path problem, we are to find a shortest path
between every pair of vertices in a directed graph G. That is, for every pair
of vertices (i, j), we are to find a shortest path from i to j as well as one
from j to i. These two paths are the same when G is undirected.
 The shortest i to j path in G, i ≠ j originates at vertex i and goes through
some intermediate vertices (possibly none) and terminates at vertex j.
 If k is an intermediate vertex on this shortest path, then the subpaths from
i to k and from k to j must be shortest paths from i to k and k to j,
respectively.
 Let Ak(i, j) represent the length of a shortest path from i to j going
through no vertex of index greater than k. We obtain:

A0(i, j) = cost(i, j)
Ak(i, j) = min { Ak−1(i, j), Ak−1(i, k) + Ak−1(k, j) },  k ≥ 1
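This recurrence is the Floyd-Warshall algorithm. In code, the superscript k becomes the outermost loop, and the matrix can be updated in place because row k and column k do not change during iteration k. A minimal sketch (my own illustration, with a large constant standing in for ∞):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = 100000000;   // "no edge"; small enough that INF + INF won't overflow

// After iteration k, a[i][j] holds A^k(i,j): the shortest i-to-j length
// using only intermediate vertices of index <= k. O(n^3) time.
void allPairsShortestPaths(vector<vector<int>>& a) {
    int n = (int)a.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                a[i][j] = min(a[i][j], a[i][k] + a[k][j]);
}
```

a is initialized to the cost adjacency matrix A^0 (0 on the diagonal, INF for missing edges); when the loops finish it holds A^n, the all-pairs shortest distances.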
Longest Common Sequence Problem Statement

Given two sequences, find the length of longest subsequence present in both of

them.

A subsequence is a sequence that appears in the same relative order, but not

necessarily contiguous.

 For example, “abc”, “abg”, “bdf”, “aeg”, “acefg”, etc. are subsequences of

“abcdefg”.

 LCS for input Sequences “ABCDGH” and “AEDFHR” is “ADH” of length 3.

 LCS for input Sequences “AGGTAB” and “GXTXAYB” is “GTAB” of length 4.


LCS
 To see the complexity of the brute force approach, we first need to know the number of possible

different subsequences of a string with length n.

 The number of combinations with 1 element is nC1.

 The number of combinations with 2 elements is nC2, and so on. We know

that nC0 + nC1 + nC2 + … + nCn = 2^n.

 So a string of length n has 2^n − 1 different possible subsequences, since we do not consider the

subsequence of length 0.

 The time complexity of the brute force approach is therefore O(n · 2^n), since it takes O(n) time to check

if a subsequence is common to both strings. This time complexity can be improved using dynamic

programming.
 It is a classic computer science problem, the basis of file

comparison programs (which output the differences between two

files), and has applications in bioinformatics.

 The naive solution for this problem is to generate all

subsequences of both given sequences and find the longest

matching subsequence.

 This solution is exponential in term of time complexity. Let us see

how this problem possesses both important properties of a

Dynamic Programming (DP) Problem


1) Optimal Substructure:

Let the input sequences be X[0..m-1] and Y[0..n-1] of lengths m and n

respectively.

And let L(X[0..m-1], Y[0..n-1]) be the length of LCS of the two sequences X and Y.

Following is the recursive definition of L(X[0..m-1], Y[0..n-1]).

 If last characters of both sequences match (or X[m-1] == Y[n-1]) then

L(X[0..m-1], Y[0..n-1]) = 1 + L(X[0..m-2], Y[0..n-2])

 If last characters of both sequences do not match (or X[m-1] != Y[n-1]) then

L(X[0..m-1], Y[0..n-1]) = MAX ( L(X[0..m-2], Y[0..n-1]), L(X[0..m-1],

Y[0..n-2]) )
 Consider the input strings “AGGTAB” and

“GXTXAYB”. Last characters match for the strings.

So length of LCS can be written as:

L(“AGGTAB”, “GXTXAYB”) = 1 + L(“AGGTA”,

“GXTXAY”)
 Consider the input strings “ABCDGH” and “AEDFHR”.

 Last characters do not match for the strings. So length of LCS can be written

as:

L(“ABCDGH”, “AEDFHR”) = MAX ( L(“ABCDG”, “AEDFHR”), L(“ABCDGH”,

“AEDFH”) )

So the LCS problem has optimal substructure property as the main problem

can be solved using solutions to subproblems.


 2) Overlapping Subproblems:

Following is simple recursive implementation of the LCS problem. The implementation simply follows

the recursive structure mentioned above.


/* A naive recursive implementation of the LCS problem */
#include <bits/stdc++.h>
using namespace std;

/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs( char *X, char *Y, int m, int n )
{
    if (m == 0 || n == 0)
        return 0;
    if (X[m-1] == Y[n-1])
        return 1 + lcs(X, Y, m-1, n-1);      /* last characters match */
    else                                     /* try dropping either last character */
        return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}

int main()
{
    char X[] = "AGGTAB";
    char Y[] = "GXTXAYB";

    int m = strlen(X);
    int n = strlen(Y);

    cout << "Length of LCS is " << lcs( X, Y, m, n );   /* prints 4 */

    return 0;
}
Considering the above implementation, the following is a partial recursion tree for the input strings “AXYT”
and “AYZX”:

                          lcs("AXYT", "AYZX")
                         /                   \
       lcs("AXY", "AYZX")                     lcs("AXYT", "AYZ")
       /                \                     /                \
lcs("AX","AYZX")  lcs("AXY","AYZ")    lcs("AXY","AYZ")  lcs("AXYT","AY")

Note that lcs("AXY", "AYZ") is computed twice: the subproblems overlap.
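Tabulation removes exactly these repeated subtrees. A bottom-up counterpart to the recursive lcs() above (illustrative; lcsDP is my name) fills an (m+1) × (n+1) table once, so the time drops from exponential to O(mn):

```cpp
#include <bits/stdc++.h>
using namespace std;

// L[i][j] = length of LCS of the prefixes X[0..i-1] and Y[0..j-1].
// Row/column 0 represent an empty prefix, so L[0][j] = L[i][0] = 0.
int lcsDP(const string& X, const string& Y) {
    int m = (int)X.size(), n = (int)Y.size();
    vector<vector<int>> L(m + 1, vector<int>(n + 1, 0));
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            L[i][j] = (X[i - 1] == Y[j - 1])
                          ? L[i - 1][j - 1] + 1            // last characters match
                          : max(L[i - 1][j], L[i][j - 1]); // drop one or the other
    return L[m][n];
}
```

lcsDP("AGGTAB", "GXTXAYB") returns 4 and lcsDP("ABCDGH", "AEDFHR") returns 3, matching the examples earlier in the section.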
