DAA_unit_3_Dynamic programming
INTRODUCTION
The drawback of the greedy method is that we make only one decision at a time, and a decision once made is never reconsidered. Dynamic programming overcomes this: it may examine many decision sequences before committing to one. Dynamic programming is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions. It is applicable when the sub-problems are not independent, that is, when sub-problems share sub-sub-problems. A dynamic programming algorithm solves every sub-sub-problem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time the sub-problem is encountered.
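As a small illustration of the table idea, the following C sketch (the function name fib and the choice of Fibonacci numbers are ours, not from the notes) solves each sub-problem once and reuses the stored answer:

#include <stdio.h>

#define N 50
static long long memo[N];                 /* memo[i] == 0 means "not computed yet" */

/* Memoized Fibonacci: each sub-problem fib(i) is solved once,
   stored in memo[], and merely looked up on every later request. */
long long fib(int n)
{
    if (n <= 2) return 1;
    if (memo[n] != 0) return memo[n];     /* answer already in the table */
    memo[n] = fib(n - 1) + fib(n - 2);    /* solve once and save */
    return memo[n];
}

int main(void)
{
    printf("fib(40) = %lld\n", fib(40));  /* O(n) with the table, O(2^n) without */
    return 0;
}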
Definition:
Dynamic programming is a technique in which an optimal solution is obtained from a sequence of decisions.
General Method:
The fundamental dynamic programming model may be written as

    Fj(R) = opt { fj(dj) + Fj-1(R') },   j = 2, 3, …, n        …(1)
             dj

where R denotes the current state, dj the decision taken at stage j, and R' the state that results from taking decision dj in state R. Once F1(R) is known, equation (1) provides a relation for the evaluation of F2(R), F3(R), and so on. This recursive process ultimately leads to the value of Fn-1(R) and finally Fn(R), at which point the process stops.
A dynamic programming problem can be divided into a number of stages, where an optimal decision must be made at each stage. The decision made at each stage must take into account its effect not only on the next stage but on all subsequent stages. Dynamic programming provides a systematic procedure whereby, starting with the last stage of the problem and working backwards, one makes an optimal decision at each stage. The information needed at each stage is derived from the stages already solved.
The dynamic programming technique was developed by Bellman, based upon his principle of optimality. This principle states that "an optimal policy has the property that, whatever the initial decisions are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
Optimal Binary Search Trees
A binary search tree is a binary tree that is either empty or in which every node holds a key such that:
1. The key in the left child of a node (if it exists) is less than the key in its parent node.
2. The key in the right child of a node (if it exists) is greater than the key in its parent node.
3. The left and right subtrees of the root are again binary search trees.
(Figures (1) and (2): two example binary search trees.)
To search for an element in a binary search tree, the element is first compared with the root node. If the element is less than the root, the search continues in the left subtree; if it is greater than the root, the search continues in the right subtree; if it is equal to the root, the search is successful (element found) and the procedure terminates.
Algorithm Search(t, x)
{
    if (t = 0) then return 0;                        // tree is empty: x is not present
    else if (x = (t -> data)) then return t;         // x found at the current node
    else if (x < (t -> data)) then return Search((t -> left), x);   // continue in left subtree
    else return Search((t -> right), x);             // continue in right subtree
}
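The same routine as runnable C; the node layout (a struct node with data, left and right fields) is our assumption, not something fixed by the notes:

#include <stddef.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Recursive BST search: returns the node holding x, or NULL if x is absent. */
struct node *search(struct node *t, int x)
{
    if (t == NULL) return NULL;            /* empty subtree: not found */
    if (x == t->data) return t;            /* found at this node */
    if (x < t->data) return search(t->left, x);
    return search(t->right, x);
}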
The possible binary search trees for the identifier set (a1, a2, a3) = (do, if, stop). Hence n = 3.

The number of possible binary search trees = (1/(n+1)) · C(2n, n)
                                           = (1/4) · C(6, 3)
                                           = (1/4) · (4 · 5 · 6)/(1 · 2 · 3)
                                           = 20/4 = 5
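A quick check of this count in C (the helper name catalan is ours):

#include <stdio.h>

/* Number of distinct binary search trees on n keys: C(2n, n)/(n + 1),
   built up incrementally via C(i+1) = C(i) * 2(2i + 1)/(i + 2). */
unsigned long long catalan(int n)
{
    unsigned long long c = 1;
    for (int i = 0; i < n; i++)
        c = c * 2 * (2 * i + 1) / (i + 2);
    return c;
}

int main(void)
{
    printf("%llu\n", catalan(3));   /* prints 5 */
    return 0;
}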
(Figures (a)-(e): the five possible binary search trees; tree (d) corresponds to the insertion order stop, do, if and tree (e) to the order do, stop, if.)
With probability p(i) attached to each identifier ai and probability q(i) attached to each external node Ei, the cost of a tree T is

cost(T) = Σ (1 ≤ i ≤ n) p(i) · level(ai)  +  Σ (0 ≤ i ≤ n) q(i) · (level(Ei) − 1)

Taking equal probabilities p(i) = q(i) = 1/7 for all i:

cost(a) = (1/7)(1 + 2 + 3) + (1/7)(1 + 2 + 3 + 3) = 15/7
cost(b) = (1/7)(1 + 2 + 2) + (1/7)(2 + 2 + 2 + 2) = 13/7
cost(c) = (1/7)(1 + 2 + 3) + (1/7)(1 + 2 + 3 + 3) = 15/7
cost(d) = (1/7)(1 + 2 + 3) + (1/7)(1 + 2 + 3 + 3) = 15/7
cost(e) = (1/7)(1 + 2 + 3) + (1/7)(1 + 2 + 3 + 3) = 15/7
Among these, tree (b) has the minimum cost 13/7; hence tree (b) is the optimal binary search tree.
Example: Consider n = 4 identifiers a1, a2, a3, a4 with p(1..4) = (3, 3, 1, 1) and q(0..4) = (2, 3, 1, 1, 1); these values appear as the w(i, i) and w(i, i+1) entries of the table below. Each weight w(i, j) accounts for the identifiers and external nodes spanned by the subtree T(i, j):
W01 = E0 a1 E1
W12 = E1 a2 E2
W23 = E2 a3 E3
W34 = E3 a4 E4
W02 = E0 a1 E1 a2 E2
W13 = E1 a2 E2 a3 E3
W24 = E2 a3 E3 a4 E4
W03 = E0 a1 E1 a2 E2 a3 E3
W14 = E1 a2 E2 a3 E3 a4 E4
W04 = E0a1E1a2E2a3E3a4E4
In order to solve the above problem, we first draw a table in which the cell in row i, column j holds the three values w(j, j+i), c(j, j+i) and r(j, j+i).
        j=0          j=1          j=2          j=3          j=4
i=0     w00=2        w11=3        w22=1        w33=1        w44=1
        c00=0        c11=0        c22=0        c33=0        c44=0
        r00=0        r11=0        r22=0        r33=0        r44=0
i=1     w01=         w12=         w23=         w34=
        c01=         c12=         c23=         c34=
        r01=         r12=         r23=         r34=
i=2     w02=         w13=         w24=
        c02=         c13=         c24=
        r02=         r13=         r24=
i=3     w03=         w14=
        c03=         c14=
        r03=         r14=
i=4     w04=
        c04=
        r04=
Applying the principle of optimality, the cells are filled using the recurrence

c(i, j) = w(i, j) + min over i < k ≤ j of { c(i, k−1) + c(k, j) },  with c(i, i) = 0,

where r(i, j) records the minimizing k, i.e. the root of the subtree T(i, j). The row i = 1 entries follow immediately, e.g. c(0,1) = w(0,1) + c(0,0) + c(1,1) = 8 and r(0,1) = 1. For the remaining cells:
When k = 2:
c(0,2) = c(0,1) + c(2,2) + w(0,2) = 8 + 0 + 12 = 20
When k = 1:
c(0,2) = c(0,0) + c(1,2) + w(0,2) = 0 + 7 + 12 = 19
c(0,2) = min{20, 19} = 19
r(0,2) = 1
When k = 2 or 3:
c(1,3) = min{ c(1,1) + c(2,3), c(1,2) + c(3,3) } + w(1,3) = min{0 + 3, 7 + 0} + 9 = 12
r(1,3) = 2
When k = 3:
c(2,4) = c(2,2) + c(3,4) + w(2,4) = 0 + 3 + 5 = 8
When k = 4:
c(2,4) = c(2,3) + c(4,4) + w(2,4) = 3 + 0 + 5 = 8
c(2,4) = min{8, 8} = 8
r(2,4) = 3
When k = 2 or 3 or 4:
c(1,4) = min{ c(1,1) + c(2,4), c(1,2) + c(3,4), c(1,3) + c(4,4) } + w(1,4)
       = min{0 + 8, 7 + 3, 12 + 0} + 11 = 8 + 11 = 19
r(1,4) = 2
When k = 1 or 2 or 3:
c(0,3) = min{ c(0,0) + c(1,3), c(0,1) + c(2,3), c(0,2) + c(3,3) } + w(0,3)
       = min{0 + 12, 8 + 3, 19 + 0} + 14 = 11 + 14 = 25
r(0,3) = 2
When k = 1 or 2 or 3 or 4:
c(0,4) = min{ c(0,0) + c(1,4), c(0,1) + c(2,4), c(0,2) + c(3,4), c(0,3) + c(4,4) } + w(0,4)
       = min{19, 16, 22, 25} + 16 = 16 + 16 = 32
r(0,4) = 2
        j=0          j=1          j=2          j=3          j=4
i=0     w00=2        w11=3        w22=1        w33=1        w44=1
        c00=0        c11=0        c22=0        c33=0        c44=0
        r00=0        r11=0        r22=0        r33=0        r44=0
i=1     w01=8        w12=7        w23=3        w34=3
        c01=8        c12=7        c23=3        c34=3
        r01=1        r12=2        r23=3        r34=4
i=2     w02=12       w13=9        w24=5
        c02=19       c13=12       c24=8
        r02=1        r13=2        r24=3
i=3     w03=14       w14=11
        c03=25       c14=19
        r03=2        r14=2
i=4     w04=16
        c04=32
        r04=2
Now observe the table's last cell (row i = 4, column j = 0): it contains r(0,4) = 2, i.e. the root of the optimal tree is a2.
To build the OBST, start from r(0,4) = k = 2: T04 has root a2 and is divided into the two parts T01 (its left subtree) and T24 (its right subtree).
T01 is divided into T00 and T11, where k = r(0,1) = 1; so the root of T01 is a1.
T24 is divided into T22 and T34, where k = r(2,4) = 3; so the root of T24 is a3.
T34 is divided into T33 and T44, where k = r(3,4) = 4; so the root of T34 is a4.
Since r00 = r11 = r22 = r33 = r44 = 0, these are external nodes and can be neglected. The optimal tree therefore has a2 as root, a1 as its left child, a3 as its right child, and a4 as the right child of a3.
Algorithm OBST(p, q, n)
// Computes w[i, j], c[i, j] and r[i, j], 0 <= i <= j <= n, from the
// success probabilities p[1..n] and failure probabilities q[0..n].
{
    for i := 0 to n-1 do
    {
        // initialize for subtrees containing zero and one identifier
        w[i, i] := q[i]; r[i, i] := 0; c[i, i] := 0.0;
        w[i, i+1] := q[i] + q[i+1] + p[i+1];
        r[i, i+1] := i + 1;
        c[i, i+1] := q[i] + q[i+1] + p[i+1];
    }
    w[n, n] := q[n]; r[n, n] := 0; c[n, n] := 0.0;
    for m := 2 to n do                // find optimal subtrees with m identifiers
        for i := 0 to n-m do
        {
            j := i + m;
            w[i, j] := w[i, j-1] + p[j] + q[j];
            k := Find(c, r, i, j);    // a root k minimizing c[i, k-1] + c[k, j]
            c[i, j] := w[i, j] + c[i, k-1] + c[k, j];
            r[i, j] := k;
        }
    write(c[0, n], w[0, n], r[0, n]);
}

Algorithm Find(c, r, i, j)
{
    min := ∞;
    // Knuth's observation: the optimal root lies between r[i, j-1] and r[i+1, j]
    for m := r[i, j-1] to r[i+1, j] do
        if ((c[i, m-1] + c[m, j]) < min) then
        {
            min := c[i, m-1] + c[m, j];
            l := m;
        }
    return l;
}
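As a cross-check on the tables above, here is a compact runnable C version of the same computation (it uses a plain search over all roots instead of the Find range trick; the array names follow the pseudocode and the p, q data are the example values). It prints c(0,4) = 32, w(0,4) = 16, r(0,4) = 2:

#include <stdio.h>

#define N 4   /* number of identifiers in the worked example */

int p[N + 1] = {0, 3, 3, 1, 1};    /* p[1..n]: success probabilities (scaled) */
int q[N + 1] = {2, 3, 1, 1, 1};    /* q[0..n]: failure probabilities (scaled) */
int w[N + 1][N + 1], c[N + 1][N + 1], r[N + 1][N + 1];

int main(void)
{
    /* subtrees containing zero and one identifier */
    for (int i = 0; i <= N; i++) {
        w[i][i] = q[i]; c[i][i] = 0; r[i][i] = 0;
        if (i < N) {
            w[i][i + 1] = q[i] + q[i + 1] + p[i + 1];
            c[i][i + 1] = w[i][i + 1];
            r[i][i + 1] = i + 1;
        }
    }
    /* subtrees with m = 2 .. n identifiers */
    for (int m = 2; m <= N; m++)
        for (int i = 0; i + m <= N; i++) {
            int j = i + m;
            w[i][j] = w[i][j - 1] + p[j] + q[j];
            int min = 1 << 30, best = 0;
            for (int k = i + 1; k <= j; k++)           /* try every root k */
                if (c[i][k - 1] + c[k][j] < min) {
                    min = c[i][k - 1] + c[k][j];
                    best = k;
                }
            c[i][j] = w[i][j] + min;
            r[i][j] = best;
        }
    printf("c(0,%d)=%d  w(0,%d)=%d  r(0,%d)=%d\n",
           N, c[0][N], N, w[0][N], N, r[0][N]);
    return 0;
}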
0/1 Knapsack Problem
It is clear that the remaining decisions x(i+1), x(i+2), …, xn must be optimal with respect to the problem state resulting from the decision on xi. Hence the principle of optimality holds.
Suppose a store contains different types of ornaments, all made of gold. Let n1, n2, n3 be the ornaments, whose costs are c1, c2, c3 dollars and whose weights are w1, w2, w3 pounds respectively. A thief wants to steal ornaments so as to obtain maximum profit, but he cannot place a fraction of an ornament in the bag: either an ornament goes into the bag completely or not at all. So
xi = 0 or 1.
xi = 0 means ornament i is not placed in the bag.
xi = 1 means ornament i is placed completely in the bag.
Because each xi is either 0 or 1, the problem is called the 0/1 knapsack problem.
Example: Consider the knapsack instance n = 3, (w1, w2, w3) = (2, 3, 4), (p1, p2, p3) = (1, 2, 5) and m = 6.
(p1,w1) = (1, 2)
(p2,w2) = (2, 3)
(p3,w3) = (5, 4)
S^0 = {(0, 0)}
S_1^i = S^(i-1) + (p_i, w_i)        (add (p_i, w_i) to every pair of S^(i-1))
S^i  = S^(i-1) ∪ S_1^i              (merge)

S_1^1 = S^0 + (p1, w1) = {(0, 0)} + (1, 2) = {(1, 2)}
S^1   = S^0 ∪ S_1^1 = {(0, 0), (1, 2)}
S_1^2 = S^1 + (p2, w2) = {(0, 0), (1, 2)} + (2, 3) = {(2, 3), (3, 5)}
S^2   = S^1 ∪ S_1^2 = {(0, 0), (1, 2), (2, 3), (3, 5)}
S_1^3 = S^2 + (p3, w3) = {(0, 0), (1, 2), (2, 3), (3, 5)} + (5, 4) = {(5, 4), (6, 6), (7, 7), (8, 9)}
S^3   = S^2 ∪ S_1^3 = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}
Now apply the purge rule (dominance rule) to S^3: for the ordered pairs (3, 5) and (5, 4) we have 3 < 5 and 5 > 4, i.e. (5, 4) yields more profit at less weight, so (3, 5) is dominated and can be eliminated from S^3.
After applying the purge rule, we take the pair of maximum profit whose weight does not exceed m = 6, namely (6, 6), and trace it back through the sets to read off the solution
x1 = 1, x2 = 0, x3 = 1
Maximum profit = Σ p_i x_i = p1 x1 + p2 x2 + p3 x3 = 1·1 + 2·0 + 5·1 = 6
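A runnable C sketch of this merge-and-purge process (the pair struct and array bound are our own choices); on the instance above it prints maximum profit = 6:

#include <stdio.h>

#define MAXPAIRS 64

struct pair { int p, w; };            /* (profit, weight) */

int n = 3, m = 6;
int P[] = {1, 2, 5}, W[] = {2, 3, 4};

int main(void)
{
    struct pair s[MAXPAIRS] = {{0, 0}};   /* S^0 = {(0, 0)} */
    int len = 1;

    for (int i = 0; i < n; i++) {
        /* merge S^i (list s) with S^i + (P[i], W[i]), both sorted by weight */
        struct pair t[MAXPAIRS];
        int tlen = 0, a = 0, b = 0;
        while (a < len || b < len) {
            struct pair cand;
            int wa = (a < len) ? s[a].w : 1 << 30;
            int wb = (b < len) ? s[b].w + W[i] : 1 << 30;
            if (wa <= wb) cand = s[a++];
            else { cand.p = s[b].p + P[i]; cand.w = s[b].w + W[i]; b++; }
            if (cand.w > m) continue;                          /* exceeds capacity */
            if (tlen > 0 && t[tlen - 1].p >= cand.p) continue; /* purge: dominated */
            t[tlen++] = cand;
        }
        for (int k = 0; k < tlen; k++) s[k] = t[k];
        len = tlen;
    }
    printf("maximum profit = %d\n", s[len - 1].p);
    return 0;
}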
Multistage Graphs
Forward Approach:
cost(i, j) = min { c(j, l) + cost(i+1, l) }
where the minimum is taken over all l ∈ V(i+1) with <j, l> ∈ E.
(Figure: a five-stage graph, with stages V1, V2, V3, V4, V5.)
First compute cost[k−2, j] for all j ∈ V(k−2), then compute cost[k−3, j] for all j ∈ V(k−3), and so on; finally compute cost[1, s].
Stage 4 contains 3 vertices (9, 10, 11). The cost from each stage-4 vertex to the destination is:
cost(4,9) = c(9,12) = 4 (cost from vertex 9 to vertex 12)
cost(4,10) = c(10,12) = 2 (cost from vertex 10 to vertex 12)
cost(4,11) = c(11,12) = 5 (cost from vertex 11 to vertex 12)
Working back through stage 3 (vertices 6, 7, 8) in the same way gives the stage-2 costs:
cost(2,2) = 7, cost(2,3) = 9, cost(2,4) = 18, cost(2,5) = 15
Finally,
cost(1,1) = min{ 9 + cost(2,2), 7 + cost(2,3), 3 + cost(2,4), 2 + cost(2,5) }
          = min{ 16, 16, 21, 17 } = 16
Note that in the calculation of cost(2,2), we have reused the values of cost (3,6), cost
(3,7) and cost(3,8) and so avoided their recomputation. A minimum cost s to t path
has a cost of 16. This path can be determined easily if we record the decision made at
each stage (vertex).
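A sketch of the forward-approach computation in C. The tiny four-stage graph used here is our own example (the notes' five-stage figure is not reproduced); because the vertices of a multistage graph can be numbered stage by stage, one reverse sweep implements cost(j) = min{ c(j, l) + cost(l) }:

#include <stdio.h>

#define V 6             /* vertex 0 is the source s, vertex V-1 the sink t */
#define INF (1 << 29)

/* Edge costs of a hypothetical 4-stage graph:
   stage 1: {0}; stage 2: {1, 2}; stage 3: {3, 4}; stage 4: {5}. */
int c[V][V];

int main(void)
{
    int cost[V], d[V];  /* cost[j]: cheapest j -> sink; d[j]: next vertex used */

    for (int i = 0; i < V; i++)
        for (int j = 0; j < V; j++)
            c[i][j] = INF;
    c[0][1] = 2; c[0][2] = 1;
    c[1][3] = 2; c[1][4] = 3;
    c[2][3] = 6; c[2][4] = 7;
    c[3][5] = 1; c[4][5] = 5;

    cost[V - 1] = 0;
    for (int j = V - 2; j >= 0; j--) {       /* stages are handled sink-first */
        cost[j] = INF;
        for (int l = j + 1; l < V; l++)
            if (c[j][l] != INF && c[j][l] + cost[l] < cost[j]) {
                cost[j] = c[j][l] + cost[l];
                d[j] = l;                    /* record the decision made at j */
            }
    }

    printf("min cost = %d, path:", cost[0]); /* 5, path 0 1 3 5 */
    for (int v = 0; v != V - 1; v = d[v]) printf(" %d", v);
    printf(" %d\n", V - 1);
    return 0;
}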
Backward approach:
In the backward approach the costs are computed from the source outwards: bcost(i, j) = min { bcost(i−1, l) + c(l, j) }, the minimum taken over all l ∈ V(i−1) with <l, j> ∈ E.

Travelling Salesperson Problem
A tour of a graph G is a directed simple cycle that includes every vertex in V. The cost of the tour is the sum of the costs of the edges on the tour. The travelling salesperson problem is to find a tour of minimum cost.
Applications: Suppose we have to route a postal van to pick up mail from mailboxes located at n different sites. An (n+1)-vertex graph can be used to represent the situation. One vertex represents the post office from which the postal van starts and to which it must return. Edge <i, j> is assigned a cost equal to the distance from site i to site j. The route taken by the postal van is a tour, and we are interested in finding a tour of minimum length.
Every tour consists of an edge <1, k> for some k ∈ V − {1} and a path from vertex k to vertex 1 which goes through each vertex in V − {1, k} exactly once. It is easy to see that if the tour is optimal, then the path from k to 1 must be a shortest k-to-1 path going through all vertices in V − {1, k}. Hence, the principle of optimality holds.
Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g(1, V − {1}) is the length of an optimal salesperson tour. From the principle of optimality,
g(1, V − {1}) = min over 2 ≤ k ≤ n of { c(1, k) + g(k, V − {1, k}) }
In general, g(i, S) = min over j ∈ S of { c(i, j) + g(j, S − {j}) }. For example, with S = {3, 4}:
g(2, {3, 4}) = min{ c23 + g(3, S − {3}), c24 + g(4, S − {4}) }    (j = 3 and j = 4)
g(2, ø) = c21 = 1
g(3, ø) = c31 = 6
g(4, ø) = c41 = 1
g(5, ø) = c51 = 3
g(4, {3, 5}) = min{ c43 + g(3, S − {3}), c45 + g(5, S − {5}) }
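The recurrence translates directly into a bitmask DP. The sketch below is runnable C on a small sample cost matrix of our own (the notes' 5-vertex instance is not fully reproduced above, so these numbers are illustrative only):

#include <stdio.h>

#define N 4
#define INF (1 << 29)

/* Hypothetical cost matrix; vertex 0 plays the role of vertex 1 in the text. */
int c[N][N] = {
    { 0, 10, 15, 20},
    { 5,  0,  9, 10},
    { 6, 13,  0, 12},
    { 8,  8,  9,  0},
};

/* g[S][i] = length of a shortest path that starts at vertex i, visits every
   vertex of the subset S (a bitmask over vertices 1..N-1), and ends at 0. */
int g[1 << N][N];

int main(void)
{
    int full = (1 << N) - 2;           /* all vertices except the start vertex 0 */

    for (int i = 1; i < N; i++)
        g[0][i] = c[i][0];             /* g(i, empty set) = c(i, 1) */

    for (int S = 1; S <= full; S++) {
        if (S & 1) continue;           /* the start vertex never appears in S */
        for (int i = 1; i < N; i++) {
            if (S & (1 << i)) continue;            /* i must lie outside S */
            g[S][i] = INF;
            for (int j = 1; j < N; j++)
                if (S & (1 << j)) {                /* try j as the next vertex */
                    int v = c[i][j] + g[S ^ (1 << j)][j];
                    if (v < g[S][i]) g[S][i] = v;
                }
        }
    }

    int best = INF;                    /* g(1, V - {1}), unrolled for the start */
    for (int k = 1; k < N; k++) {
        int v = c[0][k] + g[full ^ (1 << k)][k];
        if (v < best) best = v;
    }
    printf("optimal tour length = %d\n", best);    /* 35 for this matrix */
    return 0;
}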
All Pairs Shortest Paths
The all pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from vertex i to j. Assume that this path contains no cycles. If k is an intermediate vertex on this path, then the subpaths from i to k and from k to j must themselves be shortest paths from i to k and from k to j respectively; otherwise the path from i to j is not a shortest path. If k is the intermediate vertex with highest index, then the path from i to k is a shortest path going through no vertex with index greater than k−1. Similarly, the path from k to j is a shortest path going through no vertex with index greater than k−1.
Example: Compute all pairs shortest paths for the following graph.
(Figure: graph G on vertices 1, 2, 3 with edges 1→2 of cost 4, 1→3 of cost 11, 2→1 of cost 6, 2→3 of cost 2, and 3→1 of cost 3.)
Cost adjacency matrix A0(i, j) = W(i, j):

    0    4   11
    6    0    2
    3    ∞    0

Step 1
For k = 1, i.e. going from i to j through intermediate vertex 1, the only entry that improves is A(3,2):
A1(3,2) = min{ A0(3,2), A0(3,1) + A0(1,2) } = min{ ∞, 3 + 4 } = 7

Cost adjacency matrix A1(i, j):

    0    4   11
    6    0    2
    3    7    0
Step 2
For k = 2, i.e. going from i to j through intermediate vertex 2:
Ak(i, j) = min{ Ak−1(i, j), Ak−1(i, k) + Ak−1(k, j) }
When i = 1, j = 1/2/3:
A2(1,1) = min{ A1(1,1), A1(1,2) + A1(2,1) } = min{ 0, 4 + 6 } = 0
A2(1,2) = min{ A1(1,2), A1(1,2) + A1(2,2) } = min{ 4, 4 + 0 } = 4
A2(1,3) = min{ A1(1,3), A1(1,2) + A1(2,3) } = min{ 11, 4 + 2 } = 6
When i = 2, j = 1/2/3:
A2(2,1) = min{ A1(2,1), A1(2,2) + A1(2,1) } = min{ 6, 0 + 6 } = 6
A2(2,2) = min{ A1(2,2), A1(2,2) + A1(2,2) } = min{ 0, 0 + 0 } = 0
A2(2,3) = min{ A1(2,3), A1(2,2) + A1(2,3) } = min{ 2, 0 + 2 } = 2
When i = 3, j = 1/2/3:
A2(3,1) = min{ A1(3,1), A1(3,2) + A1(2,1) } = min{ 3, 7 + 6 } = 3
A2(3,2) = min{ A1(3,2), A1(3,2) + A1(2,2) } = min{ 7, 7 + 0 } = 7
A2(3,3) = min{ A1(3,3), A1(3,2) + A1(2,3) } = min{ 0, 7 + 2 } = 0
Step 3
For k = 3, i.e. going from i to j through intermediate vertex 3:
Ak(i, j) = min{ Ak−1(i, j), Ak−1(i, k) + Ak−1(k, j) }
When i = 1, j = 1/2/3:
A3(1,1) = min{ A2(1,1), A2(1,3) + A2(3,1) } = min{ 0, 6 + 3 } = 0
A3(1,2) = min{ A2(1,2), A2(1,3) + A2(3,2) } = min{ 4, 6 + 7 } = 4
A3(1,3) = min{ A2(1,3), A2(1,3) + A2(3,3) } = min{ 6, 6 + 0 } = 6
When i = 2, j = 1/2/3:
A3(2,1) = min{ A2(2,1), A2(2,3) + A2(3,1) } = min{ 6, 2 + 3 } = 5
A3(2,2) = min{ A2(2,2), A2(2,3) + A2(3,2) } = min{ 0, 2 + 7 } = 0
A3(2,3) = min{ A2(2,3), A2(2,3) + A2(3,3) } = min{ 2, 2 + 0 } = 2
When i = 3, j = 1/2/3:
A3(3,1) = min{ A2(3,1), A2(3,3) + A2(3,1) } = min{ 3, 0 + 3 } = 3
A3(3,2) = min{ A2(3,2), A2(3,3) + A2(3,2) } = min{ 7, 0 + 7 } = 7
A3(3,3) = min{ A2(3,3), A2(3,3) + A2(3,3) } = min{ 0, 0 + 0 } = 0
Cost adjacency matrix A3(i, j):

    0    4    6
    5    0    2
    3    7    0

This matrix gives the all pairs shortest path solution.
Algorithm all_pairs_shortest_path(W, A, n)
// W is the weighted adjacency matrix; n is the number of vertices;
// on return, A[i, j] is the cost of a shortest path from vertex i to j.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := W[i, j];       // start with the direct edge costs
    for k := 1 to n do                // allow vertex k as an intermediate vertex
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
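The same procedure as runnable C, checked against the 3-vertex example above (INF stands for the missing 3→2 edge):

#include <stdio.h>

#define N 3
#define INF (1 << 29)   /* large enough that INF + INF still fits in an int */

int main(void)
{
    /* W = A0 from the example; INF marks the absent edge 3 -> 2. */
    int a[N][N] = {
        { 0,   4,  11},
        { 6,   0,   2},
        { 3, INF,   0},
    };

    /* After iteration k, a[i][j] is the length of a shortest i -> j path
       whose intermediate vertices all have index <= k. */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (a[i][k] + a[k][j] < a[i][j])
                    a[i][j] = a[i][k] + a[k][j];

    for (int i = 0; i < N; i++) {     /* prints: 0 4 6 / 5 0 2 / 3 7 0 */
        for (int j = 0; j < N; j++)
            printf("%4d", a[i][j]);
        printf("\n");
    }
    return 0;
}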