
Chapter 4
Dynamic Programming

4.1 Multistage graph

A multistage graph is a directed weighted graph in which the nodes
are classified into stages such that all edges go from a particular
stage to its next stage only. In a k-stage graph, every path from
the source s to the sink t is the result of a sequence of k−2
decisions. Figure 4.1 depicts a 5-stage graph. Let p(i,j) be the
minimum cost path from a vertex j in G(V,E)

Figure 4.1: The multistage (5 stage) graph.

to the vertex t, where i is the stage of j, and let cost(i,j) be the
cost of this path. The forward approach then gives the overall cost
as

cost(i, j) = min_{l ∈ V(i+1), (j,l) ∈ E} {c(j, l) + cost(i + 1, l)}    (4.1)

where l ranges over the neighbors of j in the next stage. Figure 4.2
shows the multistage graph operations used to solve the graph given
in Figure 4.1, in which cost(i,j) is computed using (4.1).
Algorithm 4.1 gives the implementation details of the multistage
graph using the forward approach.


Figure 4.2: The multistage algorithm-forward approach implementation.

The multistage problem can also be solved using a backward approach.
Let p(i,j) be the minimum cost path from the vertex s in G(V,E) to a
vertex j, where i is the stage of j, and let cost(i,j) be the cost
of this path. The backward approach then gives the overall cost as

cost(i, j) = min_{l ∈ V(i-1), (l,j) ∈ E} {cost(i − 1, l) + c(l, j)}    (4.2)

where l ranges over the neighbors of j in the previous stage.
Figure 4.3 shows the multistage graph operations used to solve the
graph given in Figure 4.1, in which cost(i,j) is computed using
(4.2). Algorithm 4.2 gives the implementation details of the
multistage graph using the backward approach.


MultistageGraphF(G,k,n)
{
Data: A k-stage graph G(V,E) with n nodes.
Result: A minimum cost path.
cost[n]=0
for j=n-1 to 1 do
{
let r be a vertex such that (j,r) is an edge of G and
c[j,r]+cost[r] is minimum
cost[j]=c[j,r]+cost[r]
d[j]=r
}
p[1]=1, p[k]=n
for j=2 to k-1 do
p[j]=d[p[j-1]]
}
Algorithm 4.1: The multistage graph using forward approach.
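Algorithm 4.1 can be sketched in Python as follows. The graph
representation (an edge dictionary keyed by vertex pairs) and the
function name are illustrative assumptions, not part of the original
algorithm.

```python
import math

def multistage_forward(n, edges):
    """Minimum-cost s-t path in a multistage graph (forward approach).

    n     : number of nodes, numbered 1..n; node 1 is the source s,
            node n is the sink t.
    edges : dict mapping (j, r) -> cost of the edge j -> r, with j < r.
    """
    cost = [math.inf] * (n + 1)
    d = [0] * (n + 1)
    cost[n] = 0                      # cost from the sink to itself
    for j in range(n - 1, 0, -1):    # process nodes in reverse order
        for (u, r), c in edges.items():
            if u == j and c + cost[r] < cost[j]:
                cost[j] = c + cost[r]
                d[j] = r             # best successor of j
    # reconstruct the path by following the decision array d
    path, j = [1], 1
    while j != n:
        j = d[j]
        path.append(j)
    return cost[1], path
```

For a small 3-stage graph with edges 1→2 (cost 1), 1→3 (cost 2),
2→4 (cost 5), 3→4 (cost 1), the call returns cost 3 and path
1-3-4.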

MultistageGraphB(G,k,n)
{
Data: A k-stage graph G(V,E) with n nodes.
Result: A minimum cost path.
bcost[1]=0
for j=2 to n do
{
let r be a vertex such that (r,j) is an edge of G and
bcost[r]+c[r,j] is minimum
bcost[j]=bcost[r]+c[r,j]
d[j]=r
}
p[1]=1, p[k]=n
for j=k-1 to 2 do
p[j]=d[p[j+1]]
}
Algorithm 4.2: The multistage graph using backward approach.
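Algorithm 4.2 admits a similar Python sketch; again the edge
dictionary and the function name are illustrative assumptions. The
only change from the forward version is that costs accumulate from
the source and d[j] records the best predecessor.

```python
import math

def multistage_backward(n, edges):
    """Minimum-cost s-t path in a multistage graph (backward approach).

    n     : nodes numbered 1..n; node 1 is the source, node n the sink.
    edges : dict mapping (r, j) -> cost of the edge r -> j, with r < j.
    """
    bcost = [math.inf] * (n + 1)
    d = [0] * (n + 1)
    bcost[1] = 0                     # cost from the source to itself
    for j in range(2, n + 1):        # process nodes in forward order
        for (r, u), c in edges.items():
            if u == j and bcost[r] + c < bcost[j]:
                bcost[j] = bcost[r] + c
                d[j] = r             # best predecessor of j
    # reconstruct the path backwards from the sink
    path, j = [n], n
    while j != 1:
        j = d[j]
        path.append(j)
    return bcost[n], path[::-1]
```

On the same small graph as before it produces the same cost and path
as the forward approach, as expected.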


Figure 4.3: The multistage algorithm-backward approach implementation.

4.2 Transitive closure

4.2.1 Warshall's algorithm

Warshall's algorithm is used to calculate the transitive closure of
a directed graph. The transitive closure of a directed graph with n
vertices is defined as the n-by-n boolean matrix in which the
element in the ith row and jth column is 1 if there exists a
nontrivial directed path from the ith vertex to the jth vertex.

Figure 4.4: Transitive closure by applying Warshall's algorithm.

Warshall's algorithm constructs the transitive closure of the given
digraph with n vertices through a series of n x n matrices
R(0), ..., R(k-1), R(k), ..., R(n). Figure 4.4 shows a digraph and
its corresponding adjacency matrix and transitive closure. Each of
these matrices provides information about the directed paths in the
graph. The element rij(k) in the ith row and


Figure 4.5: Implementation of the Warshall's algorithm.

jth column of matrix R(k) is 1 if and only if there exists a
directed path from the ith vertex to the jth vertex with each
intermediate vertex, if any, not numbered higher than k, where
k = 0, 1, ..., n.
The elements of matrix R(k) are generated from the elements of
matrix R(k-1) as

rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))    (4.3)

The Warshall algorithm says:

- If an element rij is 1 in R(k-1), it remains 1 in R(k).

- If an element rij is 0 in R(k-1), it can change to 1 in R(k) if
and only if the element in the ith row and kth column and the
element in the kth row and jth column are both 1 in R(k-1).

The Warshall algorithm is given below.


Example 4.1: Apply Warshall's algorithm to find the transitive
closure of the digraph given in Figure 4.6.
The Figure 4.6 contains a digraph with four nodes; the adjacency
matrix for the given graph is given in Figure 4.7. The matrix
undergoes a series of operations as per (4.3). The resulting
operations are also given in Figure 4.7.


Warshall(a)
{
Data: An adjacency matrix a for the given digraph.
Result: The transitive closure of the digraph.
R(0)=a
for k=1 to n do
for i=1 to n do
for j=1 to n do
rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
return R(n)
}
Algorithm 4.3: The Warshall algorithm.
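Algorithm 4.3 translates almost directly to Python. The sketch below
assumes the digraph is given as a 0/1 adjacency matrix (a list of
lists); the function name is illustrative.

```python
def warshall(a):
    """Transitive closure of a digraph given as a 0/1 adjacency matrix."""
    n = len(a)
    r = [row[:] for row in a]        # R(0) is the adjacency matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # r[i][j] becomes 1 once a path i -> j exists whose
                # intermediate vertices are all numbered <= k
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r
```

For the chain 0 → 1 → 2 the closure adds the derived path 0 → 2
while leaving the input matrix untouched.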

Figure 4.6: A digraph containing four nodes.

Figure 4.7: The operations using Warshall algorithm.


4.3 All pairs shortest path

4.3.1 Floyd's algorithm

Floyd's algorithm calculates the distance from each vertex to all
other vertices. The algorithm constructs an n x n matrix D to record
the distance D[i,j] from each vertex to every other vertex, where i
and j are the rows and columns of the matrix D.
During operation the algorithm computes the distance matrix of a
weighted graph with n vertices through a series of n x n matrices
D(0), ..., D(k-1), D(k), ..., D(n). Each of these matrices contains
lengths of shortest paths under a constraint: the element dij(k) in
the ith row and jth column of matrix D(k), where k = 0, ..., n, is
equal to the length of the shortest path among all paths from the
ith vertex to the jth vertex with each intermediate vertex numbered
not higher than k. The implementation of Floyd's algorithm for
Figure 4.8 is given in Figure 4.9.

Figure 4.8: The distance matrix by Floyd's algorithm.

Floyd's algorithm obeys the following recurrence relation:

dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1))    (4.4)

The Floyd's algorithm is given below.


Example 4.2: Using Floyd's algorithm, solve the all-pairs shortest
path problem for the graph whose adjacency matrix is given in
Figure 4.10.
The Figure 4.10 contains the adjacency matrix for the given graph.
The adjacency matrix covers five nodes and the edges among the nodes
with their weights. During each iteration the minimum-cost path
between each pair of nodes is selected. Node pairs connected only in
the reverse direction, or with no direct edge at all, are
represented by infinite-length paths. The matrix undergoes a series
of operations as per (4.4), shown in Figure 4.11.


Figure 4.9: Implementation of the Floyd's algorithm.

Floyds(a)
{
Data: An adjacency matrix a for the given digraph.
Result: The shortest path from any vertex to any other vertex of
the digraph.
D=a

for k=1 to n do
for i=1 to n do
for j=1 to n do
Dij = min {Dij , Dik +Dkj }
return D
}
Algorithm 4.4: The Floyd's algorithm.
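Algorithm 4.4 can be sketched in Python as below, assuming the
weight matrix uses math.inf for missing edges and 0 on the diagonal;
the function name is illustrative.

```python
import math

def floyd(w):
    """All-pairs shortest paths by Floyd's algorithm.

    w : n x n weight matrix with math.inf for missing edges and 0 on
        the diagonal. Returns the distance matrix without modifying w.
    """
    n = len(w)
    d = [row[:] for row in w]        # D(0) is the weight matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # allow vertex k as an intermediate vertex
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

For a 3-node cycle 0 → 1 (3), 1 → 2 (1), 2 → 0 (2), the algorithm
fills in the composite distances such as d[0][2] = 3 + 1 = 4.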

Figure 4.10: An adjacency matrix containing five nodes.


Figure 4.11: The operations using Floyd's algorithm.

4.3.2 Optimal binary search tree

Let a1, a2, ..., an be distinct keys ordered from smallest to
largest, and let p1, p2, ..., pn be the probabilities of searching
for them. Let C(i,j) be the smallest average number of comparisons
made in a successful search in a binary search tree T(i,j) made up
of the keys ai, ..., aj, where i and j are integer indices,
1 ≤ i ≤ j ≤ n. In such an arrangement the root contains a key ak,
the left subtree T(i,k-1) contains the keys ai, ..., ak-1, and the
right subtree T(k+1,j) contains the keys ak+1, ..., aj, both
optimally arranged. The recurrence relation given in (4.5) is used
to compute the C(i,j) values. Two two-dimensional tables, called the
main table and the root table, are used to hold the C(i,j) values
and the root (subtree) information respectively.

C(i, j) = min_{i ≤ k ≤ j} {C(i, k−1) + C(k+1, j)} + sum_{s=i..j} ps    (4.5)
Example 4.3: Compute the C(i,j) values for the key set A, B, C, D
with corresponding search probabilities (0.1, 0.2, 0.4, 0.3).
The computation process of C[i,j] is shown in Figure 4.13. The
corresponding initial tables and updated tables are shown in
Figure 4.12 and


Figure. 4.14 respectively.

Figure 4.12: The main table and root table at initial phase.

During the computation of C[1,2], if we consider the binary tree
containing the first two keys, A and B, the root of the optimal tree
has index two, which belongs to key B, and the average number of
comparisons in a successful search in this tree is 0.4. If we
consider the binary tree containing the keys B and C, the root of
the optimal tree has index three, which belongs to key C, and the
average number of comparisons in a successful search in this tree is
0.8. If we consider the binary tree containing the keys C and D, the
root of the optimal tree has index three, which belongs to key C,
and the average number of comparisons in a successful search in this
tree is 1. All the table values are calculated and updated
similarly. Figure 4.15 shows the optimal binary tree obtained and
Algorithm 4.5 shows the procedure followed to obtain the optimal
binary tree.

Figure 4.13: The operations to calculate C(i,j).


OptimalBT(p[1...n])
{
Data: The p[1...n] array with search probabilities of keys.
Result: The root table of optimal binary tree.
for i=1 to n do
C[i,i-1]=0
C[i,i]=P[i]
R[i,i]=i
C[n+1,n]=0
for d=1 to n-1 do
for i=1 to n-d do
j=i+d
minval=∞
for k=i to j do
if C[i,k-1]+C[k+1,j]<minval
minval=C[i,k-1]+C[k+1,j]
kmin=k
R[i,j]=kmin
sum=P[i]
for s=i+1 to j do
sum=sum+P[s]
C[i,j]=minval+sum
return C[1,n],R
}
Algorithm 4.5: The optimal binary search tree implementation
algorithm.
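Algorithm 4.5 can be sketched in Python as follows. The 1-based
probability list (with an unused entry at index 0) and the function
name are illustrative assumptions made to mirror the pseudocode.

```python
import math

def optimal_bst(p):
    """Optimal binary search tree (Algorithm 4.5).

    p : list of search probabilities indexed 1..n; p[0] is a dummy
        entry and is ignored. Returns the main table C and root table R.
    """
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0 by default
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]               # a single-key tree costs p[i]
        R[i][i] = i
    for d in range(1, n):            # d = j - i, the subproblem size
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = math.inf, i
            for k in range(i, j + 1):        # try each key as the root
                v = C[i][k - 1] + C[k + 1][j]
                if v < minval:
                    minval, kmin = v, k
            R[i][j] = kmin
            C[i][j] = minval + sum(p[i:j + 1])
    return C, R
```

Running it on the probabilities of Example 4.3, (0.1, 0.2, 0.4,
0.3), gives C[1,4] = 1.7 with key C (index 3) as the overall root,
matching the tables in Figures 4.13 and 4.14.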


Figure 4.14: The main table and root table after updation.

Figure 4.15: The optimal binary search tree.

4.3.3 Knapsack problem

The dynamic programming version of the knapsack problem finds the
most valuable subset of the items that fit into the knapsack, given
n items of known weights w1, w2, ..., wn and values v1, v2, ..., vn
and a knapsack of capacity W. The problem can be solved by a
recurrence relation, considering an instance defined by the first i
items, 1 ≤ i ≤ n, with weights w1, w2, ..., wi and values
v1, v2, ..., vi, and knapsack capacity j, 1 ≤ j ≤ W. Here V[i,j] is
the value of an optimal solution to this instance. The recurrence
relation for solving the problem is given below.

V[i,j] = max{V[i-1,j], vi + V[i-1, j-wi]}   if j − wi ≥ 0
V[i,j] = V[i-1,j]                           if j − wi < 0    (4.6)

The initial conditions are given as V[0,j]=0 for j ≥ 0 and V[i,0]=0
for i ≥ 0. The knapsack algorithm is given as Algorithm 4.6. If we
consider the data for the knapsack problem given in Table 4.1, the
algorithm generates the resultant record given in Table 4.2 by
solving the recurrence relation (4.6). The item weights are w1=2,
w2=1, w3=3, w4=2, and their corresponding values are v1=12, v2=10,
v3=20, v4=15.
The maximal value obtained from the result Table 4.2 is V[4,5]=37.
It is possible to find the composition of an optimal subset by
tracing back the computation of the entries in this table. Since
V[4,5] ≠ V[3,5], item 4 is part of the optimal


Knapsack(i,j)
{
Data: The number of rst item considered i and sack capacity j.
Result: The value of optimal feasible solution.
if V[i,j]<0
if j<weights[i]
value=Knapsack(i-1,j)
else
value=max(Knapsack(i-1,j), values[i]+Knapsack(i-1,j-weights[i]))
V[i,j]=value
return V[i,j]
}
Algorithm 4.6: The knapsack using dynamic programming.
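The memoized recursion of Algorithm 4.6 can be sketched in Python as
below. The 1-based weight and value lists (index 0 unused) and the
function name are illustrative assumptions; a dictionary stands in
for the V table initialized to "unknown".

```python
def knapsack(weights, values, W):
    """Memoized knapsack. weights and values are 1-based lists
    (index 0 unused); W is the sack capacity."""
    n = len(weights) - 1
    V = {}                           # memo table: (i, j) -> best value

    def solve(i, j):
        if i == 0 or j == 0:         # no items or no capacity
            return 0
        if (i, j) not in V:          # compute the entry only once
            if j < weights[i]:       # item i does not fit
                V[i, j] = solve(i - 1, j)
            else:                    # skip item i, or take it
                V[i, j] = max(solve(i - 1, j),
                              values[i] + solve(i - 1, j - weights[i]))
        return V[i, j]

    return solve(n, W)
```

On the instance of Table 4.1 (weights 2, 1, 3, 2; values 12, 10, 20,
15; capacity 5) it returns 37, matching V[4,5] in Table 4.2.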

Table 4.1: A knapsack instance table.


Item Weight Value
1 2 12
2 1 10
3 3 20
4 2 15

Table 4.2: Table for solving knapsack problem.


i 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0 10 15 25 30 37

solution with a balance sack capacity of 5-w4 = 5-2 = 3. Since
V[3,3] = V[2,3], item 3 is not part of the optimal solution. Since
V[2,3] ≠ V[1,3], item 2 is part of the optimal solution, with a
balance sack capacity of 3-w2 = 3-1 = 2. Since V[1,2] ≠ V[0,2],
item 1 is part of the optimal solution, with a balance sack capacity
of zero. Hence the optimal solution is the item set {1, 2, 4}.


4.3.4 Bellman-Ford algorithm

The Bellman-Ford algorithm is used to determine the shortest path from a


source vertex to all remaining vertices in the graph. The algorithm
computes the length dist[u] of the shortest path from the source to
each vertex u under the constraint that the shortest path contains
at most l edges. The Bellman-Ford algorithm uses the recurrence
relation given in (4.7) for the distance calculation.

dist(k)[u] = min{dist(k-1)[u], min_i {dist(k-1)[i] + cost[i, u]}}    (4.7)

Figure 4.16: The application of Bellman Ford algorithm on a directed graph.


The implementation of Bellman Ford algorithm is given in Algorithm.
4.7.
BellmanFord(v,cost,dist,n)
{
Data: A directed weighted graph with n nodes.
Result: Shortest path from source to destination.
for i=1 to n do
dist[i]=cost[v][i]
for k=2 to n-1 do
for each u such that u̸=v and
u has at least one incoming edge do
for each(i,u) in the graph do
if(dist[u]>dist[i]+cost[i,u]) then
dist[u]=dist[i]+cost[i,u]
}
Algorithm 4.7: The Bellman Ford Algorithm.
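Algorithm 4.7 can be sketched in Python as follows. The 0-based cost
matrix with math.inf for missing edges and the function name are
illustrative assumptions; for simplicity this sketch relaxes every
vertex u rather than only those with incoming edges, which does not
change the result.

```python
import math

def bellman_ford(cost, v):
    """Single-source shortest paths (Bellman-Ford).

    cost : n x n matrix, cost[i][j] = weight of edge i -> j,
           math.inf if absent, 0 on the diagonal (0-based vertices).
    v    : source vertex. Returns the dist array.
    """
    n = len(cost)
    dist = list(cost[v])             # dist(1): direct edges from v
    for _ in range(2, n):            # any shortest path has <= n-1 edges
        for u in range(n):
            if u == v:
                continue
            for i in range(n):       # relax every edge (i, u)
                if dist[i] + cost[i][u] < dist[u]:
                    dist[u] = dist[i] + cost[i][u]
    return dist
```

As in Figure 4.16, negative edge weights are handled correctly as
long as the graph has no negative cycle: in a small graph with edges
0→1 (6), 0→2 (5), 2→1 (−2), 1→3 (2), the distance to vertex 1
improves from 6 to 5 + (−2) = 3.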

The Figure 4.16 shows a seven-node graph with positive and negative
edge weights, along with the distance vectors dist(k), where
k = 1, 2, ..., 6. The distance vector dist(k)[u] is written as


dist(k)[1] = 0 for k = 1, 2, ..., 6
and
dist(1)[2] = 6
dist(1)[3] = 5
dist(1)[4] = 5
dist(1)[5] = ∞
dist(1)[6] = ∞
dist(1)[7] = ∞

dist(2)[2] = min{dist(1)[2], min_i {dist(1)[i]+cost[i,2]}}

dist(1)[1]+cost[1,2] = 0+6 = 6
dist(1)[3]+cost[3,2] = 5+(−2) = 3
dist(1)[4]+cost[4,2] = 5+∞ = ∞
dist(1)[5]+cost[5,2] = ∞+∞ = ∞
dist(1)[6]+cost[6,2] = ∞+∞ = ∞
dist(1)[7]+cost[7,2] = ∞+∞ = ∞

dist(2)[2] = min{6, min{6, 3, ∞, ∞, ∞, ∞}} = min{6, 3} = 3

The entire distance table is given in Figure 4.17.

Figure 4.17: The distance table for Bellman Ford algorithm.


4.3.5 The traveling salesperson problem

Let G(V,E) be a directed graph with edge costs c(i,j), where
c(i,j) > 0 for all i and j and c(i,j) = ∞ if (i,j) ∉ E. A tour of G
is a directed simple cycle that includes every vertex in V. The cost
of a tour is the sum of the costs of the edges on the tour. The
traveling salesperson problem is to find the tour of minimum cost.
Let g(i,S) be the length of a shortest path starting at vertex i,
going through all vertices in S, and terminating at vertex 1. The
function g(1, V−{1}) is the length of an optimal salesperson's tour.
Using the principle of optimality, g(i,S) is written as

g(i, S) = min_{j ∈ S} {c(i,j) + g(j, S − {j})}    (4.8)

A directed graph and the corresponding edge length matrix are given
in Figure 4.18. The operations of the traveling salesperson problem
are given in Table 4.3.

Figure 4.18: The directed graph and the edge length matrix.

The optimal tour of the graph has length 35. The first move is to
node two, J(1,{2,3,4})=2, because node two gives the minimum value
in g(1,{2,3,4}). The remaining tour is obtained from J(2,{3,4})=4,
because the branch through g(4,{3}) has the minimum value. The tour
then continues to node three because J(4,{3})=3. Hence the optimal
tour is 1-2-4-3-1.
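The recurrence (4.8) is the Held-Karp dynamic program, sketched
below in Python. The 0-based indexing (city 0 plays the role of
vertex 1) and the function name are illustrative; the edge length
matrix in the usage example is the one from Figure 4.18, as can be
reconstructed from the entries of Table 4.3.

```python
from itertools import combinations

def tsp(c):
    """Held-Karp dynamic programming for the traveling salesperson
    problem. c is the edge length matrix (0-based); the tour starts
    and ends at city 0. Returns the optimal tour length."""
    n = len(c)
    # g[i, S] = length of the shortest path that starts at i, visits
    # every city in S exactly once, and ends at city 0
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}
    for size in range(1, n - 1):                 # grow S one city at a time
        for combo in combinations(range(1, n), size):
            S = frozenset(combo)
            for i in range(1, n):
                if i not in S:
                    g[i, S] = min(c[i][j] + g[j, S - {j}] for j in S)
    rest = frozenset(range(1, n))
    # g(1, V - {1}) in the text's 1-based notation
    return min(c[0][j] + g[j, rest - {j}] for j in range(1, n))
```

On the matrix of Figure 4.18 the function reproduces the final step
of Table 4.3: min{35, 40, 43} = 35.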

4.3.6 Reliability design

Let ri be the reliability of a device Di; then the reliability of
the entire system is the product of the ri values. In order to
improve the reliability of the system it is desirable to duplicate
the devices. The reliability of stage i with mi copies of device Di
is given by the function ϕi(mi) = 1 − (1 − ri)^mi. The dynamic
programming solution for reliability design is given as

f_i(x) = max_{1 ≤ mi ≤ ui} {ϕi(mi) · f_{i−1}(x − ci·mi)}    (4.9)


Table 4.3: TSP operations.


g(i,S)          Operations                                    Value
g(2,∅)          c(2,1)                                        5
g(3,∅)          c(3,1)                                        6
g(4,∅)          c(4,1)                                        8
g(2,{3})        c(2,3)+g(3,∅)                                 15
g(2,{4})        c(2,4)+g(4,∅)                                 18
g(3,{2})        c(3,2)+g(2,∅)                                 18
g(3,{4})        c(3,4)+g(4,∅)                                 20
g(4,{2})        c(4,2)+g(2,∅)                                 13
g(4,{3})        c(4,3)+g(3,∅)                                 15
g(2,{3,4})      min{c(2,3)+g(3,{4}), c(2,4)+g(4,{3})}
                = min{29, 25}                                 25
g(3,{2,4})      min{c(3,2)+g(2,{4}), c(3,4)+g(4,{2})}
                = min{31, 25}                                 25
g(4,{2,3})      min{c(4,2)+g(2,{3}), c(4,3)+g(3,{2})}
                = min{23, 27}                                 23
g(1,{2,3,4})    min{c(1,2)+g(2,{3,4}), c(1,3)+g(3,{2,4}),
                c(1,4)+g(4,{2,3})} = min{35, 40, 43}          35

where ui is the upper bound on mi. There is at most one tuple for
each different x that results from a sequence of decisions on
m1, m2, ..., mn.
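The recurrence (4.9) can be sketched as a table-filling loop in
Python. The cost-budget interface below, the derivation of the upper
bound on mi from the remaining budget, and the function name are all
illustrative assumptions, not part of the original formulation.

```python
def reliability_design(r, c, budget):
    """Sketch of the reliability-design DP (4.9): choose how many
    copies mi >= 1 of each device Di to buy within a cost budget,
    maximizing the product of the stage reliabilities
    phi_i(mi) = 1 - (1 - r[i]) ** mi.

    r, c : per-stage device reliabilities and (positive integer)
           costs, 0-based lists. Returns the best system reliability.
    """
    n = len(r)
    # f[x] = best reliability achievable with total cost at most x
    # (stage -1: an empty system has reliability 1)
    f = [1.0] * (budget + 1)
    for i in range(n):
        g = [0.0] * (budget + 1)     # 0.0: stage i not yet covered
        for x in range(budget + 1):
            m = 1
            while m * c[i] <= x:     # upper bound on mi from the budget
                phi = 1 - (1 - r[i]) ** m
                g[x] = max(g[x], phi * f[x - m * c[i]])
                m += 1
        f = g
    return f[budget]
```

For two stages with reliabilities 0.9 and 0.8, unit costs 10 each,
and a budget of 30, duplicating the second device is best:
0.9 × (1 − 0.2²) = 0.864.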

