Unit 3 - Analysis and Design of Algorithm - WWW - Rgpvnotes.in
Subject Name: Analysis and Design of Algorithm
Subject Code: IT-403
Semester: 4th
Introduction:
Dynamic programming (DP) is a general algorithm design technique for solving problems with
overlapping sub-problems. This technique was invented by the American mathematician Richard
Bellman in the 1950s.
Key Idea
The key idea is to save the answers to overlapping smaller sub-problems to avoid re-computation.
The solution to a smaller instance might be needed multiple times, so its result is stored once and reused.
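As a minimal illustration of this idea (Fibonacci numbers, a standard example not taken from these notes):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each sub-problem fib(k) is computed once and its answer cached,
    # turning an exponential recursion into a linear-time one.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155
```

Without the cache, fib(40) would recompute the same sub-problems an exponential number of times.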
0/1 knapsack:
In 0-1 Knapsack, items cannot be broken: the thief must either take an item as a whole or leave
it behind. This is the reason it is called 0-1 Knapsack. Hence, in the 0-1 Knapsack problem, the
value of xi can be either 0 or 1, while the other constraints remain the same. 0-1 Knapsack cannot
be solved by the Greedy approach: although the Greedy approach may happen to give an optimal
solution in some instances, it does not guarantee one.
The following examples establish this statement.
Example-1
Let us consider that the capacity of the knapsack is W = 25 and the items are as shown in the
following table.
Item    A   B   C   D
Profit  24  18  18  10
Weight  24  10  10  7
Without considering the profit per unit weight (pi/wi), if we apply the Greedy approach to solve this
problem, item A will be selected first, as it contributes the maximum profit among all the items.
After selecting item A, no more items can be selected. Hence, for this given set of items the total
profit is 24, whereas the optimal solution is achieved by selecting items B and C, where the total
profit is 18 + 18 = 36.
Example-2
Instead of selecting the items based on the overall benefit, in this example the items are selected
based on the ratio pi/wi. Let us consider that the capacity of the knapsack is W = 60 (the greedy
choice of A and B weighs 50, and the claimed optimum B and C weighs 60, so the capacity must be
60) and the items are as shown in the following table.
Item    A    B    C
Price   100  280  120
Weight  10   40   20
Ratio   10   7    6
Using the Greedy approach, item A is selected first; then the next item, B, is chosen. Hence, the
total profit is 100 + 280 = 380. However, the optimal solution for this instance is achieved by
selecting items B and C, where the total profit is 280 + 120 = 400.
Hence, it can be concluded that the Greedy approach may not give an optimal solution.
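A small Python sketch (the helper name is illustrative) reproduces the failure of the greedy-by-profit strategy on the data of Example-1:

```python
def greedy_by_profit(profits, weights, W):
    # Pick items in decreasing order of profit while they still fit,
    # as in Example-1; this strategy is not optimal in general.
    order = sorted(range(len(profits)), key=lambda i: -profits[i])
    total, remaining = 0, W
    for i in order:
        if weights[i] <= remaining:
            total += profits[i]
            remaining -= weights[i]
    return total

profits = [24, 18, 18, 10]   # items A, B, C, D from Example-1
weights = [24, 10, 10, 7]
print(greedy_by_profit(profits, weights, 25))  # 24: greedy takes only A
# The optimal choice is B and C, with profit 18 + 18 = 36.
```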
Problem Statement
A thief is robbing a store and can carry a maximum weight of W in his knapsack. There are n items;
the weight of the ith item is wi and the profit of selecting it is pi. What items should the thief
take?
Dynamic-Programming Approach
Let i be the highest-numbered item in an optimal solution S for capacity W. Then S' = S - {i} is an
optimal solution for capacity W - wi, and the value of the solution S is vi plus the value of the sub-
problem.
We can express this fact in the following formula: define c[i, w] to be the solution for items 1, 2, …,
i and the maximum weight w. Then
c[i, w] = 0                                   if i = 0 or w = 0
c[i, w] = c[i-1, w]                           if wi > w
c[i, w] = max(vi + c[i-1, w-wi], c[i-1, w])   if i >= 1 and wi <= w
The two sequences are v = <v1, v2, …, vn> and w = <w1, w2, …, wn>.
Dynamic-0-1-knapsack (v, w, n, W)
for w = 0 to W do
    c[0, w] = 0
for i = 1 to n do
    c[i, 0] = 0
    for w = 1 to W do
        if wi <= w then
            if vi + c[i-1, w-wi] > c[i-1, w] then
                c[i, w] = vi + c[i-1, w-wi]
            else
                c[i, w] = c[i-1, w]
        else
            c[i, w] = c[i-1, w]
The set of items to take can be deduced from the table, starting at c[n, w] and tracing backwards
where the optimal values came from.
If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue tracing with c[i-1, w].
Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w-wi].
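The pseudocode above, together with the traceback rule, can be sketched in Python (function and variable names are illustrative):

```python
def knapsack_01(v, w, W):
    """Dynamic-0-1-knapsack: returns (max profit, list of chosen item indices)."""
    n = len(v)
    # c[i][x] = best value using items 1..i with capacity x
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(1, W + 1):
            if w[i - 1] <= x and v[i - 1] + c[i - 1][x - w[i - 1]] > c[i - 1][x]:
                c[i][x] = v[i - 1] + c[i - 1][x - w[i - 1]]
            else:
                c[i][x] = c[i - 1][x]
    # Trace backwards from c[n][W] to recover the chosen items.
    items, x = [], W
    for i in range(n, 0, -1):
        if c[i][x] != c[i - 1][x]:      # item i is part of the solution
            items.append(i - 1)
            x -= w[i - 1]
    return c[n][W], sorted(items)

# Example-1 data: the optimum is items B and C (indices 1 and 2), profit 36.
print(knapsack_01([24, 18, 18, 10], [24, 10, 10, 7], 25))  # (36, [1, 2])
```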
Analysis
This algorithm takes θ(n·W) time, as table c has (n + 1)·(W + 1) entries, where each entry requires
θ(1) time to compute.
Multistage Graph
A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k >= 2
disjoint subsets (stages) s1, s2, …, sk. The vertex s ∈ s1 is called the source and the vertex t ∈ sk
is called the sink.
G is usually assumed to be a weighted graph. In this graph, the cost of an edge (i, j) is represented
by c(i, j). Hence, the cost of a path from source s to sink t is the sum of the costs of the edges on
this path.
The multistage graph problem is to find the path with minimum cost from source s to sink t.
Example
Consider a multistage graph and let cost(i, j) denote the minimum cost of reaching the sink from
vertex j in stage i. Reasoning backwards from the sink,
cost(i, j) = min { c(j, l) + cost(i+1, l) } over all vertices l in stage i+1 adjacent to j,
with cost(k, t) = 0; the answer to the problem is cost(1, s). We calculate cost(i, j) stage by stage,
from the sink back to the source.
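This backward pass can be sketched in Python; the 4-stage graph below is a hypothetical instance (all vertex names, edge costs, and the function name are illustrative, not from the notes):

```python
def multistage_shortest(stages, cost):
    """Backward DP for the multistage graph problem.
    stages: list of lists of vertices, stages[0] = [source], stages[-1] = [sink]
    cost: dict mapping (u, v) -> edge cost
    Returns (minimum cost, path from source to sink)."""
    t = stages[-1][0]
    best = {t: (0, [t])}                      # cost at the sink is 0
    for stage in reversed(stages[:-1]):       # process stages from sink to source
        for u in stage:
            # cost(i, u) = min over edges (u, v) of c(u, v) + cost(i+1, v)
            choices = [(c + best[v][0], [u] + best[v][1])
                       for (x, v), c in cost.items() if x == u and v in best]
            best[u] = min(choices)
    return best[stages[0][0]]

# Hypothetical 4-stage instance (the figure from the notes is not reproduced).
stages = [['s'], ['a', 'b'], ['c', 'd'], ['t']]
cost = {('s', 'a'): 1, ('s', 'b'): 2,
        ('a', 'c'): 2, ('a', 'd'): 6,
        ('b', 'c'): 4, ('b', 'd'): 3,
        ('c', 't'): 5, ('d', 't'): 2}
print(multistage_shortest(stages, cost))  # (7, ['s', 'b', 'd', 't'])
```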
Reliability Design
In this section, we present the dynamic programming approach to solve a problem with
multiplicative constraints. Let us consider the example of a computer network in which a set of
nodes are connected with each other. Let ri be the reliability of node i, i.e. the probability that
the node forwards packets correctly. Then the reliability of a path connecting node s to node t is
∏ ri taken over the k intermediate nodes. Similarly, we can consider a system with n devices
connected in series, where the reliability of device Di is ri.
The reliability of the whole series system is ∏ ri. For example, if there are 5 devices connected in
series and the reliability of each device is 0.99, then the reliability of the system is 0.99 × 0.99 ×
0.99 × 0.99 × 0.99 ≈ 0.951. Hence, it is desirable to connect multiple copies of the same device in
parallel through the use of switching circuits. The switching circuits determine which devices in
each group function properly, and then make use of one such working device at each stage. Let mi
be the number of copies of device Di in stage i. Then the probability that all mi copies malfunction
is (1 - ri)^mi. Hence, the reliability of stage i becomes 1 - (1 - ri)^mi.
Thus, if ri = 0.99 and mi = 2, the reliability of stage i is 1 - (0.01)^2 = 0.9999. However, in
practical situations it is somewhat less, because the switching circuits themselves are not fully
reliable. Let us denote the reliability of stage i by φi(mi). Then the reliability of the system is
∏ φi(mi).
The reliability design problem is to use multiple copies of the devices at each stage to increase
reliability. However, this is to be done under a cost constraint. Let ci be the cost of each unit of
device Di and let c be the cost constraint. Then the objective is to maximize the reliability under
the condition that the total cost of the system, Σ mi ci, is at most c.
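A minimal Python sketch of this maximization, assuming a small hypothetical instance (the three reliabilities, unit costs, and budget below are illustrative, not from the notes):

```python
from functools import lru_cache

def max_reliability(r, c, budget):
    """Maximize prod_i (1 - (1 - r[i])**m_i) subject to sum_i m_i * c[i] <= budget,
    with at least one copy of every device. DP over (stage, remaining budget)."""
    n = len(r)

    @lru_cache(maxsize=None)
    def best(i, remaining):
        if i == n:
            return 1.0
        tail_cost = sum(c[i + 1:])   # must still afford one copy of each later device
        best_val = 0.0
        m = 1
        while c[i] * m + tail_cost <= remaining:
            stage_rel = 1 - (1 - r[i]) ** m      # reliability of stage i with m copies
            best_val = max(best_val, stage_rel * best(i + 1, remaining - c[i] * m))
            m += 1
        return best_val

    return best(0, budget)

# Hypothetical instance: three devices with reliabilities 0.9, 0.8, 0.5,
# unit costs 30, 15, 20, and a total budget of 105.
print(round(max_reliability([0.9, 0.8, 0.5], [30, 15, 20], 105), 3))  # 0.648
```

The best choice here is one copy of the first device and two copies of each of the other two, giving reliability 0.9 × 0.96 × 0.75 = 0.648.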
Floyd-Warshall Algorithm
The Floyd-Warshall algorithm works based on a property of intermediate vertices of a shortest
path. An intermediate vertex for a path p = <v1, v2, ..., vj> is any vertex other than v1 or vj.
If the vertices of a graph G are indexed by {1, 2, ..., n}, then consider a subset of vertices {1, 2, ...,
k}. Assume p is a minimum weight path from vertex i to vertex j whose intermediate vertices are
drawn from the subset {1, 2, ..., k}. If we consider vertex k on the path then either:
k is not an intermediate vertex of p (i.e. it is not used in the minimum weight path), in which case
all intermediate vertices of p are drawn from {1, 2, ..., k-1}, or
k is an intermediate vertex of p, in which case p splits into a shortest path from i to k and a
shortest path from k to j, each with intermediate vertices drawn from {1, 2, ..., k-1}.
Thus if we define a quantity d(k)ij as the minimum weight of the path from vertex i to vertex j with
intermediate vertices drawn from the set {1, 2, ..., k}, the above properties give the following
recursive solution:
d(0)ij = wij (the weight of edge (i, j))
d(k)ij = min( d(k-1)ij, d(k-1)ik + d(k-1)kj ) for k >= 1
Algorithm FloydWarshall(D, P)
for k in 1 to n do
    for i in 1 to n do
        for j in 1 to n do
            if D[i][j] > D[i][k] + D[k][j] then
                D[i][j] = D[i][k] + D[k][j]
                P[i][j] = P[k][j]
return P
Basically the algorithm works by repeatedly exploring paths between every pair of vertices, using
each vertex as an intermediate vertex. Since Floyd-Warshall is simply three (tight) nested loops,
the run time is clearly O(V^3).
Example
Initialization: (k = 0) the matrix D holds the direct edge weights.
Iteration 2: (k = 2) shorter paths from 4 ↝ 1, 5 ↝ 1, and 5 ↝ 3 are found through vertex 2.
Iteration 4: (k = 4) shorter paths from 1 ↝ 2, 1 ↝ 3, 2 ↝ 3, 3 ↝ 1, 3 ↝ 2, 5 ↝ 1, 5 ↝ 2, 5 ↝ 3,
and 5 ↝ 4 are found through vertex 4.
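The algorithm can be sketched in Python; the distance matrix below is an assumed small example, not the graph from the notes:

```python
INF = float('inf')

def floyd_warshall(dist):
    """In-place Floyd-Warshall on an n x n distance matrix.
    dist[i][j] is the edge weight (INF if no edge, 0 on the diagonal).
    Also builds a predecessor matrix for path reconstruction."""
    n = len(dist)
    pred = [[i if dist[i][j] < INF and i != j else None for j in range(n)]
            for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    pred[i][j] = pred[k][j]   # P[i][j] = P[k][j], as in the pseudocode
    return dist, pred

# Assumed 4-vertex example (vertices numbered 0..3).
D = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
dist, pred = floyd_warshall(D)
print(dist[0][2])  # 5: the shortest 0 -> 2 path goes through vertex 1 (3 + 2)
```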