ADA Unit-4
DYNAMIC PROGRAMMING
Dynamic programming is an algorithm design technique which was
invented by a prominent U. S. mathematician, Richard Bellman, in the
1950s.
To compute the binomial coefficient C(n, k), we fill the table row by row, starting with row 0 and ending with row n.
Each row is filled left to right using the base cases C(i, 0) = 1 and C(i, i) = 1 (the latter for 0 ≤ i ≤ k) and the recurrence C(i, j) = C(i - 1, j - 1) + C(i - 1, j) for i > j > 0.
ALGORITHM Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else C[i, j] ← C[i - 1, j - 1] + C[i - 1, j]
return C[n, k]
Because the first k + 1 rows of the table form a triangle while the remaining n - k rows form a rectangle, we have to split the sum expressing A(n, k), the total number of additions, into two parts:
A(n, k) = Σ(i=1 to k) Σ(j=1 to i-1) 1 + Σ(i=k+1 to n) Σ(j=1 to k) 1
        = Σ(i=1 to k) (i - 1) + Σ(i=k+1 to n) k
        = (k - 1)k/2 + k(n - k) ∈ Θ(nk).
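A short Python sketch of the same table-filling computation (the function name binomial and the list-of-lists table are illustrative choices):

def binomial(n, k):
    """Compute C(n, k) by filling the DP table row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                       # base cases C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(5, 2))   # prints 10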
Warshall’s Algorithm
Warshall’s algorithm is a well-known algorithm for computing the
transitive closure (path matrix) of a directed graph.
Specifically, the element rij(k) in the ith row and jth column of matrix
R(k) (k=0, 1, . . . ,n) is equal to 1 if and only if there exists a directed
path from the ith vertex to the jth vertex with each intermediate
vertex, if any, numbered not higher than k:
vi, [a list of intermediate vertices each numbered not higher than k], vj    (2)
The series starts with R(0) , which does not allow any intermediate
vertices in its path; hence, R(0) is nothing else but the adjacency matrix of
the graph.
R(1) contains the information about paths that can use the first vertex as
intermediate; so, it may contain more ones than R(0) .
In general, each subsequent matrix in series (1) has one more vertex to
use as intermediate for its path than its predecessor.
The last matrix in the series, R(n) , reflects paths that can use all n
vertices of the digraph as intermediate and hence is nothing else but the
digraph’s transitive closure.
We have the following formula for generating the elements of matrix R(k)
from the elements of matrix R(k-1) :
rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1)).    (3)
This formula implies the following rule for generating elements of matrix R(k) from elements of matrix R(k-1): if an element rij is 1 in R(k-1), it remains 1 in R(k); if an element rij is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).
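A minimal Python sketch of Warshall's algorithm under rule (3), assuming the digraph is given as a 0/1 adjacency matrix (the function name warshall and the sample matrix below are illustrative, not the graph from the figure):

def warshall(adjacency):
    """Transitive closure of a digraph given as a 0/1 adjacency matrix."""
    n = len(adjacency)
    R = [row[:] for row in adjacency]          # R(0) is the adjacency matrix
    for k in range(n):                         # now allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # rule (3): rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Illustrative 4-vertex digraph: a->b, b->d, d->a, d->c
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)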
Example: Apply Warshall’s algorithm to find the Transitive closure for the
following graph.
Floyd’s Algorithm
Given a weighted connected graph (undirected or directed), the all-pairs shortest-paths problem asks us to find the distances (the lengths of the shortest paths) from each vertex to all other vertices.
Specifically, the element dij(k) in the ith row and jth column of matrix D(k)
(k=0, 1, . . . , n) is equal to the length of the shortest path among all
paths from the ith vertex to the jth vertex with each intermediate
vertex, if any, numbered not higher than k.
The series starts with D(0) , which does not allow any intermediate
vertices in its path; hence, D(0) is nothing but the weight matrix of the
graph.
The last matrix in the series, D(n) , contains the lengths of the shortest
paths among all paths that can use all n vertices as intermediate and
hence is nothing but the distance matrix being sought.
We can compute all the elements of each matrix D(k) from its immediate predecessor D(k-1) in series (1) using the recurrence
dij(k) = min{dij(k-1), dik(k-1) + dkj(k-1)} for k ≥ 1, with dij(0) = wij.
ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
return D
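A minimal Python sketch of the same algorithm, assuming the graph is given as a weight matrix with float('inf') marking missing edges (the function name floyd and the sample matrix are illustrative, not the graph from the example):

def floyd(W):
    """All-pairs shortest path lengths; W is a weight matrix, inf = no edge."""
    n = len(W)
    D = [row[:] for row in W]                  # D(0) is the weight matrix
    for k in range(n):                         # now allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # dij(k) = min{dij(k-1), dik(k-1) + dkj(k-1)}
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float("inf")
W = [[0, INF, 3, INF],                         # illustrative 4-vertex weight matrix
     [2, 0, INF, INF],
     [INF, 7, 0, 1],
     [6, INF, INF, 0]]
for row in floyd(W):
    print(row)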
Example: Apply Floyd’s algorithm for the following graph to find all-pairs
shortest path.
The Knapsack Problem
Given n items of known weights w1, . . . , wn and values v1, . . . , vn and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack. Let V[i, j] be the value of an optimal solution to the subinstance made of the first i items and capacity j. Our goal is to find V[n, W], the maximal value of a subset of the n given items that fits into the knapsack of capacity W, and an optimal subset itself.
For i, j > 0, to compute the entry in the ith row and the jth column, V[i, j], we take the maximum of the entry in the previous row and the same column, V[i - 1, j], and the sum of vi and the entry in the previous row and wi columns to the left, V[i - 1, j - wi]:
V[i, j] = max{V[i - 1, j], vi + V[i - 1, j - wi]} if j - wi ≥ 0, and V[i, j] = V[i - 1, j] if j - wi < 0.
The table can be filled either row by row or column by column.
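A Python sketch of this table-filling recurrence together with the backtracking step used below to identify the items of an optimal subset; the sample instance (weights 2, 1, 3, 2 and values 12, 10, 20, 15 with W = 5) is an assumed one, chosen because it reproduces the totals quoted in the solution (weight 5, value 37):

def knapsack(weights, values, W):
    """Fill V[i][j] row by row and recover one optimal subset of items."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            wi, vi = weights[i - 1], values[i - 1]
            if j >= wi:
                # max of "skip item i" and "take item i"
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]
    # Backtrack: item i is in an optimal subset iff V[i][j] != V[i-1][j]
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)
            j -= weights[i - 1]
    return V[n][W], sorted(items)

# Assumed instance: prints (37, [1, 2, 4])
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))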
Capacity W = 5
Soln:
V[2, 0] = 0 since j = 0
V[3, 0] = 0 since j = 0
Since V [4, 5] is not equal to V [3, 5], item 4 was included in an optimal
solution along with an optimal subset for filling 5 - 2 = 3 remaining units
of the knapsack capacity.
Total weight is 5
Total value is 37
The time efficiency and space efficiency of this algorithm are both in
θ(nW).
The time needed to find the composition of an optimal solution is in
O(n + W).
PRINCIPLE OF OPTIMALITY
The principle of optimality states that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances. Dynamic programming rests on this principle; it holds for most, though not all, optimization problems.
Greedy Technique
The greedy technique constructs a solution through a sequence of steps, each expanding a partially constructed solution, until a complete solution is reached. On each step the choice made must be feasible, locally optimal, and irrevocable.
Spanning Tree
A spanning tree of an undirected connected graph is its connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph. A minimum spanning tree of a weighted connected graph is a spanning tree of the smallest total edge weight.
Prim’s Algorithm
Prim's algorithm constructs a minimum spanning tree through a sequence of expanding subtrees. The initial subtree consists of a single vertex selected arbitrarily; on each iteration the current tree is expanded by attaching to it the nearest vertex not in the tree, i.e., the one connected to a tree vertex by an edge of the smallest weight. The algorithm stops after all the graph's vertices have been included in the tree being constructed.
Since the algorithm expands a tree by exactly one vertex on each of its
iterations, the total number of such iterations is n-1, where n is the
number of vertices in the graph.
Vertices that are not adjacent to any of the tree vertices can be given the
∞ label and a null label for the name of the nearest tree vertex.
We can split the vertices that are not in the tree into two sets, the
“fringe” and the “unseen”.
The fringe contains only the vertices that are not in the tree but
are adjacent to at least one tree vertex. These are the candidates
from which the next tree vertex is selected.
Prim’s Algorithm:
Example: Apply Prim’s algorithm to find the minimum spanning tree for the
following graph.
Soln:
Kruskal’s Algorithm
Kruskal’s algorithm looks at a minimum spanning tree of a weighted connected graph as an acyclic subgraph with |V| - 1 edges of the smallest total weight. It sorts the edges in nondecreasing order of their weights and adds them one at a time, skipping any edge that would create a cycle with the edges already selected.
Kruskal’s Algorithm:
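A Python sketch of Kruskal's algorithm, assuming the edges are given as (weight, u, v) triples and using a simple union-find forest for the cycle check (the names kruskal, find, and parent are illustrative choices):

def kruskal(vertices, edges):
    """edges: list of (weight, u, v); returns a minimum spanning tree edge list."""
    parent = {v: v for v in vertices}          # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):              # scan edges in nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                           # accept the edge only if it creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst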
Example: Apply Kruskal’s algorithm to find the minimum spanning tree for the following graph.
Soln:
Dijkstra’s Algorithm
Assumptions: all edge weights are nonnegative (Dijkstra’s algorithm does not work correctly on graphs with negative edge weights).
First, Dijkstra’s algorithm finds the shortest path from the source to a
vertex nearest to it, then to a second nearest, and so on.
Before its ith iteration commences, the algorithm has already identified
the shortest paths to i - 1 other vertices nearest to the source.
These vertices, the source, and the edges of the shortest paths
leading to them from the source form a subtree Ti of the given
graph.
To identify the ith nearest vertex, the algorithm computes, for every fringe
vertex u, the sum of the distance to the nearest tree vertex v (given by the
weight of the edge (v, u)) and the length dv of the shortest path from the
source to v; it then selects the fringe vertex with the smallest such sum.
Dijkstra’s Algorithm:
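A Python sketch of Dijkstra's algorithm, assuming nonnegative edge weights and a graph given as a dictionary of neighbor: weight dictionaries; the priority queue plays the role of the fringe (the function name dijkstra is an illustrative choice):

import heapq

def dijkstra(graph, source):
    """Shortest-path lengths from source to every vertex of the graph."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]                       # fringe ordered by best known path length
    done = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue                           # skip stale fringe entries
        done.add(v)                            # d is the final shortest distance to v
        for u, w in graph[v].items():
            if d + w < dist[u]:                # relax edge (v, u): dv + w(v, u)
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist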