
Analysis and Design of Algorithms – Unit - 4

DYNAMIC PROGRAMMING
 Dynamic programming is an algorithm design technique which was invented by a prominent U.S. mathematician, Richard Bellman, in the 1950s.

 Dynamic programming is a technique for solving problems with overlapping sub-problems.

 Typically, these sub-problems arise from a recurrence relating a solution to a given problem with solutions to its smaller sub-problems of the same type.

 Rather than solving overlapping sub-problems again and again, dynamic programming suggests solving each of the smaller sub-problems only once and recording the results in a table from which we can then obtain a solution to the original problem.

 Dynamic programming usually takes one of two approaches:

 Bottom-up approach: all smaller sub-problems of a given problem are solved.

 Top-down approach: avoids solving unnecessary sub-problems; this combines recursion with a memory function (memoization).

Computing Binomial Coefficient


 Binomial Coefficient, denoted as C(n, k) or (n choose k), is the number of
combinations (subsets) of k elements from an n-element set (0 ≤ k ≤ n).

 Two important properties of Binomial Coefficients:

 C(n, k) = C(n-1, k-1) + C(n-1, k) for n>k>0 -------- (1)

 C(n, 0) = C(n, n) = 1 -------- (2)

Smt. Kavitha M, Assistant professor, Dept. of CSE, SIT, Tumkur-3 Page 1


Analysis and Design of Algorithms – Unit - 4

 The nature of recurrence (1), which expresses the problem of computing C(n, k) in terms of the smaller and overlapping problems of computing C(n-1, k-1) and C(n-1, k), lends itself to solving by the dynamic programming technique.

 The values of the Binomial Coefficients can be recorded in a table of n+1 rows and k+1 columns, numbered from 0 to n and from 0 to k, respectively.

 To compute C(n, k), we fill the table row by row, starting with row 0 and ending with row n.

 Each row i (0 ≤ i ≤ n) is filled left to right, starting with 1 because C(i, 0) = 1.

 Rows 0 through k also end with 1 on the table's main diagonal: C(i, i) = 1 for 0 ≤ i ≤ k.

 The other entries are computed using the formula

C(i, j) = C(i-1, j-1) + C(i-1, j)

i.e., by adding the contents of the cell in the preceding row and previous column to that of the cell in the preceding row and same column.


Example: Compute C(6, 3) using dynamic programming.

C(2, 1) = C(1,0) + C(1,1) = 1+1 = 2 C(5, 1) = C(4,0) + C(4,1) = 1+4 = 5

C(3, 1) = C(2,0) + C(2,1) = 1+2 = 3 C(5, 2) = C(4,1) + C(4,2) = 4+6 = 10

C(3, 2) = C(2,1) + C(2,2) = 2+1 = 3 C(5, 3) = C(4,2) + C(4,3) = 6+4 = 10

C(4, 1) = C(3,0) + C(3,1) = 1+3 = 4 C(6, 1) = C(5,0) + C(5,1) = 1+5 = 6

C(4, 2) = C(3,1) + C(3,2) = 3+3 = 6 C(6, 2) = C(5,1) + C(5,2) = 5+10 = 15

C(4, 3) = C(3,2) + C(3,3) = 3+1 = 4 C(6, 3) = C(5,2) + C(5,3) = 10+10 = 20

ALGORITHM Binomial(n, k)

//Computes C(n, k) by the dynamic programming algorithm

//Input: A pair of nonnegative integer n ≥ k ≥ 0

//Output: The value of C(n, k)

for i ← 0 to n do

for j ← 0 to min(i, k) do

if j = 0 or j = i

C[i, j] ← 1

else C[i, j] ← C[i - 1, j - 1] + C[i - 1, j]

return C[n, k]
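The pseudocode above translates almost line for line into Python. The sketch below (the function name `binomial` is our own) fills the same table row by row:

```python
def binomial(n, k):
    """Compute C(n, k) by filling a DP table row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1  # boundary values: C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(6, 3))  # 20, matching the worked example above
```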


Time Efficiency of Binomial Coefficient Algorithm

 The basic operation for this algorithm is addition.

 Let A(n, k) be the total number of additions made by the algorithm in computing C(n, k).

 Computing each entry by the formula C(i, j) = C(i-1, j-1) + C(i-1, j) requires just one addition.

 Because the first k + 1 rows of the table form a triangle while the remaining n - k rows form a rectangle, we split the sum expressing A(n, k) into two parts:

A(n, k) = Σ(i=1 to k) Σ(j=1 to i-1) 1 + Σ(i=k+1 to n) Σ(j=1 to k) 1
        = Σ(i=1 to k) (i - 1) + (n - k)k
        = (k - 1)k/2 + (n - k)k ∈ Θ(nk)

Warshall’s Algorithm
 Warshall’s algorithm is a well-known algorithm for computing the
transitive closure (path matrix) of a directed graph.

 It is named after S. Warshall.

 Definition of a transitive closure: The transitive closure of a directed graph with n vertices can be defined as the n-by-n boolean matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.


Example:

 Warshall’s algorithm constructs the transitive closure of a given digraph with n vertices through a series of n-by-n boolean matrices:

R(0), . . . , R(k-1), R(k), . . . , R(n). ----------- (1)

 Each of these matrices provides certain information about directed paths in the digraph.

 Specifically, the element rij(k) in the ith row and jth column of matrix R(k) (k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k:

vi, a list of intermediate vertices each numbered not higher than k, vj ----------- (2)

 The series starts with R(0), which does not allow any intermediate vertices in its paths; hence, R(0) is nothing else but the adjacency matrix of the digraph.

 R(1) contains the information about paths that can use the first vertex as intermediate; so it may contain more 1's than R(0).

 In general, each subsequent matrix in series (1) has one more vertex to use as intermediate for its paths than its predecessor.


 The last matrix in the series, R(n) , reflects paths that can use all n
vertices of the digraph as intermediate and hence is nothing else but the
digraph’s transitive closure.

 We have the following formula for generating the elements of matrix R(k)
from the elements of matrix R(k-1) :

rij (k) = rij (k-1) or (rik (k-1) and rkj (k-1) ). ------------------ (3)

 This formula implies the following rule for generating elements of matrix
R(k) from elements of matrix R(k-1) :

 If an element rij is 1 in R(k-1), it remains 1 in R(k).

 If an element rij is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).

Example: Apply Warshall’s algorithm to find the Transitive closure for the
following graph.


ALGORITHM Warshall(A[1...n, 1…n])

//Implements Warshall’s algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
R(k)[i, j] ← R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])
return R(n)

Note: The time efficiency of Warshall’s algorithm is Θ(n3)
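As a minimal sketch in Python (the function name `warshall` is ours), the matrix can even be updated in place, since row k and column k do not change on the kth iteration:

```python
def warshall(A):
    """Transitive closure of a digraph from its boolean adjacency matrix A."""
    n = len(A)
    R = [row[:] for row in A]  # R(0) is the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Formula (3): rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Hypothetical 4-vertex digraph with edges 1->2, 2->4, 4->1, 4->3
# (our own instance, not the one in the figure)
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall(A))  # vertex 3 has no outgoing edges, so its row stays all 0
```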

Floyd’s Algorithm
 Given a weighted connected graph (undirected or directed), the all-pairs shortest-paths problem asks us to find the distances (lengths of the shortest paths) from each vertex to all other vertices.

 The Distance matrix D is an n-by-n matrix in which the lengths of the shortest paths are recorded; the element dij in the ith row and the jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex (1 ≤ i, j ≤ n).


 Floyd’s algorithm is a well-known algorithm for the all-pairs shortest-paths problem.

 Floyd’s algorithm is named after its inventor R. Floyd.

 It is applicable to both undirected and directed weighted graphs provided that they do not contain a cycle of negative length.

 Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a series of n-by-n matrices:

D(0), . . . , D(k-1), D(k), . . . , D(n). ----------- (1)

 Specifically, the element dij(k) in the ith row and jth column of matrix D(k)
(k=0, 1, . . . , n) is equal to the length of the shortest path among all
paths from the ith vertex to the jth vertex with each intermediate
vertex, if any, numbered not higher than k.

 The series starts with D(0), which does not allow any intermediate vertices in its paths; hence, D(0) is nothing but the weight matrix of the graph.

 The last matrix in the series, D(n) , contains the lengths of the shortest
paths among all paths that can use all n vertices as intermediate and
hence is nothing but the distance matrix being sought.

 We can compute all the elements of each matrix D(k) from its immediate predecessor D(k-1) in series (1).

 The lengths of the shortest paths are obtained from the following recurrence:

dij(k) = min { dij(k-1), dik(k-1) + dkj(k-1) } for k ≥ 1, dij(0) = wij


ALGORITHM Floyd(W[1...n, 1…n])

//Implements Floyd’s algorithm for the all-pairs shortest-

//paths problem

//Input: The weight matrix W of a graph with no negative

//length cycle

//Output: The distance matrix of the shortest paths’ lengths

D ← W // is not necessary if W can be overwritten

for k ← 1 to n do

for i ← 1 to n do

for j ← 1 to n do

D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }

return D

Note: The time efficiency of Floyd’s algorithm is cubic i.e., Θ(n3)
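A Python sketch of the algorithm (function name `floyd` is ours; missing edges are marked with infinity, and the in-place update is safe because D[i][k] and D[k][j] do not change on the kth iteration when there are no negative cycles):

```python
INF = float('inf')

def floyd(W):
    """Distance matrix of all-pairs shortest paths; INF marks a missing edge."""
    n = len(W)
    D = [row[:] for row in W]  # D(0) is the weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# Hypothetical 4-vertex weighted digraph (our own instance, not the figure's)
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
print(floyd(W))
```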

Example: Apply Floyd’s algorithm for the following graph to find all-pairs
shortest path.


The Knapsack Problem

 Given a knapsack with maximum capacity W, and n items.

 Each item i has some weight wi and value vi (all wi, vi and W are positive integer values).

 Problem: How to pack the knapsack to achieve the maximum total value of packed items? i.e., find the most valuable subset of the items that fit into the knapsack. The problem is called the "0-1 Knapsack problem" because each item must be entirely accepted or rejected. Another version of this problem is the "Fractional Knapsack problem", where we can take fractions of items.

 To design a dynamic programming algorithm, we need to have a recurrence relation that expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller sub-instances.

 The following recurrence is used for the Knapsack problem:

V[i, j] = max{ V[i-1, j], vi + V[i-1, j-wi] }  if j - wi ≥ 0
V[i, j] = V[i-1, j]                            if j - wi < 0

with initial conditions V[0, j] = 0 for j ≥ 0 and V[i, 0] = 0 for i ≥ 0.

 Our goal is to find V[n, W], the maximal value of a subset of the n given items that fit into the knapsack of capacity W, and an optimal subset itself.


 Below figure illustrates the values involved in recurrence equations:

 For i, j > 0, to compute the entry in the ith row and the jth column, V[i, j],
we compute the maximum of the entry in the previous row and the
same column and the sum of vi and the entry in the previous row
and wi columns to the left. The table can be filled either row by row or
column by column.

i.e., V[i, j] = max{V[i-1, j], vi + V[i-1, j-wi]}

 Example: Apply the dynamic programming algorithm to the following instance of the Knapsack problem. Also find the optimal subset and the optimal maximum value.

Capacity W = 5

item 1: w1 = 2, v1 = $12
item 2: w2 = 1, v2 = $10
item 3: w3 = 3, v3 = $20
item 4: w4 = 2, v4 = $15


Soln:

Entries for Row 0:

V[0, 0]= 0 since i and j values are 0

V[0, 1]=V[0, 2]=V[0, 3]=V[0,4]=V[0, 5]=0 Since i=0

Entries for Row 1:

V[1, 0] = 0 since j=0

V[1, 1] = V[0, 1] = 0 (Here, V[i, j]= V[i-1, j] since j-wi < 0)

V[1, 2] = max{V[0, 2], 12 + V[0, 0]} = max(0, 12) = 12

V[1, 3] = max{V[0, 3], 12 +V[0, 1]} = max(0, 12) = 12

V[1, 4] = max{V[0, 4], 12 + V[0, 2]} = max(0, 12) = 12

V[1, 5] = max{V[0, 5], 12 + V[0, 3]} = max(0, 12) = 12

Entries for Row 2:

V[2, 0] = 0 since j = 0

V[2, 1] = max{V[1, 1], 10 + V[1, 0]} = max(0, 10) = 10

V[2, 2] = max{V[1, 2], 10 + V[1, 1]} = max(12, 10) = 12

V[2, 3] = max{V[1, 3], 10 +V[1, 2]} = max(12, 22) = 22

V[2, 4] = max{V[1, 4], 10 + V[1, 3]} = max(12, 22) = 22

V[2, 5] = max{V[1, 5], 10 + V[1, 4]} = max(12, 22) = 22


Entries for Row 3:

V[3, 0] = 0 since j = 0

V[3, 1] = V[2, 1] = 10 (Here, V[i, j]= V[i-1, j] since j-wi < 0)

V[3, 2] = V[2, 2] = 12 (Here, V[i, j]= V[i-1, j] since j-wi < 0)

V[3, 3] = max{V[2, 3], 20 +V[2, 0]} = max(22, 20) = 22

V[3, 4] = max{V[2, 4], 20 + V[2, 1]} = max(22, 30) = 30

V[3, 5] = max{V[2, 5], 20 + V[2, 2]} = max(22, 32) = 32

Entries for Row 4:


V[4, 0] = 0 since j = 0

V[4, 1] = V[3, 1] = 10 (Here, V[i, j]= V[i-1, j] since j-wi < 0)

V[4, 2] = max{V[3, 2], 15 + V[3, 0]} = max(12, 15) = 15

V[4, 3] = max{V[3, 3], 15 +V[3, 1]} = max(22, 25) = 25

V[4, 4] = max{V[3, 4], 15 + V[3, 2]} = max(30, 27) = 30

V[4, 5] = max{V[3, 5], 15 + V[3, 3]} = max(32, 37) = 37

 Thus, the maximal value is V[4, 5] = $37. We can find the composition of an optimal subset by tracing back the computations of this entry in the table.

To find the composition of the optimal subset:


 Since V[4, 5] is not equal to V[3, 5], item 4 was included in an optimal solution along with an optimal subset for filling the 5 - 2 = 3 remaining units of the knapsack capacity.

 The remaining entry to examine is V[3, 3].

 Here V[3, 3] = V[2, 3], so item 3 is not included.

 V[2, 3] ≠ V[1, 3], so item 2 is included.

 The remaining entry to examine is V[1, 2].

 V[1, 2] ≠ V[0, 2], so item 1 is included.

 The solution is {item 1, item 2, item 4}.

 Total weight is 5.

 Total value is $37.


Efficiency of Knapsack Problem:

 The time efficiency and space efficiency of this algorithm are both in
θ(nW).
 The time needed to find the composition of an optimal solution is in
O(n + W).
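The table-filling and the traceback can be sketched together in Python (function name `knapsack` is ours; the item data below repeats the worked example):

```python
def knapsack(weights, values, W):
    """0-1 knapsack by dynamic programming; returns (max value, chosen items)."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:
                V[i][j] = V[i - 1][j]
    # Trace back to recover the composition of an optimal subset.
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:  # item i was included
            items.append(i)
            j -= weights[i - 1]
    return V[n][W], sorted(items)

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # (37, [1, 2, 4])
```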

PRINCIPLE OF OPTIMALITY

 It is a general principle that underlies dynamic programming algorithms for optimization problems.
 Richard Bellman called it the principle of optimality.
 It says that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.


Greedy Technique

 The greedy approach suggests constructing a solution through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached. On each step, the choice made must be:

 feasible, i.e., it has to satisfy the problem's constraints.

 locally optimal, i.e., it has to be the best local choice among all feasible choices available on that step.

 irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm.

Spanning Tree

Definition: A spanning tree of a connected graph is a connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph. A minimum spanning tree of a weighted connected graph is its spanning tree of smallest weight, where the weight of a tree is defined as the sum of the weights of all its edges. The minimum spanning tree problem is the problem of finding a minimum spanning tree for a given weighted connected graph.

Example:


Prim’s Algorithm

 Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding subtrees.

 The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices.

 On each iteration, we expand the current tree in the greedy manner by simply attaching to it the nearest vertex not in that tree.

 The algorithm stops after all the graph's vertices have been included in the tree being constructed.

 Since the algorithm expands a tree by exactly one vertex on each of its iterations, the total number of such iterations is n-1, where n is the number of vertices in the graph.

 The nature of Prim's algorithm makes it necessary to provide each vertex not in the current tree with the information about the shortest edge connecting the vertex to a tree vertex.

 We can provide such information by attaching two labels to a vertex: the name of the nearest tree vertex and the weight (length) of the corresponding edge.

 Vertices that are not adjacent to any of the tree vertices can be given the
∞ label and a null label for the name of the nearest tree vertex.

 We can split the vertices that are not in the tree into two sets, the
“fringe” and the “unseen”.

 The fringe contains only the vertices that are not in the tree but
are adjacent to at least one tree vertex. These are the candidates
from which the next tree vertex is selected.

 The unseen vertices are all other vertices of the graph.
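The steps above can be sketched in Python with a binary heap holding the fringe edges (a minimal illustration under our own assumptions; the graph in the usage example is a hypothetical instance, not the one in the figure below):

```python
import heapq

def prim(graph, start):
    """Minimum spanning tree by Prim's algorithm with a priority queue.
    graph: {vertex: [(weight, neighbour), ...]} adjacency lists (undirected)."""
    in_tree = {start}
    fringe = [(w, start, v) for w, v in graph[start]]  # candidate edges
    heapq.heapify(fringe)
    mst_edges, total = [], 0
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)  # nearest fringe vertex
        if v in in_tree:
            continue  # stale entry; v was attached via a cheaper edge
        in_tree.add(v)
        mst_edges.append((u, v, w))
        total += w
        for w2, x in graph[v]:
            if x not in in_tree:
                heapq.heappush(fringe, (w2, v, x))
    return total, mst_edges

# Hypothetical weighted graph (adjacency lists; our own instance)
g = {'a': [(3, 'b'), (6, 'e'), (5, 'f')],
     'b': [(3, 'a'), (1, 'c'), (4, 'f')],
     'c': [(1, 'b'), (6, 'd'), (4, 'f')],
     'd': [(6, 'c'), (8, 'e'), (5, 'f')],
     'e': [(6, 'a'), (8, 'd'), (2, 'f')],
     'f': [(5, 'a'), (4, 'b'), (4, 'c'), (5, 'd'), (2, 'e')]}
print(prim(g, 'a'))  # total weight 15
```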


Prim’s Algorithm:


Example: Apply Prim’s algorithm to find the minimum spanning tree for the following graph.

Soln:


Analysis of Prim’s Algorithm

 Efficiency of Prim’s algorithm depends on the data structures chosen for the graph.

 If a graph is represented by its weight matrix, the algorithm's running time will be in θ(|V|2).

 If a graph is represented by its adjacency linked lists, the running time of the algorithm is in O(|E| log |V|).

Kruskal’s Algorithm

 This is another greedy algorithm for the minimum spanning tree problem that also always yields an optimal solution.

 It is named Kruskal's algorithm, after Joseph Kruskal, who discovered it.

 Kruskal's algorithm looks at a minimum spanning tree for a weighted connected graph G = (V, E) as an acyclic subgraph with |V| - 1 edges for which the sum of the edge weights is the smallest.

 The algorithm constructs a minimum spanning tree as an expanding sequence of subgraphs, which are always acyclic but are not necessarily connected at the intermediate stages of the algorithm.


 The algorithm begins by sorting the graph's edges in nondecreasing order of their weights.

 Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.

Kruskal’s Algorithm:

Analysis of Kruskal’s Algorithm:

With an efficient sorting algorithm, the time efficiency of Kruskal's algorithm is in O(|E| log |E|).
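A Python sketch of the scan described above (function name `kruskal` is ours; a simple union-find structure detects whether adding an edge would create a cycle, and the sort dominates the running time):

```python
def kruskal(n, edges):
    """Minimum spanning tree by Kruskal's algorithm with union-find.
    edges: list of (weight, u, v) with vertices numbered 0..n-1."""
    parent = list(range(n))

    def find(x):  # root of x's component, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):  # scan edges in nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:  # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst

# Hypothetical 6-vertex weighted graph (our own instance)
edges = [(3, 0, 1), (6, 0, 4), (5, 0, 5), (1, 1, 2), (4, 1, 5),
         (6, 2, 3), (4, 2, 5), (8, 3, 4), (5, 3, 5), (2, 4, 5)]
print(kruskal(6, edges))  # total weight 15
```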


Example: Apply Kruskal’s algorithm to find the minimum spanning tree for the following graph.

Soln:


Dijkstra’s Algorithm

 Single-source shortest-paths problem: For a given vertex called the source in a weighted connected graph, find the shortest paths to all its other vertices.

 The best-known algorithm for the single-source shortest-paths problem is Dijkstra's algorithm.

 Edsger W. Dijkstra discovered this algorithm in the mid-1950s.

 Assumptions:

 the graph is connected

 the edges are undirected (or directed)

 the edge weights are nonnegative

 First, Dijkstra's algorithm finds the shortest path from the source to the vertex nearest to it, then to a second nearest, and so on.

 Before its ith iteration commences, the algorithm has already identified
the shortest paths to i - 1 other vertices nearest to the source.

 These vertices, the source, and the edges of the shortest paths
leading to them from the source form a subtree Ti of the given
graph.

 The set of vertices adjacent to the vertices in Ti can be referred to as


“fringe vertices”; they are the candidates from which Dijkstra’s algorithm
selects the next vertex nearest to the source.


 To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u, the sum of the distance to the nearest tree vertex v (given by the weight of the edge (v, u)) and the length dv of the shortest path from the source to v. It then selects the vertex with the smallest such sum.

Dijkstra’s Algorithm:


Example: Apply Dijkstra’s Algorithm to identify the single-source shortest paths for the following graph.

Soln:

The shortest paths and their lengths are:

from a to b: a–b of length 3
from a to d: a–b–d of length 5
from a to c: a–b–c of length 7
from a to e: a–b–d–e of length 9


Analysis of Dijkstra’s Algorithm

 The time efficiency of Dijkstra’s algorithm depends on the data structures used for representing an input graph.

 It is in θ(|V|2) for graphs represented by their weight matrix.

 For graphs represented by their adjacency linked lists, it is in O(|E| log |V|).
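A heap-based Python sketch of the algorithm (function name `dijkstra` is ours). The graph in the usage example is chosen to reproduce the answers of the worked example above: edges a–b 3, b–d 2, b–c 4 and d–e 4 are implied by the listed paths, while a–d 7 and c–e 6 are our own assumptions:

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from source; graph: {v: [(weight, neighbour), ...]},
    all edge weights nonnegative."""
    dist = {source: 0}
    pq = [(0, source)]  # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale entry for a vertex already finalized
        for w, v in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

g = {'a': [(3, 'b'), (7, 'd')],
     'b': [(3, 'a'), (4, 'c'), (2, 'd')],
     'c': [(4, 'b'), (6, 'e')],
     'd': [(7, 'a'), (2, 'b'), (4, 'e')],
     'e': [(6, 'c'), (4, 'd')]}
print(dijkstra(g, 'a'))
```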

************* END OF UNIT - 4 *************

