DAA Unit-3 PPT - Part-1

Unit-3

Greedy Method

Introduction
•A greedy algorithm is the most straightforward design
technique.
•Most of these problems have n inputs and require us to
obtain a subset that satisfies some constraints. Any subset
that satisfies these constraints is called a feasible solution.
•We need to find a feasible solution that either maximizes
or minimizes a given objective function. A feasible solution
that does this is called an optimal solution.
The Greedy Method
• Greedy algorithms make good local choices in the hope that they
result in an optimal solution.
• They always produce feasible solutions, but not necessarily an
optimal one.
• A proof is needed to show that the algorithm finds an optimal
solution.
The Greedy Method Strategy

Algorithm: Greedy method control abstraction for the subset
paradigm
A real-world example is loading containers onto a ship.
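
The pseudocode for this control abstraction appears on the slide only as an image. Below is a minimal Python sketch of the subset-paradigm skeleton, assuming problem-specific hooks select, feasible, and combine (hypothetical names standing in for the textbook's Select, Feasible, and Union), followed by the container-loading example plugged into it.

    def greedy(a, select, feasible, combine):
        """Greedy control abstraction for the subset paradigm."""
        solution = []                      # start with an empty solution
        candidates = list(a)               # inputs not yet considered
        while candidates:
            x = select(candidates)         # greedy choice: pick the best-looking input
            candidates.remove(x)
            if feasible(solution, x):      # keep x only if the constraints still hold
                solution = combine(solution, x)
        return solution

    # Container loading: maximize the number of containers within the ship's capacity.
    weights = [4, 7, 2, 6, 3]
    capacity = 10
    loaded = greedy(
        weights,
        select=min,                                        # lightest container first
        feasible=lambda sol, x: sum(sol) + x <= capacity,  # respect the capacity
        combine=lambda sol, x: sol + [x],
    )
    print(loaded)  # [2, 3, 4] -> three containers loaded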
Knapsack Problem

Solution 1: objects selected at random.
Solution 2: objects considered in non-increasing order of profit.
Solution 3: objects considered in non-decreasing order of weight.
Solution 4: objects considered in non-increasing order of the profit-to-weight ratio (pi/wi), that is, p1/w1 ≥ p2/w2 ≥ ... ≥ pn/wn;
with this ordering, greedy knapsack generates an optimal solution.
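
As a concrete illustration (an assumed instance, the classic one from Horowitz and Sahni; the slide's own worked example is not in the extracted text): n = 3, m = 20, p = (25, 24, 15), w = (18, 15, 10).
Solution 2 (profit order) takes all of object 1 and 2/15 of object 2: profit = 25 + 24·(2/15) = 28.2.
Solution 3 (weight order) takes all of object 3 and 2/3 of object 2: profit = 15 + 24·(2/3) = 31.
Solution 4 (ratio order: p2/w2 = 1.6 ≥ p3/w3 = 1.5 ≥ p1/w1 ≈ 1.39) takes all of object 2 and 1/2 of object 3: profit = 24 + 15·(1/2) = 31.5, which is optimal.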

Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the n objects,
// ordered such that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the knapsack size and x[1:n] is the solution vector.
{
    for i := 1 to n do
        x[i] := 0.0; // Initialize x.
    U := m; // Remaining knapsack capacity.
    for i := 1 to n do
    {
        if ( w[i] > U ) then break; // Object i no longer fits whole.
        x[i] := 1.0; // Take all of object i.
        U := U - w[i];
    }
    if ( i <= n ) then x[i] := U/w[i]; // Take a fraction of the first object that did not fit.
}

Time Complexity
The dominant step is sorting the items in non-increasing order of their profit/weight ratios, which takes O(n log n) time.

If the items are already arranged in the required order, the for loop takes O(n) time.
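
For reference, here is a runnable Python rendering of GreedyKnapsack (an assumed translation of the pseudocode above, with the ratio sort made explicit), applied to the instance discussed earlier:

    def greedy_knapsack(m, profits, weights):
        """Fractional knapsack: fill in non-increasing profit/weight order."""
        n = len(profits)
        # The dominant O(n log n) step: sort indices by p/w ratio, largest first.
        order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
        x = [0.0] * n            # x[i] = fraction of object i placed in the knapsack
        u = m                    # remaining capacity
        for i in order:
            if weights[i] > u:
                x[i] = u / weights[i]   # object i no longer fits whole: take a fraction
                break
            x[i] = 1.0                  # take all of object i
            u -= weights[i]
        return x

    print(greedy_knapsack(20, [25, 24, 15], [18, 15, 10]))
    # [0.0, 1.0, 0.5] -> profit 24 + 15*(1/2) = 31.5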

JOB SEQUENCING WITH DEADLINES


Time Complexity: O(n²)
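
The algorithm itself appears on the slides only as figures; the sketch below is a standard O(n²) greedy treatment of job sequencing with deadlines (my reconstruction, with a hypothetical 4-job instance): jobs are considered in non-increasing profit order and each job is placed in the latest free slot on or before its deadline.

    def job_sequencing(jobs):
        """jobs: list of (name, deadline, profit). Returns (schedule, total_profit)."""
        jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # highest profit first
        max_d = max(d for _, d, _ in jobs)
        slot = [None] * (max_d + 1)          # slot[t] holds the job run in time slot t
        total = 0
        for name, deadline, profit in jobs:
            for t in range(deadline, 0, -1): # latest free slot on or before the deadline
                if slot[t] is None:
                    slot[t] = name
                    total += profit
                    break
        return [j for j in slot[1:] if j is not None], total

    # A hypothetical 4-job instance: (name, deadline, profit).
    print(job_sequencing([("J1", 2, 100), ("J2", 1, 19),
                          ("J3", 2, 27), ("J4", 1, 25)]))
    # (['J3', 'J1'], 127): J3 runs in slot 1, J1 in slot 2.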

MINIMUM-COST SPANNING TREES

Definition (SPANNING TREE): Let G = (V, E) be an undirected connected graph. A subgraph t = (V, E′) of G is a spanning tree of G iff t is a tree.
MINIMUM-COST SPANNING TREES

There are two algorithms to find a minimum-cost spanning tree:
i). Prim's algorithm
ii). Kruskal's algorithm
Prim’s algorithm.

For the example graph (cost matrix below), the minimum cost is 99.
Minimum Cost Spanning Tree – PRIM’s Algorithm
Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n,1:n] is the cost matrix such that cost[i,j] is
// either a positive real number or ∞ if no edge (i,j) exists; cost[i,j] = 0 if i = j.
// A minimum spanning tree is computed and stored as a set of edges in the array t[1:n-1,1:2].
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1,1] := k; t[1,2] := l;
    for i := 1 to n do // Initialize near[].
        if ( cost[i, k] < cost[i, l] ) then near[i] := k;
        else near[i] := l;
    near[k] := near[l] := 0;
    for i := 2 to n-1 do // Find n-2 additional edges for t.
    {
        Let j be an index such that near[j] ≠ 0 and cost[j, near[j]] is minimum;
        t[i,1] := j; t[i,2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do // Update near[].
            if ( ( near[k] ≠ 0 ) and ( cost[k, near[k]] > cost[k, j] ) ) then
                near[k] := j;
    }
    return mincost;
}

Trace of t[1:n-1, 1:2] and near[1:n] for the example graph (filled in on the slide).

Cost matrix of the given G:

i/j   1    2    3    4    5    6    7
1     0   28    ∞    ∞    ∞   10    ∞
2    28    0   16    ∞    ∞    ∞   14
3     ∞   16    0   12    ∞    ∞    ∞
4     ∞    ∞   12    0   22    ∞   18
5     ∞    ∞    ∞   22    0   25   24
6    10    ∞    ∞    ∞   25    0    ∞
7     ∞   14    ∞   18   24    ∞    0
Time Complexity: depends on the data structure used to represent G:
1. O(n²) if we use an adjacency matrix to read the graph edges, where n is the number of vertices in G.
2. O((n + |E|) log n) if we use an adjacency list to read G.
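
A runnable Python rendering of the adjacency-matrix O(n²) version (an assumed translation of the Prim pseudocode above; INF plays the role of ∞, and vertices 1..7 are mapped to indices 0..6):

    INF = float("inf")

    def prim(cost):
        """cost: n x n symmetric cost matrix; returns (tree_edges, mincost)."""
        n = len(cost)
        # Start from a minimum-cost edge (k, l) of the graph.
        k, l = min(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda e: cost[e[0]][e[1]])
        mincost = cost[k][l]
        t = [(k, l)]
        # near[i]: the tree vertex nearest to non-tree vertex i (None = already in tree).
        near = [k if cost[i][k] < cost[i][l] else l for i in range(n)]
        near[k] = near[l] = None
        for _ in range(n - 2):                         # find n-2 additional edges
            j = min((v for v in range(n) if near[v] is not None),
                    key=lambda v: cost[v][near[v]])
            t.append((j, near[j]))
            mincost += cost[j][near[j]]
            near[j] = None
            for v in range(n):                         # update near[] against new tree vertex j
                if near[v] is not None and cost[v][near[v]] > cost[v][j]:
                    near[v] = j
        return t, mincost

    # Cost matrix of the example graph (vertices 1..7 as indices 0..6).
    C = [[0, 28, INF, INF, INF, 10, INF],
         [28, 0, 16, INF, INF, INF, 14],
         [INF, 16, 0, 12, INF, INF, INF],
         [INF, INF, 12, 0, 22, INF, 18],
         [INF, INF, INF, 22, 0, 25, 24],
         [10, INF, INF, INF, 25, 0, INF],
         [INF, 14, INF, 18, 24, INF, 0]]
    print(prim(C)[1])  # 99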
Kruskal’s algorithm.

Minimum Cost Spanning Tree – KRUSKAL’s Algorithm
Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G. G has n vertices. cost[u,v] is the cost of edge (u,v).
// t is the set of edges in the minimum-cost spanning tree. The final cost is returned.
{
    Construct a heap out of the edge costs using Heapify;
    for i := 1 to n do parent[i] := -1; // Each vertex is in a different set.
    i := 0; mincost := 0.0;
    while ( ( i < n-1 ) and ( heap not empty ) ) do
    {
        Delete a minimum-cost edge (u,v) from the heap and reheapify using Adjust;
        j := Find(u); k := Find(v);
        if ( j ≠ k ) then
        {
            i := i + 1;
            t[i,1] := u; t[i,2] := v;
            mincost := mincost + cost[u,v];
            Union(j, k);
        }
    }
    if ( i ≠ n-1 ) then write("no spanning tree");
    else return mincost;
}

Time Complexity: O(|E| log |E|), where |E| is the number of edges; building the heap takes O(|E|) time and each of the O(|E|) deletions takes O(log |E|) time.
Min heap of edge costs (figure): (1,6):10, (3,4):12, (2,7):14, (2,3):16, (4,7):18, (4,5):22, (5,7):24, (5,6):25, (1,2):28.

Min heap after deleting edge (1,6) (figure): (3,4):12, (2,7):14, (2,3):16, (4,7):18, (4,5):22, (5,7):24, (5,6):25, (1,2):28.

Initially, according to parent[i] = -1, each vertex is in its own set: {1}, {2}, {3}, {4}, {5}, {6}, {7}.
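
A runnable sketch of Kruskal's algorithm (an assumed translation of the pseudocode above; Python's heapq stands in for Heapify/Adjust, and find/union implement the parent[] forest initialized to -1):

    import heapq

    def kruskal(n, edges):
        """n vertices 0..n-1; edges: list of (cost, u, v). Returns (t, mincost)."""
        parent = [-1] * n                      # each vertex starts in its own set

        def find(v):                           # root of the set containing v
            while parent[v] >= 0:
                v = parent[v]
            return v

        heap = list(edges)
        heapq.heapify(heap)                    # min-heap on edge cost
        t, mincost, i = [], 0, 0
        while i < n - 1 and heap:
            cost, u, v = heapq.heappop(heap)   # cheapest remaining edge
            j, k = find(u), find(v)
            if j != k:                         # different trees, so no cycle is formed
                i += 1
                t.append((u, v))
                mincost += cost
                parent[j] = k                  # union the two sets
        if i != n - 1:
            raise ValueError("no spanning tree")
        return t, mincost

    # Edges of the example graph, with vertices 1..7 shifted to 0..6.
    E = [(10, 0, 5), (12, 2, 3), (14, 1, 6), (16, 1, 2), (18, 3, 6),
         (22, 3, 4), (24, 4, 6), (25, 4, 5), (28, 0, 1)]
    print(kruskal(7, E)[1])  # 99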
Another example: a graph and its spanning tree (figure).

Difference between Prim’s Algorithm and Kruskal’s Algorithm

Prim’s Algorithm:
• The tree that we are growing always remains connected.
• Grows a solution from a random vertex by adding the next cheapest vertex (edge) to the existing tree.
• Faster for dense graphs.
• Time complexity: O(n²) if we read the edge costs of the graph using an adjacency matrix; O((n + |E|) log n) if the graph is read as an adjacency list.

Kruskal’s Algorithm:
• The structure that we are growing usually remains disconnected (a forest).
• Grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.
• Faster for sparse graphs.
• Time complexity: O(|E| log |E|).
Single-source shortest paths algorithm (Dijkstra’s Algorithm)

Time complexity: O(n²)
Initially, S[i] = false for every vertex i (no shortest path finalized yet):

i    S[i]
1    f
2    f
3    f
4    f
5    f
6    f

Cost matrix of the given graph:

i/j   1    2    3    4    5    6
1     0   50   45   10    ∞    ∞
2     ∞    0   10   15    ∞    ∞
3     ∞    ∞    0    ∞   30    ∞
4    20    ∞    ∞    0   15    ∞
5     ∞   20   35    ∞    0    ∞
6     ∞    ∞    ∞    ∞    3    0

Initial dist[] (source vertex 1):

i    dist[i]
1    0.0
2    50
3    45
4    10
5    ∞
6    ∞
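
A runnable O(n²) sketch of Dijkstra's algorithm consistent with the trace above (an assumed translation; S[] is the flag array, dist[] the distance array, and source vertex 1 is mapped to index 0):

    INF = float("inf")

    def dijkstra(cost, src):
        n = len(cost)
        S = [False] * n                      # S[i]: shortest path to i finalized
        dist = [cost[src][j] for j in range(n)]
        dist[src] = 0.0
        S[src] = True
        for _ in range(n - 1):
            # Greedy choice: the closest vertex not yet in S.
            u = min((v for v in range(n) if not S[v]), key=lambda v: dist[v])
            S[u] = True
            for w in range(n):               # relax the edges leaving u
                if not S[w] and dist[u] + cost[u][w] < dist[w]:
                    dist[w] = dist[u] + cost[u][w]
        return dist

    C = [[0, 50, 45, 10, INF, INF],
         [INF, 0, 10, 15, INF, INF],
         [INF, INF, 0, INF, 30, INF],
         [20, INF, INF, 0, 15, INF],
         [INF, 20, 35, INF, 0, INF],
         [INF, INF, INF, INF, 3, 0]]
    print(dijkstra(C, 0))  # [0.0, 45, 45, 10, 25, inf]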
Bellman-Ford’s Algorithm vs Dijkstra’s Algorithm

Bellman-Ford’s Algorithm:
1. Works when there are negative-weight edges, but it does not work with a negative-weight cycle; it can only detect one.
2. The result at each vertex contains information only about the vertices it is connected to.
3. Can easily be implemented in a distributed way.
4. More time-consuming than Dijkstra’s algorithm: time complexity O(|V||E|).
5. A dynamic-programming approach is taken to implement the algorithm.

Dijkstra’s Algorithm:
1. Does not work when there is a negative-weight edge.
2. The result requires whole information about the network, not only the vertices it is connected to.
3. Cannot easily be implemented in a distributed way.
4. Less time-consuming: time complexity O(|E| log |V|).
5. A greedy approach is taken to implement the algorithm.
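
For contrast, a short sketch of Bellman-Ford (my addition, not on the slides): it relaxes every edge |V|-1 times, and one extra pass detects the negative-weight cycle mentioned in point 1.

    def bellman_ford(n, edges, src):
        """edges: list of (u, v, w). Returns dist[] or raises on a negative cycle."""
        INF = float("inf")
        dist = [INF] * n
        dist[src] = 0
        for _ in range(n - 1):                 # n-1 rounds of relaxation
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        for u, v, w in edges:                  # one more pass: any improvement
            if dist[u] + w < dist[v]:          # means a negative-weight cycle
                raise ValueError("negative-weight cycle detected")
        return dist

    print(bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -2)], 0))  # [0, 4, 2]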

Dijkstra’s Algorithm is an example of a single-source shortest path (SSSP) algorithm: given a source vertex, it finds the shortest paths from the source to all other vertices.
The Floyd-Warshall Algorithm is an example of an all-pairs shortest path algorithm: it computes the shortest paths between all pairs of nodes.
