DAA.UNIT-2
Divide and Conquer: General Method, Defective chessboard, Binary Search, finding the
maximum and minimum, Merge sort, Quick sort.
The Greedy Method: The general Method, container loading, knapsack problem, Job
sequencing with deadlines, minimum-cost spanning Trees.
------------------------------------------------------------------------------------------------
Divide-and-conquer method: Divide-and-conquer is probably the best-known general algorithm design technique. The principle behind the divide-and-conquer technique is that it is easier to solve several smaller instances of a problem than one large instance.
The “divide-and-conquer” technique solves a problem by dividing it into one or more sub-problems of smaller size, recursively solving each sub-problem, and then “merging” the solutions of the sub-problems to produce a solution to the original problem.
Divide-and-conquer algorithms work according to the following general plan.
1. Divide: Divide the problem into a number of smaller sub-problems, ideally of about the same size.
2. Conquer: Solve the smaller sub-problems, typically recursively. If the sub-problem sizes are small enough, just solve them in a straightforward manner.
3. Combine: If necessary, combine the solutions obtained for the smaller sub-problems to get the solution to the original problem.
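The three steps can be seen in a small example. The following Python sketch (illustrative, not from the notes; the name dc_max is hypothetical) finds the maximum of a list by divide-and-conquer:

```python
def dc_max(a, low, high):
    # Conquer directly when the instance is small enough
    if low == high:
        return a[low]
    # Divide: split the index range into two halves of about the same size
    mid = (low + high) // 2
    left_max = dc_max(a, low, mid)        # solve left sub-problem recursively
    right_max = dc_max(a, mid + 1, high)  # solve right sub-problem recursively
    # Combine: the answer is the larger of the two sub-answers
    return left_max if left_max >= right_max else right_max

print(dc_max([8, 18, 56, 34, 9, 92, 6, 2, 64], 0, 8))  # 92
```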
The following figure shows binary search narrowing the range between Low and High.
[Figure: array with indices 0 to 14, with Low and High pointers marking the current search range]
Average Case: In binary search, the average-case efficiency is close to the worst-case efficiency, so the average-case efficiency is also taken as O(log n).
Efficiency in average case = O(log n).
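The O(log n) behaviour comes from halving the search range at each step. A minimal iterative sketch in Python (illustrative, assuming a sorted 0-indexed array):

```python
def binary_search(a, key):
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current range
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            low = mid + 1            # discard the left half
        else:
            high = mid - 1           # discard the right half
    return -1                        # key not present

a = list(range(15))                  # a 15-element sorted array, indices 0..14
print(binary_search(a, 9))   # 9
print(binary_search(a, 20))  # -1
```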
Quick Sort
Ex 1: Let the given list be: 12 6 18 4 9 8 2 15 (pivot = 12).

List                           i    j
12   6   18   4   9   8   2   15    3    7
12   6   2    4   9   8   18  15    7    6

Since i = 7 and j = 6 (i ≥ j), swap the pivot element and the jth element; we get
8   6   2   4   9   12   18   15
Thus the pivot reaches its final position. The elements to the left of the pivot are smaller than the pivot (12) and the elements to its right are greater than the pivot (12).
8  6  2  4  9  |  12  |  18  15
  Sublist 1              Sublist 2
Now take sub-list 1 and sub-list 2 and apply the above process recursively; at last we get the sorted list.
Ex 2: Let the given list be: 8 18 56 34 9 92 6 2 64 (pivot = 8; a[10] = ∞ acts as a sentinel).

[1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]     i    j
8    18   56   34   9    92   6    2    64      2    8
8    2    56   34   9    92   6    18   64      3    7
8    2    6    34   9    92   56   18   64      4    3

Since i = 4 and j = 3 (i ≥ j), swap the jth element and the pivot element; we get

6  2  |  8  |  34  9  92  56  18  64
Sublist 1             Sublist 2
Now take each sub-list that has more than one element and apply the same process recursively. At last we get the sorted list, that is:
2 6 8 9 18 34 56 64 92
The following algorithm shows the quick sort algorithm-
Algorithm Quicksort(i, j)
{
// sorts the array from a[i] through a[j]
if (i < j) then // if there is more than one element
{
// divide P into two sub-problems
k := Partition(a, i, j+1);
// Here k denotes the position of the partitioning element
// solve the sub-problems
Quicksort(i, k-1);
Quicksort(k+1, j);
// There is no need for combining the solutions
}
}
Algorithm Partition(a, left, right)
{
// The elements from a[left] through a[right] are rearranged in such a manner that if
// initially pivot = a[left], then after completion a[j] = pivot, and j is returned. Here j is
// the position where the pivot partitions the list into two partitions. Note that a[right] = ∞.
pivot := a[left];
i := left; j := right;
repeat
{
repeat
i := i + 1;
until (a[i] ≥ pivot);
repeat
j := j - 1;
until (a[j] < pivot);
if (i < j) then
Swap(a, i, j);
} until (i ≥ j);
a[left] := a[j];
a[j] := pivot;
return j;
}
Algorithm Swap(a, i, j)
{
// Exchange a[i] with a[j]
temp := a[i];
a[i] := a[j];
a[j] := temp;
}
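The pseudocode above can be expressed as runnable Python. This is a sketch using 0-based indexing and an explicit bound check in place of the a[right] = ∞ sentinel:

```python
def partition(a, left, right):
    pivot = a[left]
    i, j = left, right + 1
    while True:
        i += 1
        while i <= right and a[i] < pivot:  # scan right for a[i] >= pivot
            i += 1
        j -= 1
        while a[j] > pivot:                 # scan left for a[j] <= pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]             # swap the out-of-place pair
    a[left], a[j] = a[j], a[left]           # move pivot to its final position
    return j

def quicksort(a, i, j):
    if i < j:
        k = partition(a, i, j)              # pivot lands at index k
        quicksort(a, i, k - 1)              # sort left sub-list
        quicksort(a, k + 1, j)              # sort right sub-list

lst = [8, 18, 56, 34, 9, 92, 6, 2, 64]
quicksort(lst, 0, len(lst) - 1)
print(lst)  # [2, 6, 8, 9, 18, 34, 56, 64, 92]
```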
Advantages of Quick-sort: Quick-sort is among the fastest sorting methods in practice. But it is somewhat complex and a little more difficult to implement than other sorting methods.
Efficiency of Quick-sort: The efficiency of Quick-sort depends upon the selection of the pivot element.
Best Case: In the best case, consider the following two assumptions-
1. The pivot we choose will always be swapped into exactly the middle of the list, so the pivot has an equal number of elements both to its left and right.
2. The number of elements in the list is a power of 2, i.e. n = 2^y.
With these assumptions, the number of comparisons satisfies C(n) = 2C(n/2) + n, so the best-case efficiency is O(n log n).
Worst Case: In the worst case, assume that the pivot partitions the list into two parts such that one of the partitions has no elements while the other has all the remaining elements.
Fig: 2.1. Worst case Analysis
Total number of comparisons will be-
(n-1) + (n-2) + ... + 1 = n(n-1)/2
Thus, the efficiency of quick-sort in the worst case is O(n^2).
Average Case: Let CA(n) be the average number of key comparisons made by quick-sort on a list of size n. Assuming that the partition split can happen at each position k (1 ≤ k ≤ n) with the same probability 1/n, we get the following recurrence relation:
CA(n) = (n + 1) + (1/n) Σ (k = 1 to n) [CA(k-1) + CA(n-k)]
The solution of this recurrence is CA(n) ≈ 2n ln n ≈ 1.38 n log2 n, so the average-case efficiency of quick sort is O(n log n).
Summary: Quick Sort worst case O(n^2); best and average case O(n log n).
Merge Sort:
Merge sort is based on the divide-and-conquer technique. The merge sort method is a two-phase process-
1. Dividing
2. Merging
Dividing Phase: During the dividing phase, each time the given list of elements is divided into two parts. This division process continues until the list is too small to divide further.
Merging Phase: Merging is the process of combining two sorted lists so that the resultant list is also sorted. Suppose A is a sorted list with n1 elements and B is a sorted list with n2 elements. The operation that combines the elements of A and B into a single sorted list C with n = n1 + n2 elements is called merging.
Algorithm-(Divide algorithm)
Algorithm Divide(a, low, high)
{
// a is an array, low is the starting index and high is the end index of a
if (low < high) then
{
mid := (low + high)/2;
Divide(a, low, mid); Divide(a, mid+1, high); // sort each half recursively
Merge(a, low, mid, high); // merge the two sorted halves
}
}
The merging algorithm combines two sorted sub-lists into a single sorted list by repeatedly comparing the front elements of the two sub-lists and moving the smaller one to the output list.
Example list: 500 345 13 256 98 12 3 34 45 78 92
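The two phases can be sketched as runnable Python (illustrative; the notes give only the dividing pseudocode), using the example list above:

```python
def merge(left, right):
    # Combine two sorted lists into one sorted list
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # take the smaller front element
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])              # append whichever half remains
    result.extend(right[j:])
    return result

def merge_sort(a):
    if len(a) <= 1:                      # small enough: already sorted
        return a
    mid = len(a) // 2                    # dividing phase
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))  # merging phase

print(merge_sort([500, 345, 13, 256, 98, 12, 3, 34, 45, 78, 92]))
# [3, 12, 13, 34, 45, 78, 92, 98, 256, 345, 500]
```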
Even if each partition divides the array unevenly, say one side with n/4 elements and the other side with 3n/4 elements, the height of the recursion tree is log(4/3) n, which is still O(log n).
Greedy method: It is the most straightforward design method, and it is popular for solving optimization problems.
Optimization Problem: An optimization problem is the problem of finding the best solution (optimal solution) from all the feasible solutions (practicable or possible solutions). (OR)
A problem which demands or requires either a maximum or a minimum result is called an optimization problem.
A problem which demands maximum result is called Maximization problem. Ex: Greedy
Knapsack problem, Greedy Job Sequencing with deadlines problem.
A problem which demands minimum result is called Minimization problem. Ex: Minimum
Spanning Tree problem, Single source shortest path problem.
In an optimization problem, we are given a set of constraints and an optimization function.
Solutions that satisfy the constraints are called feasible solutions.
A feasible solution for which the optimization function has the best possible value is called an optimal solution.
Example:
Problem: Finding a minimum spanning tree of a weighted connected undirected graph G.
Constraints: Each time a minimum-weight edge is added to the tree, and adding the edge must not form a cycle.
Feasible solutions: The feasible solutions are the spanning trees of the given graph G.
Optimal solution: An optimal solution is a spanning tree with minimum cost i.e.
minimum spanning tree.
Question: Find the minimum spanning tree for the following graph.
Among the spanning trees above, figure 4 gives the optimal solution, because it is the spanning tree with the minimum cost, i.e. it is a minimum spanning tree of the graph G.
Characteristics of the Greedy algorithm:
Suppose that a problem can be solved by a sequence of decisions. The greedy method works by making each decision locally optimal. These locally optimal choices should finally add up to a globally optimal solution. Each decision taken at every step must be feasible, locally optimal and irrevocable.
Feasible: The choice which is made has to satisfy the problem's constraints.
Locally optimal: The choice has to be the best local choice among all feasible choices available at that step.
Irrevocable: The choice once made cannot be changed in subsequent steps of the algorithm (greedy method).
Control Abstraction for Greedy Method (OR) The General Method (OR) The General
Principle of Greedy Method:
Algorithm GreedyMethod(a, n)
{
// a is an array of n inputs
Solution := Ø;
for i := 1 to n do
{
s := Select(a);
if (Feasible(Solution, s)) then
{
Solution := Union(Solution, s);
}
else
Reject(); // if the solution is not feasible, reject it
}
return Solution;
}
In the greedy method, there are three important activities:
1. A selection of a solution from the given input domain is performed, i.e. s := Select(a).
2. The feasibility of the selected solution is checked, i.e. Feasible(Solution, s), and infeasible selections are rejected.
3. From the set of feasible solutions, the particular solution that minimizes or maximizes the given objective function is obtained. Such a solution is called an optimal solution.
Q: A child buys a candy for 42 rupees and gives a 100-rupee note to the cashier. The cashier wishes to return the change using the fewest number of coins. Assume that the cashier has Rs. 1, Rs. 5 and Rs. 10 coins.
Note: This problem can be solved using the greedy method.
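A minimal greedy sketch for this change-making question (illustrative Python): repeatedly give the largest coin that does not exceed the remaining amount.

```python
def make_change(amount, coins=(10, 5, 1)):
    change = []
    for coin in coins:                   # coins in decreasing order
        while amount >= coin:            # greedily take the largest coin
            change.append(coin)
            amount -= coin
    return change

print(make_change(100 - 42))  # change for 58: [10, 10, 10, 10, 10, 5, 1, 1, 1]
```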
Applications of Greedy method: The following are the some applications of greedy
method:
1. Greedy Job Sequencing with deadlines
2. Greedy knapsack Problem
3. Minimum Cost Spanning tree (Prim's, Kruskal's)
4. Single Source Shortest path (Dijkstra’s)
5. Optimal Merge Patterns.
Subset paradigm:
Consider the inputs in an order based on some selection procedure (Use some optimization
measure for selection procedure).
At every stage, examine an input to see whether it leads to an optimal solution.
If the inclusion of input into partial solution yields an infeasible solution, discard the input;
otherwise, add it to the partial solution.
Most of these, however, will result in algorithms that generate sub-optimal solutions.
This version of the greedy technique is called the subset paradigm.
Examples: Greedy Knapsack problem, Job sequencing with deadlines problem, Minimum Spanning tree.
Ordering paradigm:
Some algorithms do not need selection of an optimal subset but make decisions by looking at
the inputs in some order.
Difference between subset paradigm & ordering paradigm: The subset paradigm is one type of greedy method that selects an optimal subset of the inputs, whereas the ordering paradigm is another type of greedy method that does not select a subset but makes decisions by considering the inputs in some order. The subset paradigm works based on an optimal subset, whereas the ordering paradigm works based on an optimal ordering.
SOLUTION: The jobs are first arranged in descending order of their profits; then the job sequence is calculated as shown below.

JOBS       J1   J2   J3   J4   J5   J6   J7
PROFITS    3    5    20   18   1    6    30
DEADLINES  1    3    4    3    2    1    2

After sorting by profit (descending):

JOBS       J7   J3   J4   J6   J2   J1   J5
PROFITS    30   20   18   6    5    3    1
DEADLINES  2    4    3    1    3    1    2
JOB SEQUENCE:

Assigned jobs    Slots filled               Job    Action                Profit
{}               -                          J7     Assign slot [1,2]     30
{J7}             [1,2]                      J3     Assign slot [3,4]     30+20=50
{J7,J3}          [1,2],[3,4]                J4     Assign slot [2,3]     50+18=68
{J7,J4,J3}       [1,2],[2,3],[3,4]          J6     Assign slot [0,1]     68+6=74
{J6,J7,J4,J3}    [0,1],[1,2],[2,3],[3,4]    J2     Cannot fit; reject    74
{J6,J7,J4,J3}    [0,1],[1,2],[2,3],[3,4]    J1     Cannot fit; reject    74
{J6,J7,J4,J3}    [0,1],[1,2],[2,3],[3,4]    J5     Cannot fit; reject    74

The optimal job sequence is {J6, J7, J4, J3} with maximum profit 74.
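The trace above can be automated. A Python sketch of greedy job sequencing (function and variable names are illustrative): sort jobs by profit in descending order and place each job in the latest free slot before its deadline.

```python
def job_sequencing(jobs):
    # jobs: list of (name, profit, deadline)
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)  # by profit, descending
    max_deadline = max(d for _, _, d in jobs)
    slot = [None] * max_deadline         # slot[t] covers the interval [t, t+1]
    total = 0
    for name, profit, deadline in jobs:
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slot[t] is None:          # latest free slot before the deadline
                slot[t] = name
                total += profit
                break                    # job placed; otherwise it is rejected
    return slot, total

jobs = [("J1", 3, 1), ("J2", 5, 3), ("J3", 20, 4), ("J4", 18, 3),
        ("J5", 1, 2), ("J6", 6, 1), ("J7", 30, 2)]
print(job_sequencing(jobs))  # (['J6', 'J7', 'J4', 'J3'], 74)
```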
SOLUTION: (n = 3 objects, knapsack capacity M = 20, profits (25, 24, 15), weights (18, 15, 10))

OBJECTS (i)              I1           I2           I3
PROFITS (Pi)             25           24           15
WEIGHTS (Wi)             18           15           10
Pi/Wi                    1.39         1.6          1.5
OPTIMAL SEQUENCE (Xi)    0            1            1/2
WiXi                     (18*0)= 0    (15*1)= 15   (10*1/2)= 5
PiXi                     (25*0)= 0    (24*1)= 24   (15*1/2)= 7.5

∑WiXi = (0+15+5) = 20 ≤ M(20) AND
(MAXIMUM PROFIT) ∑PiXi = (0+24+7.5) = 31.5
OPTIMAL SEQUENCE Xi = (0, 1, 1/2).
EXAMPLE-2: Find an optimal solution to the knapsack instance with n = 4 objects, knapsack capacity M = 15, profits (10, 5, 7, 11) and weights (3, 4, 3, 5).
SOLUTION:

OBJECTS (i)              I1          I2         I3         I4
PROFITS (Pi)             10          5          7          11
WEIGHTS (Wi)             3           4          3          5
Pi/Wi                    3.33        1.25       2.33       2.20
OPTIMAL SEQUENCE (Xi)    1           1          1          1
WiXi                     (3*1)= 3    (4*1)= 4   (3*1)= 3   (5*1)= 5
PiXi                     (10*1)= 10  (5*1)= 5   (7*1)= 7   (11*1)= 11

∑WiXi = (3+4+3+5) = 15 ≤ M(15) AND
(MAXIMUM PROFIT) ∑PiXi = (10+5+7+11) = 33. Since the total weight equals M, all objects are taken entirely: Xi = (1, 1, 1, 1).
SOLUTION: (n = 7 objects, knapsack capacity M = 15)

OBJECTS (i)      I1         I2           I3         I4        I5        I6         I7
PROFITS (Pi)     10         5            15         7         6         18         3
WEIGHTS (Wi)     2          3            5          7         1         4          1
Pi/Wi            5          1.66         3          1         6         4.5        3
OPTIMAL          1          2/3          1          0         1         1          1
SEQUENCE (Xi)
WiXi             (2*1)=2    (3*2/3)=2    (5*1)=5    (7*0)=0   (1*1)=1   (4*1)=4    (1*1)=1
PiXi             (10*1)=10  (5*2/3)=3.33 (15*1)=15  (7*0)=0   (6*1)=6   (18*1)=18  (3*1)=3

∑WiXi = (2+2+5+0+1+4+1) = 15 ≤ M(15) AND
(MAXIMUM PROFIT) ∑PiXi = (10+3.33+15+0+6+18+3) = 55.33.
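All three worked examples follow the same procedure, which can be sketched in Python (illustrative): take items in decreasing Pi/Wi order, splitting the last item if it does not fit entirely.

```python
def fractional_knapsack(profits, weights, capacity):
    n = len(profits)
    x = [0.0] * n                        # fraction of each item taken
    order = sorted(range(n), key=lambda i: profits[i] / weights[i],
                   reverse=True)         # decreasing profit/weight ratio
    remaining = capacity
    total = 0.0
    for i in order:
        if weights[i] <= remaining:      # item fits entirely
            x[i] = 1.0
            remaining -= weights[i]
            total += profits[i]
        else:                            # take only the fraction that fits
            x[i] = remaining / weights[i]
            total += profits[i] * x[i]
            break
    return x, total

# The n = 7 instance above, with M = 15
x, total = fractional_knapsack([10, 5, 15, 7, 6, 18, 3],
                               [2, 3, 5, 7, 1, 4, 1], 15)
print(x)                # x[1] == 2/3, x[3] == 0.0, all others 1.0
print(round(total, 2))  # 55.33
```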
Problem-01: Construct the minimum spanning tree (MST) for the given graph using Kruskal's Algorithm-
Solution:
Step-01:
Step-02:
Step-03:
Step-04:
Step-05:
Step-06:
Step-07:
Since all the vertices have been included in the MST, we stop.
Weight of the MST= Sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14 = 99 units.
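Since the figure for this problem is not reproduced here, the following Python sketch of Kruskal's algorithm uses a small hypothetical graph whose MST weight also comes to 99; a union-find structure detects cycles.

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v) with vertices 0..n-1
    parent = list(range(n))

    def find(x):                         # find component representative
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):        # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

# Hypothetical 7-vertex graph (not the one from the missing figure)
edges = [(10, 0, 1), (12, 1, 2), (14, 2, 3), (16, 3, 4),
         (22, 4, 5), (25, 5, 6), (28, 1, 5), (30, 0, 6)]
print(kruskal(7, edges)[1])  # 99
```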
Prim’s Algorithm-
Steps for implementing Prim’s Algorithm-
Step-01:
Randomly choose any vertex. Usually we select and start with a vertex that is connected to the edge having the least weight.
Step-02:
Find all the edges that connect the tree to new vertices, then find the least
weight edge among those edges and include it in the existing tree.
If including that edge creates a cycle, then reject that edge and look for the next
least weight edge.
Step-03:
Keep repeating step-02 until all the vertices are included and Minimum Spanning
Tree (MST) is obtained.
Time Complexity:
Worst case time complexity of Prim’s Algorithm
= O(ElogV) using binary heap
= O(E + VlogV) using Fibonacci heap
Explanation
If an adjacency list is used to represent the graph, all the vertices can be traversed in O(V + E) time.
We traverse the vertices of the graph and use a min heap to store the vertices not yet included in the MST.
To get the minimum-weight edge, we use the min heap as a priority queue.
Min heap operations like extracting the minimum element and decreasing a key value take O(log V) time.
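The steps above can be sketched in Python using heapq as the min heap (priority queue); the 4-vertex graph is hypothetical, since the problem's figure is not reproduced here.

```python
import heapq

def prim(adj, start=0):
    # adj: adjacency list {u: [(weight, v), ...]}
    n = len(adj)
    visited = [False] * n
    heap = [(0, start)]                  # (edge weight, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)       # extract the minimum-weight edge
        if visited[u]:
            continue                     # edge would create a cycle: reject
        visited[u] = True
        total += w
        for wv, v in adj[u]:             # edges connecting the tree to new vertices
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

# Hypothetical 4-vertex graph
adj = {0: [(1, 1), (4, 2)],
       1: [(1, 0), (2, 2), (6, 3)],
       2: [(4, 0), (2, 1), (3, 3)],
       3: [(6, 1), (3, 2)]}
print(prim(adj))  # MST weight 1 + 2 + 3 = 6
```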
Problem-01:
Construct the minimum spanning tree (MST) for the given graph using Prim’s Algorithm-
Solution-
Step-01:
Step-02:
Step-03:
Step-04:
Step-05:
Step-06: