
DESIGN AND ANALYSIS OF ALGORITHMS 18CS42

MODULE—2
DIVIDE AND CONQUER
Topics: General method, Binary search, Recurrence equation for divide and conquer, Finding
the maximum and minimum, Merge sort, Quick sort, Strassen's matrix multiplication,
Advantages and Disadvantages of divide and conquer, Decrease and Conquer Approach,
Topological Sort.

Divide and conquer General Method


The divide-and-conquer strategy suggests splitting the input of size n into k
distinct subsets, 1 < k ≤ n, producing k subproblems.
These subproblems must be solved, and then a method must be found to combine the subproblem
solutions (sub-solutions) into a solution to the original problem of size n. If the subproblems are
still relatively large, the divide-and-conquer strategy is reapplied. Because of this reapplication
to subproblems, a divide-and-conquer strategy is naturally expressed as a recursive algorithm.

Figure 2.1: Divide and conquer


Thus, the general plan of divide and conquer strategy can be represented as follows:

KS, KR & KK, Dept. of ISE, RNSIT DAA-18CS42 Page 1


1. DIVIDE: A problem instance is divided into several subproblem instances of the same
problem, ideally of about the same size.
2. RECUR: Solve the subproblems recursively.
3. CONQUER: If necessary, the solutions obtained for the smaller instances are combined to
get a solution to the original instance.
Control abstraction of divide and conquer
“Control abstraction is a procedure whose flow of control is clear but whose primary
operations are specified by other procedures whose precise meanings are left undefined”.
Consider the following algorithm (DAndC) which is invoked for problem p to be solved.
Small(P) is a Boolean-valued function that determines whether the input size is small enough that the
answer can be computed without splitting.
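The control abstraction can be rendered as a generic recursive template. The following Python sketch is illustrative only; the names `small`, `solve_directly`, `divide` and `combine` are stand-ins for the procedures whose precise meanings the abstraction leaves undefined:

```python
def d_and_c(p, small, solve_directly, divide, combine):
    """Generic divide-and-conquer control abstraction.

    small(p)          -> True if p is small enough to solve directly
    solve_directly(p) -> the answer for a small instance (the g(n) work)
    divide(p)         -> a list of k subproblem instances
    combine(sols)     -> the answer built from the sub-solutions (f(n) work)
    """
    if small(p):
        return solve_directly(p)
    solutions = [d_and_c(q, small, solve_directly, divide, combine)
                 for q in divide(p)]
    return combine(solutions)

# Illustrative instantiation: finding the largest element by halving the list.
largest = d_and_c(
    [3, 1, 4, 1, 5, 9, 2, 6],
    small=lambda p: len(p) == 1,
    solve_directly=lambda p: p[0],
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=max,
)
```

Any concrete divide-and-conquer algorithm in this module (binary search, merge sort, quick sort) can be seen as one instantiation of this template.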

If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk, then the computing
time of DAndC is described by the recurrence relation

T(n) = g(n)                              if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)  otherwise

where T(n) is the time for DAndC on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P and combining the solutions to the subproblems.


Divide-and-conquer recurrence
In the most typical case of divide-and-conquer a problem’s instance of size n is divided into two
instances of size n/2.
More generally, an instance of size n can be divided into b instances of size n/b, with a of
them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.)
Assuming that size n is a power of b to simplify our analysis, we get the following recurrence for the
running time T (n):

T (n) = aT (n/b) + f (n)

Where f (n) is a function that accounts for the time spent on dividing an instance of size n into
instances of size n/b and combining their solutions.
For the sum example discussed under general examples below, a = b = 2 and f(n)=1.
Obviously, the order of growth of its solution T (n) depends on the values of the constants a and b and
the order of growth of the function f (n).

General examples of divide and conquer

Example 1 [detecting a counterfeit coin]


Statement: You are given a bag with 16 coins and told that one of them may be counterfeit; a
counterfeit coin is lighter than a genuine one. The task is to determine whether the bag contains a
counterfeit coin. [A machine is available to compare the weights of two sets of coins.]

Solution using divide and conquer strategy


1. Divide the original instance into two or more smaller instances:
the 16 coins are divided into two sets A and B of 8 coins each.
2. To determine whether A or B contains the counterfeit coin, use the machine to compare their
weights. If the two sets have different weights, a counterfeit coin is present (in the lighter set).
3. Take the result from step 2 and generate the answer for the original 16-coin instance.

Example 2: Computing the sum of n numbers a0, a1, ..., an−1. If n > 1, we divide the problem into two
instances:

a0 + ... + an−1 = (a0 + ... + a⌊n/2⌋−1) + (a⌊n/2⌋ + ... + an−1).

We then add the two subgroup sums separately to obtain the solution to the original problem. This
strategy is not particularly efficient for this problem, but many important algorithms do work with this
strategy.

Master Theorem
The efficiency analysis of divide-and-conquer algorithms is simplified by the Master Theorem.
For the recurrence T(n) = aT(n/b) + f(n), if f(n) ∈ Θ(n^d) where d ≥ 0, then

T(n) ∈ Θ(n^d)           if a < b^d
T(n) ∈ Θ(n^d log n)     if a = b^d
T(n) ∈ Θ(n^(log_b a))   if a > b^d

(Analogous results hold for the O and Ω notations as well.)

Example: Computing the sum of n numbers a0, a1, ..., an−1. If n > 1, we divide the problem into two
instances: a0 + ... + an−1 = (a0 + ... + a⌊n/2⌋−1) + (a⌊n/2⌋ + ... + an−1). The recurrence is

T(n) = 2T(n/2) + 1

Thus a = 2, b = 2, and f(n) = 1 ∈ Θ(n^0), so d = 0. Since a > b^d (2 > 2^0 = 1),

T(n) ∈ Θ(n^(log_2 2)) = Θ(n).
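The halving-sum recursion can be sketched directly in Python (a hypothetical rendering using 0-based indices, not from the original notes):

```python
def rec_sum(a, lo, hi):
    """Sum a[lo..hi] by divide and conquer.

    Each call of size n makes two calls of size about n/2 plus one
    addition to combine, giving T(n) = 2T(n/2) + 1, i.e. Theta(n)
    by the Master Theorem (a = 2, b = 2, d = 0).
    """
    if lo == hi:                      # instance of size 1: solved directly
        return a[lo]
    mid = (lo + hi) // 2
    return rec_sum(a, lo, mid) + rec_sum(a, mid + 1, hi)  # one combine step
```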

Binary Search
Let ai (1 ≤ i ≤ n) be a list of elements stored in non-decreasing order. The searching problem is to
determine whether a given element is present in the list. If key x is present, we have to determine
the index j such that aj = x; if x is not in the list, j is set to zero.
Let P = (n, ai, ..., al, x) denote an instance of the binary search problem. Divide and conquer
can be used to solve this problem. Let Small(P) be true if n = 1; in that case S(P) takes the
value i if x = ai, and 0 otherwise.
If P has more than one element, it can be divided into new subproblems as follows:
take an index q within the range [i, l] and compare x with aq. There are three possibilities.

i. If x = aq, the problem is immediately solved (successful search).

ii. If x < aq, key x need only be searched for in the sublist ai, ai+1, ..., aq−1.

iii. If x > aq, key x need only be searched for in the sublist aq+1, aq+2, ..., al.

If q is chosen such that aq is the middle element, i.e., q = ⌊(n+1)/2⌋, then the resulting searching
algorithm is known as the Binary Search algorithm.
Recursive Binary search Algorithm:
Algorithm BinarySearch(a, i, l, x)
// Implements recursive Binary search algorithm
// Input: An array a[i. .. l] sorted in ascending order and a search key x
// Output: If x is present return j such that x=a[j]; else return 0.
{
if (i == l) then
{ if (x == a[i]) then return i;
else
return 0;
}
else
{

mid= ⌊(i+l)/2⌋;
if(x==a[mid]) then return mid;
else if (x<a[mid]) then return BinarySearch(a, i, mid-1, x);
else return BinarySearch(a, mid+1, l, x);
}
}
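For reference, a direct Python transcription of the recursive algorithm might look as follows (a sketch: 0-based indexing is used, so the "not found" value is -1 rather than the 0 of the 1-based pseudocode):

```python
def binary_search(a, lo, hi, x):
    """Return an index j with a[j] == x, or -1 if x is not in a[lo..hi].

    a must be sorted in ascending order.  The empty-range test lo > hi
    also covers the n = 0 case that the pseudocode handles separately.
    """
    if lo > hi:                       # empty range: unsuccessful search
        return -1
    mid = (lo + hi) // 2
    if x == a[mid]:
        return mid
    if x < a[mid]:
        return binary_search(a, lo, mid - 1, x)
    return binary_search(a, mid + 1, hi, x)

a = [-15, -6, 0, 7, 9, 23, 54, 82]
pos = binary_search(a, 0, len(a) - 1, 54)   # the search key of Example 2 below
```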

Non-recursive Binary search Algorithm:


Algorithm BinarySearch(a, n, x)
// Implements non-recursive Binary search algorithm
// Input: An array a[1. .. n] sorted in ascending order and a search key x
// Output: If x is present return j such that x=a[j]; else return 0.
{
low=1; high=n;
while(low≤high) do
{

mid=⌊(low+high)/2⌋;

if(x < a[mid]) then high=mid-1;
else if(x > a[mid]) then low=mid+1;
else return mid;
}
return 0;
}

Testing:-

To fully test binary search, we need not be concerned with the exact values stored in a[1:n]. By varying
x sufficiently, we can observe all possible computation sequences of BinarySearch without taking
different values for a. To test all successful searches, x must take on the n values in a. To test all
unsuccessful searches, x need only take on n + 1 different values. So the complexity of testing
BinarySearch is 2n + 1 for each n.

Analysis:-
We analyze the algorithm by frequency count (key-operation count) and by the space it requires.
Binary search needs space to store the n elements of the array and the variables low,
high, mid and x, i.e., n + 4 locations.
To find the time complexity, note that the comparison count depends on the
specifics of the input, so we need to analyze the best-case, average-case and worst-case efficiencies
separately. We assume only one comparison is needed to determine which of the three
possibilities of the if condition in the algorithm holds. Let us take an array with 14 elements.

Index pos 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Elements -10 -6 -3 -1 0 2 4 9 12 15 18 19 22 30
comparisons 3 4 2 4 3 4 1 4 3 4 2 4 3 4

From the above table we can conclude that no element requires more than 4 comparisons. The average
number of comparisons required is (sum of comparison counts)/(number of elements), i.e., 45/14 ≈ 3.21
comparisons per successful search on average.
There are 15 possible ways in which an unsuccessful search may terminate, depending on the
value of x. If x < a[1], the algorithm requires three comparisons to determine that x is not present. For
all the remaining possibilities the algorithm requires 4 element comparisons. Thus the average number
of comparisons for an unsuccessful search is (3 + 14·4)/15 = 59/15 ≈ 3.93.

To derive a generalized formula, and for a better understanding of the algorithm, consider the
sequence of values for mid that are produced by BinarySearch for all possible values of x. These
possible values can be described using a binary decision tree in which the value in each node is the
value of mid. For n = 14 the tree has root 7; its children are 3 and 11; the next level holds
1, 5, 9 and 13; and the last level holds 2, 4, 6, 8, 10, 12 and 14.

Figure: Decision tree for n = 14


In the above decision tree, each path through the tree represents a sequence of comparisons
in the binary search method. If x is present, then algorithm will end at one of the circular node
(internal nodes) that lists the index into the array where x was found. If x is not present, the
algorithm will terminate at one of the square nodes (external nodes).

Theorem 1:
If n is in the range [2^(k−1), 2^k), then BinarySearch makes at most k element comparisons for a
successful search and either k − 1 or k comparisons for an unsuccessful search. (That is, the time for a
successful search is O(log n) and for an unsuccessful search is Θ(log n).)
Proof:
Consider the binary decision tree that describes the action of BinarySearch on n
elements. All successful searches end at a circular node, whereas all unsuccessful searches end
at a square node. If 2^(k−1) ≤ n < 2^k, then all circular nodes are at levels 1, 2, ..., k, whereas all
square nodes are at levels k and k + 1 (the root node is at level 1). The number of element comparisons
needed to terminate at a circular node on level i is i, whereas the number of element comparisons needed
to terminate at a square node at level i is only i − 1. Hence the proof.
The above proof establishes a worst-case time complexity of O(log n) for a successful
search and Θ(log n) for an unsuccessful search.

Theorem 2: Algorithm BinarySearch(a, n, x) works correctly.

Proof:
Assume all statements work as expected and the comparison operations are carried out
appropriately. Initially low = 1, high = n, n ≥ 0, and a[1] ≤ a[2] ≤ ... ≤ a[n]. If n = 0, the algorithm does
not enter the loop and 0 is returned. Otherwise, within the loop the possible elements to be checked for
equality with x are a[low], a[low+1], ..., a[mid], ..., a[high]. If x = a[mid], the algorithm terminates
successfully. Otherwise the range is narrowed by either increasing low to mid + 1 or decreasing
high to mid − 1.
This narrowing of the range does not affect the outcome of the search. If low becomes greater
than high, then key x is not present and the loop is exited.

Analysis:
The input-size parameter is n; the basic operation is the key comparison.
Worst case: occurs when the array does not contain the key element, or when the search takes the
maximum number of comparisons. Each comparison reduces the remaining range to about half its size:
Cworst(n) = Cworst(⌊n/2⌋) + 1 for n > 1,
Cworst(1) = 1.
Considering n = 2^k: Cworst(2^k) = k + 1 = log2 n + 1.
Therefore Cworst(n) = ⌊log2 n⌋ + 1 ∈ Θ(log n).

The overall time complexity of binary search is:

For a successful search:
Best: Θ(1), Average: Θ(log n), Worst: Θ(log n)

For an unsuccessful search (all the same):
Best, Average, Worst: Θ(log n)

Example 1: Consider the array a below for binary search:
-15, -6, 0, 7, 9, 23, 54, 82 and key = 7.

The initial call for the recursive algorithm BinarySearch(a, i, l, x) defined above is
BinarySearch(a, 1, 8, 7).
Solution:
Index:            1   2   3  4  5   6   7   8
Array a elements: -15 -6  0  7  9  23  54  82

 mid = ⌊(1+8)/2⌋ = ⌊4.5⌋ = 4
 Compare key x with the mid element: x = 7 = a[mid] = a[4] = 7.
 The key matches the middle element of the array, so the algorithm returns 4 as the index position of
the element found in array a: a successful search.
 This is also the best case, as it takes only one comparison.

Example 2:
Consider the array a below for binary search:
-15, -6, 0, 7, 9, 23, 54, 82 and key = 54.
Solution:
Index:            1   2   3  4  5   6   7   8
Array a elements: -15 -6  0  7  9  23  54  82

 Compute mid = ⌊(1+8)/2⌋ = 4.
 Compare the key x with the mid element: a[mid] = a[4] = 7, which is not equal to the key 54.
 Is x < a[mid]? No.
 So x > a[mid], as 54 > 7; call BinarySearch(a, mid+1, l, x), i.e., BinarySearch(a, 5, 8, 54).
The array is divided into two parts and we search only the right subarray:

Index:            5   6   7   8
Array a elements: 9  23  54  82

In the right subarray, again compute the mid element: mid = ⌊(5+8)/2⌋ = 6. Since a[mid] = a[6] = 23 is
less than 54, again recursively call BinarySearch for the right subarray: BinarySearch(a, mid+1, l, x),
i.e., BinarySearch(a, 7, 8, 54).

Index:            7   8
Array a elements: 54  82

Compute mid = ⌊(7+8)/2⌋ = 7.
a[7] = 54 matches the key, so the algorithm returns success with index 7, saying key 54 is found at
position 7 in the array.

Finding the Maximum and minimum


Problem statement: The problem is to find the maximum and minimum items in a set of n
elements.
Algorithm 1: Straight method.

Note: This algorithm takes 2n-2 comparisons.
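The straightforward method scans the array once, comparing each remaining element against both the current maximum and the current minimum. A Python sketch (the function name is illustrative):

```python
def straight_max_min(a):
    """Find (max, min) of a non-empty list with 2(n-1) comparisons:
    each of the n-1 elements after the first is compared against both
    the current maximum and the current minimum."""
    fmax = fmin = a[0]
    for x in a[1:]:
        if x > fmax:          # comparison 1 for this element
            fmax = x
        if x < fmin:          # comparison 2 for this element
            fmin = x
    return fmax, fmin
```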

Divide and conquer
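The divide-and-conquer MaxMin algorithm splits the list into two halves, finds the max and min of each half recursively, and combines the two half-results with two more comparisons. For n a power of 2 this gives the recurrence T(n) = 2T(n/2) + 2 with T(2) = 1 and T(1) = 0, i.e. 3n/2 − 2 comparisons. A Python sketch (0-based indices assumed):

```python
def max_min(a, i, j):
    """Return (max, min) of a[i..j] by divide and conquer."""
    if i == j:                         # one element: no comparisons
        return a[i], a[i]
    if j == i + 1:                     # two elements: one comparison
        return (a[i], a[j]) if a[i] > a[j] else (a[j], a[i])
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid)
    max2, min2 = max_min(a, mid + 1, j)
    # two comparisons to combine the half-solutions
    return (max1 if max1 > max2 else max2,
            min1 if min1 < min2 else min2)
```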


Merge Sort
This problem is one of the best examples of divide and conquer. Given a sequence of n elements

a[1], a[2], ..., a[n], the merge sort algorithm splits it into two sets a[1], ..., a[⌊n/2⌋] and


a[⌊n/2⌋+1], ..., a[n]. Each set is individually sorted, and the resulting sorted sets are merged to get a
single sorted array of n elements.
Procedure:
 Divide: Partition the array into two sublists.
 Conquer: Sort the two sublists recursively.
 Combine: Merge the sorted sublists into one sorted array.

ALGORITHM Mergesort(A[0..n − 1])


//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in non decreasing order
if n > 1
copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
Mergesort(B[0..⌊n/2⌋ − 1])
Mergesort(C[0..⌈n/2⌉ − 1])
Merge(B, C, A) //see below

ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])


//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1] both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
if B[i] ≤ C[j]
A[k] ← B[i]; i ← i + 1
else
A[k] ← C[j]; j ← j + 1
k ← k + 1
if i = p
copy C[j..q − 1] to A[k..p + q − 1]
else copy B[i..p − 1] to A[k..p + q − 1]
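The two routines translate almost line for line into Python. The following sketch mirrors the pseudocode: the halves are copied into B and C and merged back into A:

```python
def merge(b, c, a):
    """Merge sorted lists b and c into a (len(a) == len(b) + len(c))."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    if i == len(b):                   # b exhausted: copy the rest of c
        a[k:] = c[j:]
    else:                             # c exhausted: copy the rest of b
        a[k:] = b[i:]

def mergesort(a):
    """Sort list a in non-decreasing order by recursive mergesort."""
    if len(a) > 1:
        b = a[:len(a) // 2]           # copy A[0..n/2 - 1] to B
        c = a[len(a) // 2:]           # copy A[n/2..n - 1] to C
        mergesort(b)
        mergesort(c)
        merge(b, c, a)

a = [8, 3, 2, 9, 7, 1, 5, 4]          # the example array used below
mergesort(a)
```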


Example: Consider the array of elements 8 3 2 9 7 1 5 4.

The merge sort operation performed on it is depicted below.

Figure: Example of a mergesort operation

Analysis
Assuming for simplicity that n is a power of 2, the recurrence relation for the number of key
comparisons C(n) is
C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0.

Let us analyze C merge(n), the number of key comparisons performed during the merging stage. At
each step, exactly one comparison is made, after which the total number of elements in the two arrays
still needing to be processed is reduced by 1.
In the worst case, neither of the two arrays becomes empty before the other one contains just
one element (e.g., smaller elements may come from the alternating arrays).

Therefore, for the worst case C merge(n) = n − 1, and we have the recurrence

Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.
Hence, according to the Master Theorem,
Cworst(n) ∈ Θ(n log n).
In fact, it is easy to find the exact solution to the worst-case recurrence for n = 2^k:
Cworst(n) = n log2 n − n + 1.
Analysis of Merge Sort using Master theorem:

If the time for the merging operation is proportional to n, then:

T(n) = a              if n = 1 (a is a constant)
T(n) = 2T(n/2) + cn   if n > 1 (c is a constant)

Assume n = 2^k; then

T(n) = 2T(n/2) + cn
= 2[2T(n/4) + cn/2] + cn
= 4T(n/4) + 2cn
= 4[2T(n/8) + cn/4] + 2cn
= 8T(n/8) + cn + 2cn
= 2^3 T(n/2^3) + 3cn
...
= 2^k T(n/2^k) + kcn
= nT(1) + kcn        // n = 2^k, so k = log2 n
T(n) = na + cn log2 n
If 2^k < n ≤ 2^(k+1), then T(n) ≤ T(2^(k+1)).

So, T(n) ∈ O(n log n).

Quick sort

Quick sort is another important sorting algorithm based on the divide-and-conquer
approach. Unlike merge sort, which divides its input elements according to their position in the
array, quick sort divides them according to their value.

A partition is an arrangement of the array's elements so that all the elements to the left of
some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it:

A[0] ... A[s − 1]   A[s]   A[s + 1] ... A[n − 1]
(all ≤ A[s])               (all ≥ A[s])

After a partition is achieved, A[s] will be in its final position in the sorted array, and we can continue
sorting the two sub arrays to the left and to the right of A[s] independently (e.g., by the same
method).

Note: the difference with merge sort: there, the division of the problem into two sub problems is
immediate and the entire work happens in combining their solutions; here, the entire work happens
in the division stage, with no work required to combine the solutions to the sub problems.
Here is pseudo code of quick sort:

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Sub array A[l..r] sorted in non decreasing order
if l < r
s ←Partition(A[l..r]) //s is a split position
Quicksort(A[l..s − 1])
Quicksort(A[s + 1..r]).

Quick sort execution procedure:

 Quicksort is called recursively to sort the subarrays.
 The partitioning (split) position is identified using a partitioning function, which divides the
array into two subarrays.
Partitioning function procedure:
 1. Make the first element the pivot element p. // many other ways of selecting a pivot exist
 2. Set i to the low index l and j to the high index r.
 3. Increment i until A[i] ≥ p; when the condition is met, stop incrementing i.
 4. Decrement j until A[j] ≤ p; when the condition is met, stop decrementing j.
 5. Compare the positions of i and j (three cases may arise: <, >, =):
a) If i < j, the scanning indices have not crossed; we simply exchange A[i] and A[j] and resume
the scans, incrementing i and decrementing j from their current positions.

b) If i > j, the scanning indices have crossed over; we will have partitioned the subarray after
exchanging the pivot with A[j]. Return j as the split position.

c) Finally, if the scanning indices stop while pointing to the same element, i.e., i = j,
the value they are pointing to must be equal to p (why?). Thus we have the
subarray partitioned, with the split position s = i = j.

We can combine the last case with the case of crossed-over indices (i > j) by exchanging the pivot
with A[j] whenever i ≥ j. Here j is the split position: the elements to the left of this position are
smaller in value and the elements to the right are larger in value, and these two parts are taken
separately as the two subarrays to sort.

ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element
// as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r (l<r)
//Output: Partition of A[l..r], with the split position returned as

// this function's value
p←A[l]
i ←l; j ←r + 1
repeat
repeat i ←i + 1 until A[i]≥ p
repeat j ←j − 1 until A[j ]≤ p
swap(A[i], A[j ])
until i ≥ j
swap(A[i], A[j ]) //undo last swap when i ≥ j
swap(A[l], A[j ])
return j
Note that index i can go out of the subarray's bounds in this pseudocode. Rather than checking for
this possibility every time index i is incremented, we can append to array A[0..n − 1] a "sentinel" that
would prevent index i from advancing beyond position n. Note that the more sophisticated methods
of pivot selection mentioned at the end of the section make such a sentinel unnecessary.
An example of sorting an array by quicksort is given in Figure.
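A Python rendering of Hoare partitioning and quicksort is sketched below. Instead of appending a sentinel, this version bounds index i explicitly inside the scan (an implementation choice, not part of the pseudocode):

```python
def hoare_partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the split position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:     # scan right; bound i instead of a sentinel
            i += 1
        j -= 1
        while a[j] > p:               # scan left; stops at the pivot at worst
            j -= 1
        if i >= j:                    # indices crossed (or met): done
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]           # put the pivot in its final place
    return j

def quicksort(a, l, r):
    """Sort the subarray a[l..r] in place."""
    if l < r:
        s = hoare_partition(a, l, r)  # s is the split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

a = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(a, 0, len(a) - 1)
```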


Analysis:
We start our discussion of quick sort's efficiency by noting that the number of key comparisons
made before a partition is achieved is n + 1 if the scanning indices cross over, and n if they coincide.

Best case: If all the splits happen in the middle of the corresponding subarrays, we have the best
case. The number of key comparisons in the best case satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
According to the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving it exactly for n = 2^k yields

Cbest(n) = n log2 n.
Worst case:In the worst case, all the splits will be skewed to the extreme: one of the two subarrays
will be empty, and the size of the other will be just 1 less than the size of the subarray being
partitioned. So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0]
with itself, the algorithm will be left with the strictly increasing array A[1..n − 1] to sort. The total
number of key comparisons made will be equal to

Cworst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).

Average case: Let Cavg(n) be the average number of key comparisons made by quicksort on a
randomly ordered array of size n. A partition can happen in any position s (0 ≤ s ≤ n−1) after
n+1comparisons are made to achieve the partition. After the partition, the left and right subarrays
will have s and n − 1− s elements, respectively. Assuming that the partition split can happen in each
position s with the same probability 1/n, we get the following recurrence relation:
Cavg(n) = (1/n) Σ_{s=0}^{n−1} [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1,
Cavg(0) = 0, Cavg(1) = 0.

Its solution, which is much trickier than the worst- and best-case analyses, turns out to be
Cavg(n) ≈ 2n ln n ≈ 1.39n log2 n.
Thus, on the average, quicksort makes only 39% more comparisons than in the best case. Moreover,
its innermost loop is so efficient that it usually runs faster than mergesort on randomly ordered
arrays of nontrivial sizes. This certainly justifies the name given to the algorithm by its inventor.


Analysis according to the Master Theorem:


 The quicksort algorithm is recursive.
 The size of the input is specified by the number of elements we need to sort.
 The key operation of the algorithm is the comparison.
 The comparison count depends not only on the size of the input but also on the type of
the input, i.e., sorted array, unsorted array, partially sorted array, etc. So we need to
analyze the best-case, worst-case and average-case efficiencies separately.
Best-case Analysis:
The best case of quick sort occurs when the problem's instance is divided into two
equal parts on each recursive call of the algorithm. So the recurrence relation will be

T(n) = 0                       if n = 1
T(n) = T(n/2) + T(n/2) + n     otherwise  // i.e., 2T(n/2) + n

Therefore a = 2, b = 2, f(n) = n = n^1 = n^d, so d = 1.

So, according to Master theorem T(n) is given by

T(n) = Θ(n^d)            if a < b^d
T(n) = Θ(n^d logb n)     if a = b^d
T(n) = Θ(n^(logb a))     if a > b^d

For the quick sort algorithm, a = b^d holds, so

∴ T(n) = Θ(n^d logb n) = Θ(n^1 log2 n) ∈ Θ(n log n)

T(n) ∈ Θ(n log n)

Worst case Analysis:


The worst case of quick sort occurs when, at each invocation of the algorithm, the
current array is partitioned into two subarrays with one of them being empty.

This situation occurs when the input list is arranged in either ascending or descending order.
Ex: 10 | 11 12 13

10 11 | 12 13
10 11 12 | 13
10 11 12 13

The recurrence relation for the above situation is:

T(n) = 0                         if n = 1
T(n) = T(0) + T(n−1) + Cn        otherwise

(T(0): to sort the empty left subarray; T(n−1): to sort the right subarray; Cn: to partition.)

∴ T(n)= T(0)+T(n-1)+Cn

T(n)=T(n-1)+Cn //T(0)=0
By the method of backward substitution
T(n) = T(n-1) + Cn
= T(n-2) + C(n-1) + Cn
= T(n-3) + C(n-2) + C(n-1) + Cn
= T(n-4) + C(n-3) + C(n-2) + C(n-1) + Cn
...
= T(n-i) + C[(n-i+1) + (n-i+2) + ... + (n-2) + (n-1) + n]
...
= T(0) + C[1 + 2 + 3 + 4 + ... + (n-3) + (n-2) + (n-1) + n]   // T(0) = 0
= C[1 + 2 + 3 + 4 + ... + (n-3) + (n-2) + (n-1) + n]
T(n) = C·n(n+1)/2

T(n) ∈ Θ(n²)
Average Case Analysis: The average case of quick sort appears for typical, random, real-world
input, where the given array is neither partitioned into two exactly equal parts as in the best case
nor skewed as in the worst case.


Here the pivot element ends up at an arbitrary position from 1 to n. Let k be the final position of
the pivot element, as shown below:

1, 2, 3, ..., k−1   [k]   k+1, k+2, ..., n

Then T(n) = T(k−1) + T(n−k) + (n+1), where (n+1) is the cost of partitioning.

Averaging over all positions k, the solution is T(n) ≈ 2(n+1) ln(n+1) ≈ 1.39 n log2 n, so

T(n) ∈ Θ(n log n)

Strassen's matrix multiplication


Problem definition: Let A and B be two n×n matrices. The product matrix C = AB is also an n×n
matrix, whose (i, j)th element is formed by taking the elements in the ith row of A and the jth
column of B, multiplying them pairwise and summing:

C(i, j) = Σ_{k=1}^{n} A(i, k) · B(k, j), for all i and j between 1 and n.

To compute C(i, j) using this formula, we need n multiplications. As the matrix C has n² elements,
the time for the resulting matrix multiplication algorithm is Θ(n³).
The divide-and-conquer strategy suggests another way to compute the product of two n×n
matrices. For simplicity we assume that n is a power of two, that is, that there exists a non-negative
integer k such that n = 2^k. In case n is not a power of two, enough rows and columns of zeros
can be added to both A and B so that the resulting dimensions are a power of two. Let A and B
each be partitioned into four square submatrices, each of dimension n/2 × n/2. Then the product
AB can be computed by using the above formula for the product of two matrices, treating the
submatrices as elements. If A and B are partitioned into quadrants A11, A12, A21, A22 and
B11, B12, B21, B22, then

C11 = A11·B11 + A12·B21    C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21    C22 = A21·B12 + A22·B22


To compute AB using the above formula, we need to perform 8 multiplications of n/2 × n/2
matrices and 4 additions of n/2 × n/2 matrices. Since two n/2 × n/2 matrices can be added in time
cn² for a constant c, the computing time T(n) is given by the recurrence

T(n) = b                  if n ≤ 2
T(n) = 8T(n/2) + cn²      if n > 2

where b and c are constants. Solving this recurrence gives T(n) ∈ Θ(n³), so this straightforward
partitioning is no faster than the classical method.


Compared to regular multiplication, Strassen's method uses 7 multiplications and 18 additions or
subtractions. First the seven n/2 × n/2 products P, Q, R, S, T, U and V are computed (7 multiplications,
with 10 matrix additions or subtractions to form their operands); then they are combined into the four
quadrants of C (8 more additions or subtractions). The resulting recurrence T(n) = 7T(n/2) + an² has
solution T(n) ∈ Θ(n^(log2 7)) ≈ Θ(n^2.81).
For examples: refer to class work.
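A Python sketch of Strassen's scheme for n a power of 2, using the product names P..V mentioned above (the base case and the quadrant bookkeeping are illustrative implementation choices):

```python
def add(X, Y):
    """Entrywise sum of two equal-size matrices (lists of lists)."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    """Entrywise difference of two equal-size matrices."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) with 7 recursive
    multiplications instead of 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # split both operands into quadrants
    A11 = [r[:m] for r in A[:m]]; A12 = [r[m:] for r in A[:m]]
    A21 = [r[:m] for r in A[m:]]; A22 = [r[m:] for r in A[m:]]
    B11 = [r[:m] for r in B[:m]]; B12 = [r[m:] for r in B[:m]]
    B21 = [r[:m] for r in B[m:]]; B22 = [r[m:] for r in B[m:]]
    # the seven products (10 additions/subtractions form their operands)
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    # combine with 8 more additions/subtractions (18 in total)
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    return ([c11 + c12 for c11, c12 in zip(C11, C12)] +
            [c21 + c22 for c21, c22 in zip(C21, C22)])
```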

Advantages and disadvantages of divide and conquer

Divide and conquer is a top-down technique for designing algorithms: the problem is divided into
smaller subproblems in the hope that their solutions are easier to find, and the solutions of all the
smaller problems are then combined to get a solution for the original problem.
Advantages:
 The difficult problem is broken down into subproblems, and each subproblem is solved separately
and independently. This makes it easier to obtain solutions to difficult problems.
 The technique facilitates the discovery of new efficient algorithms. Examples: quick sort,
merge sort, etc.
 The subproblems can be executed on parallel processors, so the running time can be reduced.

Disadvantages:
 A large number of sublists are created and need to be processed.
 The algorithms make use of recursive methods, and recursion can be slow and complex.
 There can be difficulties in handling larger input sizes.

DECREASE AND CONQUER

General method
This technique is based on exploiting the relationship between a solution to a given instance of a
problem and a solution to a smaller instance of the same problem. Once such a relationship is
established, it can be exploited either top down (recursively) or bottom up (without recursion).
There are three major variations of decrease-and-conquer:
 Decrease by a constant.
 Decrease by a constant factor.

 Variable size decrease.

1. Decrease by a constant: This technique suggests reducing a problem's instance by the same
constant (for example, one) on each iteration of the algorithm.

Figure: Decrease by a constant

E.g., the exponentiation problem of computing a^n for positive integer exponents:

a^n = a^(n−1) · a

2. Decrease by a constant factor: This technique suggests reducing a problem's instance by the
same constant factor (typically two, i.e., halving) on each iteration of the algorithm.

Figure: Decrease by a constant factor

o E.g., a^n = (a^(n/2))² for even n.
o Efficiency is O(log n).
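The decrease-by-a-constant-factor rule for a^n can be sketched directly; odd exponents cost one extra multiplication (a standard refinement, assumed here rather than taken from the notes):

```python
def power(a, n):
    """Compute a**n (n >= 0) by decrease-by-a-constant-factor.

    Even n: a^n = (a^(n/2))^2          -- one squaring per halving
    Odd n:  a^n = (a^((n-1)/2))^2 * a  -- one extra multiplication
    This takes O(log n) multiplications, versus the n-1 multiplications
    of the decrease-by-one rule a^n = a^(n-1) * a.
    """
    if n == 0:
        return 1
    half = power(a, n // 2)
    return half * half if n % 2 == 0 else half * half * a
```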

Variable size decrease


The size-reduction pattern varies from one iteration of the algorithm to another. Euclid's algorithm for
computing the greatest common divisor provides a good example of such a situation. Recall that this
algorithm is based on the formula

gcd(m, n) = gcd(n, m mod n).

Though the value of the second argument is always smaller on the right-hand side than on the left-hand
side, it decreases neither by a constant nor by a constant factor.
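Euclid's formula can be sketched in Python, here in iterative form:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n).

    The second argument shrinks by a varying amount each iteration
    (variable-size decrease), reaching 0 in O(log min(m, n)) steps.
    """
    while n != 0:
        m, n = n, m % n
    return m
```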

TOPOLOGICAL SORTING

Depth-first search and breadth-first search are the principal traversal algorithms for digraphs
as well, but the structure of the corresponding forests can be more complex than for undirected graphs.
Even the simple example in the figure below exhibits all four types of edges possible in
a DFS forest of a directed graph: tree edges (ab, bc, de); back edges (ba) from vertices to their
ancestors; forward edges (ac) from vertices to their descendants in the tree other than their children;
and cross edges (dc), which are none of the aforementioned types.

A back edge in a DFS forest of a directed graph can connect a vertex to its parent. Whether or not that
is the case, the presence of a back edge indicates that the digraph has a directed cycle.
A directed cycle in a digraph is a sequence of three or more of its vertices that starts and ends
with the same vertex and in which every vertex is connected to its immediate predecessor by an edge
directed from the predecessor to the successor.
If a DFS forest of a digraph has no back edges, the digraph is a dag, an acronym for
directed acyclic graph.
We can list its vertices in such an order that for every edge in the graph, the vertex where the
edge starts is listed before the vertex where the edge ends. This problem is called topological sorting.
There are two methods to implement this algorithm.
1. DFS Method
2. Source Removal Method.
1. Algorithm1: (DFS method)
Procedure:
Perform a DFS traversal and note the order in which vertices become dead-ends (i.e., popped off the
traversal stack). Reversing this order yields a solution to the topological sorting problem, provided, of
course, no back edge has been encountered during the traversal. If a back edge has been encountered,
the digraph is not a dag, and topological sorting of its vertices is impossible.


2. Algorithm 2: (Source removal method)
Procedure:
Repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming edges, and
delete it along with all the edges outgoing from it. The order in which the vertices are deleted yields a
solution to the topological sorting problem.
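The source-removal method can be sketched with an in-degree count and a queue of current sources (this is Kahn's algorithm; the vertex and edge representations here are illustrative):

```python
from collections import deque

def topological_sort(vertices, edges):
    """Source-removal topological sort.

    Repeatedly remove a vertex with no incoming edges (a source) along
    with its outgoing edges; the removal order is a topological order.
    Returns None if the digraph has a directed cycle (no order exists).
    """
    indegree = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    sources = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while sources:
        u = sources.popleft()
        order.append(u)
        for w in adj[u]:              # delete u's outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:      # w has become a source
                sources.append(w)
    return order if len(order) == len(vertices) else None
```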

Note: Refer class work for more examples.

