SOU Lecture Handout ADA Unit-3
1. Introduction
Many useful algorithms are recursive in structure: to solve a given problem, they call
themselves recursively one or more times to deal with closely related subproblems.
The divide-and-conquer approach involves three steps at each level of the recursion:
Divide: Break the problem into several subproblems that are similar to the original
problem but smaller in size.
Conquer: Solve the subproblems recursively. If the subproblem sizes are small enough,
just solve the subproblems in a straightforward manner.
Combine: Combine the solutions of the subproblems into the solution for the original
problem.
Consider an arbitrary problem, and let adhoc be a simple algorithm capable of solving
the problem.
We call adhoc the basic sub-algorithm: it should be efficient on small instances, but its
performance on large instances is of no concern.
The general template for divide-and-conquer algorithms is as follows.
function DC(x)
    if x is sufficiently small or simple then return adhoc(x)
    decompose x into smaller instances x_1, x_2, ..., x_l
    for i ← 1 to l do y_i ← DC(x_i)
    recombine the y_i to obtain a solution y for x
    return y
Let g(n) be the time required by DC on instances of size n, not counting the time needed for
the recursive calls.
The total time t(n) taken by this divide-and-conquer algorithm is given by the recurrence
equation
$$t(n) = l\,t(n/b) + g(n),$$
where l is the number of subinstances and n/b is their size. When g(n) ∈ Θ(n^k), its solution is
$$t(n) \in \begin{cases}\Theta(n^k) & \text{if } l < b^k\\ \Theta(n^k \log n) & \text{if } l = b^k\\ \Theta(n^{\log_b l}) & \text{if } l > b^k\end{cases}$$
This recurrence equation and its solution are applicable to find the time complexity of
every problem that can be solved using the divide & conquer technique.
2. Multiplication of Large Integers
The following example shows how divide and conquer helps in multiplying two large integers.
Consider the multiplication of 981 by 1234. First we pad the shorter operand with a
non-significant zero to make it the same length as the longer one; thus 981 becomes 0981.
Split each operand into halves: 0981 gives w = 09 and x = 81, while 1234 gives y = 12 and
z = 34. Then 981 × 1234 = (10^2 w + x)(10^2 y + z) = 10^4 wy + 10^2 (wz + xy) + xz.
The above procedure still needs four half-size multiplications: wy, wz, xy and xz.
There is no need to compute both wz and xy; all we really need is the sum of the two
terms. Consider the product
(w + x) × (y + z) = wy + (wz + xy) + xz.
After only one multiplication, we obtain the sum of all three terms needed to calculate
the desired product. We therefore compute
p = wy = 09 × 12 = 108
q = xz = 81 × 34 = 2754
r = (w + x) × (y + z) = 90 × 46 = 4140,
and finally
981 × 1234 = 10^4 p + 10^2 (r − p − q) + q = 1080000 + 127800 + 2754 = 1210554.
Thus the product of 981 and 1234 can be reduced to three multiplications of two-figure
numbers (09 × 12, 81 × 34 and 90 × 46) together with a certain number of shifts
(multiplications by powers of 10), additions and subtractions.
It thus seems reasonable to expect that reducing four multiplications to three will enable
us to cut 25% of the computing time required for large multiplications. Applied recursively,
the saving is much greater: the time t(n) to multiply two n-figure numbers this way obeys
t(n) = 3 t(n/2) + g(n), with g(n) ∈ Θ(n).
Solving it gives, with l = 3, b = 2 and k = 1 (so l > b^k),
t(n) ∈ Θ(n^{lg 3}) ⊆ O(n^1.59).
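As an illustration, the scheme above can be rendered as a short Python sketch (our own
illustrative code for what is commonly called Karatsuba multiplication; the handout itself
gives only the numeric example):

def karatsuba(u, v):
    """Multiply non-negative integers with three half-size products."""
    if u < 10 or v < 10:              # small instances: multiply directly (adhoc)
        return u * v
    half = max(len(str(u)), len(str(v))) // 2
    w, x = divmod(u, 10 ** half)      # u = w * 10^half + x
    y, z = divmod(v, 10 ** half)      # v = y * 10^half + z
    p = karatsuba(w, y)               # wy
    q = karatsuba(x, z)               # xz
    r = karatsuba(w + x, y + z)       # (w+x)(y+z) = wy + (wz+xy) + xz
    return p * 10 ** (2 * half) + (r - p - q) * 10 ** half + q

print(karatsuba(981, 1234))           # prints 1210554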
3. Binary Search
Let T[1,…,n] be an array sorted in non-decreasing order; that is, T[i] ≤ T[j] whenever
1 ≤ i ≤ j ≤ n.
Let x be some number. The problem consists of finding x in the array T if it is there.
If x is not in the array, then we want to find the position where it might be inserted.
The basic idea of binary search is to compare the given element x with the middle
element of the array.
We then continue the search in either the lower or the upper half of the array, depending
on the outcome of the comparison, until the required element is located.
Here the divide & conquer technique applies: the total number of elements to be searched
is halved every time.
function binsearch(T[1,…,n], x)
    i ← 1; j ← n
    while i < j do
        k ← (i + j) ÷ 2
        case x < T[k]: j ← k − 1
             x > T[k]: i ← k + 1
             otherwise: i, j ← k    {x = T[k] has been found}
    return i
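For concreteness, here is a minimal Python version of the simpler two-way variant of the
same search (equivalent to the standard library's bisect.bisect_left); the function name and
0-based indexing are our own choices:

def binsearch(T, x):
    """Return the smallest 0-based index i with T[i] >= x in sorted list T;
    this is where x sits if present, or where it could be inserted if not."""
    i, j = 0, len(T)            # search the half-open range [i, j)
    while i < j:
        k = (i + j) // 2
        if x <= T[k]:
            j = k               # x, if present, lies at or before position k
        else:
            i = k + 1           # x lies strictly after position k
    return i

print(binsearch([1, 3, 5, 7, 9], 7))   # prints 3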
Analysis
To analyze the running time of a while loop, we must find a function of the variables
involved whose value decreases each time round the loop.
Here it is d = j − i + 1, the number of elements still under consideration. Initially d = n.
On each trip round the loop, one of three assignments is executed:
I. either j is set to k − 1,
II. or i is set to k + 1,
III. or both i and j are set to k (when x = T[k]).
Let d and d′ stand for the value of j − i + 1 before and after the iteration under
consideration; similarly i, j, i′ and j′.
Case I: suppose j ← k − 1 is executed. Then j′ = ⌊(i + j)/2⌋ − 1 and i′ = i, so
d′ = j′ − i′ + 1
d′ ≤ (i + j)/2 − i
d′ ≤ (j − i)/2 ≤ (j − i + 1)/2
d′ ≤ d/2
Case II: suppose i ← k + 1 is executed. Then i′ = ⌊(i + j)/2⌋ + 1 and j′ = j, so
d′ = j′ − i′ + 1
d′ = j − ⌊(i + j)/2⌋ ≤ j − (i + j − 1)/2 = (2j − i − j + 1)/2 = (j − i + 1)/2
d′ ≤ d/2
Case III: both i and j are set to k, so i = j, d′ = 1 and the loop terminates.
We conclude that, whichever case applies, d′ ≤ d/2, which means that the value of d is
at least halved each time round the loop.
Let d_k denote the value of j − i + 1 at the end of the kth trip round the loop; d_0 = n,
so d_k ≤ n/2^k.
For n elements, how many times must d be cut in half before it reaches or goes below 1?
n/2^k ≤ 1 ⟺ n ≤ 2^k ⟺ k ≥ lg n
Hence the loop is executed at most ⌈lg n⌉ times, and iterative binary search takes a time
in O(log n).
4. Recursive Binary Search
The same search can be expressed recursively.
function binrec(T[i,…,j], x)
    {Binary search for x in subarray T[i,…,j], with the promise that T[i−1] < x ≤ T[j]}
    if i = j then return i
    k ← (i + j) ÷ 2
    if x ≤ T[k] then return binrec(T[i,…,k], x)
    else return binrec(T[k+1,…,j], x)
Analysis
Let t(n) be the time required for a call on binrec(T[i,…,j], x), where n = j − i + 1 is the
number of elements still under consideration in the search. The recurrence is
t(n) = t(n/2) + g(n), with g(n) ∈ Θ(1).
Comparing this to the general template for divide-and-conquer algorithms,
l = 1, b = 2 and k = 0. Since l = b^k, it follows that t(n) ∈ Θ(n^k log n) = Θ(log n).
5. The Max-Min Problem
Let us consider a simple problem that can be solved by the divide and conquer technique.
Problem Statement
The Max-Min Problem in algorithm analysis is finding the maximum and minimum value in
an array.
Solution
To find the maximum and minimum numbers in a given array numbers[] of size n, the
following algorithms can be used. First we present the naïve method and then the
divide and conquer approach.
The naïve method is a basic method to solve any problem. In this method, the maximum and
minimum numbers are found separately, using the following straightforward algorithm.
max := numbers[1]
min := numbers[1]
for i = 2 to n do
    if numbers[i] > max then max := numbers[i]
    if numbers[i] < min then min := numbers[i]
Analysis
The naïve method performs 2(n − 1) comparisons: every element after the first is compared
once with max and once with min.
The number of comparisons can be reduced using the divide and conquer approach, as
follows.
In this approach, the array is divided into two halves. Then, using recursion, the
maximum and minimum numbers in each half are found. Finally, we return the maximum of
the two maxima and the minimum of the two minima.
In this given problem, the number of elements in the array is $y - x + 1$, where y is greater
than or equal to x. Max-Min(x, y) will return the maximum and minimum values of the
array numbers[x...y].
Max-Min(x, y)
    if y − x ≤ 1 then
        return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
    else
        (max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)
        (max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)
        return (max(max1, max2), min(min1, min2))
Analysis
If T(n) represents the number of comparisons, then the recurrence relation can be
represented as
$$T(n) =
\begin{cases}T\left(\lfloor\frac{n}{2}\rfloor\right)+T\left(\lceil\frac{n}{2}\rceil\right)+2 &
for\: n>2\\1 & for\:n = 2 \\0 & for\:n = 1\end{cases}$$
Let us assume that n is in the form of power of 2. Hence, n = 2^k where k is the height of
the recursion tree.
So,
$$T(n) = 2\,T\left(\tfrac{n}{2}\right) + 2 = 2\left(2\,T\left(\tfrac{n}{4}\right) + 2\right) + 2 = \cdots = \frac{3n}{2} - 2$$
Compared to the naïve method, the divide and conquer approach makes fewer comparisons
(3n/2 − 2 instead of 2(n − 1)). However, in asymptotic notation both approaches are O(n).
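The divide and conquer version translates directly into the following minimal Python sketch
(the function name and 0-based indexing are our own choices):

def max_min(numbers, x, y):
    """Return (maximum, minimum) of numbers[x..y] by divide and conquer."""
    if y - x <= 1:                                  # one or two elements left
        return (max(numbers[x], numbers[y]),
                min(numbers[x], numbers[y]))
    mid = (x + y) // 2
    max1, min1 = max_min(numbers, x, mid)           # conquer the left half
    max2, min2 = max_min(numbers, mid + 1, y)       # conquer the right half
    return (max(max1, max2), min(min1, min2))       # combine: two comparisons

data = [7, 2, 9, 4, 1, 8]
print(max_min(data, 0, len(data) - 1))              # prints (9, 1)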
6. Merge Sort
The divide & conquer approach to sorting n numbers by merge sort consists of separating
the array T into two parts whose sizes are almost the same.
These two parts are sorted by recursive calls and then merged while preserving the order.
The algorithm uses two temporary arrays U and V into which the original array T is
divided.
When the number of elements to be sorted is small, a relatively simple algorithm is used.
The merge sort procedure thus separates the instance into two half-sized subinstances,
solves them recursively and then combines the two sorted half arrays to obtain the solution
to the original instance.
6.1. Algorithm for merging two sorted U and V arrays into array T
procedure merge(U[1,…,m+1], V[1,…,n+1], T[1,…,m+n])
    i, j ← 1
    U[m+1], V[n+1] ← ∞
    for k ← 1 to m + n do
        if U[i] < V[j] then T[k] ← U[i]; i ← i + 1
        else T[k] ← V[j]; j ← j + 1
6.2. Algorithm for merge sort
procedure mergesort(T[1,…,n])
    if n is sufficiently small then insert(T)    {insertion sort for small instances}
    else
        array U[1,…,1+n/2], V[1,…,1+n/2]
        U[1,…,n/2] ← T[1,…,n/2]
        V[1,…,n/2] ← T[n/2+1,…,n]
        mergesort(U[1,…,n/2])
        mergesort(V[1,…,n/2])
        merge(U, V, T)
6.3. Analysis
Let t(n) be the time taken by this algorithm to sort an array of n elements.
Separating T into U and V takes linear time; merge(U, V, T) also takes linear time.
Now,
t(n) = 2 t(n/2) + Θ(n)
Comparing this to the general template, l = 2, b = 2 and k = 1, so l = b^k and therefore
t(n) ∈ Θ(n log n).
Example
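As a worked example, here is a compact Python sketch of merge sort following the structure
above (the cutoff for "sufficiently small" instances is our own choice):

def mergesort(T):
    """Sort list T by divide and conquer; returns a new sorted list."""
    if len(T) <= 1:                    # sufficiently small: already sorted
        return T
    mid = len(T) // 2
    U = mergesort(T[:mid])             # sort the first half recursively
    V = mergesort(T[mid:])             # sort the second half recursively
    result, i, j = [], 0, 0            # merge U and V, preserving order
    while i < len(U) and j < len(V):
        if U[i] < V[j]:
            result.append(U[i]); i += 1
        else:
            result.append(V[j]); j += 1
    return result + U[i:] + V[j:]      # append whichever half remains

print(mergesort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]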
7. Quick Sort
As a first step, this algorithm chooses one element of the array as the pivot (or key)
element.
Elements are then rearranged so that those greater than the pivot are shifted to its right,
whereas the others are shifted to its left.
Two pointers, low and up, are initialized to the lower and upper bounds of the subarray.
The up pointer is decremented until it points to an element not greater than the pivot, and
the low pointer is incremented until it points to an element greater than the pivot; the two
elements are then exchanged, and the scans continue until the pointers cross.
7.1. Algorithm
procedure pivot(T[i,…,j], l)
    {Permutes the elements in array T[i,…,j] and returns a value l such that, at the
    end, i ≤ l ≤ j, T[k] ≤ p for all i ≤ k < l, T[l] = p, and T[k] > p for all l < k ≤ j,
    where p is the initial value of T[i]}
    p ← T[i]
    k ← i; l ← j + 1
    repeat k ← k + 1 until T[k] > p or k ≥ j
    repeat l ← l − 1 until T[l] ≤ p
    while k < l do
        interchange T[k] and T[l]
        repeat k ← k + 1 until T[k] > p
        repeat l ← l − 1 until T[l] ≤ p
    interchange T[i] and T[l]
procedure quicksort(T[i,…,j])
    {Sorts subarray T[i,…,j] into non-decreasing order}
    if j − i is sufficiently small then insert(T[i,…,j])
    else
        pivot(T[i,…,j], l)
        quicksort(T[i,…, l − 1])
        quicksort(T[l+1,…,j])
7.2. Analysis
1. Worst Case
The running time of quicksort depends on whether the partitioning is balanced or
unbalanced, and this in turn depends on which element is chosen as the key or pivot
element.
The worst-case behavior for quicksort occurs when the partitioning routine produces one
subproblem with n − 1 elements and one with 0 elements.
T(n) = T(n−1) + T(0) + Θ(n)
T(n) = T(n−1) + Θ(n)
T(n) = Θ(n²)
2. Best Case
The best case occurs when partitioning produces two subproblems of almost equal size,
giving the recurrence equation:
T(n) = 2T(n/2) + Θ(n)
Here l = 2, b = 2 and k = 1, so l = b^k and
T(n) = Θ(n log n)
3. Average Case
Suppose the partitioning algorithm always produces a 9-to-1 proportional split; the
recurrence will be
T(n) = T(9n/10) + T(n/10) + Θ(n)
Solving it,
T(n) = Θ(n log n)
The running time of quicksort is therefore Θ(n log n) whenever the split has constant
proportionality.
7.3. Example
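As an example, here is a minimal in-place Python sketch of quicksort in the spirit of the
pseudocode above (the first element serves as pivot; the helper names are our own):

def pivot(T, i, j):
    """Partition T[i..j] around p = T[i]; return l such that
    T[i..l-1] <= p, T[l] = p and T[l+1..j] > p."""
    p = T[i]
    k, l = i + 1, j
    while k <= l:
        if T[k] <= p:
            k += 1                       # element already on the correct side
        else:
            T[k], T[l] = T[l], T[k]      # send the larger element to the right
            l -= 1
    T[i], T[l] = T[l], T[i]              # place the pivot in its final position
    return l

def quicksort(T, i=0, j=None):
    """Sort list T in place between indices i and j."""
    if j is None:
        j = len(T) - 1
    if i < j:
        l = pivot(T, i, j)
        quicksort(T, i, l - 1)           # sort the elements <= pivot
        quicksort(T, l + 1, j)           # sort the elements > pivot

data = [3, 1, 4, 1, 5, 9, 2, 6]
quicksort(data)
print(data)                              # [1, 1, 2, 3, 4, 5, 6, 9]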
8. Matrix Multiplication
Consider the problem of multiplying two n × n matrices. Computing each entry in the
product takes n multiplications, and there are n² entries, for a total of O(n³) work.
Strassen devised a better method which has the same basic flavor as the multiplication of
long integers.
The key idea is to save one multiplication on a small problem and then use recursion.
First we show that two 2 × 2 matrices can be multiplied using less than the eight scalar
multiplications apparently required by the definition. Let A = (a_ij) and B = (b_ij) be the
two matrices to be multiplied.
Consider the following operations, each of which involves just one multiplication (these
are Strassen's seven products):
m1 = (a11 + a22)(b11 + b22)
m2 = (a21 + a22) b11
m3 = a11 (b12 − b22)
m4 = a22 (b21 − b11)
m5 = (a11 + a12) b22
m6 = (a21 − a11)(b11 + b12)
m7 = (a12 − a22)(b21 + b22)
The entries of the product C = AB are then obtained with additions and subtractions alone:
c11 = m1 + m4 − m5 + m7
c12 = m3 + m5
c21 = m2 + m4
c22 = m1 − m2 + m3 + m6
Let t(n) be the time needed to multiply two n × n matrices by recursive use of these
equations, treating the n/2 × n/2 blocks of each matrix as scalar entries.
Assume for simplicity that n is a power of 2. Since matrices can be added and subtracted
in a time in O(n²),
t(n) = 7 t(n/2) + g(n),
where g(n) ∈ O(n²). This recurrence is another instance of our general analysis for
divide-and-conquer algorithms: l = 7, b = 2 and k = 2, so l > b^k and t(n) ∈ Θ(n^{lg 7}).
Square matrices whose size is not a power of 2 are easily handled by padding them with
rows and columns of zeros, at most doubling their size, which does not affect the
asymptotic running time.
Since lg 7 < 2.81, it is thus possible to multiply two n × n matrices in a time in O(n^2.81),
provided scalar operations are elementary.
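The recursion can be sketched in a few lines of Python, using NumPy for the block additions
(our own illustrative code, assuming n is a power of 2):

import numpy as np

def strassen(A, B):
    """Multiply square matrices A and B (n a power of 2) with seven
    recursive half-size products instead of eight."""
    n = A.shape[0]
    if n == 1:                                   # scalar base case
        return A * B
    h = n // 2                                   # split into four h x h blocks
    a11, a12, a21, a22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    b11, b12, b21, b22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    m1 = strassen(a11 + a22, b11 + b22)
    m2 = strassen(a21 + a22, b11)
    m3 = strassen(a11, b12 - b22)
    m4 = strassen(a22, b21 - b11)
    m5 = strassen(a11 + a12, b22)
    m6 = strassen(a21 - a11, b11 + b12)
    m7 = strassen(a12 - a22, b21 + b22)
    return np.block([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.arange(16).reshape(4, 4)
B = np.eye(4, dtype=int)
print(np.array_equal(strassen(A, B), A @ B))     # prints True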
9. Exponentiation
To compute a^n for an integer a and a positive integer n, the obvious sequential algorithm is:
function exposeq(a, n)
    r ← a
    for i ← 1 to n − 1 do
        r ← a * r
    return r
This algorithm takes a time in Θ(n) since the instruction r ← a * r is executed exactly
n − 1 times, provided the multiplications are counted as elementary operations.
If we wish to handle larger operands, we must take account of the time required for each
multiplication.
Let M(q, s) denote the time needed to multiply two integers of sizes q and s.
Assume for simplicity that q1 ≤ q2 and s1 ≤ s2 imply that M(q1, s1 ) ≤ M(q2 ,s2).
Let us estimate how much time our algorithm spends multiplying integers when
exposeq(a, n) is called.
Let m be the size of a. First note that the product of two integers of size i and j is of size
at least i + j - 1 and at most i + j;
Let r_i and m_i be the value and the size of r at the beginning of the i-th trip round the
loop.
Since r_{i+1} = a * r_i, the size of r_{i+1} is at least m + m_i − 1 and at most m + m_i.
Therefore, the multiplication performed the i-th time round the loop concerns an integer
of size m and an integer whose size is between im − i + 1 and im, which takes a time
between M(m, im − i + 1) and M(m, im).
The total time T(m, n) spent multiplying when computing a^n with exposeq is therefore
$$T(m, n) \le \sum_{i=1}^{n-1} M(m, im),$$
where m is the size of a. If we use the classic multiplication algorithm, then M(q, s) ∈ Θ(qs).
Let c be a constant such that M(q, s) ≤ c qs, so
$$T(m, n) \le \sum_{i=1}^{n-1} c\,m\,(im) = c\,m^2\,\frac{n(n-1)}{2} \in O(m^2 n^2).$$
The key observation for improving exposeq is that a^n = (a^{n/2})^2 when n is even. This
yields the following recursive algorithm.
function expoDC(a, n)
    if n = 1 then return a
    if n is even then return [expoDC(a, n/2)]^2
    return a * expoDC(a, n − 1)
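For comparison, a direct Python rendering of both exponentiation algorithms (a minimal
sketch):

def exposeq(a, n):
    """Sequential exponentiation: exactly n - 1 multiplications."""
    r = a
    for _ in range(n - 1):
        r = a * r
    return r

def expoDC(a, n):
    """Divide-and-conquer exponentiation: Theta(log n) multiplications."""
    if n == 1:
        return a
    if n % 2 == 0:                     # a^n = (a^(n/2))^2 when n is even
        half = expoDC(a, n // 2)
        return half * half
    return a * expoDC(a, n - 1)        # odd n: one extra multiplication

print(exposeq(2, 10), expoDC(2, 10))   # prints 1024 1024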
When n is even, one multiplication is performed (the squaring of a^{n/2}) in addition to the
N(n/2) multiplications involved in the recursive call on expoDC(a, n/2).
When n is odd, one multiplication is performed (that of a by a^{n-1}) in addition to the
N(n − 1) multiplications involved in the recursive call on expoDC(a, n − 1). Thus, if N(n)
denotes the number of multiplications, we have the following recurrence:
N(1) = 0; N(n) = N(n/2) + 1 when n is even; N(n) = N(n − 1) + 1 when n is odd.
To handle such a recurrence, it is useful to bound the function from above and below with
non-decreasing functions. When n > 1 is odd, N(n) = N(n − 1) + 1 = N(⌊n/2⌋) + 2, since
n − 1 is even and (n − 1)/2 = ⌊n/2⌋.
On the other hand, when n is even, N(n) = N(n/2) + 1, since ⌊n/2⌋ = n/2 in that case.
Therefore, N(⌊n/2⌋) + 1 ≤ N(n) ≤ N(⌊n/2⌋) + 2 for all n > 1, and hence N(n) ∈ Θ(log n).
Let M(q, s) denote again the time needed to multiply two integers of sizes q and s, and
let T(m, n) now denote the time spent multiplying by a call on expoDC(a, n), where m is
the size of a. Recall that the size of a^i is at most im, so the largest multiplication, the
final squaring, involves operands of size about nm/2, and operand sizes shrink geometrically
at deeper recursion levels.
Solving it gives
T(m, n) ∈ Θ(m^α n^α),
where α = 2 with the classic multiplication algorithm and α = lg 3 with the divide & conquer
multiplication algorithm.
To summarize, the following table gives the time to compute a^n, where m is the size of a,
depending on whether we use exposeq or expoDC, and whether we use the classic or divide-
and-conquer (D&C) multiplication algorithm.

Multiplication algorithm | exposeq           | expoDC
classic                  | Θ(m² n²)          | Θ(m² n²)
D&C                      | Θ(m^{lg 3} n²)    | Θ(m^{lg 3} n^{lg 3})