SOU Lecture Handout ADA Unit-3

The document discusses divide and conquer algorithms and binary search. It covers the divide and conquer approach of breaking problems into smaller subproblems, solving the subproblems recursively, and combining the solutions. Binary search is presented as a classic divide and conquer algorithm where the search space is halved on each iteration until the target is found.


1010043316 (ANALYSIS & DESIGN OF ALGORITHM)
LECTURE COMPANION SEMESTER: 5 PREPARED BY: PARTH S WADHWA

CHAPTER 3 – DIVIDE & CONQUER

1. Introduction

1.1. Divide and Conquer technique

 Many useful algorithms are recursive in structure: to solve a given problem, they call
themselves recursively one or more times to deal with closely related subproblems.

 These algorithms typically follow a divide-and-conquer approach.

 The divide-and-conquer approach involves three steps at each level of the recursion:

 Divide: Break the problem into several subproblems that are similar to the original
problem but smaller in size.

 Conquer: Solve the subproblems recursively. If the subproblem sizes are small
enough, just solve them in a straightforward manner.

 Combine: Combine these solutions to create a solution to the original problem.

2. Recurrence and different methods to solve recurrence

 Consider an arbitrary problem, and let adhoc be a simple algorithm capable of solving
the problem.
 We call adhoc the basic sub-algorithm: it is efficient on small instances, but its
performance on large instances is of no concern.
 The general template for divide-and-conquer algorithms is as follows.

function DC(x)

if x is sufficiently small or simple then return adhoc(x)

decompose x into smaller instances x1, x2, …, xl

for i = 1 to l do yi ← DC(xi)

recombine the yi's to obtain a solution y for x

DEPARTMENT OF COMPUTER ENGINEERING

*Proprietary material of SILVER OAK UNIVERSITY

return y
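As an executable illustration of this template, here is a Python sketch instantiated for a toy task (summing a list); the threshold, the decomposition into two halves, and the recombination by addition are illustrative choices standing in for adhoc, decompose, and recombine:

```python
def dc_sum(x):
    """Generic divide-and-conquer template, instantiated for summing a list."""
    if len(x) <= 2:                      # sufficiently small: solve directly (adhoc)
        return sum(x)
    mid = len(x) // 2
    halves = [x[:mid], x[mid:]]          # decompose x into x1, x2
    ys = [dc_sum(xi) for xi in halves]   # solve each sub-instance recursively
    return ys[0] + ys[1]                 # recombine the yi's into y

print(dc_sum(list(range(1, 101))))  # 5050
```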

The running-time analysis of such divide-and-conquer algorithms is almost automatic.

Let g(n) be the time required by DC on instances of size n, not counting the time needed for
the recursive calls.

The total time t(n) taken by this divide-and-conquer algorithm is given by the recurrence
equation

t(n) = l · t(n/b) + g(n)

provided n is large enough, where l is the number of sub-instances and n/b their size.

 The solution of the equation is given as follows: if there exists an integer k such that
g(n) ∈ Θ(n^k), then

t(n) ∈ Θ(n^k)           if l < b^k
t(n) ∈ Θ(n^k log n)     if l = b^k
t(n) ∈ Θ(n^(log_b l))   if l > b^k

 The recurrence equation and its solution are applicable to find the time complexity of
every problem which can be solved using the Divide & Conquer technique.
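The three cases can be packaged as a small helper that returns the asymptotic class for given l, b and k; the function name and the return format are illustrative, and the instances exercised below are the ones used later in this chapter:

```python
import math

def dc_complexity(l, b, k):
    """Asymptotic solution of t(n) = l*t(n/b) + Theta(n^k).

    Returns a (form, exponent) pair:
      ("Theta(n^k)", k)          when l <  b^k
      ("Theta(n^k log n)", k)    when l == b^k
      ("Theta(n^(log_b l))", e)  when l >  b^k, with e = log_b l
    """
    if l < b ** k:
        return ("Theta(n^k)", k)
    if l == b ** k:
        return ("Theta(n^k log n)", k)
    return ("Theta(n^(log_b l))", math.log(l, b))

print(dc_complexity(1, 2, 0))  # binary search: Theta(log n)
print(dc_complexity(2, 2, 1))  # merge sort:    Theta(n log n)
print(dc_complexity(3, 2, 1))  # large-integer multiplication: Theta(n^lg 3)
print(dc_complexity(7, 2, 2))  # Strassen:      Theta(n^lg 7)
```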

3. Multiplying Large Integers Problem using divide and conquer method

 Consider the problem of multiplying large integers.

 The following example shows how divide and conquer helps in multiplying two large integers.

 Consider the multiplication of 981 by 1234. First we pad the shorter operand with a non-significant
zero to make it the same length as the longer one; thus 981 becomes 0981.

 Then we split each operand into two parts:

 0981 gives rise to w = 09 and x = 81, and 1234 to y = 12 and z = 34.


 Notice that 0981 = 10^2·w + x and 1234 = 10^2·y + z.

 Therefore, the required product can be computed as

981 × 1234 = (10^2 w + x) × (10^2 y + z)

= 10^4 wy + 10^2 (wz + xy) + xz

= 1080000 + 127800 + 2754 = 1210554.

 The above procedure still needs four half-size multiplications: wy, wz, xy and xz.

 There is no need to compute both wz and xy; all we really need is the sum of the two
terms. Consider the product

r = (w + x) × (y + z) = wy + (wz + xy) + xz.

 After only one multiplication, we obtain the sum of all three terms needed to calculate
the desired product.

 This suggests proceeding as follows.

p = wy = 09 × 12 = 108

q = xz = 81 × 34 = 2754

r = (w + x) × (y + z) = 90 × 46 = 4140,

and finally

981 × 1234 = 10^4 p + 10^2 (r − p − q) + q

= 1080000 + 127800 + 2754 = 1210554.

 Thus the product of 981 and 1234 can be reduced to three multiplications of two-figure
numbers (09 × 12, 81 × 34 and 90 × 46) together with a certain number of shifts
(multiplications by powers of 10), additions and subtractions.

 It thus seems reasonable to expect that reducing four multiplications to three will enable
us to cut 25% of the computing time required for large multiplications.
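The reduction above (three half-size products p, q, r plus shifts) can be sketched in Python. The base-10 split mirrors the worked example; the threshold and the split point are illustrative choices:

```python
def karatsuba(u, v):
    """Multiply two non-negative integers using three half-size products,
    following the w, x, y, z decomposition above (base-10 shifts)."""
    if u < 10 or v < 10:
        return u * v                      # small enough: multiply directly
    half = max(len(str(u)), len(str(v))) // 2
    shift = 10 ** half
    w, x = divmod(u, shift)               # u = w * 10^half + x
    y, z = divmod(v, shift)               # v = y * 10^half + z
    p = karatsuba(w, y)                   # wy
    q = karatsuba(x, z)                   # xz
    r = karatsuba(w + x, y + z)           # wy + (wz + xy) + xz
    return p * shift * shift + (r - p - q) * shift + q

print(karatsuba(981, 1234))  # 1210554
```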


 We obtain an algorithm that can multiply two n-figure numbers in a time

t(n) = 3t(n/2) + g(n), when n is even and sufficiently large, where g(n) ∈ Θ(n).

Solving it with l = 3, b = 2, k = 1 (so l > b^k) gives t(n) ∈ Θ(n^lg 3), and lg 3 ≈ 1.585.
4. Binary Search using Divide and Conquer approach

4.1. Binary Search Method

 Binary Search is an extremely well-known instance of divide-and-conquer approach.

 Let T[1 . . n] be an array sorted into increasing order; that is, T[i] ≤ T[j] whenever 1 ≤ i ≤
j ≤ n.

 Let x be some number. The problem consists of finding x in the array T if it is there.

 If x is not in the array, then we want to find the position where it might be inserted.

4.2. Binary Search Algorithm (Iterative)

 The basic idea of binary search is that for a given element we check out the middle
element of the array.

 We continue in either the lower or upper segment of the array, depending on the outcome
of the search until we reach the required (given) element.

 Here the Divide & Conquer technique applies: the total number of elements to be searched
is halved every time.

Function biniter ( T[1,…,n], x )

i ← 1; j ← n

while i < j do

k ← (i + j ) ÷ 2


if x ≤ T [k] then j ← k

else i ← k + 1

return i
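A 0-indexed Python port of biniter might look as follows; it assumes a non-empty sorted list and, matching the pseudocode, returns the smallest index i with x ≤ T[i] (or the last index when x exceeds every element):

```python
def biniter(T, x):
    """Iterative binary search on a sorted list T (0-indexed port)."""
    i, j = 0, len(T) - 1
    while i < j:
        k = (i + j) // 2      # middle element
        if x <= T[k]:
            j = k             # continue in the lower segment (keeping T[k])
        else:
            i = k + 1         # continue in the upper segment
    return i

T = [1, 3, 5, 7, 9, 11]
print(biniter(T, 7))   # 3
print(biniter(T, 6))   # 3  (position where 6 would be inserted)
```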

Analysis

 To analyze the running time of a while loop, we must find a function of the variables
involved whose value decreases each time round the loop.

 Here it is d = j – i + 1, which represents the number of elements of T still under consideration.

 Initially d = n.

 Loop terminates when i ≥ j, which is equivalent to d ≤ 1

 Each time round the loop there are three possibilities (the analysis considers the three-way
comparison variant, which also tests for equality),

I. Either j is set to k – 1

II. Or i is set to k + 1

III. Or both i and j are set to k

 Let d and d′ stand for the values of j – i + 1 before and after the iteration under
consideration; similarly for i, i′, j and j′.

 Case I : if x < T[k]

So, j ← k – 1 is executed.

Thus i′ = i and j′ = k – 1, where k = (i + j) ÷ 2.

Substituting the value of k, j′ = [(i + j) ÷ 2] – 1

d′ = j′ – i′ + 1

Substituting the value of j′, d′ = [(i + j) ÷ 2] – 1 – i + 1

d′ ≤ (i + j)/2 – i

d′ ≤ (j – i)/2 ≤ (j – i + 1)/2

d′ ≤ d/2

 Case II : if x > T[k]

So, i ← k + 1 is executed.

Thus i′ = k + 1 and j′ = j, where k = (i + j) ÷ 2.

Substituting the value of k, i′ = [(i + j) ÷ 2] + 1

d′ = j′ – i′ + 1

Substituting the value of i′, d′ = j – [(i + j) ÷ 2] – 1 + 1 = j – [(i + j) ÷ 2]

d′ ≤ j – (i + j – 1)/2 = (2j – i – j + 1)/2 = (j – i + 1)/2

d′ ≤ d/2

 Case III : if x = T[k]

Both i and j are set to k, so d′ = 1.

 We conclude that, whatever the case, d′ ≤ d/2, which means that the value of d is
at least halved on each trip round the loop.

 Let d_k denote the value of j – i + 1 at the end of the k-th trip round the loop; d_0 = n.

 We have already proved that d_k ≤ d_(k−1)/2.

 For n elements, how many times does d need to be cut in half before it reaches or goes below
1?

 n/2^k ≤ 1 → n ≤ 2^k

 k = ⌈lg n⌉, so the search takes logarithmic time.

The complexity of biniter is Θ(lg n).

4.3. Binary Search Algorithm (Recursive)



Function binsearch ( T[1,…,n], x )

if n = 0 or x > T[n] then return n + 1

else return binrec ( T[1,…,n], x )

Function binrec( T[i,…,j], x )

if i = j then return i

k ← (i + j) ÷ 2

if x ≤ T [k] then return binrec( T[i,…,k], x )

else return binrec( T[k + 1,…,j], x )
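A 0-indexed Python port of the recursive version might look like this; as in the pseudocode, binsearch returns len(T) (the position just past the end) when x is larger than every element:

```python
def binrec(T, i, j, x):
    """Recursive search on T[i..j] (0-indexed), mirroring binrec above."""
    if i == j:
        return i
    k = (i + j) // 2
    if x <= T[k]:
        return binrec(T, i, k, x)
    return binrec(T, k + 1, j, x)

def binsearch(T, x):
    """Returns len(T) when x exceeds every element (insertion past the end)."""
    if not T or x > T[-1]:
        return len(T)
    return binrec(T, 0, len(T) - 1, x)

print(binsearch([1, 3, 5, 7, 9], 5))   # 2
print(binsearch([1, 3, 5, 7, 9], 10))  # 5
```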

Analysis

 Let t(n) be the time required for a call on binrec( T[i,…,j], x ), where n = j – i + 1 is the
number of elements still under consideration in the search.

 The recurrence equation is given as,

t(n) = t(n/2) + Θ(1)

Comparing this to the general template for divide and conquer algorithm,

l = 1, b = 2 and k = 0.

So, t(n) ∈ Θ(lg n)

 The complexity of binrec is Θ (lg n).

5. Max-Min Problem using Divide and Conquer Approach

Let us consider a simple problem that can be solved by divide and conquer technique.

Problem Statement

The Max-Min problem in algorithm analysis is that of finding the maximum and the minimum value in
an array.


Solution

To find the maximum and minimum numbers in a given array numbers[] of size n, the
following algorithms can be used. First we present the naïve method and then the
divide and conquer approach.

5.1. Naïve Method

The naïve method is the basic way to solve the problem: the maximum and the minimum
are found separately, using the following straightforward algorithm.

Algorithm: Max-Min-Element (numbers[])

max := numbers[1]

min := numbers[1]

for i = 2 to n do

if numbers[i] > max then

max := numbers[i]

if numbers[i] < min then

min := numbers[i]

return (max, min)

Analysis

The number of comparisons in the naïve method is 2n − 2: each of the n − 1 elements after the first is compared against both the current maximum and the current minimum.

The number of comparisons can be reduced using the divide and conquer approach.
Following is the technique.


5.2. Divide and Conquer Approach

In this approach, the array is divided into two halves. Then, using recursion, the
maximum and minimum of each half are found. Finally, the maximum of the two maxima
and the minimum of the two minima are returned.

In this problem, the number of elements in the array is y − x + 1, where y is greater
than or equal to x.

Max-Min(x, y) will return the maximum and minimum values of the array numbers[x…y].
Algorithm: Max-Min(x, y)

if y – x ≤ 1 then

return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))

else

(max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)

(max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)

return (max(max1, max2), min(min1, min2))
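A direct Python port of the algorithm (0-indexed, inclusive bounds; it assumes x ≤ y):

```python
def maxmin(numbers, x, y):
    """Return (maximum, minimum) of numbers[x..y] by dividing the range in half."""
    if y - x <= 1:                         # one or two elements: compare directly
        return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
    mid = (x + y) // 2
    max1, min1 = maxmin(numbers, x, mid)       # left half
    max2, min2 = maxmin(numbers, mid + 1, y)   # right half
    return (max(max1, max2), min(min1, min2))  # combine the two results

data = [8, 3, 9, 1, 7, 4]
print(maxmin(data, 0, len(data) - 1))  # (9, 1)
```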

Analysis

Let T(n) be the number of comparisons made by Max-Min(x, y), where the number of
elements n = y − x + 1.

The recurrence relation can be written as

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2   for n > 2
T(n) = 1                         for n = 2
T(n) = 0                         for n = 1

Let us assume that n is a power of 2, n = 2^k, where k is the height of the recursion
tree.

So,

T(n) = 2T(n/2) + 2 = 2(2T(n/4) + 2) + 2 = … = 3n/2 − 2

Compared to the naïve method's 2n − 2 comparisons, the divide and conquer approach performs
fewer comparisons (3n/2 − 2). However, in asymptotic notation both approaches are represented by O(n).

6. Merge Sort

 The divide & conquer approach to sorting n numbers by merge sort consists of separating
the array T into two parts whose sizes are almost the same.

 These two parts are sorted by recursive calls, and the sorted halves are then merged
while preserving the order.

 The algorithm considers two temporary arrays U and V into which the original array T is
divided.

 When the number of elements to be sorted is small, a relatively simple algorithm is used.

 Merge sort procedure separates the instance into two half sized sub instances, solves
them recursively and then combines the two sorted half arrays to obtain the solution to
the original instance.

6.1. Algorithm for merging two sorted arrays U and V into array T

Procedure merge(U[1,…,m+1],V[1,…,n+1],T[1,…,m+n])

i ← 1; j ← 1

U[m+1], V[n+1] ← ∞

for k ← 1 to m + n do

if U[i] < V[j]

then T[k] ← U[i] ; i ← i + 1

else T[k] ← V[j] ; j ← j + 1


6.2. Algorithm merge sort

Procedure mergesort(T[1,…,n])

if n is sufficiently small then insert(T)

else

array U[1,…,1+n/2],V[1,…,1+n/2]

U[1,…,n/2] ← T[1,…,n/2]

V[1,…,n/2] ← T[n/2+1,…,n]

mergesort(U[1,…,n/2])

mergesort(V[1,…,n/2])

merge(U, V, T)
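A Python sketch of the two procedures; here end-of-list checks replace the infinity sentinels of the merge pseudocode, and returning a new list replaces the in-place temporary arrays (the base-case threshold of 1 element stands in for "sufficiently small"):

```python
def merge(U, V):
    """Merge two sorted lists while preserving order."""
    T, i, j = [], 0, 0
    while i < len(U) and j < len(V):
        if U[i] < V[j]:
            T.append(U[i]); i += 1
        else:
            T.append(V[j]); j += 1
    return T + U[i:] + V[j:]          # one of the tails is empty

def mergesort(T):
    """Sort by splitting into two halves, sorting each recursively, merging."""
    if len(T) <= 1:                   # sufficiently small
        return T
    mid = len(T) // 2
    return merge(mergesort(T[:mid]), mergesort(T[mid:]))

print(mergesort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```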

6.3. Analysis

 Let t(n) be the time taken by this algorithm to sort an array of n elements.

 Separating T into U and V takes linear time; merge(U, V, T) also takes linear time.

 Now,

t(n) = t(n/2) + t(n/2) + g(n), where g(n) ∈ Θ(n)

t(n) = 2t(n/2) + Θ(n)

Applying the general case with l = 2, b = 2, k = 1: since l = b^k, the second case applies, which yields t(n) ∈ Θ(n log n).

The time complexity of merge sort is Θ(n log n).


Example

7. Quick Sort

 Quick sort works by partitioning the array to be sorted.

 Each Partition is internally sorted recursively.

 As a first step, this algorithm chooses one element of an array as a pivot or a key
element.

 The array is then partitioned on either side of the pivot.

 Elements are moved so that those greater than the pivot are shifted to its right whereas
the others are shifted to its left.

 Two pointers low and up are initialized to the lower and upper bounds of the sub array.

 The up pointer will be decremented and the low pointer will be incremented as per the
following conditions.

1. Increase low pointer until T[low] > pivot.



2. Decrease up pointer until T[up] ≤ pivot.

3. If low < up then interchange T[low] with T[up].

4. If up ≤ low then interchange T[up] with T[i].

7.1. Algorithm

Procedure pivot(T[i,…,j]; var l)

{Permutes the elements of array T[i,…,j] and returns a value l such that, at the
end, i ≤ l ≤ j, T[k] ≤ p for all i ≤ k < l, T[l] = p, and T[k] > p for all l < k ≤ j,
where p is the initial value of T[i]}

p ← T[i]

k ← i; l ← j + 1

repeat k ← k + 1 until T[k] > p

repeat l ← l − 1 until T[l] ≤ p

while k < l do

swap T[k] and T[l]

repeat k ← k + 1 until T[k] > p

repeat l ← l − 1 until T[l] ≤ p

swap T[i] and T[l]

Procedure quicksort(T[i,…,j])

{Sorts subarray T[i,…,j] into non decreasing order}

if j – i is sufficiently small then insert (T[i,…,j])

else

pivot(T[i,…,j],l)

quicksort(T[i,…, l - 1])

quicksort(T[l+1,…,j])
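A Python port of the pivot/quicksort pair (0-indexed, sorting in place); explicit bound guards are added on the k scans, which in the pure pseudocode rely on T[l] = p stopping the scans:

```python
def pivot(T, i, j):
    """Partition T[i..j] around p = T[i]; returns l with T[l] = p,
    elements <= p to its left and elements > p to its right."""
    p = T[i]
    k, l = i + 1, j
    while k <= j and T[k] <= p:   # advance k until T[k] > p
        k += 1
    while T[l] > p:               # retreat l until T[l] <= p (stops at i at worst)
        l -= 1
    while k < l:
        T[k], T[l] = T[l], T[k]
        k += 1
        while k <= j and T[k] <= p:
            k += 1
        l -= 1
        while T[l] > p:
            l -= 1
    T[i], T[l] = T[l], T[i]       # place the pivot in its final position
    return l

def quicksort(T, i=0, j=None):
    """Sorts T[i..j] in place and returns T."""
    if j is None:
        j = len(T) - 1
    if i < j:
        l = pivot(T, i, j)
        quicksort(T, i, l - 1)
        quicksort(T, l + 1, j)
    return T

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```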

7.2. Analysis

1. Worst Case

 The running time of quick sort depends on whether the partitioning is balanced or unbalanced.

 And this in turn depends on which element is chosen as key or pivot element.

 The worst case behavior for quick sort occurs when the partitioning routine produces one
sub problem with n-1 elements and one with 0 elements.

 In this case recurrence will be,

T(n) = T(n−1) + T(0) + Θ(n)

T(n) = T(n−1) + Θ(n)

T(n) = Θ(n^2)

2. Best Case

 Occurs when partition produces sub problems each of size n/2.

 Recurrence equation:

T(n) = 2T(n/2) + Θ(n)

l = 2, b = 2, k = 1, so l = b^k

T(n) = Θ(n log n)

3. Average Case

 Average case running time is much closer to the best case.

 If the partitioning algorithm produces a 9-to-1 proportional split, the recurrence
will be

T(n) = T(9n/10) + T(n/10) + Θ(n)



Solving it,

T(n) = Θ(n log n)

 The running time of quick sort is therefore Θ(n log n) whenever the split has constant
proportionality.

7.3. Example


8. Matrix Multiplication

 Consider the problem of multiplying two n × n matrices. Computing each entry of the
product takes n multiplications, and there are n^2 entries, for a total of O(n^3) work.

 Strassen devised a better method which has the same basic flavor as the multiplication of
long integers.

 The key idea is to save one multiplication on a small problem and then use recursion.

 First we show that two 2 × 2 matrices can be multiplied using fewer than the eight scalar
multiplications apparently required by the definition. Let A and B be the two matrices to be
multiplied.

 Consider the following seven products, each of which involves just one multiplication
(writing A = (a11 a12; a21 a22) and B = (b11 b12; b21 b22)):

p1 = a11 (b12 − b22)
p2 = (a11 + a12) b22
p3 = (a21 + a22) b11
p4 = a22 (b21 − b11)
p5 = (a11 + a22) (b11 + b22)
p6 = (a12 − a22) (b21 + b22)
p7 = (a11 − a21) (b11 + b12)

 The required product AB is given by the following matrix.

AB = ( p5 + p4 − p2 + p6        p1 + p2
       p3 + p4                  p5 + p1 − p3 − p7 )

 It is therefore possible to multiply two 2 × 2 matrices using only seven scalar
multiplications.

 Let t(n) be the time needed to multiply two n × n matrices by recursive use of these equations.

 Assume for simplicity that n is a power of 2. Since matrices can be added and subtracted
in a time in O(n^2),

t(n) = 7t(n/2) + g(n)

 where g(n) ∈ O(n^2). This recurrence is another instance of our general analysis for
divide-and-conquer algorithms.

 The general equation applies with l = 7, b = 2 and k = 2.

 Since l > b^k, the third case yields t(n) ∈ Θ(n^lg 7).

 Square matrices whose size is not a power of 2 are easily handled by padding them with
rows and columns of zeros, at most doubling their size, which does not affect the
asymptotic running time.

 Since lg 7 < 2.81, it is thus possible to multiply two n × n matrices in a time in O(n^2.81),
provided scalar operations are elementary.
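A compact Python sketch of the recursion, for n a power of 2 and matrices as plain lists of lists; the helper names (quad, add, sub) are illustrative, and Strassen's identities hold even though the blocks do not commute, which is what makes the recursive use valid:

```python
def strassen(A, B):
    """Multiply two n x n matrices, n a power of 2, with seven recursive
    products per level (Strassen's identities)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    p1 = strassen(A11, sub(B12, B22))
    p2 = strassen(add(A11, A12), B22)
    p3 = strassen(add(A21, A22), B11)
    p4 = strassen(A22, sub(B21, B11))
    p5 = strassen(add(A11, A22), add(B11, B22))
    p6 = strassen(sub(A12, A22), add(B21, B22))
    p7 = strassen(sub(A11, A21), add(B11, B12))
    C11 = add(sub(add(p5, p4), p2), p6)   # p5 + p4 - p2 + p6
    C12 = add(p1, p2)
    C21 = add(p3, p4)
    C22 = sub(sub(add(p5, p1), p3), p7)   # p5 + p1 - p3 - p7
    return ([C11[r] + C12[r] for r in range(h)] +
            [C21[r] + C22[r] for r in range(h)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```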


9. Exponentiation

 Let a and n be two integers. We wish to compute the exponentiation x = a^n. For
simplicity, assume that n > 0. If n is small, the obvious algorithm is adequate.

9.1. Exponentiation using Sequential Approach

function exposeq(a, n)

r ← a

for i ← 1 to n − 1 do

r ← a × r

return r

 This algorithm takes a time in Θ(n) since the instruction r ←a * r is executed exactly n-1
times, provided the multiplications are counted as elementary operations.

 If we wish to handle larger operands, we must take account of the time required for each
multiplication.

 Let M(q, s) denote the time needed to multiply two integers of sizes q and s.

 Assume for simplicity that q1 ≤ q2 and s1 ≤ s2 imply that M(q1, s1 ) ≤ M(q2 ,s2).

 Let us estimate how much time our algorithm spends multiplying integers when
exposeq(a, n) is called.

 Let m be the size of a. First note that the product of two integers of sizes i and j is of size
at least i + j − 1 and at most i + j.

 Let r_i and m_i be the value and the size of r at the beginning of the i-th trip round the
loop.

 Clearly, r_1 = a and therefore m_1 = m.

 Since r_(i+1) = a × r_i, the size of r_(i+1) is at least m + m_i − 1 and at most m + m_i.


 Therefore, the multiplication performed on the i-th trip round the loop involves an integer
of size m and an integer whose size is between im − i + 1 and im, which takes a time
between M(m, im − i + 1) and M(m, im).

 The total time T(m, n) spent multiplying when computing a^n with exposeq is therefore
bounded by

Σ (i = 1 to n − 1) M(m, im − i + 1)  ≤  T(m, n)  ≤  Σ (i = 1 to n − 1) M(m, im)

 where m is the size of a. If we use the classic multiplication algorithm, then M(q, s) ∈
Θ(qs). Let c be a constant such that M(q, s) ≤ c·q·s, so

T(m, n) ≤ Σ (i = 1 to n − 1) c·m·im = c·m^2·n(n − 1)/2

 Thus T(m, n) ∈ Θ(m^2 n^2).

 If we use the divide-and-conquer multiplication algorithm, M(q, s) ∈ Θ(s·q^(lg 3 − 1)) when
s > q (the longer operand is split into s/q blocks of size q), and a similar argument yields
T(m, n) ∈ Θ(m^lg 3 · n^2).

9.2. Exponentiation using Divide & Conquer Technique

 The key observation for improving exposeq is that a^n = (a^(n/2))^2 when n is even. This
yields the following algorithm.

function expoDC(a, n)

if n = 1 then return a


if n is even then return [expoDC(a, n/2)]^2

return a × expoDC(a, n − 1)
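A direct Python port of expoDC; as in the pseudocode, it assumes n ≥ 1:

```python
def expoDC(a, n):
    """Divide-and-conquer exponentiation: computes a^n with O(lg n)
    multiplications, assuming n >= 1."""
    if n == 1:
        return a
    if n % 2 == 0:
        half = expoDC(a, n // 2)
        return half * half              # a^n = (a^(n/2))^2 when n is even
    return a * expoDC(a, n - 1)         # odd case: one extra multiplication

print(expoDC(2, 10))  # 1024
```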

 To analyze the efficiency of this algorithm, we first concentrate on the number of
multiplications (counting squarings as multiplications) performed by a call on
expoDC(a, n).

 The number of multiplications is a function only of the exponent n; let us denote it by
N(n).

 No multiplications are performed when n = 1, so N(1) = 0.

 When n is even, one multiplication is performed (the squaring of a^(n/2)) in addition to the
N(n/2) multiplications involved in the recursive call on expoDC(a, n/2).

 When n is odd, one multiplication is performed (that of a by a^(n−1)) in addition to the
N(n − 1) multiplications involved in the recursive call on expoDC(a, n − 1). Thus we have the
following recurrence.

N(1) = 0
N(n) = N(n/2) + 1      if n > 1 is even
N(n) = N(n − 1) + 1    if n is odd

 To handle such a recurrence, it is useful to bound the function from above and below with
non-decreasing functions. When n > 1 is odd,

N(n) = N(n − 1) + 1 = N((n − 1)/2) + 2 = N(⌊n/2⌋) + 2.

 On the other hand, when n is even, N(n) = N(n/2) + 1, since ⌊n/2⌋ = n/2 in that case.
Therefore, N(⌊n/2⌋) + 1 ≤ N(n) ≤ N(⌊n/2⌋) + 2 for all n > 1.


 Let M(q, s) again denote the time needed to multiply two integers of sizes q and s, and
let T(m, n) now denote the time spent multiplying by a call on expoDC(a, n), where m is
the size of a. Recall that the size of a^i is at most im.

 Inspection of the algorithm expoDC yields the following recurrence (the even case squares
a^(n/2), whose size is at most nm/2; the odd case multiplies a by a^(n−1)):

T(m, n) ≤ T(m, ⌊n/2⌋) + M(nm/2, nm/2) + M(m, (n − 1)m)

 Together with the recurrence for N, this implies

T(m, n) ≤ T(m, ⌊n/2⌋) + c·(nm)^α

for all n > 1 and a suitable constant c.

 Solving it gives

T(m, n) ∈ Θ(m^α n^α)

 where α = 2 with the classic multiplication algorithm and α = lg 3 with divide & conquer.

 To summarize, the following table gives the time to compute a^n, where m is the size of a,
depending on whether we use exposeq or expoDC, and whether we use the classic or divide-
and-conquer (D&C) multiplication algorithm.

              classic           D&C
exposeq    Θ(m^2 n^2)       Θ(m^lg 3 · n^2)
expoDC     Θ(m^2 n^2)       Θ(m^lg 3 · n^lg 3)
