DAA Unit-3 (B)
Design and Analysis of Algorithms
Unit-3: Divide and Conquer Algorithm
Divide & Conquer (D&C) Technique
Many useful algorithms are recursive in structure: to solve a given problem, they call
themselves recursively one or more times.
These algorithms typically follow a divide-and-conquer approach:
The divide-and-conquer approach involves three steps at each level of the recursion:
1. Divide: Break the problem into several subproblems that are similar to the original problem but smaller in size.
2. Conquer: Solve the subproblems recursively. If the subproblem sizes are small enough, just solve them in a straightforward manner.
3. Combine: Combine these solutions to create a solution to the original problem.
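A tiny runnable sketch of these three steps in Python (the example task of finding the maximum of a list and the name dc_max are illustrative assumptions, not from the slides):

    def dc_max(t):
        # Conquer directly when the instance is small enough.
        if len(t) == 1:
            return t[0]
        # Divide: break the instance into two smaller subinstances.
        mid = len(t) // 2
        left, right = t[:mid], t[mid:]
        # Conquer: solve the subinstances recursively.
        left_max = dc_max(left)
        right_max = dc_max(right)
        # Combine: build the solution of the original instance.
        return left_max if left_max > right_max else right_max

    print(dc_max([7, 3, 9, 1, 5]))  # 9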
D&C: Running Time Analysis
The running-time analysis of such divide-and-conquer (D&C) algorithms is almost
automatic.
Let t(n) be the time required by D&C on instances of size n.
The total time taken by this divide-and-conquer algorithm is given by the recurrence equation,
    t(n) = a · t(n/b) + g(n)
where a is the number of subinstances, n/b is the size of each subinstance, and g(n) ∈ Θ(n^k) is the time to divide the instance and combine the sub-solutions (k is the power of n in g(n)).
Binary Search
Binary Search
Binary Search is an extremely well-known instance of divide-and-conquer approach.
Let T[1..n] be an array sorted in increasing order; that is, T[i] ≤ T[j] whenever 1 ≤ i ≤ j ≤ n.
Let x be some number. The problem consists of finding x in the array T if it is there.
If x is not in the array, then we want to find the position where it might be inserted.
Binary Search - Example
Input: a sorted array of 7 integer values; search key x = 7.

Index: [0] [1] [2] [3] [4] [5] [6]
Value:  3   6   7  11  32  33  53

Step 1: midpoint index = 3, value = 11. Is 7 ≤ 11? YES, so continue in the left half [0..2].
Step 2: midpoint index = 1, value = 6. Is 7 ≤ 6? NO, so continue in the right half [2..2].
Step 3: midpoint index = 2, value = 7. The key equals the midpoint value, so x is found at index 2.
Binary Search – Recursive Algorithm
BinarySearch(arr, item, beg, end)
    if beg <= end
        midIndex = (beg + end) / 2
        if item == arr[midIndex]
            return midIndex
        else if item < arr[midIndex]
            return BinarySearch(arr, item, beg, midIndex - 1)
        else
            return BinarySearch(arr, item, midIndex + 1, end)
    return -1
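The pseudocode translates directly into runnable Python; a minimal sketch (function and parameter names follow the pseudocode above):

    def binary_search(arr, item, beg, end):
        # Search for item in the sorted slice arr[beg..end] (inclusive bounds).
        if beg <= end:
            mid_index = (beg + end) // 2
            if item == arr[mid_index]:
                return mid_index
            elif item < arr[mid_index]:
                # Key is smaller: continue in the left half.
                return binary_search(arr, item, beg, mid_index - 1)
            else:
                # Key is larger: continue in the right half.
                return binary_search(arr, item, mid_index + 1, end)
        return -1  # not found

    values = [3, 6, 7, 11, 32, 33, 53]
    print(binary_search(values, 7, 0, len(values) - 1))  # 2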
Binary Search - Analysis
Let t(n) be the time required for a call on binrec, where n is the number of elements still under
consideration in the search.
The recurrence equation is given as,
    t(n) = t(n/2) + Θ(1)
Comparing this to the general template for divide-and-conquer algorithms: a = 1, b = 2 and k = 0, so a = b^k and t(n) ∈ Θ(log n).
2. Explain the binary search algorithm and find the element in the following array. [7]
3. Let T[1..n] be a sorted array of distinct integers. Give an algorithm that can find an index i such
that 1 ≤ i ≤ n and T[i] = i, provided such an index exists. Prove that your algorithm takes a time in
O(log n) in the worst case. (**)
Merge Sort
Merge Sort - Example
Unsorted array (8 elements):
    724  521  2  98  529  31  189  451
Split into halves repeatedly until each piece holds a single element:
    [724 521 2 98]  [529 31 189 451]
    [724 521] [2 98]  [529 31] [189 451]
    [724] [521] [2] [98]  [529] [31] [189] [451]
Merge the pieces back together in sorted order:
    [521 724] [2 98]  [31 529] [189 451]
    [2 98 521 724]  [31 189 451 529]
    [2 31 98 189 451 521 529 724]
Merge Sort - Algorithm
MergeSort(A, p, r):
    if p >= r
        return
    q = (p + r) / 2
    MergeSort(A, p, q)
    MergeSort(A, q + 1, r)
    merge(A, p, q, r)

merge(U[p..q], V[q+1..r], T[p..r]):
    i ← p;  j ← q + 1
    U[q+1] ← ∞;  V[r+1] ← ∞
    for k ← p to r do
        if U[i] < V[j]
            T[k] ← U[i];  i ← i + 1
        else
            T[k] ← V[j];  j ← j + 1
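A minimal runnable Python sketch of the same procedure; it copies the two halves into temporary lists instead of using ∞ sentinels, which is an implementation choice for illustration:

    def merge_sort(a, p, r):
        # Sort a[p..r] (inclusive) in place.
        if p >= r:
            return
        q = (p + r) // 2
        merge_sort(a, p, q)        # sort left half
        merge_sort(a, q + 1, r)    # sort right half
        merge(a, p, q, r)          # combine the two sorted halves

    def merge(a, p, q, r):
        # Merge the sorted runs a[p..q] and a[q+1..r].
        u = a[p:q + 1]
        v = a[q + 1:r + 1]
        i = j = 0
        for k in range(p, r + 1):
            if j >= len(v) or (i < len(u) and u[i] < v[j]):
                a[k] = u[i]
                i += 1
            else:
                a[k] = v[j]
                j += 1

    data = [724, 521, 2, 98, 529, 31, 189, 451]
    merge_sort(data, 0, len(data) - 1)
    print(data)  # [2, 31, 98, 189, 451, 521, 529, 724]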
Merge Sort - Analysis
Let t(n) be the time taken by this algorithm to sort an array of n elements.
Separating the array into two halves takes linear time; merging the two sorted halves also takes linear time.
The recurrence is therefore t(n) = 2·t(n/2) + Θ(n), where a = 2, b = 2 and k = 1.
The general divide-and-conquer recurrence t(n) = a·t(n/b) + Θ(n^k) solves to:
    t(n) ∈ Θ(n^k)            if a < b^k
    t(n) ∈ Θ(n^k · log n)    if a = b^k
    t(n) ∈ Θ(n^(log_b a))    if a > b^k
For merge sort a = b^k (2 = 2¹), so t(n) ∈ Θ(n log n).
Quick Sort
Quick Sort – Example(Inplace)
Quick sort chooses the first element as a pivot element, a lower bound is the first
index and an upper bound is the last index.
The array is then partitioned on either side of the pivot.
Elements are moved so that, those greater than the pivot are shifted to its right
whereas the others are shifted to its left.
Each Partition is internally sorted recursively.
Pivot element = 42 (the first element), LB = 0 (lower bound), UB = 9 (upper bound).

Index:  0   1   2   3   4   5   6   7   8   9
Value: 42  23  74  11  65  58  94  36  99  87
Quick Sort - Example
Procedure pivot(T[i..j]; var l)
    p ← T[i]
    k ← i;  l ← j + 1
    repeat k ← k + 1 until T[k] > p or k ≥ j
    repeat l ← l - 1 until T[l] ≤ p
    while k < l do
        swap T[k] and T[l]
        repeat k ← k + 1 until T[k] > p
        repeat l ← l - 1 until T[l] ≤ p
    swap T[i] and T[l]

Partitioning the whole array with LB = 0, UB = 9, p = 42, k = 0, l = 10:
    42 23 74 11 65 58 94 36 99 87
k stops at 74 and l stops at 36; since k < l, they are swapped:
    42 23 36 11 65 58 94 74 99 87
k then stops at 65 and l stops at 11; now k ≥ l, so the pivot T[0] is swapped with T[l]:
    11 23 36 42 65 58 94 74 99 87
The pivot 42 is now in its final position (index 3), with smaller elements on its left and larger ones on its right.
The left partition [11 23 36] (LB = 0, UB = 2) is sorted recursively: with pivot 11, k stops at 23 and l comes back to the pivot itself, so 11 stays in place; the remaining partition [23 36] is handled the same way and 23 also stays in place, leaving:
    11 23 36 42 65 58 94 74 99 87
The right partition [65 58 94 74 99 87] (LB = 4, UB = 9) is partitioned with pivot 65: k stops at 94 and l stops at 58, so k ≥ l and the pivot T[4] is simply swapped with T[5]:
    11 23 36 42 58 65 94 74 99 87
The pivot 65 is now in its final position (index 5).
The remaining partition [94 74 99 87] (LB = 6, UB = 9) is partitioned with pivot 94: 99 and 87 are swapped, then the pivot T[6] is swapped with T[8], giving 87 74 94 99 with 94 in its final position. The partition [87 74] is handled the same way with pivot 87, giving 74 87. Putting all the partitions together, the array is fully sorted:
    11 23 36 42 58 65 74 87 94 99
Quick Sort - Examples
Sort the following arrays in ascending order using the quick sort algorithm.
1. 5, 3, 8, 9, 1, 7, 0, 2, 6, 4
2. 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9 (HW)
3. 9, 7, 5, 11, 12, 2, 14, 3, 10, 6 (HW)
Quick Sort - Algorithm
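The full algorithm simply combines the pivot procedure from the example above with recursion on the two partitions. A minimal runnable Python sketch (the names pivot and quick_sort are assumptions, not from the slides):

    def pivot(t, i, j):
        # Partition t[i..j] around p = t[i]; return the final index of the pivot.
        p = t[i]
        k, l = i, j + 1
        while True:                      # repeat k <- k+1 until t[k] > p or k >= j
            k += 1
            if k >= j or t[k] > p:
                break
        while True:                      # repeat l <- l-1 until t[l] <= p
            l -= 1
            if t[l] <= p:
                break
        while k < l:
            t[k], t[l] = t[l], t[k]      # swap the out-of-place pair
            while True:
                k += 1
                if t[k] > p:
                    break
            while True:
                l -= 1
                if t[l] <= p:
                    break
        t[i], t[l] = t[l], t[i]          # put the pivot into its final position
        return l

    def quick_sort(t, i, j):
        # Sort t[i..j] (inclusive) in place.
        if i >= j:
            return
        l = pivot(t, i, j)
        quick_sort(t, i, l - 1)
        quick_sort(t, l + 1, j)

    data = [42, 23, 74, 11, 65, 58, 94, 36, 99, 87]
    quick_sort(data, 0, len(data) - 1)
    print(data)  # [11, 23, 36, 42, 58, 65, 74, 87, 94, 99]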
Quick Sort - Analysis
1. Worst Case
• Running time depends on which element is chosen as the key or pivot element.
• The worst-case behaviour of quick sort occurs when the partitioning produces one sub-array with n − 1 elements and the other with 0 elements.
• In this case, the recurrence will be
    T(n) = T(n − 1) + T(0) + Θ(n) = T(n − 1) + Θ(n),
  which solves to T(n) ∈ Θ(n²).
When does the worst case appear? When the elements are already sorted.
Quick Sort - Analysis
2. Best Case
• Occurs when partitioning produces two sub-problems, each of size n/2.
• Recurrence equation: T(n) = 2T(n/2) + Θ(n), which solves to T(n) ∈ Θ(n log n).
3. Average Case
• The average-case running time is much closer to the best case than to the worst case.
• If the partitioning algorithm always produces a 9-to-1 proportional split, the recurrence will be T(n) = T(9n/10) + T(n/10) + Θ(n), which still gives T(n) ∈ Θ(n log n).
Multiplying Large Integers
Multiplying Large Integers Problem
Multiplying two n-digit large integers using the divide and conquer method.
Example: Multiplication of 981 by 1234.
1. Convert both numbers to the same length and split each operand into two parts:
    0981    1234
2. We can write them as,
    w = 09,  x = 81,  y = 12,  z = 34
    0981 = 10²·w + x        1234 = 10²·y + z
Multiplying Large Integers Problem
Now, the required product can be computed as,
    981 × 1234 = (10²·w + x) × (10²·y + z) = 10⁴·(w·y) + 10²·(w·z + x·y) + x·z
The middle term (w·z + x·y) is the source of the additional multiplications that the method avoids computing directly.
Multiplying Large Integers Problem
Now we can compute the required product as follows:
    p = w·y = 09 × 12 = 108
    q = x·z = 81 × 34 = 2754
    r = (w + x) × (y + z) = 90 × 46 = 4140
Since r = w·y + (w·z + x·y) + x·z, the middle term is w·z + x·y = r − p − q.
    981 × 1234 = 10⁴·p + 10²·(r − p − q) + q
               = 1080000 + 127800 + 2754
               = 1210554
Only three half-length multiplications (p, q and r) are needed instead of four.
For binary numbers, split the n-bit operands the same way: a = aL aR and b = bL bR, where aL, aR, bL, bR are the left and right halves.
    p = w·y = aL × bL
    q = x·z = aR × bR
    r = (w + x) × (y + z) = (aL + aR) × (bL + bR) = w·y + (w·z + x·y) + x·z
    a·b = 2ⁿ·(aL·bL) + 2^(n/2)·(aL·bR + aR·bL) + aR·bR
    a × b = 2ⁿ·p + 2^(n/2)·(r − p − q) + q
Algorithm
Multiply(a, b, n)
    if n = 1 return a * b
    aL = the left half of a;   aR = the right half of a
    bL = the left half of b;   bR = the right half of b
    p = Multiply(aL, bL, n/2)
    q = Multiply(aR, bR, n/2)
    r = Multiply(aL + aR, bL + bR, n/2)
    return 2ⁿ·p + 2^(n/2)·(r − p − q) + q
The recurrence is T(n) = 3T(n/2) + O(n); solving it gives T(n) ∈ O(n^(log₂ 3)) ≈ O(n^1.59).
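A minimal runnable Python sketch of this algorithm for decimal integers (splitting with divmod by a power of 10 is an implementation choice for illustration, not from the slides):

    def multiply(a, b):
        # Divide-and-conquer multiplication of non-negative integers
        # using three half-length products instead of four.
        if a < 10 or b < 10:
            return a * b
        n = max(len(str(a)), len(str(b)))
        half = n // 2
        w, x = divmod(a, 10 ** half)   # a = w*10^half + x
        y, z = divmod(b, 10 ** half)   # b = y*10^half + z
        p = multiply(w, y)
        q = multiply(x, z)
        r = multiply(w + x, y + z)     # r - p - q = w*z + x*y
        return p * 10 ** (2 * half) + (r - p - q) * 10 ** half + q

    print(multiply(981, 1234))  # 1210554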
Multiplying Large Integers Problem
Example: Multiply with using divide & conquer method.
Solution using D&C
Strassen’s Algorithm for
Matrix Multiplication
Matrix Multiplication
Multiply the following two matrices and count how many scalar multiplications are required.
Matrix Multiplication
In general, A and B are two n × n matrices to be multiplied, and C = A × B.
    C[i, j] = Σ (k = 1 to n) A[i, k] · B[k, j]
Computing each entry in the product takes n multiplications, and there are n² entries, for a
total of O(n³) scalar multiplications.
Algorithm for simple Addition
Addition(A, B)        // A, B, C are n × n two-dimensional matrices
    n ← rows[A]
    Let C be the n × n matrix that will hold A + B.
    for i ← 0 to n − 1
        for j ← 0 to n − 1
            C[i][j] ← A[i][j] + B[i][j]
    return C
Algorithm for simple multiplication
Naive_square_multiply(A, B)        // A, B, C are n × n two-dimensional matrices
    n ← rows[A]
    Let C be the n × n matrix that will hold A * B.
    for i ← 0 to n − 1
        for j ← 0 to n − 1
            C[i][j] ← 0
            for k ← 0 to n − 1
                C[i][j] ← C[i][j] + A[i][k] * B[k][j]
    return C
Divide and Conquer Approach
Let us first assume that n is an exact power of 2 in each of the n x n matrices for A and B.
This simplifying assumption allows us to break a big n x n matrix into smaller blocks or
quadrants of size n/2 x n/2, while also ensuring that the dimension n/2 is an integer.
Split A into four quadrants a, b, c, d and B into four quadrants e, f, g, h, each of size n/2 × n/2. We then calculate the following four quadrants of the product recursively: ae + bg, af + bh, ce + dg and cf + dh, as shown in the sketch below.
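A minimal runnable Python sketch of this quadrant-based multiplication, assuming n is a power of 2 (the helper names and the list-of-lists representation are illustrative assumptions):

    def mat_add(X, Y):
        # Elementwise addition of two equally sized square matrices.
        return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

    def dc_multiply(A, B):
        # Divide-and-conquer multiplication of two n x n matrices, n a power of 2.
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        m = n // 2
        # Split A into quadrants a, b, c, d and B into quadrants e, f, g, h.
        a = [row[:m] for row in A[:m]]; b = [row[m:] for row in A[:m]]
        c = [row[:m] for row in A[m:]]; d = [row[m:] for row in A[m:]]
        e = [row[:m] for row in B[:m]]; f = [row[m:] for row in B[:m]]
        g = [row[:m] for row in B[m:]]; h = [row[m:] for row in B[m:]]
        # Eight recursive multiplications and four matrix additions.
        top_left  = mat_add(dc_multiply(a, e), dc_multiply(b, g))   # ae + bg
        top_right = mat_add(dc_multiply(a, f), dc_multiply(b, h))   # af + bh
        bot_left  = mat_add(dc_multiply(c, e), dc_multiply(d, g))   # ce + dg
        bot_right = mat_add(dc_multiply(c, f), dc_multiply(d, h))   # cf + dh
        # Reassemble the four quadrants of the product.
        return ([tl + tr for tl, tr in zip(top_left, top_right)] +
                [bl + br for bl, br in zip(bot_left, bot_right)])

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(dc_multiply(A, B))  # [[19, 22], [43, 50]]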
In the above method, we do 8 multiplications of matrices of size N/2 × N/2 and 4 additions.
Addition of two matrices takes O(N²) time. So the time complexity can be written as
    T(N) = 8T(N/2) + O(N²)
which solves to O(N³).
Motivation behind Strassen's Algorithm
Simple divide and conquer also leads to O(N³); can there be a better way?
In the above divide-and-conquer method, the main contributor to the high time complexity is the
8 recursive calls. The idea of Strassen's method is to reduce the number of recursive calls
to 7, because multiplication is a more costly operation than addition.
Addition and subtraction of two matrices take O(N²) time, so the time complexity can be
written as
    T(N) = 7T(N/2) + O(N²)
which solves to O(N^(log₂ 7)) ≈ O(N^2.81).
Strassen’s Algorithm for Matrix Multiplication
Consider the problem of multiplying two n × n matrices.
Strassen devised a better method which uses the same basic idea as the divide-and-conquer
multiplication of long integers.
The main idea is to save one multiplication on a small problem and then use recursion.
Strassen’s Algorithm for Matrix Multiplication
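The seven products can be written as follows (this is the usual textbook formulation of Strassen's construction; the names M1..M7 are not from the slides). For 2 × 2 blocks A = [a b; c d] and B = [e f; g h]:
    M1 = (a + d)·(e + h)
    M2 = (c + d)·e
    M3 = a·(f − h)
    M4 = d·(g − e)
    M5 = (a + b)·h
    M6 = (c − a)·(e + f)
    M7 = (b − d)·(g + h)
The four quadrants of C = A·B are then recovered using additions and subtractions only:
    ae + bg = M1 + M4 − M5 + M7
    af + bh = M3 + M5
    ce + dg = M2 + M4
    cf + dh = M1 − M2 + M3 + M6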
Strassen’s Algorithm - Analysis
It is therefore possible to multiply two 2 × 2 matrices using only seven scalar multiplications.
Let t(n) be the time needed to multiply two n × n matrices by recursive use of these equations.
    t(n) = 7t(n/2) + g(n),  where g(n) ∈ Θ(n²)
The general equation applies with a = 7, b = 2 and k = 2.
Since 7 > 2², the third case applies and t(n) ∈ Θ(n^(log₂ 7)).
Since log₂ 7 ≈ 2.81, it is possible to multiply two n × n matrices in a time in O(n^2.81).
Exponentiation
Exponentiation - Sequential
Let a and n be two integers. We wish to compute the exponentiation x = aⁿ.
Algorithm using Sequential Approach:
function exposeq(a, n)
r ← a
for i ← 1 to n - 1 do
r ← a * r
return r
This algorithm takes a time in Θ(n) since the instruction r ← a * r is executed exactly n − 1 times, provided
the multiplications are counted as elementary operations.
Exponentiation – D & C
The efficient exponentiation algorithm is based on the simple observation that for an
even n, aⁿ = a^(n/2) · a^(n/2).
The case of odd n is also easy, since aⁿ = a · a^(n−1).
So we can compute aⁿ by doing only about log(n) squarings and no more than log(n) extra
multiplications, instead of n − 1 multiplications; this is a huge improvement for large n.
Suppose we want to compute aⁿ for some large n. In general, we can write it as,
    aⁿ = (a^(n/2))²    if n is even
    aⁿ = a · a^(n−1)   if n is odd
Exponentiation – D & C
Algorithm using Divide & Conquer Approach:
function expoDC(a, n)
    if n = 0 then return 1
    if n = 1 then return a
    if n is even then return [expoDC(a, n/2)]²
    else return a * expoDC(a, n - 1)
The number of multiplications performed by the algorithm is given by the recurrence
    T(n) = T(n/2) + O(1) for even n (with one extra multiplication when n is odd),
which gives T(n) ∈ O(log n).
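A minimal runnable Python sketch of both the sequential and the divide-and-conquer versions (function names are assumptions):

    def expo_seq(a, n):
        # Sequential method: n - 1 multiplications.
        r = a
        for _ in range(n - 1):
            r = a * r
        return r

    def expo_dc(a, n):
        # Divide-and-conquer method: O(log n) multiplications.
        if n == 0:
            return 1
        if n == 1:
            return a
        if n % 2 == 0:
            half = expo_dc(a, n // 2)
            return half * half          # one squaring for even n
        return a * expo_dc(a, n - 1)    # one extra multiplication for odd n

    print(expo_seq(2, 12), expo_dc(2, 12))  # 4096 4096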
Exponentiation – D & C
The recursive calls halve the exponent at each level:
    aⁿ → a^(n/2) → a^(n/4) → a^(n/8) → … → a¹
If N = 2^x, the exponent reaches 1 after x = log N halvings, so the algorithm performs O(log N) multiplications.
For example, with N = 2¹² and a machine that performs 2¹⁰ multiplications per second, the sequential method needs roughly 2¹² multiplications, i.e. about 2¹²/2¹⁰ = 4 seconds, whereas the D&C method needs only about 12 multiplications.
Thank You!