Divide and Conquer

The document discusses the divide and conquer algorithm and provides examples of how it can be applied. It begins by defining divide and conquer as an approach that divides a problem into smaller subproblems, solves those subproblems recursively, and then combines the solutions to solve the original problem. It then provides more details on the steps of divide, conquer, and combine. Examples provided where divide and conquer can be applied include binary search, finding minimum and maximum values, quicksort, and mergesort. Details are given on how binary search, finding min/max, and quicksort specifically use the divide and conquer approach.


 Divide and Conquer (DAC):

In this approach, a problem is divided into sub-problems, and the solutions of these sub-problems
are combined into a solution for the original problem. It has three steps:
i. Divide the problem into a number of subproblems that are smaller instances of the
same problem.
ii. Conquer the subproblems by solving them recursively. If the subproblem sizes are
small enough, simply solve them directly.
iii. Combine the solutions to the subproblems into the solution for the original problem.

                     Problem
                        |   Divide
   Sub-Problem    Sub-Problem    Sub-Problem
                        |   Conquer
   Sub-Solution   Sub-Solution   Sub-Solution
                        |   Combine
                     Solution

Algorithm of Divide and Conquer


(Given a problem P of size n = 2^k)
Algorithm DandC (P) {
    if n is small, solve P directly
    else {
        divide P into sub-problems P1, P2, ..., Pk
        apply DandC to each of these sub-problems
        combine all the solutions
    }
}
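The divide, conquer, and combine steps of this template can be sketched as runnable Python, using merge sort (one of the applications listed later in these notes) as the concrete instance; the function name `merge_sort` is illustrative:

```python
def merge_sort(a):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    # Divide: split the problem into two subproblems of half the size.
    mid = len(a) // 2
    # Conquer: solve each subproblem recursively.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

For example, `merge_sort([9, 6, 4, 7, 10, 14, 8, 11])` returns the list sorted in ascending order.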
Note: The complexity of many divide-and-conquer algorithms is given by recurrences
of the form

T(n) = T(1)              , n = 1
T(n) = a·T(n/b) + f(n)   , n > 1
Derivation of Recurrence Relation: Let P be a problem, which is divided into a set of sub-
problems P1, P2 and so on. Let the size of the problem be n, which is equal to 2^k.
i.e. size of problem: n = 2^k, so the recursion tree has height k.

            P                       As n = 2^k
        P1      P2                  n/2 = 2^k / 2 = 2^(k-1)
     P3   P4  P5   P6               n/4 = 2^k / 4 = 2^k / 2^2 = 2^(k-2)
    P7 P8  ...                      Similarly, n/8 = 2^(k-3), and so on.

Divide until the problem size becomes 1 (2^0), at which point it cannot be divided further.
To determine the time complexity in terms of a recurrence relation:
Let P be a problem of size n, where T(n) represents the time complexity of solving P.
If P is divided into two sub-problems of size n/2 each, then
T(n) = T(n/2) + T(n/2) + f(n)
     = 2·T(n/2) + f(n)
where f(n) is the time required to divide the problem and combine the sub-solutions.
If the problem is divided into three sub-problems (P1, P2, P3) of size n/3 each, then T(n) = 3·T(n/3) + f(n).
Therefore, in a general way, T(n) = a·T(n/b) + f(n) when n > 1, where a is the number of sub-problems and n/b is the size of each sub-problem.

The time complexity of a divide-and-conquer algorithm depends on a, b, and f(n); for example, for binary search (a = 1, b = 2, f(n) = O(1)) the recurrence solves to O(log n).

Applications of Divide and Conquer


1. Binary Search
2. Find the minimum and maximum
3. Quick Sort
4. Merge Sort
5. Strassen’s Matrix Multiplication
 Binary Search:
Binary Search is a searching algorithm used on a sorted array; it works by repeatedly dividing the
search interval in half. The idea of binary search is to use the information that the array
is sorted to reduce the time complexity to O(log n).

Condition for when to apply Binary Search in a Data Structure


The data must be sorted (in ascending order).

Steps of Binary Search


1. Find the MID position of the current search segment.
2. Compare ITEM with the element at MID to determine whether ITEM lies on the
right-hand side or the left-hand side.
3. Narrow the segment to that side, find the MID again, and repeat until ITEM is
found or the segment is empty.
Algorithm of Binary Search
In binary search, data is in a sorted array with lower bound (LB) and upper bound
(UB). The variables BEG, END, and MID denote the beginning, ending, and middle
location of a segment of data. The algorithm of binary search is as follows:
Step 1: Set BEG = LB, END = UB, MID = (BEG + END)/2
Step 2: Repeat steps 3 & 4 while BEG ≤ END and DATA [MID] ≠ ITEM
Step 3: If ITEM < DATA [MID]
Set END = MID-1
else
if ITEM > DATA [MID]
Set BEG = MID+1
[end of if structure]
Step 4: MID = (BEG + END)/2
[end of step 2 loop]
Step 5: If DATA [MID] = ITEM
Print SEARCH IS SUCCESSFUL
and Set LOC = MID
Step 6: else LOC = 0
Print SEARCH IS UNSUCCESSFUL
Step 7: [EXIT]
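The algorithm above can be sketched as an iterative Python function following the same BEG/END/MID scheme. As an assumption of this sketch, it returns the index LOC on success and -1 (rather than LOC = 0) when the search is unsuccessful, since 0 is a valid index in a 0-indexed array:

```python
def binary_search(data, item):
    # BEG and END bracket the current search segment (LB = 0, UB = len - 1).
    beg, end = 0, len(data) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if data[mid] == item:
            return mid          # search successful: LOC = MID
        elif item < data[mid]:
            end = mid - 1       # ITEM is in the left half
        else:
            beg = mid + 1       # ITEM is in the right half
    return -1                   # search unsuccessful
```

For example, `binary_search([5, 10, 15, 20, 25, 30], 10)` returns 1, the index of 10.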
Example:
Let us have a sorted array of elements like 5, 10, 15, 20, 25, 30. Suppose we want to search 10
out of these i.e. ITEM = 10.

5, 10, 15, 20, 25, 30

x[0] x[5]
BEG = 0 END = 5
ITEM that we want to search
So, MID = (BEG + END)/2
i.e. MID = (0 + 5)/2 = 2 (integer division)
So, DATA [MID] = DATA [2] = 15.
Here, ITEM (10) < DATA [MID] (15)
So, END = MID - 1 = 1 and BEG = 0
Now, MID = (0 + 1)/2 = 0 and DATA [0] = 5, so ITEM > DATA [MID] and BEG = MID + 1 = 1
Finally, MID = (1 + 1)/2 = 1 and DATA [1] = 10 = ITEM
This means SEARCH IS SUCCESSFUL with LOC = 1
The Complexity of the Binary Search

Worst Case : O(log n)

Best Case: O (1)

Average Case: O (log n)

Advantages of Binary Search


 Binary search is faster than linear search, especially for large arrays.
 It is simpler to implement than other searching algorithms with a similar time complexity,
such as interpolation search or exponential search.
 Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.

Drawbacks of Binary Search


 The array should be sorted.
 Binary search requires that the data structure being searched be stored in contiguous
memory locations.
 Binary search requires that the elements of the array be comparable, meaning that they
must be able to be ordered.
Applications of Binary Search
 Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the optimal
hyperparameters for a model.
 It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
 It can be used for searching a database.

 Finding the Minimum and Maximum using DAC:

One approach is linear search, but it requires more comparisons, so we find the minimum and
maximum using the Divide and Conquer technique.

Example: 9 6 4 7 10 14 8 11 (solved separately)

 Here, there are 8 elements, and we have to find the min and max numbers using DAC.
 Let i point to the first element (i.e. A[i] = 9)
and j point to the last element (i.e. A[j] = 11), where i and j are pointers to the
first and last elements.
 Now, find MID.
MID = (i + j)/2
 This divides the given list into smaller sub-lists (that is what we do in DAC).
 Repeat the above steps until each sub-list has only 2 elements, because one comparison
is enough to find the min and max of a 2-element list.
 Compare the two elements in each sub-list.
 Now the COMBINE step occurs: start comparing the results such that the min
element of one sub-list is compared with the min of the other, AND
 the max element is compared with the max.
 Repeat the same process until you get the min and max elements of the complete list.
 This is how DAC works in finding the minimum and maximum elements of a given
list of numbers.

Algorithm of finding Min and Max. using DAC:

DAC_Max_Min (A, i, j, max, min)


{
    int mid;
    if (i == j)                      // one element: it is both max and min
        max = min = A[i];
    else if (i == j - 1)             // two elements: one comparison is enough
    {
        if (A[i] < A[j])
        {
            max = A[j]; min = A[i];
        }
        else
        {
            max = A[i]; min = A[j];
        }
    }
    else
    {
        mid = (i + j) / 2;
        DAC_Max_Min (A, i, mid, max1, min1);
        DAC_Max_Min (A, mid + 1, j, max2, min2);

        // Combine the results of the two halves
        if (max1 < max2)
            max = max2;
        else
            max = max1;

        if (min1 < min2)
            min = min1;
        else
            min = min2;
    }
}

Time Complexity of finding Min and Max. = O(n)
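The recursive algorithm above can be sketched in Python. Since Python cannot return results through reference parameters like max and min, this sketch returns a (min, max) tuple instead; the function name is illustrative:

```python
def dac_max_min(a, i, j):
    # Base case: one element -> it is both the min and the max.
    if i == j:
        return a[i], a[i]
    # Base case: two elements -> one comparison decides min and max.
    if i == j - 1:
        return (a[i], a[j]) if a[i] < a[j] else (a[j], a[i])
    # Divide at the midpoint, conquer each half recursively.
    mid = (i + j) // 2
    min1, max1 = dac_max_min(a, i, mid)
    min2, max2 = dac_max_min(a, mid + 1, j)
    # Combine: overall min/max of the two halves.
    return min(min1, min2), max(max1, max2)
```

For the example list 9 6 4 7 10 14 8 11, `dac_max_min(a, 0, 7)` returns (4, 14).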

 Quick Sort:
 QuickSort is a sorting technique based on the concept of Divide and Conquer,
just like Merge Sort.
 But in Quick Sort, all the heavy lifting (major work) is done while dividing the array
into subarrays while in the case of Merge Sort, all the real work happens during the
merging of the arrays.
 It is also called the Partition-Exchange Sort because it divides the big problem
into sub-parts, swapping of elements takes place and it follows the DAC
approach.
 How does QuickSort work?

The key process in QuickSort is the partition() procedure. The goal of partitioning is to place
the pivot (any element can be chosen as the pivot) at its correct position in the
sorted array, putting all smaller elements to the left of the pivot and all greater
elements to the right of the pivot.
Partitioning is then done recursively on each side of the pivot after the pivot is placed at its
correct position, and this finally sorts the array.

 The algorithm divides the list into three main parts:

    Elements less than the Pivot | Pivot element (central element) | Elements greater than the Pivot

 Selection of Pivot element: Pivot element can be any element from the array. It
can be the first element, the last element or any random element.

Example: 35 50 15 25 80 20 90 45 (solved separately)


As the pivot element can be any element of the given array, here we take the first element
(35) as the pivot element, denoted by v. (Note: you can take any element as the pivot.)
Now, denote the next element (50) as P and the last element (45) of the array as Q, where P
and Q are pointers; P will move towards the RHS and Q will move towards the LHS.

P: It will be incremented by one at a time, moving towards the RHS.


It stops when it finds an element greater than the pivot element, i.e. if A[P] > v then stop P.

Q: It will be decremented by one at a time, moving towards the LHS.


It stops when it finds an element less than or equal to the pivot element, i.e. if A[Q] ≤ v then stop Q.

At the end of the array, we conceptually place a +∞ sentinel. If the array is 35 22 18 12 9 4 with
v = 35, then every element after the pivot is less than or equal to v, so P would keep moving
past the end of the array; the +∞ sentinel stops it. For Q, however, we do not need a -∞
sentinel: while moving towards the LHS, Q is looking for an element less than or equal to the
pivot element, and the pivot itself sits at the left end of the segment, so Q will always stop
there. This is the concept of how P and Q move.

If P and Q have not crossed each other, then swap the elements at P and Q.


If P and Q have crossed each other, then swap the pivot v with the element at Q.

For practice: 25 20 15 35 80 50 90 45 -sort it


5 3 8 1 4 6 2 7 -sort it

Step-by-Step Process of Partition-Exchange Sort


In the Quick Sort algorithm, partitioning of the list is performed using the following steps-

Step 1 - Consider the first element of the list as a pivot (i.e., the element at the first position in
the list).

Step 2 - Define two variables P and Q. Set P and Q to the first and last elements of the list
respectively.

Step 3 - Increment P until list[P] > pivot then stop.

Step 4 - Decrement Q until list[Q] < pivot then stop.

Step 5 - If P < Q then exchange list[P] and list[Q].

Step 6 - Repeat steps 3,4 & 5 until P > Q.

Step 7 - Exchange the pivot element with a list[Q] element.


Algorithm of Quick Sort
Partition (A, start, end)        // the main part of the Quick Sort algorithm
{
    v = A[start];                // pivot element
    i = start;
    for (j = start + 1; j ≤ end; j++)
    {
        if (A[j] ≤ v)
        {
            i = i + 1;
            swap (A[i], A[j]);
        }
    }
    swap (A[i], A[start]);       // place the pivot at its final position
    return i;
}
(Here, A is the array of unsorted elements, start is the starting index of the array and end is
the last index of the array. i and j are the looping variables and v is the pivot element. Note
that this Partition scans with a single forward pointer j, while the P/Q description above scans
from both ends; both place the pivot at its correct position.)

QuickSort (A, start, end)


{
    if (start < end)
    {
        m = Partition (A, start, end);
        QuickSort (A, start, m - 1);
        QuickSort (A, m + 1, end);
    }
}
(m is the index position of the pivot element (v) after the first pass, when the pivot element
reaches its correct position)
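The Partition and QuickSort pseudocode above can be sketched as runnable Python, using the same single-forward-scan scheme with the first element of each segment as the pivot; names are illustrative:

```python
def partition(a, start, end):
    v = a[start]                 # pivot: first element of the segment
    i = start                    # boundary of the "<= pivot" region
    for j in range(start + 1, end + 1):
        if a[j] <= v:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i], a[start] = a[start], a[i]   # place the pivot at its final index
    return i                          # m: the pivot's correct position

def quick_sort(a, start, end):
    if start < end:
        m = partition(a, start, end)
        quick_sort(a, start, m - 1)   # sort elements left of the pivot
        quick_sort(a, m + 1, end)     # sort elements right of the pivot
```

Usage: `quick_sort(a, 0, len(a) - 1)` sorts the list a in place.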
Time Complexity of the Quick Sort Algorithm
Worst Case: O(n²)
In quick sort, the worst case occurs when the pivot element is always the greatest or smallest
element of its segment. For example, if the pivot is always the first or last element of the
array, the worst case occurs when the given array is already sorted in ascending or descending
order. The worst-case time complexity of quicksort is O(n²).

Best Case: O (n log n)


In Quicksort, the best case occurs when the pivot element is the middle element or near the
middle element. The best-case time complexity of quicksort is O(n log n).

Average Case: O (n log n)


It occurs when the array elements are in jumbled order, neither properly ascending nor
properly descending. The average-case time complexity of quicksort is O(n log n).

Advantages of Quick Sort


 It is a divide-and-conquer algorithm, which breaks the sorting problem into smaller sub-problems that are easier to solve.
 It is efficient on large data sets.
 It has a low overhead, as it only requires a small amount of memory to function.

Disadvantages of Quick Sort


 It has a worst-case time complexity of O(n²), which occurs when the pivot is chosen
poorly.
 It is not a good choice for small data sets.
 It is not a stable sort: if two elements have the same key, their relative order may not
be preserved in the sorted output, because quick sort swaps elements according to their
relation to the pivot without considering their original positions.
