C34_EXP2_AOA

The document outlines an experiment focused on implementing Merge Sort and Binary Search using the Divide and Conquer approach, detailing the aim, prerequisites, outcomes, and theoretical background. It includes an algorithm for Merge Sort, its time complexity analysis, and a practical coding example of Binary Search. Additionally, it discusses observations, learning outcomes, and answers to questions regarding the time complexity of Merge Sort.

PART A

(PART A: TO BE REFERRED BY STUDENTS)

Experiment No.02
A.1 Aim:
Write a program to implement Merge sort / Binary Search using Divide and Conquer
Approach and analyze its complexity.

A.2 Prerequisite: -

A.3 Outcome:
After successful completion of this experiment, students will be able to analyze the time
complexity of various classic problems.

A.4 Theory:

Merge sort is based on the divide-and-conquer paradigm. Its worst-case running time has a
lower order of growth than insertion sort. Since we are dealing with subproblems, we state
each subproblem as sorting a subarray A[p .. r]. Initially, p = 1 and r = n, but these values
change as we recurse through subproblems.

To sort A[p .. r]:

1. Divide Step

If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two subarrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].

2. Conquer Step

Conquer by recursively sorting the two subarrays A[p .. q] and A[q + 1 .. r].

3. Combine Step

Combine the elements back in A[p .. r] by merging the two sorted subarrays A[p .. q] and
A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure
MERGE (A, p, q, r).

Algorithm:

MERGE (A, p, q, r)
    n1 ← q − p + 1
    n2 ← r − q
    Create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
    FOR i ← 1 TO n1
        DO L[i] ← A[p + i − 1]
    FOR j ← 1 TO n2
        DO R[j] ← A[q + j]
    L[n1 + 1] ← ∞
    R[n2 + 1] ← ∞
    i ← 1
    j ← 1
    FOR k ← p TO r
        DO IF L[i] ≤ R[j]
               THEN A[k] ← L[i]
                    i ← i + 1
               ELSE A[k] ← R[j]
                    j ← j + 1

Time Complexity:

In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If
the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2)
+ n follows from the definition of the algorithm (apply the algorithm to two lists of half the
size of the original list and add the n steps taken to merge the resulting two lists). The closed
form follows from the master theorem.
In the worst case, the number of comparisons merge sort makes is equal to or slightly
smaller than n⌈lg n⌉ − 2^⌈lg n⌉ + 1, which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).
Time complexity = O(n log n)
PART B
(PART B: TO BE COMPLETED BY STUDENTS)
(Students must submit the soft copy as per the following segments within two hours of the practical.
The soft copy must be uploaded on Blackboard or emailed to the concerned lab in-charge
faculty at the end of the practical in case there is no Blackboard access available.)

Roll No.: C34 Name: Shrinath Babar

Class: SEC Batch: C2

Date of Experiment: Date of Submission

Grade:

B.1 Software Code written by student:


#include <stdio.h>

int binarySearch(int arr[], int low, int high, int target) {
    if (low > high) {
        return -1;                      /* search space exhausted: not found */
    }

    int mid = low + (high - low) / 2;   /* avoids overflow of (low + high) / 2 */

    if (arr[mid] == target) {
        return mid;
    } else if (arr[mid] > target) {
        return binarySearch(arr, low, mid - 1, target);   /* search left half */
    } else {
        return binarySearch(arr, mid + 1, high, target);  /* search right half */
    }
}

int main() {
    int arr[] = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19};
    int size = sizeof(arr) / sizeof(arr[0]);
    int target, result;

    printf("Enter the number to search: ");
    scanf("%d", &target);

    result = binarySearch(arr, 0, size - 1, target);

    if (result == -1) {
        printf("Element not found in the array.\n");
    } else {
        printf("Element found at index %d.\n", result);
    }

    return 0;
}

B.2 Input and Output:


B.3 Observations and learning:
Observations:

• The binary search algorithm efficiently finds the target element by repeatedly halving the
search space.
• It performs recursive calls, narrowing the search range based on comparisons with the
middle element.
• The time complexity is O(log n), and space complexity is O(log n) due to recursion.
• The algorithm handles edge cases such as target not found or empty arrays correctly.

Learning:

• Divide and Conquer: Binary search reduces the problem size exponentially, making it
efficient for large datasets.
• Efficiency: The algorithm’s O(log n) time complexity is much faster than linear search
(O(n)).
• Recursive Nature: Binary search uses recursion to divide the problem, with O(log n)
space complexity from the call stack.
• It only works on sorted arrays, and unsorted data requires sorting before using binary
search.
B.4 Conclusion:
• Binary search is an efficient algorithm with O(log n) time complexity for searching in
sorted arrays.
• It uses the divide-and-conquer approach and is highly effective for large datasets.
• The algorithm assumes sorted data and is optimal for such use cases.

B.5 Question of Curiosity


Q1: Derive time complexity of merge sort?

Merge Sort is a divide-and-conquer algorithm. It works by recursively splitting the array into two
halves until each subarray contains a single element (which is trivially sorted), and then merging
the subarrays back together in sorted order.

• Divide Step: The array is divided into two halves, which takes constant time O(1) at each
level of recursion.
• Conquer Step: Merging two sorted halves takes linear time O(n) where n is the size of
the array being merged.

Since the array is divided into two halves at each step, the depth of the recursion tree is log₂(n).
At each level, we perform O(n) work to merge the subarrays. Therefore, the total time
complexity can be derived as:

• Time Complexity: The number of levels in the recursion tree is log₂(n), and at each level,
merging takes O(n) time. So, the total time complexity is:

T(n) = O(n log n)
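The same result follows by unrolling the recurrence T(n) = 2T(n/2) + n directly (a standard expansion, shown here for n a power of 2):

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n \\
     &= 4\,T(n/4) + 2n \\
     &= \cdots \\
     &= 2^{k}\,T\!\left(n/2^{k}\right) + kn .
\end{aligned}
```

Setting 2^k = n (i.e. k = log₂ n) gives T(n) = n·T(1) + n log₂ n = O(n log n).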

Q2: What is worst case and best case time complexity of merge sort?
• Best Case Time Complexity: Merge sort always divides the array into two halves and
merges them. Even if the array is already sorted, merge sort will still perform the same
number of recursive calls and merge operations. Therefore, the best case time complexity
is O(n log n).
• Worst Case Time Complexity: The worst case happens when the array is in reverse
order or unsorted. However, merge sort still divides the array and performs the merge step
in the same way, irrespective of the input arrangement. Therefore, the worst case time
complexity is also O(n log n).

Thus, both the best case and worst case time complexity of merge sort are O(n log n).

Q3: How many comparisons are done in merge sort?

The number of comparisons in merge sort depends on the number of elements in the array and
the number of recursive calls made during the merge process.

• During the merge step, each pair of elements from two subarrays is compared.
• At each level of recursion, n comparisons are made to merge two subarrays into a sorted
one, where n is the number of elements in the current subarray.

Since the recursion tree has log₂(n) levels, the total number of comparisons is proportional to the
total number of elements at each level. Therefore, the number of comparisons performed by
merge sort is O(n log n).

Q4: Can we say merge sort works best for large n? Yes or no? Reason?

Yes, merge sort works very efficiently for large n due to its O(n log n) time complexity. It
consistently performs well even for large arrays, and its time complexity does not degrade as
much as algorithms like bubble sort or insertion sort (which have O(n²) time complexity).

• Logarithmic growth: The log n factor means that as n increases, the number of recursive
levels increases slowly compared to algorithms with quadratic complexity.
• Stable performance: Unlike algorithms like quicksort, merge sort guarantees O(n log n)
performance in all cases (best, worst, and average).
• Parallelizable: Merge sort can be parallelized, making it even faster for large datasets
when executed on multi-core processors.
