
Conference Paper · May 2016
DOI: 10.1109/ICACCCT.2016.7831730


2016 International Conference on Advanced Communication Control and Computing Technologies (ICACCCT)

Merge Sort Enhanced In Place Sorting Algorithm

Vignesh R and Tribikram Pradhan
Department of Information and Communication Technology (ICT), Manipal Institute of Technology,
Manipal University, Manipal – 576 014, Karnataka, India
[email protected], [email protected]

Abstract—This paper introduces a new sorting algorithm which sorts the elements of an array In Place. The algorithm has O(n) best case Time Complexity and O(n log n) average and worst case Time Complexity. We achieve our goal using Recursive Partitioning combined with In Place merging to sort a given array. A comparison is made between this idea and other popular implementations. We finally draw a conclusion and observe the cases where this algorithm outperforms other sorting algorithms. We also look at its shortcomings and list the scope for future improvements that could be made.

Keywords—Time Complexity, In Place, Recursive Partitioning

I. INTRODUCTION

In mathematics and computer science, the process of arranging similar elements in a definite order is known as sorting. Sorting is not a new term in computing. It finds its significance in various day to day applications and forms the backbone of computational problem solving. From complex search engine algorithms to stock markets, sorting has an impeccable presence in this modern era of information technology. Efficient sorting also leads to the optimization of many other complex problems. Algorithms related to sorting have always attracted a great deal of Computer Scientists and Mathematicians. Due to the simplicity of the problem and the need for solving it more systematically, more and more sorting algorithms are being devised to suit the purpose.

There are many factors on which the performance of a sorting algorithm depends, varying from code complexity to effective memory usage. No single algorithm covers all aspects of efficiency at once. Hence, we use different algorithms under different constraints.

When we look at developing a new algorithm, it is important for us to understand how long the algorithm might take to run. It is known that the time taken by any algorithm to execute depends on the size of the input data. In order to analyze the efficiency of an algorithm, we try to find a relationship between its running time and the amount of data given.

Another factor to take into consideration is the space used up by the code with respect to the input. Algorithms that need only a constant amount of extra space are called In Place. They are generally preferred over algorithms that take extra memory space for their execution.

In this paper, we introduce a new algorithm which uses the concept of divide and conquer to sort the array recursively using a bottom up approach. Instead of using an external array to merge the two sorted sub arrays, we use multiple pivots to keep track of the minimum element of both sub arrays and sort them In Place.

The rest of the paper is organized as follows. Section II discusses the various references used in making this paper. Section III describes the basic working idea behind this algorithm. Section IV contains the pseudo code required for the implementation of this algorithm. In Section V, we do a Case Study on the merging process over an array. In Section VI, we derive the time and space complexities of our code. In Section VII, we do an experimental analysis of this algorithm on arrays of varying sizes. In Section VIII, we draw an asymptotic conclusion based on Sections VI and VII. We finally list the scope for future improvements and conclude the paper in Section IX.

II. LITERATURE SURVEY

You Yang, Ping Yu and Yan Gan [2], in the year 2011, made a comparison between the 5 major types of sorting algorithms. They came to the conclusion that Insertion or Selection sort performs well for a small range of elements. It was also noted that Bubble or Insertion sort should be preferred for an ordered set of elements. Finally, for large random inputs, Quick or Merge sort outperforms the other sorting algorithms.

Jyrki Katajainen, Tomi Pasanen and Jukka Teuhola [4], in the year 1996, explained the uses and performance analysis of an In Place Merge Sort algorithm. Initially, a straightforward variant was applied with O(n log2 n) + O(n) comparisons and 3(n log2 n) + O(n) moves. Later, a more advanced variant was introduced which required at most (n log2 n) + O(n) comparisons and (n log2 n) moves, for any fixed array of size 'n'.

Antonios Symvonis [6], in 1994, showed the stable merging of two arrays of sizes 'm' and 'n', where m < n, with O(m + n) assignments, O(m log(n/m + 1)) comparisons and a constant amount of additional space. He also mentioned the possibility of an In Place merging without the use of an internal buffer.


Wang Xiang [7], in the year 2011, presented a brief analysis of the performance of the Quick Sort algorithm. That paper discusses the Time Complexity of Quick Sort and makes a comparison between the improved Bubble Sort and Quick Sort by analysing the first order derivative of the function found to co-relate Quick Sort with other sorting algorithms.

Shrinu Kushagra, Alejandro Lopez-Ortiz and J. Ian Munro [8], in 2013, presented a new approach which used multiple pivots in order to sort elements. They performed an experimental study and also provided an analysis of the cache behavior of these algorithms. They proposed a 3 pivot mechanism for sorting and improved the performance by 7-8%.

Nadir Hossain, Md. Golam Rabiul Alma, Md. Amiruzzaman and S. M. Moinul Quadir [9], in the year 2004, came up with an algorithm which was more efficient than the traditional Merge Sort algorithm. This technique used the divide and conquer method to divide the data until two elements are present in each group, instead of a single element as in the standard Merge Sort. It reduces the number of recursive calls and the subdivision of the problem, hence increasing the overall efficiency of the algorithm.

Guigang Zheng, Shaohua Teng, Wei Zhang and Xiufen Fu [13], in 2009, presented an enhanced method based on indexing and its corresponding parallel algorithm. The experiment demonstrated that the execution time of the indexing based sorting algorithm was less than that of other sorting algorithms. On the basis of an index table and parallel computing, it was shown that every pair of sub-merging sequences of the Merge Sort algorithm could be sorted on a single processor computer. This saved waiting and disposal time and hence gave better efficiency than the original Merge Sort algorithm.

Bing-Chao Huang and Michael A. Langston [14], in 1987, proposed a practical linear-time approach for merging two sorted arrays using a fixed amount of additional space.

Rohit Yadav, Kratika Varshney and Nitin Verma [20], in the year 2013, discussed the run time complexities of the recursive and non recursive approaches of the merge sort algorithm using a simple unit cost model. New implementations for the two way and four way bottom-up merge sort were given, the worst case complexities of which were shown to be bounded by 5.5n log2 n + O(n) and 3.25n log2 n + O(n), respectively.

III. METHODOLOGY

In this section, we lay emphasis on the idea behind the working of this algorithm. The proposed algorithm solves our problem in two steps, the strategies behind which are stated below.

3.1 DIVIDE AND CONQUER

We use the Divide and Conquer strategy to split the given array into individual elements. Starting from individual elements, we sort the array using a Bottom Up approach, keeping track of the minimum and maximum values of the sub arrays at all times. The technique used for splitting the array is similar to that of a standard Merge Sort, where we recursively partition the array from start to mid, and from mid to last, after which we call the sort function to sort that particular sub array.

3.2 PIVOT BASED MERGING

Fig. 1. A recursive algorithm to split and sort array elements

This is the part where this algorithm differs from a standard Merge Sort. Instead of merging the two sorted sub arrays into a different array, we use multiple pivots to sort them In Place and save the extra space consumed. Our function prototype to sort the array looks like this:

Procedure sort (int *ar, int i, int j)
*ar = pointer to the array
i = starting point of the first sub array
j = ending point of the second sub array

We use 4 pivots 'a', 'x', 'y' and 'b' in the code to accomplish our task. 'a' and 'b' initially mark the starting points of the two sorted sub arrays respectively. As a result, 'a' is initialized to 'i', and 'b' is obtained by dividing the sum of 'i' and 'j' by two and incrementing it. 'x' is the point below which our final array is sorted and is initialized to 'i'. 'y' is an intermediate pivot, which marks the bound for pivot 'a' and is initialized to 'b'. All in all, our function is targeted at sorting the main array 'ar' from position 'i' to 'j', given that the elements from 'i' to 'b-1' and from 'b' to 'j' are already sorted.

The variable 'a' is used for keeping track of the minimum value in the first sub array that has not yet been accessed (for most of the time, barring a few passes). Similarly, 'b' is used for keeping track of the minimum value in the second sub array that has not yet been accessed (again, for most of the time, barring a few passes).


As mentioned earlier, 'x' is the point before which our final array is sorted. So at any point, the array from 'i' to 'x-1' is sorted.

Finally, we have another variable called 'ctr', which is initialized to 'b' and kept equal to 'b' as long as the second sub array (from 'b' to 'j') is sorted. If not, we keep incrementing 'ctr' and swapping the element at 'ctr' with its next neighbour until that element is placed in its correct position and the second sub array becomes sorted once again. We then make 'ctr' equal to 'b' again.

Our logic revolves around comparing the current minimum values of the two sorted sub arrays (the values at 'a' and 'b') and swapping the smaller one with the value at 'x'. We then increment 'x' and reposition 'a' or 'b' accordingly.
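To make the roles of these cursors concrete, the following C sketch packages them into a struct and shows how they would be initialized for a merge of ar[i..(i+j)/2] with ar[(i+j)/2+1..j]. The struct and helper names are our own illustration, not part of the paper's code.

#include <stdio.h>

/* Illustrative sketch (not from the paper): the five cursors used by the
   In Place merge, initialized as described above. */
struct MergeState {
    int x;   /* everything before index x is in its final sorted position */
    int a;   /* current minimum of the first sub array                    */
    int b;   /* current minimum of the second sub array                   */
    int y;   /* intermediate pivot: lower bound for cursor a              */
    int ctr; /* repair cursor that keeps the second sub array sorted      */
};

/* Initialize the cursors for merging ar[i..(i+j)/2] with ar[(i+j)/2+1..j]. */
struct MergeState init_state(int i, int j) {
    struct MergeState s;
    s.x = i;               /* nothing is finally placed yet                */
    s.a = i;               /* first sub array starts at i                  */
    s.b = (i + j) / 2 + 1; /* second sub array starts just past the middle */
    s.y = s.b;
    s.ctr = s.b;
    return s;
}

int main(void) {
    struct MergeState s = init_state(0, 17); /* the 18-element case study  */
    printf("x=%d a=%d b=%d y=%d ctr=%d\n", s.x, s.a, s.b, s.y, s.ctr);
    return 0;
}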
IV. PSEUDO CODE

Given below is the working pseudo code for the proposed idea. We have two main functions to achieve our purpose: one to split the array, and the other to sort that particular sub array In Place.

Algorithm 1 SPLITTING ALGORITHM
1: Procedure split(int *ar, int i, int j):
2:   if j = i + 1 or j = i then
3:     if ar[i] > ar[j] then
4:       swap(ar[i], ar[j])
5:     end if
6:     return
7:   else
8:     mid ← (i + j)/2
9:     split(ar, i, mid)
10:    split(ar, mid + 1, j)
11:    if ar[mid + 1] < ar[mid] then
12:      sort(ar, i, j)
13:    end if
14:  end if

Algorithm 2 IN PLACE MERGING ALGORITHM
1: Procedure sort(int *ar, int i, int j):
2:   x ← i, a ← x, b ← (i + j)/2 + 1, y ← b, ctr ← b
3:   while x < b do
4:     if ctr < j and ar[ctr] > ar[ctr + 1] then
5:       swap(ar[ctr], ar[ctr + 1])
6:       ctr ← ctr + 1
7:     end if
8:     if ctr ≥ j or ar[ctr] <= ar[ctr + 1] then
9:       ctr ← b
10:    end if
11:    if b > j and a > x and b = a + 1 and a > y and ctr = b then
12:      b ← a, ctr ← b, a ← y
13:    else if b > j and a > x and ctr = b then
14:      b ← y, ctr ← b, a ← x
15:    else if b > j and ctr = b then
16:      break
17:    end if
18:    if a = x and x = y and ctr = b then
19:      y ← b
20:    else if x = y then
21:      y ← a
22:    end if
23:    if a > y and b > a + 1 and ar[b] < ar[a] and ctr = b then
24:      swap(ar[a], ar[b])
25:      swap(ar[a], ar[x])
26:      x ← x + 1, a ← a + 1
27:      if ar[ctr] > ar[ctr + 1] then
28:        swap(ar[ctr], ar[ctr + 1])
29:        ctr ← ctr + 1
30:      end if
31:    else if a = x and b = y and ar[b] < ar[a] then
32:      swap(ar[x], ar[b])
33:      a ← b, b ← b + 1, x ← x + 1
34:      if ctr = b − 1 then
35:        ctr ← ctr + 1
36:      end if
37:    else if a = x and b = y and ar[b] >= ar[a] then
38:      x ← x + 1, a ← a + 1
39:    else if b = a + 1 and ar[b] < ar[a] then
40:      swap(ar[b], ar[x])
41:      swap(ar[a], ar[b])
42:      b ← b + 1, x ← x + 1, a ← a + 1
43:      if ctr = b − 1 then
44:        ctr ← ctr + 1
45:      end if
46:    else if b = a + 1 and ar[b] >= ar[a] then
47:      swap(ar[x], ar[a])
48:      a ← y, x ← x + 1
49:    else if a = y and x < y and ctr != b + 1 and ar[b] < ar[a] then
50:      swap(ar[x], ar[b])
51:      b ← b + 1, x ← x + 1
52:      if ctr = b − 1 then
53:        ctr ← ctr + 1
54:      end if
55:    else if b > a + 1 and ar[b] >= ar[a] then
56:      swap(ar[x], ar[a])
57:      x ← x + 1, a ← a + 1
58:    end if
59:  end while
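Since Algorithm 2's branch structure is intricate, here is a compilable C companion for Algorithm 1 under stated assumptions: the merge step below is a deliberately simplified stand-in (insertion of the second run into the first, still O(1) extra space), not the paper's pivot scheme. It is a sketch for experimenting with the splitting logic only.

#include <stdio.h>

static void swap_ints(int *p, int *q) { int t = *p; *p = *q; *q = t; }

/* Simplified stand-in for Algorithm 2: merges the sorted runs ar[i..mid]
   and ar[mid+1..j] in place by insertion. Same interface and O(1) extra
   space, but NOT the paper's pivot-based merge. */
static void sort_inplace(int *ar, int i, int j) {
    int mid = (i + j) / 2;
    for (int b = mid + 1; b <= j; b++)
        for (int k = b; k > i && ar[k] < ar[k - 1]; k--)
            swap_ints(&ar[k], &ar[k - 1]);
}

/* Direct rendering of Algorithm 1 (split). */
static void split(int *ar, int i, int j) {
    if (j == i + 1 || j == i) {
        if (ar[i] > ar[j])
            swap_ints(&ar[i], &ar[j]);
        return;
    }
    int mid = (i + j) / 2;
    split(ar, i, mid);
    split(ar, mid + 1, j);
    if (ar[mid + 1] < ar[mid])   /* merge only when the runs overlap */
        sort_inplace(ar, i, j);
}

int main(void) {
    int ar[] = {5, -3, 9, 1, 0, -4, 7, 2};
    int n = sizeof ar / sizeof ar[0];
    split(ar, 0, n - 1);
    for (int k = 0; k < n; k++)
        printf("%d ", ar[k]);    /* prints -4 -3 0 1 2 5 7 9 */
    printf("\n");
    return 0;
}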
V. CASE STUDY

In this Case Study, we take a look at the merging process of the two sorted sub arrays. Let us consider an array of 18 elements for the sake of this example. The two sorted sub arrays run from 'i' to 'b-1' and from 'b' to 'j'.

INITIAL ARRAY: (array figure not reproduced)

PASS 1:

This is the first pass inside the 'while' loop of our 'sort' procedure. As mentioned, we compare the current minimum values of the two sub arrays (the value at 'a' (-4) and at 'b' (-3)). The value at 'a' is less than that at 'b'. Since 'a' is equal to 'x', we don't need to swap the values at 'a' and 'x'. Instead, we increment 'a' and 'x'. The first element (-4) is now in its correct position. 'a' holds the minimum value of the first sub array that has not yet been accessed (-1), and the array before 'x' is sorted.

if a = x and b = y and ar[b] >= ar[a] then
    x ← x + 1, a ← a + 1
end if

PASS 2:

In this pass, the value at 'b' is less than that at 'a'. So we swap the value at 'b' with the value at 'x'. 'x' is incremented accordingly. The second element (-3) is now in its correct sorted position. We reassign 'a' as 'b' and increment 'b'. 'b' now contains the current minimum value of the second sub array (-2) and 'a' keeps track of its previously pointed value (-1).

if a = x and b = y and ar[b] < ar[a] then
    swap(ar[x], ar[b])
    a ← b, b ← b + 1, x ← x + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if


PASS 3:

Our motto behind each pass is to assign 'a' and 'b' such that they contain the current minimum values of the two sub arrays (this condition holds for all but a few passes, discussed in Pass 7).

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 4:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 5:

if b = a + 1 and ar[b] >= ar[a] then
    swap(ar[x], ar[a])
    a ← y, x ← x + 1
end if

PASS 6:

if b > a + 1 and ar[b] >= ar[a] then
    swap(ar[x], ar[a])
    x ← x + 1, a ← a + 1
end if

FEW NOTES:

1. It is noticeable up till now that our aim has been to keep the elements from 'a' to 'b − 1' and the elements from 'y' to 'a − 1' sorted (for a >= y).

2. Another thing worth observing is that the elements from 'a' to 'b-1' are less than the elements from 'y' to 'a-1' (provided a >= y). This means that the first sub array can be accessed in sorted order from 'a' to 'b-1' and then from 'y' to 'a-1'; the sketch below makes these two checks explicit.
and then from ‘y’ to ‘a-1’.
PASS 7:

Till now, we had assumed the value of the variable 'ctr' to be equal to 'b'. This was only because ar[ctr] was less than or equal to ar[ctr+1], i.e. the array starting from 'b' was sorted. However, to preserve the two conditions stated in the notes after Pass 6, we make a swap that costs us the order of the two sub arrays. We resolve this by swapping ar[ctr] with ar[ctr+1] and incrementing 'ctr'. We keep doing this until ar[ctr] becomes less than or equal to ar[ctr+1]. After this, 'ctr' is once again made equal to 'b' (PASS 8).

if a > y and b > a + 1 and ar[b] < ar[a] and ctr = b then
    swap(ar[a], ar[b])
    swap(ar[a], ar[x])
    x ← x + 1, a ← a + 1
    if ar[ctr] > ar[ctr + 1] then
        swap(ar[ctr], ar[ctr + 1])
        ctr ← ctr + 1
    end if
end if

PASS 8:

if ctr ≥ j or ar[ctr] <= ar[ctr + 1] then
    ctr ← b
end if

PASS 9:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 10:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 11:

if x = y then
    y ← a
end if

PASS 12:

if b = a + 1 and ar[b] >= ar[a] then
    swap(ar[x], ar[a])
    a ← y, x ← x + 1
end if

PASS 13:

if b = a + 1 and ar[b] >= ar[a] then
    swap(ar[x], ar[a])
    a ← y, x ← x + 1
end if

PASS 14:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if


PASS 15:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 16:

if x = y then
    y ← a
end if

PASS 17:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 18:

if b = a + 1 and ar[b] < ar[a] then
    swap(ar[b], ar[x])
    swap(ar[a], ar[b])
    b ← b + 1, x ← x + 1, a ← a + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 19:

Since the value of 'b' is greater than 'j', it has gone out of bounds. Hence, we re-initialize the values of our pivots accordingly.

if b > j and a > x and b = a + 1 and a > y and ctr = b then
    b ← a, ctr ← b, a ← y
end if

PASS 20:

if a = x and x = y and ctr = b then
    y ← b
end if

PASS 21:

if a = x and b = y and ar[b] < ar[a] then
    swap(ar[x], ar[b])
    a ← b, b ← b + 1, x ← x + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

PASS 22:

In the previous pass, 'b' and 'ctr' have again gone out of bounds. This condition is similar to that of Pass 19, but with a different side condition.

if b > j and a > x and ctr = b and a = y then
    b ← y, ctr ← b, a ← x
end if

PASS 23:

if a = x and b = y and ar[b] < ar[a] then
    swap(ar[x], ar[b])
    a ← b, b ← b + 1, x ← x + 1
    if ctr = b − 1 then
        ctr ← ctr + 1
    end if
end if

It took us about 23 passes to do an In Place merge of 18 elements. In actual code, it would take fewer iterations, since multiple conditions can be evaluated in the same iteration. This more or less covers our sorting logic.
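The repair mechanism from Passes 7 and 8, which restores order in the second sub array by bubbling the displaced element forward, can be isolated into a small helper. The following C function is our own illustration of that step, assuming the conventions above ('ctr' starts at 'b', and 'j' is the last index of the range).

#include <stdio.h>

/* Illustrative helper (not from the paper): after a swap leaves ar[ctr]
   out of order, bubble it forward until ar[ctr] <= ar[ctr + 1], as in
   Passes 7 and 8 of the case study. Returns the final cursor position,
   which the caller would then reset to b. */
static int repair_second_run(int *ar, int ctr, int j) {
    while (ctr < j && ar[ctr] > ar[ctr + 1]) {
        int t = ar[ctr];
        ar[ctr] = ar[ctr + 1];
        ar[ctr + 1] = t;
        ctr++;
    }
    return ctr;
}

int main(void) {
    /* Second sub array ar[3..7] is sorted except for the displaced 9. */
    int ar[] = {0, 1, 2, 9, 4, 5, 6, 7};
    repair_second_run(ar, 3, 7);
    for (int k = 0; k < 8; k++)
        printf("%d ", ar[k]);   /* prints 0 1 2 4 5 6 7 9 */
    printf("\n");
    return 0;
}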
VI. COMPLEXITY ANALYSIS

In this section, we analyze the time and space complexity of this algorithm in its best and worst case scenarios.

6.1 TIME COMPLEXITY

6.1.1 WORST CASE

We saw in previous sections that our code structure is similar to the following:

Procedure split(int *ar, int i, int j):
    if j = i + 1 or j = i then
        if ar[i] > ar[j] then
            swap(ar[i], ar[j])
        end if
        return
    else
        mid ← (i + j)/2
        split(ar, i, mid)
        split(ar, mid + 1, j)
        if ar[mid + 1] < ar[mid] then
            sort(ar, i, j)
        end if
    end if

Procedure sort(int *ar, int i, int j):
    while x < b do
        // merging logic (multiple if-else statements)
    end while

Our algorithm starts its execution in the 'split' procedure. Let 'C1' be the constant time taken to execute the 'if' branch of this procedure and 'C2' the constant time taken to execute the 'else' branch. Inside the 'else' branch, we recursively call the same function twice for size 'n/2'. We then call the 'sort' procedure.


Our 'sort' procedure consists of a 'while' loop that holds the logic for merging the two sorted sub arrays into a single sorted array. Let 'C3' be the constant overhead of this procedure; its merging loop performs work proportional to the size 'n' of the range being merged. Our overall equation for the time complexity becomes:

T(n) = 2T(n/2) + n + C2 + C3

We use a recurrence relation to find the time complexity of the code. For a large input size 'n', the above equation for the Time Complexity T(n) can be expanded as:

T(n) = 2[2T(n/4) + n/2 + C2 + C3] + n + C2 + C3
T(n) = 4T(n/4) + 2n + 3C2 + 3C3
T(n) = 4[2T(n/8) + n/4 + C2 + C3] + 2n + 3C2 + 3C3
T(n) = 8T(n/8) + 3n + 7C2 + 7C3

For 'k' iterations, the equation for T(n) becomes:

T(n) = 2^k T(n/2^k) + kn + (2^k − 1)C2 + (2^k − 1)C3

The base condition for the recursion in our algorithm occurs when 'n' is equal to 2, i.e. T(2) = C1. This implies:

n/2^k = 2
k = log2 n − 1

This also means that the recursion runs up to log2 n − 1 times before reaching its base condition. Substituting this value of 'k' in the above equation, we get:

T(n) = 2^(log2 n − 1) T(n/2^(log2 n − 1)) + (log2 n − 1)n + (2^(log2 n − 1) − 1)C2 + (2^(log2 n − 1) − 1)C3

We know that n/2^(log2 n − 1) = 2. Substituting this and the value of T(2), we get:

T(n) = (n/2)C1 + n log2 n − n + (n/2 − 1)C2 + (n/2 − 1)C3

which grows as O(n log n).
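For reference, the worst-case derivation above condenses in LaTeX notation to:

\begin{align*}
T(n) &= 2\,T(n/2) + n + C_2 + C_3, \qquad T(2) = C_1 \\
     &= 2^{k}\,T\!\left(\frac{n}{2^{k}}\right) + kn + (2^{k}-1)(C_2 + C_3)
\intertext{and with $n/2^{k} = 2$, i.e.\ $k = \log_2 n - 1$,}
T(n) &= \frac{n}{2}\,C_1 + n\log_2 n - n + \left(\frac{n}{2}-1\right)(C_2 + C_3)
      = O(n \log n).
\end{align*}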
6.1.2 BEST CASE

The efficiency of this sorting algorithm is directly proportional to the orderliness of the given array. As a result, the best case of this algorithm occurs when the array is already sorted (or almost sorted). Let us consider a sorted array as an example to find the time taken by this algorithm in its best case.

If the array is already sorted, the starting element of the second sub array is greater than or equal to the ending element of the first sub array, i.e. ar[mid + 1] ≥ ar[mid]. Hence, the program never even enters the 'sort' procedure, and the time spent merging the two sorted sub arrays is constant.

Hence, the time taken to split the array is the only factor affecting the total Time Complexity. As a result, our overall equation becomes:

T(n) = 2T(n/2) + C2
T(n) = 2[2T(n/4) + C2] + C2
T(n) = 4T(n/4) + 3C2
T(n) = 4[2T(n/8) + C2] + 3C2
T(n) = 8T(n/8) + 7C2

For 'k' iterations, the equation for T(n) becomes:

T(n) = 2^k T(n/2^k) + (2^k − 1)C2

Similar to the previous condition, the base condition for the recursion occurs when 'n' is equal to 2, i.e. T(2) = C1. This implies:

n/2^k = 2
k = log2 n − 1

Again, this means that the recursion runs up to log2 n − 1 times before reaching its base condition. Substituting this value of 'k' in the above equation, we get:

T(n) = 2^(log2 n − 1) T(n/2^(log2 n − 1)) + (2^(log2 n − 1) − 1)C2
T(n) = (n/2)C1 + (n/2 − 1)C2

which is linear, i.e. O(n).
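Written out the same way, the best-case recurrence telescopes to a linear bound:

\begin{align*}
T(n) &= 2\,T(n/2) + C_2, \qquad T(2) = C_1 \\
     &= 2^{k}\,T\!\left(\frac{n}{2^{k}}\right) + (2^{k}-1)\,C_2
\intertext{and with $k = \log_2 n - 1$,}
T(n) &= \frac{n}{2}\,C_1 + \left(\frac{n}{2}-1\right)C_2 = O(n).
\end{align*}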

6.2 SPACE COMPLEXITY

This is an In Place sorting algorithm and takes a constant amount of extra memory for sorting a particular array. This property is quite important for any algorithm, since it results in an almost nonexistent additional footprint in memory. In some cases, this is even considered more important than an algorithm's Time Complexity.

6.3 STABILITY

Instability is a major drawback of this sorting algorithm. Because of it, equal elements are not treated as distinct and lose their relative order as a result. This issue can be sorted out by increasing the number of pivots and treating equal elements as distinct, but the implementation becomes far too complicated and is beyond the scope of this paper. For an already sorted array, however, stability is maintained, since no swaps are made.

VII. EXPERIMENTAL ANALYSIS

We evaluated the performance of the code on array inputs of up to 32,000 elements by recording the time taken to sort the elements.

7.1 WORST CASE

The worst case scenario for this algorithm occurs when each element in the input array is distinct and there is no order in the array whatsoever. We noticed that for large input sizes, this algorithm performs slower than the standard Merge Sort and Quick Sort. However, for arrays of up to 1000 elements, this algorithm is faster than both Merge and Quick Sort even in its worst case.
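A minimal timing harness of the following kind could be used to reproduce such a comparison; it is our own sketch, not the authors' test code. hybrid_sort is a placeholder name wired to the split function from the earlier C sketch (link the two files together), and qsort from the C standard library stands in for Quick Sort.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Placeholder wiring to the earlier sketch's split(ar, 0, n - 1). */
extern void split(int *ar, int i, int j);
static void hybrid_sort(int *ar, int n) { split(ar, 0, n - 1); }

static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

int main(void) {
    int n = 32000;
    int *a = malloc(n * sizeof *a);
    int *b = malloc(n * sizeof *b);
    if (!a || !b) return 1;
    srand(42);
    for (int k = 0; k < n; k++)
        a[k] = b[k] = rand();          /* unordered, mostly distinct input */

    clock_t t0 = clock();
    hybrid_sort(a, n);
    clock_t t1 = clock();
    qsort(b, n, sizeof *b, cmp_int);
    clock_t t2 = clock();

    printf("hybrid: %.3f ms, qsort: %.3f ms\n",
           1000.0 * (t1 - t0) / CLOCKS_PER_SEC,
           1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    free(b);
    return 0;
}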


7.2 AVERAGE CASE

We consider the average case to be an array with a partial order in it.

7.3 BEST CASE

The best case scenario happens when the array is completely sorted or has all equal elements. Since we know that the Time Complexity in this condition is O(n), we only compare this algorithm with those having similar Time Complexities (Bubble and Insertion sort). Also, since the time differences between them were very small and not meaningfully comparable, we compared the three algorithms with respect to the number of iterations taken to sort the array. We noticed that this algorithm performs better than Bubble Sort but is slightly slower than Insertion Sort.

NUMBER OF ITERATIONS

ELEMENTS   INSERTION   BUBBLE   HYBRID
1000       998         1996     1023
2000       1998        3996     2047
4000       3998        7996     4095
8000       7998        15996    8191
16000      15998       31996    16383
32000      31998       63996    32767

VIII. ASYMPTOTIC ANALYSIS

In this section, based on the experimental analysis and the previously stated proof, we draw an asymptotic analysis of our algorithm.

ANALYSIS

FACTORS     BEST CASE   AVERAGE CASE   WORST CASE
TIME        O(n)        O(n log n)     O(n log n)
SPACE       O(1)        O(1)           O(1)
STABILITY   YES         NO             NO

IX. CONCLUSION

This idea, like most standard algorithms, has room for improvement. During our implementation phase, we noticed that the code slows down for very large values of 'n'. The instability of this algorithm is also a cause of concern.

Future improvements can be made to enhance the performance on larger input arrays. Since we have the minimum and maximum values of the sub arrays at any time, instead of starting from the beginning, we can combine the current logic with an end-first search to reduce the number of iterations. Regarding its stability, as mentioned earlier, this algorithm can be made stable by increasing the number of pivots, but this would lead to other complications. Any improvement though, however trivial, would be highly appreciated.

REFERENCES

[1] D. E. Knuth. "Sorting and Searching", The Art of Computer Programming, Volume 3, Second Edition.
[2] You Yang, Ping Yu and Yan Gan. "Experimental Study on the Five Sort Algorithms", International Conference on Mechanic Automation and Control Engineering (MACE), 2011.
[3] W. A. Martin. "Sorting", ACM Computing Surveys, 3(4):147-174, 1971.
[4] Jyrki Katajainen, Tomi Pasanen and Jukka Teuhola. "Practical in-place mergesort", Nordic Journal of Computing, Volume 3, Issue 1, 1996.
[5] R. Cole. "Parallel Merge Sort", Proc. 27th IEEE Symp. FOCS, pp. 511-516, 1988.
[6] Antonios Symvonis. "Optimal stable merging", The Computer Journal, 38(8):681-690, 1995.
[7] Wang Xiang. "Analysis of the Time Complexity of Quick Sort Algorithm", Information Management, Innovation Management and Industrial Engineering (ICIII), International Conference, 2011.
[8] Shrinu Kushagra, Alejandro Lopez-Ortiz, J. Ian Munro and Aurick Qiao. "Multi-Pivot Quicksort: Theory and Experiments", Proceedings of the 16th Meeting on Algorithm Engineering and Experiments (ALENEX), pp. 47-60, 2014.
[9] Nadir Hossain, Md. Golam Rabiul Alma, Md. Amiruzzaman and S. M. Moinul Quadir. "An Efficient Merge Sort Technique that Reduces both Times and Comparisons", Information and Communication Technologies: From Theory to Applications, International Conference, 2004.
[10] L. T. Pardo. "Stable sorting and merging with optimal space and time bounds", SIAM Journal on Computing, 6(2):351-372, 1977.
[11] Jeffrey Ullman, John Hopcroft and Alfred Aho. "The Design and Analysis of Computer Algorithms", 1974.
[12] E. Horowitz and S. Sahni. "Fundamentals of Data Structures", Computer Science Press, Rockville, 1976.
[13] Guigang Zheng, Shaohua Teng, Wei Zhang and Xiufen Fu. "A cooperative sort algorithm based on indexing", Computer Supported Cooperative Work in Design, 13th International Conference, 2009.
[14] Bing-Chao Huang and Michael A. Langston. "Practical in-place merging", Communications of the ACM, Volume 31, Issue 3, 1988.
[15] A. Symvonis. "Optimal stable merging", The Computer Journal, 38:681-690, 1995.
[16] F. K. Hwang and S. Lin. "A simple algorithm for merging two disjoint linearly ordered sets", SIAM Journal on Computing, 1972.
[17] E. C. Horvath. "Stable sorting in asymptotically optimal time and extra space", Journal of the ACM, 177-199, 1978.
[18] S. Dvorak and B. Durian. "Stable linear time sublinear space merging", The Computer Journal, 30:372-375, 1987.
[19] J. Chen. "Optimizing stable in-place merging", Theoretical Computer Science, 302(1/3):191-210, 2003.
[20] Rohit Yadav, Kratika Varshney and Nitin Verma. "Analysis of Recursive and Non-Recursive Merge Sort Algorithm", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 11, November 2013.
