Abstract—The sorting algorithm proposed by C.A.R. Hoare in 1961 under the name Quick sort is popularly known as the fastest sorting algorithm, and it is still widely used in computer systems and their applications. Its efficiency when sorting a random data set is represented in asymptotic notation as O(n log₂ n), but when the input data set is already ordered, quick sort takes quadratic execution time, which is considered its worst-case performance and is represented in asymptotic notation as O(n²). The worst-case performance is due to the scan overheads which occur over the pre-sorted data set; in other words, the partitioning becomes skewed across the recursive calls and hence results in quadratic complexity. This research paper presents an algorithm which removes this worst-case execution time, making it linear when the input list is in non-decreasing order. The paper describes how the improvements are accommodated in the existing quick sort. An a priori analysis of the proposed algorithm for the different cases is given, along with a proof of correctness. The algorithm is then verified for its correctness and asymptotic performance. The algorithm is implemented in C++ and compared with other popular quick-sort versions.

Keywords—Worst case, Presorted data, Linear time complexity, Early exit, no_part, Aorder status flag, Qwimb sort.

I. INTRODUCTION

Problem solving is one of the prime activities in the life cycle of human beings, and tools are the aids we have produced from nature, starting from stone, sticks and fire and progressing to machinery and, finally, to computers and artificial intelligence. A computer works on the basis of Von Neumann's stored-program concept, which serves to interface humans for problem solving. The generic name for a program is an algorithm: a step-by-step procedure for solving a problem in a finite amount of time. Among the different types of algorithms, sorting is frequently and widely used. In a computer, a sorting algorithm is one that puts the elements of a list in a certain order; the most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. Mathematically, sorting is defined in [3] as follows: a list of numbers A = (a1, a2, a3, ..., an) is permuted (reordered) into an output sequence (a'1, a'2, a'3, ..., a'n) such that a'1 <= a'2 <= a'3 <= ... <= a'n. The sequences are typically stored in arrays, which are consecutive storage locations in the computer memory; the numbers are also referred to as keys.

Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, Bubble sort was analyzed as early as 1956; later, other algorithms such as selection sort, insertion sort, shell sort, radix sort and many other methods were introduced. Among these, the popular ones which are frequently used in system software and application software are merge sort, quick sort and insertion sort, which we term the popular sorting algorithms. Although many consider sorting a solved problem, useful new sorting algorithms are still being invented due to the fast evolution of computer technology. Among the popular sorting algorithms, Quicksort is the fastest one used in real-world sorting problems. Quicksort uses a divide-and-conquer technique for sorting the given data file. We find sufficient work with respect to partitioning the unsorted data, selecting the pivot element which divides the list, and determining the size of the subfiles, but little work is found on improving the worst-case behavior of the Quick-sort algorithm, which is the focus of this research study. In this paper we present a version of Quick-sort, named Qwimb sort, which improves on the worst case of Hoare's Quick-sort [5] and of the later Quick-sort version found in [23].

II. QUICK-SORT ALGORITHM

Quick sort is an algorithm based on the divide-and-conquer paradigm. It selects a pivot element (in our example, the leftmost element of the list) and reorders the given list in such a way that all elements smaller than the pivot are on its left side and those bigger than the pivot are on its right side. The sub-lists are then recursively sorted until the list is completely sorted, as shown in Figure 1; observe also that the pivot element ends up occupying its final position in the sorted list. The time complexity of this algorithm on an unordered data set is O(n log n), due to the partitioning into two parts along with the two scans in opposite directions.

Fig. 1. Partitioning using the leftmost element as pivot.
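As a minimal sketch of the classical scheme just described (illustrative only, with the leftmost element as pivot; the function names hoare_partition and quicksort are ours, and this is not the Qwimb code presented in Section V):

    #include <utility>   // std::swap

    // Classical quicksort with the leftmost element as the pivot, as described above.
    static int hoare_partition(int a[], int low, int high)
    {   int key = a[low];                            // pivot = leftmost element
        int i = low, j = high;
        while (i < j)
        {   while (i < high && a[i] <= key) i++;     // scan right for an element > pivot
            while (a[j] > key) j--;                  // scan left for an element <= pivot
            if (i < j) std::swap(a[i], a[j]);
        }
        std::swap(a[low], a[j]);                     // place the pivot in its final position
        return j;
    }

    static void quicksort(int a[], int low, int high)
    {   if (low < high)
        {   int p = hoare_partition(a, low, high);
            quicksort(a, low, p - 1);                // recursively sort the left sub-list
            quicksort(a, p + 1, high);               // recursively sort the right sub-list
        }
    }

On an already ascending list this sketch exhibits exactly the skewed partitioning analysed next: every call leaves the pivot at the left end and recurses on a sub-list that is only one element shorter.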
A. Analysis of Quick-sort

The total time taken to re-arrange the array as described in the above section is always O(n), i.e. αn, where α is some constant cost incurred in every partition. Let us suppose that the pivot we have just chosen divides the array into two parts: one of size k and the other of size n − k. Notice that both of these parts still need to be sorted. This gives us the following relation:

T(n) = T(k) + T(n − k) + αn    (1.1)

where T(n) refers to the time taken by the algorithm to sort n elements and α is the constant computation time for processing n elements to partition the list into parts of sizes k and n − k.

To analyze the worst case, consider the situation where the pivot is the least element of the array (the input array is in ascending order), so that we have k = 1 and n − k = n − 1 in (1.1). In such a case we have:

T(n) = T(1) + T(n − 1) + αn

Solving the recurrence by repeated substitution gives, after i steps,

T(n) = T(n − i) + i·T(1) + α·Σ_{j=0}^{i−1} (n − j)    (1.2)

Clearly such a recurrence can only go on until i = n − 1 (because otherwise n − i would be less than 1). So, substituting i = n − 1 in (1.2) gives us:

T(n) = T(1) + (n − 1)·T(1) + α·Σ_{j=0}^{n−2} (n − j)

and on further simplification we arrive at

T(n) = n·T(1) + α·(n + 2)(n − 1)/2,

which is O(n²). This is the worst case of quick sort, and it happens when the pivot we pick turns out to be the least element of the array to be sorted in every step (i.e. in every recursive call). A similar situation also occurs if the pivot happens to be the largest element of the array to be sorted.

The best case of quick sort occurs when the pivot we pick happens to divide the array into two exactly equal parts, in every step. Thus we have k = n/2 and n − k = n/2 in equation (1.1) for the original array of size n. Consider, therefore, the recurrence:

T(n) = 2T(n/2) + αn    (1.3)
     = 2(2T(n/4) + αn/2) + αn    (since T(n/2) = 2T(n/4) + αn/2, by substituting n/2 for n in (1.3))
     = 2²T(n/4) + 2αn    (simplifying and grouping terms together)
     = 2²(2T(n/8) + αn/4) + 2αn
     = 2³T(n/8) + 3αn
     = 2^k T(n/2^k) + kαn    (continuing likewise until the k-th step)

Notice that this recurrence can continue only until n = 2^k (otherwise we have n/2^k < 1), i.e. until k = log n. Thus, putting k = log n, we have:

T(n) = n·T(1) + αn log n,

which is O(n log n). This is the best case for quick sort.

It also turns out that in the average case (over all possible pivot configurations) quick sort has a time complexity of O(n log n); the proof of this is commonly found in [1,3,4].
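For convenience, the recurrences derived above can be restated compactly (this summary block is ours; it adds nothing beyond (1.1)–(1.3)):

    % Summary of the recurrences derived above and their solutions
    \begin{align*}
      \text{General partition step: } & T(n) = T(k) + T(n-k) + \alpha n \\
      \text{Worst case } (k = 1):     & T(n) = T(n-1) + T(1) + \alpha n \;\Rightarrow\; T(n) = O(n^2) \\
      \text{Best case } (k = n/2):    & T(n) = 2\,T(n/2) + \alpha n \;\Rightarrow\; T(n) = O(n \log n)
    \end{align*}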
III. LITERATURE REVIEW ON THE QUICK SORT ALGORITHM

Thomas H. Cormen et al. in [3] note that quick sort takes quadratic time, O(n²), in the worst case and spends a lot of time on sorted or almost sorted data. As found in [23], it performs about n²/2 comparisons even on nearly sorted data, although the swap count is low for sorted or almost sorted input. Mark Allen Weiss in [4] also states that quick sort has O(n²) worst-case performance. Horowitz et al. in [1] state that a possible input on which quick sort displays its worst-case behavior is one in which the elements are already in order.

Almost all authors of algorithm books and of research papers presenting quick sort analysis agree that quick sort performs no better than O(n²) on ordered input. With the above survey and many other references, theoreticians and practitioners have considered the worst case of quick sort to be classified in the asymptotic class O(n²). It is found, however, that there are some works which encouraged us to take up the study of improving the worst case. A detailed literature survey of the quick sort algorithm is presented in the following paragraphs, before we present the design and implementation of our version, Qwimb sort.

In 1961 C.A.R. Hoare devised and implemented the Quick sort algorithm as found in [5]. It was first written in Algol 60 and sorts the array in the random-access store of the computer, i.e., it is an in-place sort; he also stated that no extra space is required. The pivot is selected using a random number generator. In [6] Hoare also indicated and described some places of refinement which can lead towards optimization. He analyzed his algorithm over random data and stated that the minimum number of comparisons required to achieve the reduction in entropy is log₂ N! ≈ N log₂ N, and that the average number of comparisons required by quick sort is greater than this theoretical minimum by a factor of 2 logₑ 2 ≈ 1.4. He further suggested that the factor could be reduced by choosing the pivot as the median of a small random sample of the items in the segment, and he described how the inner loop of the partition could be optimized using a machine instruction to exchange elements. The mathematical analysis of the algorithm was done using rules of inference. He also showed merge sort running slower than quick sort. The function quicksort() of Algorithm 64 is recursive and its partition() function is iterative. Hoare did not discuss the worst case or equal keys, and he claimed that no extra memory is needed. He showed a road map for research on the quick sort algorithm. The claim made by Hoare was later shown to be incorrect by B. Randell and J. Russell in [7]: they showed that extra space is required for the activation records needed by recursion. They also mentioned in their technical correspondence that no changes are required to quick sort and that it works satisfactorily, ultimately endorsing Hoare's work.

Robert Sedgewick, in his thesis titled "Quicksort" [14], which received an outstanding thesis award under the supervision of D. E. Knuth, paid special attention to the methods of mathematical analysis which are used to demonstrate the
practical utility of the quick sort algorithm. He developed an exact and efficient form of quick sort, and derived its average, best-case and worst-case running times. The following improvements were made in Sedgewick's quick sort:

1. The pivot is the median of three elements.
2. Insertion sort is used to sort small subfiles (fewer than about 10 elements).
3. Loop unwrapping is applied using assembly language; this was analyzed and found optimal.
4. An analytical study of equal keys is carried out thoroughly, with implementation and testing.

Robert Sedgewick's research work showcased a complete analysis, especially from the point of view of a priori estimates, and the same was verified in his practical measurements as found in [16]. His experiments showed that quick sort is up to twice as fast as its competitors. He advised that small files should be kept out of the recursion and can be handled with insertion sort instead. He also clearly states that quick sort needs O(n²) time in the worst case. Relatively few practical measurements were shown by the author; Sedgewick's was a unique way of study.

In November 1980, C.R. Cook and D.J. Kim in [17] designed a best sorting algorithm for nearly sorted lists. The algorithm is a hybrid version of quick sort: a combination of straight insertion sort, quickersort and merge sort. The new algorithm performed well and sorted about twice as fast as straight insertion sort, shell sort, quick sort and heapsort on the same input. The authors tested sample sizes of 50, 200, 500, 800, 1000 and 2000 elements.

In April 1983, D. Motzkin in [18] presented an algorithm, called Meansort, based on quick sort. The algorithm uses a special technique at every partition for finding the pivot, and its name comes from the mean value used as the pivot. The algorithm improves the average case and was implemented in Pascal. Meansort is considered an improvement over standard quick sort and is efficient even if repeating keys are present. Efficiency was measured based on interchanges, comparisons and partition stops, and Meansort showed considerable improvement over quick sort.

In April 1984, in the article "How to sort" [19], Jon L. Bentley showed in his experiments that the system sort, that is, qsort in the UNIX operating system, is fast but performs a bit slower than quick sort because it has to be called through its interface; qsort is a version of quick sort which is made part of the Unix system and hence used as a command to sort files. He also mentioned that system commands should fulfill the users' needs and that not all systems have a system sort command. The author did not mention the worst case of quick sort.

In 1985, Roger L. Wainwright in [20] showed a class of sorting algorithms based on quick sort. Bsort, a variation of quick sort, combines the interchange technique used in bubble sort with quick sort. The algorithm improved the average behavior of quick sort, and it was claimed that sorted or nearly sorted lists become a best case for comparisons, leading to the removal of the O(n²) worst case. This was later proved incorrect by a technical correspondence in [21], which showed that Bsort fails to be O(n) for a simple data set such as 2, 4, 6, 8, …, n−2, 1, 2, 3, …, n−3, on which it instead takes O(n²) time.

In 1987, Roger L. Wainwright in [22] modified quick sort algorithms with an early exit for sorted subfiles. He also noted that improvements to quick sort had been made in the following areas:

1. Determining a better pivot value,
2. Considering the size of subfiles, and
3. Schemes of partitioning the files.

Despite these improvements, the worst case still remains when a file is nearly or completely sorted, which can be regarded as a fourth area. A version of quick sort called qsorte is presented that provides an early exit for sorted subfiles. He tested it on random, sorted, nearly sorted and reverse-sorted files, and the results of quick sort, quickersort, Bsort and qsort are exhibited in experiments; qsorte performs as well as quick sort on random files. The author implemented it in Pascal. He states that we should no longer refer to the quick sort algorithm as having a worst-case behavior for sorted subfiles, but he also showed that there are cases for which quick sort retains a worst-case complexity of O(n²).

In 1993, Jon L. Bentley and M. Douglas McIlroy in [23] built a new qsort function for the C library based on Scowen's quickersort, choosing the partition element by a new scheme. Hoare's version took n² comparisons to sort the case of 2n integers 1 2 3 … n, n … 3 2 1. The authors engineered the qsort version with the needed improvements, and the paper is fully supported by pioneers in the field of sorting. In this paper, Program 7 used macros to improve its performance and used insertion sort for small subarrays, and Program 7 proved to be the best. The authors also mention that, if worst-case performance is important, Quicksort is the wrong algorithm.

In 2007, in the paper "Quicksort: A historical perspective and empirical study" [25], Laila Khreisat studied and compared the quick sort variants and the newer algorithms of recent times. The study was made in terms of the comparisons performed and the running times on reverse-ordered, already ordered and randomly generated inputs. She tested on various random integer data sets from N = 3000 to 500000. The performances of all the variants are really interesting.

In 2018, Laila Khreisat, in [26], discussed Introsort, which is a combination of quicksort and heapsort: when the quicksort recursion grows too deep, the sorting process is continued by heapsort. The paper has no information about the worst case of O(n²) being eliminated.

In 2015, the research work in [27] showed the design and implementation of a modified version of the quick sort algorithm named Quicksort_wmb, in which the algorithm makes an early exit whenever it encounters a presorted input array. This algorithm worked well for ordered data sets (presorted as ascending or descending) and took linear time, O(n), instead of quadratic time, O(n²). In recent research it was realized that this early exit leaves unsorted those random arrays whose first element is the least value, that is, when the leftmost element of the input array, taken as the pivot, is the smallest of all elements.

A recent research work of 2020, found in [29], by Aditi Basu Bal and Soubhik Chakraborty, titled "An Experimental Study on a Modified Version of Quicksort", used the version quicksort_wmb published in [27] to experiment over various
continuous and discrete probability distributions and measured the performance in terms of the number of comparisons the algorithm makes to sort the whole array. A number of common probability distributions, both continuous and discrete, were simulated to constitute the elements of a random unsorted list of numbers, and the modified version quicksort_wmb was applied to sort these arrays. The results obtained were very interesting: the continuous distributions were sorted faster by the algorithm than the discrete ones, the reason for which, after investigation, was found to be the existence of ties in the discrete distributions, thus providing evidence that this version of quicksort is sensitive to ties. The sensitivity of quicksort_wmb to ties is not new; the interesting point is that the sensitivity to ties remains irrespective of the improvement.

Finally, as a summary of the above detailed survey, it is observed that sufficient work has been done to improve the speed of the average case, especially in three areas: determining the pivot, considering the size of the subfiles, and schemes of partitioning. However, as stated by Wainwright in [22], a fourth area, improvement of the worst case, needs to be considered for research. It is also seen that the O(n²) worst case of quick sort has not been improved much.

IV. DESIGN OF QWIMB SORTING

Mathematicians have contributed to algorithmic analysis from the information-theoretic viewpoint; on the other side we, the algorithm engineers, contribute from the angle of programming languages and computer architecture. The responsibility that falls on us is to cope with the upcoming challenges and to provide a compatible code design, keeping time efficiency as our objective. Quick sort is an algorithm at the heart of this field, and, as mentioned above, L. Wainwright in [22] suggested research on improving its worst case. Hence we took up the quick sort algorithm to analyze it and improve it, especially in the worst case.

A. Avoiding the Worst Case

Practical implementations of quick sort often pick a pivot randomly from the list. This greatly reduces the chance of running into the worst-case situation. This method of selecting the pivot at random is seen to work excellently in practice, but considerable time is still spent in the randomizer [1], which affects the efficiency of execution. The other technique, which deterministically prevents the worst case from ever occurring, is to find the median of the array to be sorted each time and use that as the pivot. The median can be found in linear time, but that is saddled with a huge constant-factor overhead, rendering it suboptimal for practical implementations [3].
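For illustration of the randomized mitigation just mentioned (this sketch is ours and is not part of Qwimb; the helper name choose_random_pivot is hypothetical), the usual approach is to swap a randomly chosen element to the front so that it becomes the leftmost pivot:

    #include <cstdlib>   // std::rand
    #include <utility>   // std::swap

    // Illustrative only: before running a leftmost-pivot partition such as the one
    // sketched in Section II, move a randomly chosen element into position 'low'.
    void choose_random_pivot(int a[], int low, int high)
    {   int r = low + std::rand() % (high - low + 1);   // random index in [low, high]
        std::swap(a[low], a[r]);                        // random element becomes the pivot
    }

Qwimb sorting deliberately avoids this call to the random number generator and instead detects the ordered case directly, as described next.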
In the present research we show a very simple version which performs well and improves the performance when the input array is already in ascending order. This variant also sorts the cases shown in [21]. In this variant, which we call Qwimb sorting, we make the early exit on two conditions: we check the status of two pointers along with a check of the orderliness of the input array. The next section shows the design in detail.

B. Design Specification of Qwimb Sorting

As we know, the quick-sort algorithm is based on the divide-and-conquer paradigm: it selects a pivot element and reorders the given list in such a way that all elements smaller than or equal to the pivot are on one side and those bigger than it are on the other. The sub-lists are then recursively sorted until the whole list is completely sorted, as explained in Section II. When ordered input is considered, we want to achieve the best case, that is, linear time. We have considered the first element of the array or list as the pivot element.

To make this happen we take the leftmost element as the pivot and use two global integer variables, no_part and Aorder, which are initialized to zero. The decision to exit early from the quick sort is made when the values of these global variables, no_part and Aorder, both equal 1, and recursion is avoided from then on. The global variable no_part counts the number of partitions made in each execution of quick sort; Aorder (set to 1) is an indicator that the array has been found to be in ascending order.

Aorder is set to 1 in the partition function when i = 1 and j = 0, that is, when i does not get incremented beyond 1 because key >= a[i] becomes false in the statement do{i++;}while(key>=a[i]); (the do-while is executed only once), whereas key < a[j] remains true until j becomes 0 after n executions of the statement do{j--;}while(key<a[j]); (that is, until key = a[j]). For example, on the ascending input 0 3 6 9 12 with pivot 0, the first scan stops immediately at i = 1 while the second scan runs j all the way down to 0, which is exactly the pattern that permits the early exit. As shown in Figure 2, in the case of ascending-order input we can therefore make an early exit.

Fig. 2. Position of the indices i and j at the end of the first partition when the input array is in ascending order.

However, when the above changes alone are applied to quicksort, it fails to sort a random input array whose first element is the lowest among the array elements, in other words, when the pivot (the leftmost element) is the least among all elements of the input array. To avoid such cases we add a small routine which checks for pre-sortedness and prevents the early exit, because the Aorder flag will not be set to 1; instead the algorithm continues to sort the random data set. This modification takes extra time of about O(n). The extra time is incurred when the array is presorted, or when its first element happens to be the smallest; otherwise the algorithm runs normally, as Hoare's version does.

V. IMPLEMENTATION OF QWIMB SORTING

With respect to the study made in [28,30], we have selected the C++ programming language to implement Qwimb sorting, as it suits best, being a general-purpose programming language. C++ has exhibited the behavior of sorting algorithms as per the a priori estimates and a priori analysis
found in [1,30]. The following sections show the different segments of the C++ code implementation.

A. Global Declarations

The global declarations shown in Table I include the preprocessor statements which are needed for the input and output operations. The global static identifier no_part is used to count the number of partitions made during the sorting. Aorder is a global used as a status flag, which is set when we find that the array is presorted in ascending order.

B. Aordered Function

The Aordered function in Table II scans from the leftmost element of the array and returns the boolean value true if the input numbers are already in ascending order; false is returned when it finds an element which is greater than the next consecutive number in the array.

C. Qwimb Recursive Function

The Qwimb function shown in Table III recursively sorts the input array by calling the partition_iwmb function given in Table IV to partition the array into two parts with respect to the pivot element. The Qwimb function exits from the recursion when it finds the array to be in ascending order, with the help of the partition count and the Aorder flag status.

D. Partition_iwmb Function

The partition_iwmb function shown in Table IV assigns the first element as the pivot and divides the array into two parts. During the first partition it checks whether the array is already sorted, with the help of the i and j index values and the boolean value returned by the Aordered function, and sets the Aorder flag accordingly.

TABLE I. GLOBAL DECLARATIONS FOR THE C++ CODE

    #include <cstdlib>
    #include <iostream>          // needed for cout in Qwimb()
    using namespace std;
    static int no_part = 0;      // global variable: counts the partitions
    int Aorder = 0;              // global status flag: set when input is ascending

TABLE II. C++ CODE OF THE AORDERED FUNCTION

    bool Aordered(int a[], int n)
    {   for (int i = 0; i < n - 1; ++i)           // compare each element with its successor
            if (a[i] > a[i + 1]) return false;
        return true;
    }   // end of Aordered

TABLE III. C++ CODE OF THE QWIMB RECURSIVE FUNCTION

    void Qwimb(int a[], int low, int high)
    {   int j;
        if (low < high)
        {   j = partition_iwmb(a, low, high);
            if (no_part == 1)
                if (Aorder) { cout << "Aorder" << endl; return; }   // early exit
            Qwimb(a, low, j - 1);
            Qwimb(a, j + 1, high);
        }   // end of if compound statement
    }   // end of Qwimb

TABLE IV. C++ CODE OF THE PARTITION_IWMB ITERATIVE FUNCTION

    int partition_iwmb(int a[], int low, int high)
    {   int key, i, j, temp;
        no_part++;                                   // global variable counts partitions
        key = a[low];                                // assigning the pivot
        i = low; j = high;                           // initializing left & right pointers
        while (i < j)
        {   while (i < high && a[i] <= key) i++;     // hunts for an element > key
            while (a[j] > key) j--;                  // hunts for an element <= key
            if (i < j) { temp = a[i]; a[i] = a[j]; a[j] = temp; }
        }                                            // end of while
        temp = a[low]; a[low] = a[j]; a[j] = temp;   // swap the j-th element with the pivot
        if ((i == 1) && (j == 0) && Aordered(a, high + 1))
            Aorder = 1;                              // set the flag if ascending
        return j;                                    // returns the pivot position
    }   // end of partition_iwmb
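As a usage illustration, a minimal driver of the following kind (ours; the sample values are taken from set 1 of Table V) can exercise the functions of Tables I–IV once they are assembled in one file in declaration order (globals, Aordered, partition_iwmb, Qwimb). The actual driver segment used for the experiments follows the set-up described in [28].

    int main()
    {   int a[] = { 35, 86, 92, 49, 21 };            // sample input (set 1 of Table V)
        int n = sizeof(a) / sizeof(a[0]);
        no_part = 0; Aorder = 0;                     // reset the global flags
        Qwimb(a, 0, n - 1);                          // sort a[0..n-1]
        for (int i = 0; i < n; ++i) cout << a[i] << " ";
        cout << "\nPartitions: " << no_part << endl;
        return 0;
    }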
VI. POSTERIORI TESTING OF QWIMB SORTING

The Qwimb code is tested for its correctness on different input samples of data under the following three categories:

1) Pseudo-randomly generated input numbers (Table V)
2) Ordered input numbers (Table VI)
3) Special cases of input numbers (Table VII)

A. Program Testing for Random Numbers and Non-decreasing Numbers

The procedure used to generate pseudo-random numbers (PRNs) and non-decreasing arrays of numbers is described in [28]; further set-ups such as the set number, size and range were initialized as necessary in the driver segment of the C++ program. The number of partitions made by the algorithm is also highlighted in the output. The partition count gives an indication of the complexity: for example, a number of partitions smaller than the size of the input array indicates an average-case complexity, while a partition count of exactly 1 indicates the best-case complexity.
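As an illustration of such driver-side set-up (a sketch under our own assumptions; the helper names and parameters are ours, while the authoritative generation procedure is the one specified in [28]):

    #include <cstdlib>   // std::rand

    // Possible data-generation helpers for the two test categories above (illustrative only).
    void fill_random(int a[], int n, int range)                      // pseudo-random numbers in [0, range]
    {   for (int i = 0; i < n; ++i)
            a[i] = std::rand() % (range + 1);
    }

    void fill_non_decreasing(int a[], int n, int start, int step)    // e.g. 0, 3, 6, 9, 12 as in Table VI
    {   for (int i = 0; i < n; ++i)
            a[i] = start + i * step;
    }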
TABLE V. RESULTS OBTAINED WITH PRNS

Set No | Range | Input array elements (size = 5) | Sorted output elements | No. of partitions
1 | 0-100 | 35 86 92 49 21 | 21 35 49 86 92 | 3
2 | 0-100 | 62 27 90 59 63 | 27 59 62 63 90 | 3
3 | 0-100 | 26 40 26 72 36 | 26 26 36 40 72 | 2
4 | 0-100 | 11 68 67 29 82 | 11 29 67 68 82 | 3
5 | 0-100 | 30 62 23 67 35 | 23 30 35 62 67 | 2

(Random data input shows an O(n log n) complexity.)
TABLE VI. RESULTS OBTAINED WITH NON-DECREASING NUMBERS

Set No | Range | Input array elements (size = 5) | Sorted output elements | No. of partitions
1 | 0 to 13 | 0 3 6 9 12 | 0 3 6 9 12 | 1
2 | -4 to 0 | -4 -3 -2 -1 0 | -4 -3 -2 -1 0 | 1
3 | -1 to 3 | -1 0 1 2 3 | -1 0 1 2 3 | 1
4 | -8 to 0 | -8 -6 -4 -2 0 | -8 -6 -4 -2 0 | 1
5 | -12 to 0 | -12 -9 -6 -3 0 | -12 -9 -6 -3 0 | 1

(Ascending-ordered input shows an O(n) complexity due to the exit after the first partition.)

VII. ASYMPTOTIC PERFORMANCE MEASUREMENT OF QWIMB SORTING

A. Experiment Set-up and Data Generation

The test bed for conducting our experiments has 3.7 GiB of memory, an Intel® Core™ i3-6006U CPU @ 2.00 GHz × 4, 64-bit, with 970.9 GB of storage, running Ubuntu 16.04 LTS. The coding language used is C++ with a Linux-based compiler. The experimental set-up and data-generation methods used are as specified in [28].

B. Performance Measurement

Tables VIII and IX show the results of executions on random and ordered (ascending-order) samples respectively on our system, obtained with some modifications made to the code. The output shows the data sample size, the execution time for sorting in seconds, and the number of partitions made by the Qwimb() sorting algorithm. For simplicity we considered sample sizes from 100,000 to 1,000,000 in steps of 100,000. It is observed from the output that we have fewer partitions than the data sample size for the random data sets, which indicates a time complexity of O(n log n). Similarly, when executed on non-decreasing data sets we have only one partition, which indicates a time complexity of Θ(n).
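The measurement harness itself follows [28]; purely as an illustration (our assumption, not necessarily the paper's exact method), wall-clock timings of this kind can be obtained in C++ with <chrono> as follows:

    #include <chrono>

    // Illustrative timing wrapper (ours): measures one Qwimb run in seconds.
    // Assumes the globals and functions of Tables I-IV in the same file.
    double time_qwimb(int a[], int n)
    {   no_part = 0; Aorder = 0;
        auto start = std::chrono::steady_clock::now();
        Qwimb(a, 0, n - 1);
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(stop - start).count();   // elapsed seconds
    }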
TABLE VIII. EXECUTION TIME FOR PRN DATA SAMPLES

Sample size | Time (s) | No. of partitions
100000 | 0.029839 | 67751
200000 | 0.050392 | 135390
300000 | 0.076136 | 203115
400000 | 0.099895 | 271008
500000 | 0.125967 | 338624
600000 | 0.155165 | 406315
700000 | 0.182476 | 474123

TABLE IX. EXECUTION TIME FOR ORDERED (NON-DECREASING) DATA SAMPLES

Sample size | Time (s) | No. of partitions
600000 | 0.004548 | 1
700000 | 0.005159 | 1
800000 | 0.005913 | 1
900000 | 0.00707 | 1
1000000 | 0.007428 | 1

Further, experiments were conducted for different sorting algorithms, namely bubble sort, insertion sort, merge sort, Hoare's quicksort and Bentley's version, along with our Qwimb sort, for a comparative study. The sorting results on random data sets and on ordered data sets are shown in Tables X and XI. In order to capture the asymptotic behavior, we considered sample data sets ranging from 10,000 to 500,000, in steps of 10,000 up to 100,000 and in steps of 100,000 onwards.

C. Observations

Considering the results shown in Tables X and XI, the following observations are made:

1) The asymptotic behaviors of the popular sorting algorithms are clearly experimented on and explained in the research published in [28]; therefore we concentrate more on the performance of our version of quick sort (Qwimb sorting).
2) In both tables 11 & 12 Bubble sort takes the Quick Merge Bubble Quick
Qwimb sort
maximum time to sort as it belongs to quadratic class Input Size Sort sort sort Sort-3w
time(s)
time(s) time(s) time(s) time(s)
of growth function.
80000 0.021 0.025 31.208 0.018413 0.019033
3) In table 11 for random data sets, Merge sort is slow
90000 0.024 0.029 39.373 0.020875 0.021631
when compare to Hoare’s Quick sort. It is also seen
that qwimb sort is a bit slower (negligible 100000 0.027 0.032 48.755 0.023114 0.024318
quantity)than Hoare’s Quick sort. And as the optimal 200000 0.0557 0.0697 195.58 0.048613 0.050938
version QuickSort-3w by Robert Sedgewick and 300000 0.0864 0.1024 442.68 0.074816 0.078249
Bentley, found in [23] has performed faster. All of
them belong to nlogn class of growth function. 400000 0.1148 0.1385 786.60 0.100895 0.106358
500000 0.1493 0.1764 1226.79 0.129536 0.134417
4) In table 12 for ordered data sets, we see that
execution time of both Hoare’s quick sort and
TABLE XI. SHOWS TIME TAKEN FOR NON-DECREASING DATA SAMPLES
Bubble sort are nearly the same, witnessing that both
have a quadratic time complexity i.e O(n2) as per Input
Quick Merge Bubble Quick Qwimb
their behaviors exhibited and priori analysis found Sort sort sort Sort-3w sort
Size
time(s) time(s) time(s) time(s) time(s)
in[1,28].
10000 0.32702 0.00194 0.34876 0.08701 7.6e-05
5) We see that when considering the size n=500,000 the 20000 1.30349 0.00327 1.31706 0.13098 0.0001
execution time for Hoare’s quicksort is 805.665
seconds, QuickSort-3w is 389.347 seconds and that 30000 2.92996 0.00527 2.98455 0.23124 0.0002
of Qwimb is 0.1344417 seconds witnessing the 40000 5.20315 0.00675 5.29415 0.29780 0.0002
complexities of Hoare’s as quadratic, QuickSort-3w 50000 8.13089 0.01015 8.52272 3.89705 0.0003
is far better than Hoare’s where as Qwimb
60000 11.7192 0.01036 11.9832 5.61249 0.0004
performing in linear time.
70000 15.9394 0.01218 16.2143 7.6635 0.0005
6) Finally, we see execution time of ordered data
samples for Qwimb sorting is linear and that with 80000 20.8201 0.01515 23.6834 9.9546 0.0005
with respect to Hoare’s Quick sort and Bentley and 90000 26.4484 0.01590 28.5272 12.655 0.0006
Sedgewick’s taking Quadratic complexity. 100000 32.6818 0.01901 33.6542 15.620 0.0007
As per the literature review it is really found that 300000 295.033 0.05822 299.489 139.79 0.0022
sufficient work is done in improving the speed in the average 400000 529.134 0.07896 530.479 249.14 0.0038
case of quick sort and fever is found to improve the speed of 500000 809.665 0.09970 827.23 389.34 0.0043
execution for the worst cases, which occurs when the input
list is presorted. Though some work is done but still has
REFERENCES
some special cases for which it lacks to speed up. The
Qwimb sorting which we have proposed handles the [1] Sartaj Sahni, Data structures and Algorithms in C++, university press
ascending data sets taken as input and sorts in linear time as publication 2004, chapter 4 performance measurement, pages 123-
136. https://books.google.co.in/books? id=QeVtQgAACAAJ}
it takes only one partition. Qwimb sorting has also performed
[2] Levitin, A. Introduction to the Design and Analysis of Algorithms.
a bit faster even in the average cases when we have randomly Addison-Wesley, Boston MA, 2007.
generated numbers as input data set. The work also shows https://books.google.co.in/books? id=pOZSS9YeJYkC
the mathematical analysis of Qwimb sorting. Also, program [3] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,Clifford
testing and experiments conducted verifies the priori analysis Stein, Introduction to Algorithms, Second Edition, Prentice-Hall New
made for program. Performance measurement considering Delhi,2004. https://books.google.co.in/books ?id=YsE2pwAACAAJ},
large data set is done so that we observe asymptotic [4] Weiss, M. A., Data Structures and Algorithm Analysis in C,Addison-
behaviors of algorithm. Presently the Qwimb algorithm Wesley,Second Edition, 1997 ISBN: 0-201-49840-
5.https://books.google.co.in/books? id=dG8YZ521YVYC
considers for ascending ordered and pseudo-random
generated data sets as input, in future the work can also give [5] C.A.R. Hoare. “Algorithm 64: Quicksort.” Communications of the
ACM, vol. 4, pp. 321, Jul.1961. doi/10.1145/366622.366644
consideration for decreasing ordered data sets as input hence
[6] C.A.R. Hoare. “Quicksort.” Computer Journal, vol. 5, 1962, pp. 10–
making it a sorting algorithm of time complexity O(nlogn). 15. https://academic.oup.com/comjnl/article/5/1/10/395338
[7] B.Randell & J. Russell. "Certification of algorithlns 63, 64, 65:
TABLE X. SHOWS TIME TAKEN FOR RANDOM DATA SAMPLES
Partition, Quicksort, Find . " Comm. ACM ,July 1961.
Quick Merge Bubble Quick https://dl.acm.org/doi/10.1145/366707.367561
Qwimb sort
Input Size Sort sort sort Sort-3w [8] R.S. Scowen, “Algorithm 271: Quickersort,” Comm. ACM 8,11, pp
time(s) 669-670, Nov. 1965. doi:10.1145/365660.365678
time(s) time(s) time(s) time(s)
10000 0.010 0.003 0.403 0.00187 0.002352 [9] BLAIR, C. R. “Certification of Algorithm 271: Quickersort”.
Comm.ACM,9,5(May1966),354.
20000 0.019 0.006 1.779 0.004057 0.004272 https://doi.org/10.1145/355592.365613
30000 0.013 0.009 4.102 0.006324 0.006825 [10] Van Emden, M.H. “Increasing the Efficiency of Quick-sort”.
Communications of the ACM 13, 9 (September 1970), 563-567.
40000 0.012 0.012 7.438 0.008894 0.009194 https://doi.org/10.1145/362736.362753
50000 0.013 0.016 11.844 0.011197 0.011541 [11] Van Emden, M.H. “Algorithm 402, qsort “,Comm. A C M 13, 11(Nov.
1970), 693-694. https://doi.org/10.1145/362790.362803
60000 0.015 0.018 17.129 0.013297 0.013772
70000 0.020 0.022 23.586 0.015919 0.016738
[12] M. Foley and C.A.R. Hoare, "Proof of a recursive program: Quicksort," Computer Journal, vol. 14, no. 4, Nov. 1971, pp. 391-395. https://doi.org/10.1093/comjnl/14.4.391
[13] R. Loeser, "Some performance tests of 'quicksort' and descendants," Communications of the ACM, vol. 17, Mar. 1974, pp. 143–152.
[14] R. Sedgewick, "Quicksort," PhD dissertation, Stanford University, Stanford, CA, USA, 1975.
[15] R. Sedgewick, "Quicksort with Equal Keys," SIAM Journal on Computing, vol. 6, no. 2, June 1977, pp. 240-267. https://sedgewick.io/wp-content/themes/sedgewick/papers/1977Equal.pdf
[16] R. Sedgewick, "Implementing Quicksort programs," Communications of the ACM, vol. 21, no. 10, pp. 847–857, 1978. https://doi.org/10.1145/359619.359631
[17] C.R. Cook and D.J. Kim, "Best sorting algorithm for nearly sorted lists," Communications of the ACM, vol. 23, no. 11, Nov. 1980, pp. 620-624. https://doi.org/10.1145/359024.359026
[18] D. Motzkin, "Meansort," Communications of the ACM, vol. 26, no. 4, Apr. 1983, pp. 250-251. https://doi.org/10.1145/2163.358088
[19] J. Bentley, "Programming Pearls: How to sort," Communications of the ACM, vol. 27, no. 4, 1984. https://doi.org/10.1145/3894.315111
[20] R.L. Wainwright, "A class of sorting algorithms based on Quicksort," Communications of the ACM, vol. 28, no. 4, pp. 396-402, April 1985. https://doi.org/10.1145/3341.3348
[21] "Technical correspondence," Communications of the ACM, vol. 29, no. 4, pp. 331-335, Apr. 1986. https://doi.org/10.1145/5684.315618
[22] R.L. Wainwright, "Quicksort algorithms with an early exit for sorted subfiles," Communications of the ACM, pp. 183-190, 1987. https://doi.org/10.114
[23] J. L. Bentley and M. D. McIlroy, "Engineering a Sort Function," Software—Practice and Experience, vol. 23, no. 11, Nov. 1993, pp. 1249–1265. https://doi.org/10.1002/spe.4380231105
[24] J.L. Bentley and R. Sedgewick, "Fast algorithms for sorting and searching strings," in Proc. 8th Annual ACM-SIAM Symposium on Discrete Algorithms, 1997, pp. 360–369. https://dl.acm.org/doi/10.5555/314161.314321
[25] L. Khreisat, "QuickSort: A Historical Perspective and Empirical Study," International Journal of Computer Science and Network Security, vol. 7, no. 12, Dec. 2007. http://paper.ijcsns.org/07_book/200712/20071207.pdf
[26] L. Khreisat, "A Survey of Adaptive QuickSort Algorithms," International Journal of Computer Science and Security (IJCSS), vol. 12, no. 1, 2018. https://www.cscjournals.org/library/manuscriptinfo.php?mc=IJCSS-1372
[27] O.K. Durrani and S.A.K. Nazim, "Modified quick sort: worst case made best case," International Journal of Emerging Technology and Advanced Engineering, vol. 5, no. 8, August 2015, ISSN 2250-2459. www.ijetae.com
[28] O. K. Durrani, A. S. Farooqi, A. G. Chinmai and K. S. Prasad, "Performances of Sorting Algorithms in Popular Programming Languages," 2022 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), Bangalore, India, 2022, pp. 1-7. doi:10.1109/SMARTGENCON56628.2022.10084261
[29] A. B. Bal and S. Chakraborty, "An Experimental Study of a Modified Version of Quicksort," Advances in Computational Intelligence, Advances in Intelligent Systems and Computing, vol. 988, Springer, Singapore, 2020. https://link.springer.com/chapter/10.1007/978-981-13-8222-2_27
[30] O. K. Durrani and S. Abdulhayan, "Asymptotic Performances of Popular Programming Languages for Popular Sorting Algorithms," Semiconductor Optoelectronics, vol. 42, no. 1, 2023, pp. 149-169. https://bdtgd.cn/article/view/2023/149.pdf

First Author: Dr. Omar Khan Durrani
Assistant Professor & Head of the Department of ISE, Ghousia College of Engineering, Ramanagram.
Education: Bachelor's degree in CSE from the National Institute of Engineering, University of Mysore, Mysore; Master's degree in CSE from the National Institute of Technology Karnataka, Surathkal (Deemed University); Ph.D. from VTU, Belgaum, Karnataka.
Teaching experience: 28 years.
Research areas: Algorithm design and analysis in searching and sorting, Internet of Things.
Publications: 13 papers (journals & conferences); 01 patent filed. Citations: 62.
Awards: Three best paper awards; classic reviewer for an IEEE conference (CMT) and an Asian journal; awarded High Impact Teaching Skills by Dale Carnegie and Wipro's Mission10x.

Second Author: Dr. Sayed Abdulhayan
Professor, Dept. of CS&E, PACE Mangalore; Head of Academic-Industry Relationship, Incubation Centre, Research Activities and Entrepreneurship Development.
Education: B.E. (ECE), M.Tech (Digital Communication and Networking), Ph.D. (QoS and Security in 4G LTE-Advanced).
Total teaching experience: 18 years, including 3 years of industrial experience.
Research areas: Wireless communication, mobile communication, antenna synthesis, cloud computing, 5G/6G, algorithms, DBMS. PhDs guided: 04 candidates.
Published papers: 44 papers in international journals and conferences. Patents: 01 approved & 05 filed. Citations: 38.
Professional memberships: Member of the Computer Society of India (CSI); Member of the Institute for Engineering Research and Publication (IFERP).