Sorting Updated 31 Oct 2023
Sorting Algorithms
Sorting can be performed on any one attribute, or a combination of attributes, present in
each record. Searching is easy and efficient if the data is stored in sorted order.
Let A be a list of n elements A1, A2, A3, …, An in memory. Sorting A refers to the
operation of rearranging the contents of A so that they are in increasing order, that is, so
that A1 <= A2 <= A3 <= … <= An. Since A has n elements, there are n! ways that
the contents can appear in A. These ways correspond precisely to the n! permutations
of 1, 2, 3, …, n. Accordingly, each sorting algorithm must take care of these n! possibilities.
The choice of a sorting technique depends on factors such as:
Programming Time
Execution Time
Number of Comparisons
Memory Utilization
Computational Complexity
Sorting techniques are categorized into two types: Internal Sorting and External
Sorting.
Internal Sorting: Internal sorting is used when a small amount of data has to be
sorted. In this method, the data to be sorted is stored entirely in main memory (RAM).
Internal sorting methods can access records randomly. Ex: Bubble Sort, Insertion Sort,
Selection Sort, Shell Sort, Quick Sort, Radix Sort, Heap Sort, etc.
External Sorting: External sorting is used when a large amount of data has to be
sorted. In this method, the data to be sorted is stored in main memory as well as in
secondary memory such as disk. External sorting methods can access records only in
sequential order. Ex: Merge Sort, Multiway Merge Sort.
Bubble Sort
Bubble sort is the simplest sorting algorithm of all and is also known as exchange sort.
It works by comparing each pair of adjacent elements of an array and checking whether
the first element is greater than the second. The algorithm uses multiple passes. In each
pass, the first and second data items are compared, and if the first is bigger than the
second, the two items are swapped. Next, the items in the second and third positions are
compared, and if the first is larger than the second they are swapped; otherwise their
order is unchanged. This process continues for each successive pair of data items until
all items are sorted.
After the first iteration, are the elements in ascending order?
No. This means we will have to keep repeating this set of actions until the entire array
of elements is sorted. But then, how many times will we have to iterate over a given
array until it is completely sorted?
In general, it takes (n-1) iterations to sort a given array using the bubble sort
algorithm, where n is the number of elements in the array.
So in this case, since there are 5 elements, it takes (5-1), i.e., 4 iterations to completely
sort the array.
Thus, as shown in the illustration above, at the end of the first pass, if the array is to
be sorted in ascending order, the largest element is placed at the end of the list. After
the second pass, the second largest element is placed at the second-last position in
the list, and so on.
In the case of bubble sort, n-1 comparisons are done in the 1st iteration, n-2 in the 2nd
iteration, n-3 in the 3rd iteration, and so on. So the total number of comparisons will
be
(n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2
Hence the time complexity of Bubble Sort is O(n²). This shows that many
comparisons are performed in order to sort an array, which causes inefficiency as the
size of the array grows.
Implementation of Bubble Sort
#include<iostream>
using namespace std;
int main()
{
    int i, j, temp, pass = 0;
    int a[10] = {10,2,9,14,43,25,18,1,5,45};
    cout << "Input list of elements to be sorted\n";
    for(i = 0; i < 10; i++)
        cout << a[i] << "\t";
    cout << endl;
    for(i = 0; i < 9; i++)              // (n-1) passes
    {
        for(j = 0; j < 9 - i; j++)      // compare adjacent pairs
        {
            if(a[j] > a[j+1])
            {
                temp = a[j];            // swap the out-of-order pair
                a[j] = a[j+1];
                a[j+1] = temp;
            }
        }
        pass++;
    }
    cout << "\nSorted list of elements is\n";
    for(i = 0; i < 10; i++)
        cout << a[i] << "\t";
    cout << "\nNumber of passes required to sort the array: " << pass;
    return 0;
}
Sample Output
The best case occurs when the given array is already sorted. In such a case, in one
iteration (i.e., using n-1 comparisons), we can check whether we are making any swaps.
If we are not, we know that the list is sorted, and we can stop
iterating.
Selection Sort
As the name itself suggests, the selection sort technique first selects the smallest
element in the array and swaps it with the first element in the array.
In selection sort, the smallest value among the unsorted elements of the array is
selected in every pass and moved to its appropriate position in the array.
First, find the smallest element of the array and place it in the first position. Then
find the second smallest element of the array and place it in the second position. The
process continues until we get the sorted array. An array with n elements is sorted
using n-1 passes of the selection sort algorithm, i.e., until there are no
elements left in the unsorted part of the list.
Selection sort works efficiently when the list to be sorted is of small size but its
performance is affected badly as the list to be sorted grows in size.
Hence we can say that selection sort is not advisable for larger lists of data.
(Illustration: Pass 2, Pass 3 and Pass 4 of selection sort on the example array.)
The above pictorial representation of selection sort can be summarized in tabular
form:
From the above table, we see that with every pass the next smallest element is put in
its correct position in the sorted array. From the above illustration, we see that four
passes were required to sort an array of 5 elements. This means that, in general,
sorting an array of n elements requires n-1 passes.
#include<iostream>
using namespace std;
int findSmallest(int[], int);
int main()
{
    int myarray[10] = {11,5,2,20,42,53,23,34,101,22};
    int pos, temp, pass = 0;
    cout << "\n Input list of elements to be Sorted\n";
    for(int i = 0; i < 10; i++)
    {
        cout << myarray[i] << "\t";
    }
    for(int i = 0; i < 10; i++)
    {
        pos = findSmallest(myarray, i);   // index of smallest in myarray[i..9]
        temp = myarray[i];                // swap it into position i
        myarray[i] = myarray[pos];
        myarray[pos] = temp;
        pass++;
    }
    cout << "\n Sorted list of elements is\n";
    for(int i = 0; i < 10; i++)
    {
        cout << myarray[i] << "\t";
    }
    cout << "\nNumber of passes required to sort the array: " << pass;
    return 0;
}
int findSmallest(int myarray[], int i)
{
    int ele_small, position, j;
    ele_small = myarray[i];
    position = i;
    for(j = i + 1; j < 10; j++)
    {
        if(myarray[j] < ele_small)
        {
            ele_small = myarray[j];
            position = j;
        }
    }
    return position;
}
Sample Output
Complexity Analysis of Selection Sort
As seen in the implementation above, selection sort requires two for loops nested
within each other. The outer for loop steps through all the elements in the array, and
the minimum element's index is found using another for loop nested inside the
outer one.
The time complexity of selection sort remains the same regardless of the input
array's initial order. At each step, the algorithm identifies the minimum element and
places it in its correct position. However, the minimum element cannot be determined
until the entire unsorted portion of the array is traversed.
(n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2
Another way to analyze the complexity is by observing the number of loops, which in
selection sort is two nested loops. Hence, the overall complexity is proportional to
n^2.
Quick Sort
Quick sort is a sorting algorithm based on the divide and conquer approach. It first
picks a partition element called the 'Pivot', which divides the list into two sub lists
such that all the elements in the left sub list are smaller than the pivot and all the
elements in the right sub list are greater than the pivot. The same process is applied
to the left and right sub lists separately. This process is repeated recursively until
no sub list contains more than one element.
The main task in Quick Sort is to find the pivot that partitions the given list into two
halves, so that the pivot is placed at its appropriate position in the array. The choice
of pivot has a significant effect on the efficiency of the Quick Sort algorithm. The
simplest way is to choose the first element as the Pivot. However, the first element is
not a good choice, especially if the given list is ordered or nearly ordered. For better
efficiency, the middle element can be chosen as the Pivot.
Initially three variables Pivot, Beg and End are taken, such that both Pivot and Beg
refer to the 0th position and End refers to the (n-1)th position in the list. The first pass
terminates when Pivot, Beg and End all refer to the same array element. This
indicates that the Pivot element is placed at its final position. The elements to the left
of the Pivot are smaller than this element and the elements to its right are greater.
To understand the Quick Sort algorithm, consider an unsorted array as follows. The
steps to sort the values stored in the array in the ascending order using Quick Sort
are given below.
8 33 6 21 4
Step 1: Initially the element at index 0 in the list is chosen as the Pivot, and the index
variables Beg and End are initialized to 0 and (n-1) respectively.
Step 2: The scanning of the elements starts from the end of the list. Since
A[Pivot] > A[End], i.e., 8 > 4, they are swapped.
Step 3: Now the scanning of the elements starts from the beginning of the list. Since
A[Pivot] > A[Beg], Beg is incremented by one and the list remains unchanged.
Step 4: The element A[Pivot] is smaller than A[Beg], so they are swapped.
Step 5: Again the list is scanned from right to left. Since A[Pivot] is smaller than
A[End], the value of End is decreased by one and the list remains unchanged.
Step 6: Next, since the element A[Pivot] is again smaller than A[End], the value of End
is decreased by one and the list remains unchanged.
Step 8: Now the list is scanned from left to right. Since A[Pivot] > A[Beg], the value of
Beg is increased by one and the list remains unchanged.
At this point the variables Pivot, Beg and End all refer to the same element; the first
pass is terminated and the value 8 is placed at its appropriate position. The elements
to its left are smaller than 8 and the elements to its right are greater than 8. The same
process is applied to the left and right sub lists.
ALGORITHM
Step 1: Take the first element of the list as the Pivot.
Step 2: Initialize i = Beg and j = End.
Step 3: Increment i until A[i] > Pivot, then stop.
Step 4: Decrement j until A[j] < Pivot, then stop.
Step 5: If i < j, interchange A[i] with A[j].
Step 6: Repeat steps 3, 4 and 5 until i > j, i.e., until i crosses j.
Step 7: Exchange the Pivot element with the element placed at j, which is the correct
place for the Pivot.
Disadvantages:
It is a somewhat complex method of sorting.
It is a little harder to implement than other sorting methods.
It does not perform well on small groups of elements.
Best Case
To find the location of an element that splits the array into two parts, O(n)
operations are required.
This is because every element in the array is compared to the partitioning
element.
After the division, each section is examined separately.
If the array is split approximately in half (which is not always the case), then there
will be log2 n splits.
Therefore, the total number of comparisons required is f(n) = n × log2 n = O(n log2 n).
The time taken by quick sort to sort an array depends on the input array and
partition strategy or method.
Worst Case
The worst case occurs when the list is already sorted (or nearly sorted) and the first
element is chosen as the Pivot. Each partition then separates only one element from
the rest, so (n-1) + (n-2) + … + 1 = n(n-1)/2 comparisons are required, giving a time
complexity of O(n²).
Merge Sort
The Merge Sort algorithm is based on the fact that it is easier and faster to sort two
smaller arrays than one large array.
It follows the principle of "Divide and Conquer". In this sorting technique the list is
first divided into two halves. The left and right sub lists obtained are recursively
divided into two sub lists until each sub list contains no more than one element. A sub
list containing only one element does not require any sorting. After that, the two
sorted sub lists are merged to form a combined list, and the merging process is
applied recursively until the sorted array is achieved.
13 42 36 20 63 23 12
Step 1: First divide the combined list into two sub lists as follows.
Step 2: Now Divide the left sub list into smaller sub list
Step 3: Similarly divide the sub lists till one element is left in the sub list.
Step 4: Next sort the elements into their appropriate positions and then
combine the sub lists.
Step 5: Now these two sub lists are again merged to give the following sorted
sub list of size 4.
Step 6: After sorting the left half of the array, continue the same process
for the right sub list. The sorted array of the right half of the list is as
follows.
Step 7: Finally the left and right halves of the array are merged to give the
sorted array as follows.
Advantages:
Merge sort is a stable sort.
It is easy to understand.
It gives better performance.
Disadvantages:
It requires extra memory space.
Elements must be copied into a temporary array.
It requires an additional array.
It is a slow process for small lists.
Complexity Analysis of Merge Sort
Merge sort repeatedly divides the array into two halves, so there are log n levels
of division.
Also, we perform a single-step operation to find the middle of any subarray,
i.e. O(1).
And to merge the subarrays made by dividing the original array of n elements, a
running time of O(n) is required at each level.
Hence the total time for the mergeSort function becomes n(log n + 1), which gives
us a time complexity of O(n log n).