Sorting Updated 31 Oct 2023

The document provides information about sorting algorithms in data structures. It discusses that sorting is a technique to organize data by arranging records in ascending or descending order. Various sorting techniques like bubble sort, selection sort, quick sort, merge sort and insertion sort are described along with their time and space complexities. Bubble sort and selection sort are explained in detail through pseudocode, illustrations and analysis of their O(n^2) time complexity.

Uploaded by

Shweta Rai

ALIGARH MUSLIM UNIVERSITY

Department of Computer Science


Course: MCA
CAMS 1001: Data Structure using C++
Academic Session 2023-24
Teacher: Dr Shafiqul Abidin

Sorting Algorithms

Sorting is a technique of organizing data: the process of arranging records in either ascending or descending order. Sort methods are very important in data structures.

Sorting can be performed on any one attribute, or on a combination of attributes, present in each record. Searching is very easy and efficient to perform if the data is stored in sorted order.

Let A be a list of n elements A1, A2, A3, ..., An in memory. Sorting A refers to the operation of rearranging the contents of A so that they are in increasing order, that is, so that A1 <= A2 <= A3 <= ... <= An. Since A has n elements, there are n! ways that the contents can appear in A. These ways correspond precisely to the n! permutations of 1, 2, 3, ..., n. Accordingly, each sorting algorithm must take care of these n! possibilities.

Ex: suppose an array DATA contains 8 elements as follows:

DATA: 70, 30, 40, 10, 80, 20, 60, 50

After sorting, DATA must appear in memory as follows:

DATA: 10, 20, 30, 40, 50, 60, 70, 80

Since DATA consists of 8 elements, there are 8! = 40320 ways that the numbers 10, 20, 30, 40, 50, 60, 70, 80 can appear in DATA.
The factors to be considered while choosing sorting techniques are:

 Programming Time
 Execution Time
 Number of Comparisons
 Memory Utilization
 Computational Complexity

Sorting techniques are categorized into two types: internal sorting and external sorting.

Internal Sorting: An internal sorting method is used when a small amount of data has to be sorted. In this method, the data to be sorted is stored entirely in main memory (RAM), so records can be accessed randomly. Ex: Bubble Sort, Insertion Sort, Selection Sort, Shell Sort, Quick Sort, Radix Sort, Heap Sort etc.

External Sorting: An external sorting method is used when a large amount of data has to be sorted. In this method, the data to be sorted is stored in main memory as well as in secondary memory such as disk. External sorting methods can access records only in sequential order. Ex: Merge Sort, Multiway Merge Sort.

There are many algorithms available for sorting:


 Bubble Sort
 Selection Sort
 Quick Sort
 Merge Sort
 Insertion Sort
 Heap Sort

Bubble Sort
Bubble sort is the simplest algorithm of all and is also known as exchange sort. It works by comparing every pair of adjacent elements of an array and checking whether the first element is greater than the second. The algorithm uses multiple passes. In each pass the first and second data items are compared, and if the first is bigger than the second, the two items are swapped. Next, the items in the second and third positions are compared, and if the first of the pair is larger, they are swapped; otherwise their order is unchanged. This process continues for each successive pair of data items until all items are sorted.
After the first pass, are the elements in ascending order?

No. This means we have to keep repeating this set of actions until the entire array is sorted. But then, how many times will we have to iterate over a given array until it is completely sorted?

In general, it takes (n-1) iterations in order to sort a given array using the bubble sort
algorithm, where n is the number of elements in the array.

So in this case, since there are 5 elements, it takes (5-1), i.e. 4, iterations to completely sort the array.

The general algorithm for bubble sort

We take an array of size 5 and illustrate the bubble sort algorithm.

[Illustration: Pass 1 (Iteration 1), Pass 2 (Iteration 2), Pass 3 (Iteration 3) — after these passes the entire array is sorted.]

The above pictorial representation can be summarized in tabular form:


As shown in the above table, with every pass the largest element bubbles up to the last position, thereby sorting more of the list with every pass. As mentioned in the introduction, each element is compared to its adjacent element and swapped with it if they are not in order.

Thus as shown in the illustration above, at the end of the first pass, if the array is to
be sorted in ascending order, the largest element is placed at the end of the list. For
the second pass, the second largest element is placed at the second last position in
the list and so on.

Why is bubble sort inefficient?

In the case of bubble sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass and so on. So the total number of comparisons is

(n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1 = n(n-1)/2 = (n^2 - n)/2

Hence the time complexity of bubble sort is O(n^2). This shows that many comparisons are performed in order to sort an array, which causes inefficiency as the size of the array grows.
Implementation of Bubble Sort

// CSD, AMU - Implementation of Bubble Sort
#include <iostream>
using namespace std;

int main()
{
    int i, j, temp, pass = 0;
    int a[10] = {10, 2, 9, 14, 43, 25, 18, 1, 5, 45};

    cout << "Input list ...\n";
    for (i = 0; i < 10; i++)
        cout << a[i] << "\t";
    cout << endl;

    // In each pass, compare adjacent pairs and swap them if out of order;
    // the largest remaining element bubbles up to the end of the array.
    for (i = 0; i < 9; i++)
    {
        for (j = 0; j < 9 - i; j++)
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
        pass++;
    }

    cout << "Sorted Element List ...\n";
    for (i = 0; i < 10; i++)
        cout << a[i] << "\t";
    cout << "\nNumber of passes taken to sort the list: " << pass << endl;
    return 0;
}
Sample Output

Complexity Analysis of Bubble Sort


From the code and the illustration we have seen above, in bubble sort we make N-1 comparisons in the first pass, N-2 comparisons in the second pass, and so on.

Hence the total number of comparisons in bubble sort is:


Sum = (N-1) + (N-2) + (N-3) + ... + 3 + 2 + 1

= N(N-1)/2

= O(n^2) => Time complexity of the bubble sort technique

 Space complexity: O(1)


 The worst-case time complexity: O(n^2)
 Average case time complexity: O(n^2)
 Best case time complexity: O(n)

The best case occurs when the given array is already sorted. In such a case, in one pass (i.e. using n comparisons), we can check whether we are making any swaps. If we are not, we know that the list is sorted, and we can stop iterating.
Selection Sort

As the name itself suggests, the selection sort technique first selects the smallest
element in the array and swaps it with the first element in the array.

In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and moved to its appropriate position in the array.

First, find the smallest element of the array and place it in the first position. Then, find the second smallest element of the array and place it in the second position. The process continues until we get the sorted array: an array with n elements is sorted using n-1 passes of the selection sort algorithm, i.e. until there are no elements left in the unsorted part of the list.

Selection sort is quite a straightforward sorting technique, as it only involves finding the smallest element in every pass and placing it in the correct position.

Selection sort works efficiently when the list to be sorted is of small size but its
performance is affected badly as the list to be sorted grows in size.

Hence we can say that selection sort is not advisable for larger lists of data.

The general algorithm for selection sort


[Illustration: Pass 1 through Pass 4 of selection sort on a 5-element array.]

The above pictorial representation of selection sort can be summarized in tabular form:

From the above table, we see that with every pass the next smallest element is put in its correct position in the sorted array. From the above illustration, we see that in order to sort an array of 5 elements, four passes were required. This means that, in general, to sort an array of N elements, we need N-1 passes in total.


Implementation of Selection Sort

// CSD, AMU - Implementation of Selection Sort
#include <iostream>
using namespace std;

int findSmallest(int[], int);

int main()
{
    int myarray[10] = {11, 5, 2, 20, 42, 53, 23, 34, 101, 22};
    int pos, temp, pass = 0;

    cout << "\n Input list of elements to be Sorted\n";
    for (int i = 0; i < 10; i++)
        cout << myarray[i] << "\t";

    // In each pass, find the smallest element among the unsorted
    // elements and swap it into position i.
    for (int i = 0; i < 10; i++)
    {
        pos = findSmallest(myarray, i);
        temp = myarray[i];
        myarray[i] = myarray[pos];
        myarray[pos] = temp;
        pass++;
    }

    cout << "\n Sorted list of elements is\n";
    for (int i = 0; i < 10; i++)
        cout << myarray[i] << "\t";
    cout << "\nNumber of passes required to sort the array: " << pass;
    return 0;
}

// Return the index of the smallest element in myarray[i..9].
int findSmallest(int myarray[], int i)
{
    int ele_small = myarray[i];
    int position = i;
    for (int j = i + 1; j < 10; j++)
    {
        if (myarray[j] < ele_small)
        {
            ele_small = myarray[j];
            position = j;
        }
    }
    return position;
}
Sample Output
Complexity Analysis of Selection Sort

As seen in the code above, selection sort requires two for loops nested within each other: the outer loop steps through all the elements in the array, and the index of the minimum element is found using another loop nested inside it.

The time complexity of selection sort remains the same regardless of the input array's initial order. At each step, the algorithm identifies the minimum element and places it in its correct position; however, the minimum element cannot be determined until the remaining unsorted portion of the array is traversed.

The number of comparisons in selection sort can be calculated as the sum

(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2

which is roughly n^2/2. This results in a time complexity of O(n^2).

Another way to analyze the complexity is by observing the number of loops, which in
selection sort is two nested loops. Hence, the overall complexity is proportional to
n^2.
Quick Sort

Quick sort is a sorting algorithm based on the divide and conquer approach. It first picks a partition element called the 'Pivot', which divides the list into two sub lists such that all the elements in the left sub list are smaller than the Pivot and all the elements in the right sub list are greater than the Pivot. The same process is applied to the left and right sub lists separately. This process is repeated recursively until no sub list contains more than one element.

Working of Quick Sort:

The main task in Quick Sort is to find the pivot that partitions the given list into two halves, so that the pivot is placed at its appropriate position in the array. The choice of pivot has a significant effect on the efficiency of the Quick Sort algorithm. The simplest way is to choose the first element as the Pivot. However, the first element is not a good choice, especially if the given list is ordered or nearly ordered. For better efficiency, the middle element can be chosen as the Pivot.

Initially three variables Pivot, Beg and End are taken, such that both Pivot and Beg refer to the 0th position and End refers to the (n-1)th position in the list. The first pass terminates when Pivot, Beg and End all refer to the same array element. This indicates that the Pivot element is placed at its final position: the elements to the left of the Pivot are smaller than it, and the elements to its right are greater.
To understand the Quick Sort algorithm, consider an unsorted array as follows. The
steps to sort the values stored in the array in the ascending order using Quick Sort
are given below.

8 33 6 21 4

Step 1: Initially the element at index 0 in the list is chosen as the Pivot, and the index variables Beg and End are initialized to index 0 and (n-1) respectively.

Step 2: The scanning of the elements starts from the end of the list. A[Pivot] > A[End], i.e. 8 > 4, so they are swapped.

Step 3: Now the scanning of the elements starts from the beginning of the list. Since A[Pivot] > A[Beg], Beg is incremented by one and the list remains unchanged.

Step 4: The element A[Pivot] is smaller than A[Beg], so they are swapped.
Step 5: Again the list is scanned from right to left. Since A[Pivot] is smaller than A[End], the value of End is decreased by one and the list remains unchanged.

Step 6: Next, the element A[Pivot] is again smaller than A[End], so the value of End is decreased by one and the list remains unchanged.

Step 7: A[Pivot] > A[End], so they are swapped.

Step 8: Now the list is scanned from left to right. Since A[Pivot] > A[Beg], the value of Beg is increased by one and the list remains unchanged.

At this point the variables Pivot, Beg and End all refer to the same element, so the first pass is terminated and the value 8 is placed at its appropriate position. The elements to its left are smaller than 8 and the elements to its right are greater than 8. The same process is applied on the left and right sub lists.

The general algorithm for quick sort

ALGORITHM

Step 1: Select the first element of the array as the Pivot.
Step 2: Initialize i and j to the Beg and End positions respectively.
Step 3: Increment i until A[i] > Pivot, then stop.
Step 4: Decrement j until A[j] < Pivot, then stop.
Step 5: If i < j, interchange A[i] with A[j].
Step 6: Repeat steps 3, 4, 5 until i > j, i.e. i has crossed j.
Step 7: Exchange the Pivot element with the element at j, which is the correct place for the Pivot.

Advantages of Quick Sort:

 It is among the fastest sorting techniques in practice.
 Its efficiency is also relatively good.
 It requires a small amount of memory.

Disadvantages:
 It is a somewhat complex method of sorting.
 It is a little harder to implement than other sorting methods.
 It does not perform well for small groups of elements.

Complexity Analysis of Quick Sort

Best Case
 To find the location of an element that splits the array into two parts, O(n)
operations are required.
 This is because every element in the array is compared to the partitioning
element.
 After the division, each section is examined separately.
 If the array is split approximately in half (which is not always the case), then there will be log2 n splits.
 Therefore, the total number of comparisons required is f(n) = n x log2 n = O(n log2 n).

 The time taken by quick sort to sort an array depends on the input array and
partition strategy or method.

Order of Quick Sort = O(nlog2n)

Worst Case

 Quick Sort is sensitive to the order of input data.


 It gives the worst performance when the elements are already in ascending order.
 It then divides the array into sections of 1 and (n-1) elements in each call.
 Then, there are (n-1) divisions in all.
 Therefore, the total number of comparisons required is f(n) = n x (n-1) = O(n^2).

Order of Quick Sort in the worst case = O(n^2)


Merge Sort

The Merge Sort algorithm is based on the fact that it is easier and faster to sort two
smaller arrays than one large array.

It follows the principle of "Divide and Conquer". In this sorting method the list is first divided into two halves. The left and right sub lists obtained are recursively divided into two sub lists until no sub list contains more than one element. A sub list containing only one element does not require any sorting. After that, the two sorted sub lists are merged to form a combined list, and the merging process is applied recursively until the sorted array is achieved.

Consider the following merge sort example:

13 42 36 20 63 23 12

Step 1: First divide the list into two sub lists as follows.

Step 2: Now divide the left sub list into smaller sub lists.

Step 3: Similarly divide the sub lists until one element is left in each sub list.

Step 4: Next sort the elements into their appropriate positions and then combine the sub lists.

Step 5: Now these two sub lists are again merged to give the following sorted sub list of size 4.

Step 6: After sorting the left half of the array, continue the same process for the right sub list. The sorted right half of the list is as follows.

Step 7: Finally the left and right halves of the array are merged to give the sorted array as follows.
Advantages:
 Merge sort is a stable sort.
 It is easy to understand.
 It gives better performance.

Disadvantages:
 It requires extra memory space.
 Elements must be copied to a temporary array.
 It requires an additional array.
 It is a comparatively slow process for small lists.
Complexity Analysis of Merge Sort

As we have already learned in binary search, whenever we halve the input in every step, the number of steps can be represented using a logarithmic function, log n, and is at most log n + 1.

Also, we perform a single step operation to find the middle of any subarray, i.e. O(1).

And to merge the subarrays made by dividing the original array of n elements, a running time of O(n) will be required at each level.

Hence the total time for the mergeSort function becomes n(log n + 1), which gives us a time complexity of O(n log n).

Best Case Time Complexity [Big-omega]: Ω(n log n)

Worst Case Time Complexity [Big-O]: O(n log n)

Average Time Complexity [Big-theta]: Θ(n log n)
