Unit2 Algorithms

The document provides an overview of searching and sorting algorithms, specifically Linear Search and Binary Search, detailing their descriptions, time complexities, and applications. It also covers Bubble Sort and Quick Sort, explaining their algorithms, advantages, and disadvantages. Each algorithm is illustrated with examples and code snippets to demonstrate their implementation.

Uploaded by Chaudhri Upeksha

BZ GROW MORE INSTITUTE OF MSC(CA&IT)

Searching and Sorting Algorithms


Searching Algorithms
Searching algorithms are used to retrieve information stored within a data structure (e.g.,
array, linked list) based on a specific criterion or key.
1. Linear Search (Sequential Search)
o Description: Linear search scans through each element of the array
sequentially until the desired element is found or the entire array is checked.
o Time Complexity: O(n), where n is the number of elements in the array.
o When to use: Small datasets or unsorted arrays.
Steps:
o Start from the first element.
o Compare each element with the target.
o Return the index if found, or indicate failure at the end.
2. Binary Search
o Description: Binary search works on a sorted array. It repeatedly divides the
search space in half, comparing the middle element with the target value.
o Time Complexity: O(log n), where n is the number of elements in the array.
o When to use: Large, sorted datasets.
Steps:
o Find the middle element of the array.
o If the target is equal to the middle element, return the index.
o If the target is smaller, search the left half; if larger, search the right half.

Linear Search:-
The linear search algorithm is defined as a sequential search algorithm that starts at one end
and goes through each element of a list until the desired element is found; otherwise, the
search continues till the end of the dataset.


Algorithm for Linear Search Algorithm:


The algorithm for linear search can be broken down into the following steps:
• Start: Begin at the first element of the collection of elements.
• Compare: Compare the current element with the desired element.
• Found: If the current element is equal to the desired element, return true or index to
the current element.
• Move: Otherwise, move to the next element in the collection.
• Repeat: Repeat the Compare, Found and Move steps until the end of the collection is reached.
• Not found: If the end of the collection is reached without finding the desired
element, return that the desired element is not in the array.
How Does Linear Search Algorithm Work?
In Linear Search Algorithm,
• Every element is considered as a potential match for the key and checked for the
same.
• If any element is found equal to the key, the search is successful and the index of that
element is returned.
• If no element is found equal to the key, the search yields “No match found”.
For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30
Step 1: Start from the first element (index 0) and compare the key with each element (arr[i]).
• Comparing the key with the first element arr[0]. Since they are not equal, the iterator moves to the
next element as a potential match.
• Comparing the key with the next element arr[1]. Since they are not equal, the iterator moves to the
next element as a potential match.

Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is
found (here 2).

Example:-
#include <stdio.h>

int main() {
int n, i, search, array[100];


printf("Enter the number of elements in the array: ");


scanf("%d", &n);

printf("Enter %d integers:\n", n);


for (i = 0; i < n; i++) {
scanf("%d", &array[i]);
}

printf("Enter the value to find: ");


scanf("%d", &search);

for (i = 0; i < n; i++) {


if (array[i] == search) {
printf("%d is present at location %d.\n", search, i + 1);
break;
}
}
if (i == n) {
printf("%d is not present in the array.\n", search);
}

return 0;
}

Time Complexity:
• Best Case: In the best case, the key might be present at the first index. So the best
case complexity is O(1)


• Worst Case: In the worst case, the key might be present at the last index, i.e., at the end
opposite to the one from which the search started. So the worst-case complexity is O(N),
where N is the size of the list.
• Average Case: O(N)
Auxiliary Space: O(1), as no extra space is used apart from the variable needed to iterate through
the list.
Applications of Linear Search Algorithm:
• Unsorted Lists: When we have an unsorted array or list, linear search is most
commonly used to find any element in the collection.
• Small Data Sets: Linear Search is preferred over binary search when we have small
data sets, where the overhead of keeping the data sorted for binary search is not worthwhile.
• Searching Linked Lists: In linked list implementations, linear search is commonly used
to find elements within the list. Each node is checked sequentially until the desired
element is found (a minimal sketch follows this list).
• Simple Implementation: Linear Search is much easier to understand and implement
as compared to Binary Search or Ternary Search.
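For the linked-list case mentioned above, the traversal follows next pointers instead of array indices. Below is a minimal C sketch; the Node structure and containsKey helper are illustrative and not part of the original notes.

#include <stdio.h>

// A minimal singly linked list node (illustrative helper)
struct Node {
    int data;
    struct Node* next;
};

// Linear search over a linked list: visit each node in turn
// until the key is found or the list ends.
int containsKey(struct Node* head, int key) {
    for (struct Node* cur = head; cur != NULL; cur = cur->next) {
        if (cur->data == key)
            return 1;   // found
    }
    return 0;           // reached the end without a match
}

int main() {
    // Build a small list 10 -> 50 -> 30 by hand
    struct Node c = {30, NULL};
    struct Node b = {50, &c};
    struct Node a = {10, &b};

    printf("%d\n", containsKey(&a, 30)); // prints 1
    printf("%d\n", containsKey(&a, 99)); // prints 0
    return 0;
}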
Advantages of Linear Search Algorithm:
• Linear search can be used irrespective of whether the array is sorted or not. It can be
used on arrays of any data type.
• Does not require any additional memory.
• It is a well-suited algorithm for small datasets.
Disadvantages of Linear Search Algorithm:
• Linear search has a time complexity of O(N), which in turn makes it slow for large
datasets.
• Not suitable for large arrays.

Binary Search:-
Binary Search Algorithm is a searching algorithm used in a sorted array by repeatedly
dividing the search interval in half. The idea of binary search is to use the information that
the array is sorted and reduce the time complexity to O(log N).


Binary search is a search algorithm used to find the position of a target value within
a sorted array. It works by repeatedly dividing the search interval in half until the target
value is found or the interval is empty. The search interval is halved by comparing the target
element with the middle value of the search space.
Binary Search Algorithm
Below is the step-by-step algorithm for Binary Search:
• Divide the search space into two halves by finding the middle index “mid”.
• Compare the middle element of the search space with the key.
• If the key is found at the middle element, the process is terminated.
• If the key is not found at the middle element, choose which half will be used as the next
search space.
o If the key is smaller than the middle element, then the left side is used for
next search.
o If the key is larger than the middle element, then the right side is used for
next search.
• This process is continued until the key is found or the total search space is exhausted.

How does Binary Search Algorithm work?


To understand the working of binary search, consider the following illustration:
Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.

1. Initially the search space is the whole array: low = 0, high = 9.

2. mid = 4 and arr[4] = 16 is less than 23, so the search moves to the right half: low = 5, high = 9.

3. mid = 7 and arr[7] = 56 is greater than 23, so the search moves to the left half: low = 5, high = 6.

4. mid = 5 and arr[5] = 23 matches the target, so the search stops and returns index 5.

• Time Complexity:
o Best Case: O(1)
o Average Case: O(log N)
o Worst Case: O(log N)
• Auxiliary Space: O(1) for the iterative version. If the recursive version's call stack is considered,
the auxiliary space is O(log N) (a minimal recursive sketch follows this list).
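The auxiliary-space note above refers to a recursive formulation. A minimal recursive sketch in C is shown below; the binarySearch helper is illustrative, and the full example later in this unit uses the iterative form.

// Recursive binary search on arr[low..high]; returns the index of key or -1.
// Each call halves the interval, so the call stack is at most O(log N) deep.
int binarySearch(int arr[], int low, int high, int key) {
    if (low > high)
        return -1;                      // empty interval: not found
    int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
    if (arr[mid] == key)
        return mid;
    if (arr[mid] > key)
        return binarySearch(arr, low, mid - 1, key);  // search left half
    return binarySearch(arr, mid + 1, high, key);     // search right half
}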
Applications of Binary Search Algorithm
• Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the
optimal hyperparameters for a model.
• It can be used for searching in computer graphics such as algorithms for ray tracing
or texture mapping.
• It can be used for searching a database.
Advantages of Binary Search
• Binary search is faster than linear search, especially for large arrays.
• Simpler and often more practical than other searching algorithms with a similar or better
asymptotic complexity, such as interpolation search or exponential search.
• Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.
Disadvantages of Binary Search
• The array should be sorted.
• Binary search requires that the data structure being searched be stored in contiguous
memory locations.
• Binary search requires that the elements of the array be comparable, meaning that
they must be able to be ordered.
Example:- binary search

#include <stdio.h>

int main() {


int n, i, search, array[100];


int first, last, middle;

printf("Enter the number of elements in the array: ");


scanf("%d", &n);

printf("Enter %d integers in ascending order:\n", n);


for (i = 0; i < n; i++) {
scanf("%d", &array[i]);
}

printf("Enter the value to find: ");


scanf("%d", &search);

first = 0;
last = n - 1;
middle = (first + last) / 2;

while (first <= last) {


if (array[middle] < search) {
first = middle + 1;
} else if (array[middle] == search) {
printf("%d is present at location %d.\n", search, middle + 1);
break;
} else {
last = middle - 1;
}
middle = (first + last) / 2;
}


if (first > last) {


printf("%d is not present in the array.\n", search);
}

return 0;
}

Internal Sorting Algorithms: Bubble Sort:-


Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the
adjacent elements if they are in the wrong order. This algorithm is not suitable for large data
sets as its average and worst-case time complexity is quite high.
Bubble Sort Algorithm
In Bubble Sort algorithm,
• Traverse the array from the left, compare each pair of adjacent elements, and swap them if the
left one is larger.
• In this way, the largest element is moved to the rightmost end in the first pass.
• This process is then continued to find the second largest and place it, and so on, until
the data is sorted.
Input: arr[] = {6, 0, 3, 5}
First Pass: The largest element is placed in its correct position, i.e., the end of the array.
{6, 0, 3, 5} → {0, 6, 3, 5} → {0, 3, 6, 5} → {0, 3, 5, 6}

Second Pass:
Place the second largest element at its correct position.
{0, 3, 5, 6} → no swaps are needed; 5 is already in place.

Third Pass:
Place the remaining two elements at their correct positions.
{0, 3, 5, 6} → no swaps are needed; the array is sorted.

• Total no. of passes: n-1


• Total no. of comparisons: n*(n-1)/2
Time Complexity: O(N²)
Auxiliary Space: O(1)
Advantages of Bubble Sort:
• Bubble sort is easy to understand and implement.
• It does not require any additional memory space.
• It is a stable sorting algorithm, meaning that elements with the same key value
maintain their relative order in the sorted output.
Disadvantages of Bubble Sort:
• Bubble sort has a time complexity of O(N²), which makes it very slow for large data
sets.


• Bubble sort is a comparison-based sorting algorithm, which means that it requires a
comparison operator to determine the relative order of elements in the input data
set. This can limit the efficiency of the algorithm in certain cases.

Example of bubble sort:-


#include <stdio.h>

int main() {
int array[100], n, i, j, temp;

printf("Enter number of elements: ");


scanf("%d", &n);

printf("Enter %d integers: ", n);


for (i = 0; i < n; i++)
scanf("%d", &array[i]);

for (i = 0; i < n - 1; i++) {


for (j = 0; j < n - i - 1; j++) {
if (array[j] > array[j + 1]) {
temp = array[j];
array[j] = array[j + 1];
array[j + 1] = temp;
}
}
}

printf("Sorted array: ");


for (i = 0; i < n; i++)


printf("%d ", array[i]);
return 0;
}
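A common refinement, not shown in the notes above, is to stop as soon as a complete pass makes no swap. The sketch below assumes that variant: it brings the best case for already-sorted input down to O(N) while leaving the worst case at O(N²).

// Bubble sort with an early-exit flag: if a whole pass performs no swap,
// the array is already sorted and the remaining passes can be skipped.
void bubbleSortOptimized(int array[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - i - 1; j++) {
            if (array[j] > array[j + 1]) {
                int temp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped)   // no swaps in this pass: the array is sorted
            break;
    }
}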
Quick Sort:-
QuickSort is a sorting algorithm based on the Divide and Conquer that picks an element as a
pivot and partitions the given array around the picked pivot by placing the pivot in its correct
position in the sorted array.
How does the QuickSort Algorithm work?
The algorithm has the following main steps.
1. Choose a pivot.
2. Partition the array around the pivot. After partitioning, all elements to the left of the pivot are
smaller than the pivot, all elements to the right are greater than or equal to it, and we get the final
index of the pivot. The left and right parts may not be sorted individually.
3. Recursively call the same procedure for the two partitioned left and right subarrays.
4. Stop the recursion when only one element (or none) is left.
Consider: arr[] = {10, 80, 30, 90, 40}, with the last element 40 chosen as the pivot (as in the code below).
• Compare 10 with the pivot. As it is less than the pivot, it stays in the left (smaller) part.

• Compare 80 with the pivot. It is greater than the pivot.

• Compare 30 with the pivot. It is less than the pivot, so it is swapped into the left part, giving
{10, 30, 80, 90, 40}.

• Compare 90 with the pivot. It is greater than the pivot.

• Arrange the pivot in its correct position by swapping it with the first element of the right part,
giving {10, 30, 40, 90, 80}; the pivot 40 is now at its final index 2.

#include <stdio.h>
#include <stdlib.h>

// Function to swap two elements


void swap(int* a, int* b) {
int t = *a;
*a = *b;
*b = t;
}

// Lomuto partition: places the pivot (the last element) at its correct
// position and returns that index
int partition(int arr[], int low, int high) {


int pivot = arr[high];

int i = low - 1;
for (int j = low; j <= high - 1; j++) {
if (arr[j] < pivot) {
i++;
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i + 1], &arr[high]);
return i + 1;
}
void quickSort(int arr[], int low, int high) {

if (low < high) {

int pi = partition(arr, low, high);

// Recursion calls for smaller elements


// and greater or equals elements
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}


// Function to print an array


void printArray(int arr[], int size) {
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
}

int main() {
int arr[] = {10, 7, 8, 9, 1, 5};
int n = sizeof(arr) / sizeof(arr[0]);

quickSort(arr, 0, n - 1);

printf("Sorted Array\n");
printArray(arr, n);
return 0;
}
Time Complexity:
• Best Case: Ω(N log N)
The best-case scenario for quicksort occurs when the pivot chosen at each step divides the array
into roughly equal halves. In this case, the algorithm makes balanced partitions, leading to
efficient sorting.
• Average Case: Θ(N log N)
Quicksort's average-case performance is usually very good in practice, making it one of the
fastest sorting algorithms.
• Worst Case: O(N²)
The worst-case scenario for quicksort occurs when the pivot at each step consistently results in
highly unbalanced partitions, for example when the array is already sorted and the pivot is always
chosen as the smallest or largest element. To mitigate the worst case, various techniques are used,
such as choosing a good pivot (e.g., median of three) and using a randomized algorithm
(Randomized Quicksort) to shuffle the elements before sorting (a sketch follows this list).
• Auxiliary Space: O(1) if we don't consider the recursive stack space. If we consider the recursive
stack space then, in the worst case, quicksort uses O(N) space for the call stack.
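A minimal sketch of the randomized-pivot idea mentioned above; the randomizedPartition helper is illustrative and reuses the swap and partition functions from the example.

#include <stdlib.h>   // for rand()

// Pick a random index in [low, high], move that element to the end,
// then fall back to the usual last-element (Lomuto) partition.
int randomizedPartition(int arr[], int low, int high) {
    int r = low + rand() % (high - low + 1);
    swap(&arr[r], &arr[high]);        // the random element becomes the pivot
    return partition(arr, low, high);
}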
Advantages of Quick Sort:
• It is a divide-and-conquer algorithm that makes it easier to solve problems.
• It is efficient on large data sets.
• It has a low overhead, as it only requires a small amount of memory to function.
• It is Cache Friendly as we work on the same array to sort and do not copy data to any
auxiliary array.
• Fastest general purpose algorithm for large data when stability is not required.
• The second recursive call is a tail call, so tail call optimization can be applied.
Disadvantages of Quick Sort:
• It has a worst-case time complexity of O(N²), which occurs when the pivot is chosen
poorly.
• It is not a good choice for small data sets.
• It is not a stable sort, meaning that if two elements have the same key, their relative
order will not be preserved in the sorted output in case of quick sort, because here
we are swapping elements according to the pivot’s position (without considering
their original positions).

Straight Selection Sort:-


Selection sort is a simple and efficient sorting algorithm that works by repeatedly selecting
the smallest (or largest) element from the unsorted portion of the list and moving it to the
sorted portion of the list.
The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion
of the list and swaps it with the first element of the unsorted part. This process is repeated
for the remaining unsorted portion until the entire list is sorted.
How does the Selection Sort Algorithm work?
Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}
First pass:


• For the first position in the sorted array, the whole array is traversed from index 0 to
4 sequentially. 64 is currently stored at the first position; after traversing the whole
array, it is clear that 11 is the lowest value.
• Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value
in the array, appears in the first position of the sorted list.

Second Pass:
• For the second position, where 25 is present, again traverse the rest of the array in a
sequential manner.
• After traversing, we found that 12 is the second lowest value in the array and it
should appear at the second place in the array, thus swap these values.

Third Pass:
• Now, for the third place, where 25 is present, again traverse the rest of the array and find
the third least value present in the array.
• While traversing, 22 comes out to be the third least value and it should appear at the
third place in the array, so swap 22 with the element present at the third position.


Fourth pass:
• Similarly, for the fourth position, traverse the rest of the array and find the fourth least
element in the array.
• As 25 is the fourth lowest value, it will be placed at the fourth position.

Fifth Pass:
• At last, the largest value present in the array automatically gets placed at the last
position in the array.
• The resulting array is the sorted array.


Example:-

#include <stdio.h>

int main() {
int arr[5] = {64, 25, 12, 22, 11};
int n = 5;
int i, j, minIndex, temp;

// Print original array


printf("Original array: \n");
for (i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
printf("\n");

// Selection sort algorithm


for (i = 0; i < n-1; i++) {
// Assume the current position as minimum
minIndex = i;
// Find the minimum element in the remaining unsorted array
for (j = i+1; j < n; j++) {
if (arr[j] < arr[minIndex]) {
minIndex = j;
}
}
// Swap the found minimum element with the current position
temp = arr[minIndex];


arr[minIndex] = arr[i];
arr[i] = temp;
}

// Print sorted array


printf("Sorted array: \n");
for (i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
printf("\n");
return 0;
}

Complexity Analysis of Selection Sort


Time Complexity: The time complexity of Selection Sort is O(N²), as there are two nested
loops:
• One loop to select an element of the array one by one = O(N)
• Another loop to compare that element with every other array element = O(N)
• Therefore the overall complexity = O(N) * O(N) = O(N*N) = O(N²)
Auxiliary Space: O(1) as the only extra memory used is for temporary variables while
swapping two values in Array. The selection sort never makes more than O(N) swaps and can
be useful when memory writing is costly.
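As a quick check on the O(N²) bound, the comparisons can be counted directly: pass i compares the current minimum against N - 1 - i remaining candidates, so the total is (N - 1) + (N - 2) + ... + 1 = N(N - 1)/2. For the example above with N = 5 this gives 5·4/2 = 10 comparisons, while at most N - 1 = 4 swaps are performed.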
Advantages of Selection Sort Algorithm
• Simple and easy to understand.
• Works well with small datasets.
Disadvantages of the Selection Sort Algorithm
• Selection sort has a time complexity of O(n^2) in the worst and average case.
• Does not work well on large datasets.
• Does not preserve the relative order of items with equal keys which means it is not
stable.


Heap Sort:-
Heap sort is a comparison-based sorting technique based on Binary Heap Data Structure. It
can be seen as an optimization over selection sort where we first find the max (or min)
element and swap it with the last (or first). We repeat the same process for the remaining
elements. In Heap Sort, we use Binary Heap so that we can quickly find and move the max
element (In O(Log n) instead of O(n)) and hence achieve the O(n Log n) time complexity.
Heap Sort Algorithm
First convert the array into a max heap using heapify. Then, one by one, swap the root node of the
max-heap with the last node of the heap, shrink the heap by one, and heapify the root again. Repeat
this process while the size of the heap is greater than 1.
• Build a max heap from the given input array.
• Repeat the following steps until the heap contains only one element:
o Swap the root element of the heap (which is the largest element) with the
last element of the heap.
o Remove the last element from the heap (it is now in its correct position).
o Heapify the remaining elements of the heap.
• After the last iteration, the input array itself holds the elements in ascending (sorted) order.

Transform the array into max heap:


• To build a max-heap, every parent node should be greater than
or equal to its child nodes.
o Here, in this example, as the parent node 4 is smaller than the child
node 10, the two are swapped to build a max-heap.


Perform heap sort: Remove the maximum element in each step (i.e., move it to the end
position and remove that) and then consider the remaining elements and transform it into a
max heap.
• Delete the root element (10) from the max heap. To delete it, swap it with the last node,
i.e. (1), and shrink the heap by one. After removing the root element, heapify again to
restore the max heap.


• Repeating this delete-and-heapify step until only one element remains leaves the array
sorted: arr[] = {1, 3, 4, 5, 10}.


// Heap Sort in C

#include <stdio.h>

// Function to swap the position of two elements

void swap(int* a, int* b)


{
int temp = *a;
*a = *b;
*b = temp;
}
void heapify(int arr[], int N, int i)
{
// Find largest among root,
// left child and right child

// Initialize largest as root


int largest = i;


int left = 2 * i + 1;

int right = 2 * i + 2;

if (left < N && arr[left] > arr[largest])

largest = left;

if (right < N && arr[right] > arr[largest])

largest = right;

if (largest != i) {

swap(&arr[i], &arr[largest]);

// Recursively heapify the affected


// sub-tree
heapify(arr, N, largest);
}
}

// Main function to do heap sort


void heapSort(int arr[], int N)
{

// Build max heap


for (int i = N / 2 - 1; i >= 0; i--)


heapify(arr, N, i);

// Heap sort
for (int i = N - 1; i >= 0; i--) {

swap(&arr[0], &arr[i]);

heapify(arr, i, 0);
}
}

void printArray(int arr[], int N)


{
for (int i = 0; i < N; i++)
printf("%d ", arr[i]);
printf("\n");
}

int main()
{
int arr[] = { 12, 11, 13, 5, 6, 7 };
int N = sizeof(arr) / sizeof(arr[0]);

heapSort(arr, N);
printf("Sorted array is\n");
printArray(arr, N);
}



Complexity Analysis of Heap Sort
Time Complexity: O(n log n)
Auxiliary Space: O(log n), due to the recursive call stack. However, auxiliary space can be
O(1) for iterative implementation.
Advantages of Heap Sort
• Efficient Time Complexity: Heap Sort has a time complexity of O(n log n) in all cases.
This makes it efficient for sorting large datasets. The log n factor comes from the
height of the binary heap, and it ensures that the algorithm maintains good
performance even with a large number of elements.
• Memory Usage – Memory usage can be minimal (by writing an iterative heapify()
instead of the recursive one used above; a sketch follows this list). So, apart from what is necessary
to hold the initial list of items to be sorted, it needs no additional memory space to work.
• Simplicity – It is simpler to understand than other equally efficient sorting algorithms
because it can be implemented without advanced computer science concepts such as recursion.
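A minimal iterative heapify sketch, assuming the same array layout and swap helper as the heap sort example above; this is the variant the memory-usage point refers to, and it keeps the auxiliary space at O(1).

// Iterative sift-down: same effect as the recursive heapify above,
// but uses a loop instead of recursion, so there is no call-stack growth.
void heapifyIterative(int arr[], int N, int i) {
    while (1) {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;

        if (left < N && arr[left] > arr[largest])
            largest = left;
        if (right < N && arr[right] > arr[largest])
            largest = right;

        if (largest == i)   // heap property restored
            break;

        swap(&arr[i], &arr[largest]);
        i = largest;        // continue sifting down from the child
    }
}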
Disadvantages of Heap Sort
• Costly : Heap sort is costly as the constants are higher compared to merge sort even
if the time complexity is O(n Log n) for both.
• Unstable : Heap sort is unstable. It might rearrange the relative order.
• Inefficient: Heap Sort is not very efficient because of the high constants in the time
complexity.

Simple Insertion Sort:-


Insertion sort is a simple sorting algorithm that works by iteratively inserting each element
of an unsorted list into its correct position in a sorted portion of the list. It is a stable
sorting algorithm, meaning that elements with equal values maintain their relative order in
the sorted output.
Insertion Sort Algorithm:
Insertion sort is a simple sorting algorithm that works by building a sorted array one
element at a time. It is considered an "in-place" sorting algorithm, meaning it doesn't
require any additional memory space beyond the original array.
To achieve insertion sort, follow these steps:


• We start with the second element of the array, as the first element is assumed to
be sorted.
• Compare the second element with the first element; if the second element is
smaller, swap them.
• Move to the third element and compare it with the second element, then the first
element and swap as necessary to put it in the correct position among the first three
elements.
• Continue this process, comparing each element with the ones before it and swapping
as needed to place it in the correct position among the sorted elements.
• Repeat until the entire array is sorted.
Working of Insertion Sort Algorithm:
Consider an array having elements : {23, 1, 10, 5, 2}

Initial:
• Current element is 23
• The first element in the array is assumed to be sorted.
• The sorted part until 0th index is : [23]
First Pass:
• Compare 1 with 23 (current element with the sorted part).
• Since 1 is smaller, insert 1 before 23 .


• The sorted part until 1st index is: [1, 23]


Second Pass:
• Compare 10 with 1 and 23 (current element with the sorted part).
• Since 10 is greater than 1 and smaller than 23 , insert 10 between 1 and 23 .
• The sorted part until 2nd index is: [1, 10, 23]
Third Pass:
• Compare 5 with 1 , 10 , and 23 (current element with the sorted part).
• Since 5 is greater than 1 and smaller than 10 , insert 5 between 1 and 10
• The sorted part until 3rd index is : [1, 5, 10, 23]
Fourth Pass:
• Compare 2 with 1, 5, 10 , and 23 (current element with the sorted part).
• Since 2 is greater than 1 and smaller than 5 insert 2 between 1 and 5 .
• The sorted part until 4th index is: [1, 2, 5, 10, 23]
Final Array:
• The sorted array is: [1, 2, 5, 10, 23]
Example:-

#include <stdio.h>

int main() {
int n, i, j, temp;
int arr[100]; // Adjust size as needed

// Reading number of elements


printf("Enter number of elements: ");
scanf("%d", &n);

// Reading elements into array


printf("Enter %d integers:\n", n);


for(i = 0; i < n; i++) {


scanf("%d", &arr[i]);
}

// Insertion Sort algorithm


for(i = 1; i < n; i++) {
temp = arr[i];
j = i - 1;
while(j >= 0 && arr[j] > temp) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = temp;
}

// Printing sorted array


printf("Sorted array:\n");
for(i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
return 0;
}

Shell Sort:-
Shell sort is mainly a variation of Insertion Sort. In insertion sort, we move elements only
one position ahead. When an element has to be moved far ahead, many movements are
involved. The idea of ShellSort is to allow the exchange of far items. In Shell sort, we make
the array h-sorted for a large value of h. We keep reducing the value of h until it becomes 1.
An array is said to be h-sorted if all sublists of every h’th element are sorted.
Algorithm:


Step 1 − Start
Step 2 − Initialize the value of gap size, say h.
Step 3 − Divide the list into smaller sub-lists of elements that are h positions apart.
Step 4 − Sort these sub-lists using insertion sort.
Step 5 − Reduce the gap h and repeat Steps 3 and 4 until h becomes 1 and the list is sorted.
Step 6 − Print the sorted list.
Step 7 − Stop.
// C++ implementation of Shell Sort
#include <iostream>
using namespace std;

/* function to sort arr using shellSort */


int shellSort(int arr[], int n)
{
// Start with a big gap, then reduce the gap
for (int gap = n/2; gap > 0; gap /= 2)
{
for (int i = gap; i < n; i += 1)
{
int temp = arr[i];

int j;
for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
arr[j] = arr[j - gap];

// put temp (the original arr[i]) in its correct location


arr[j] = temp;
}
}
return 0;
}


void printArray(int arr[], int n)


{
for (int i=0; i<n; i++)
cout << arr[i] << " ";
}

int main()
{
int arr[] = {12, 34, 54, 2, 3}, i;
int n = sizeof(arr)/sizeof(arr[0]);

cout << "Array before sorting: \n";


printArray(arr, n);

shellSort(arr, n);

cout << "\nArray after sorting: \n";


printArray(arr, n);

return 0;
}
Output
Array before sorting:
12 34 54 2 3
Array after sorting:
2 3 12 34 54


Time Complexity: The time complexity of the above implementation of Shell sort is O(n²). In the
above implementation, the gap is reduced by half in every iteration. There are many other gap
sequences that lead to better time complexity (a sketch using Knuth's gap sequence follows this
section).
Worst Case Complexity
The worst-case complexity of Shell sort with this gap sequence is O(n²).
Best Case Complexity
When the given array is already sorted, the total count of comparisons for each interval is roughly
equal to the size of the array, so the best-case complexity is Ω(n log n).
Average Case Complexity
The average case complexity is around O(n log n) to O(n^1.25), depending on the gap sequence.
Space Complexity
The space complexity of the shell sort is O(1).
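As a sketch of an alternative gap sequence, the C function below assumes Knuth's sequence 1, 4, 13, 40, ... (each gap is 3 times the previous plus 1), which is commonly reported to give around O(n^1.5) behaviour; this is not the sequence used in the example above.

// Shell sort using Knuth's gap sequence (1, 4, 13, 40, ...).
// The largest gap below n/3 is generated first, then shrunk each round.
void shellSortKnuth(int arr[], int n) {
    int gap = 1;
    while (gap < n / 3)
        gap = 3 * gap + 1;            // largest Knuth gap below n/3

    while (gap >= 1) {
        // Gap-insertion sort: sort elements that are 'gap' apart
        for (int i = gap; i < n; i++) {
            int temp = arr[i];
            int j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];
            arr[j] = temp;
        }
        gap = (gap - 1) / 3;          // next smaller Knuth gap
    }
}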

External Sorting Algorithms :


Merge Sort:-
Merge sort is a sorting algorithm that follows the divide-and-conquer approach. It works by
recursively dividing the input array into smaller subarrays and sorting those subarrays then
merging them back together to obtain the sorted array.

How does Merge Sort work?


Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows
the divide-and-conquer approach to sort a given array of elements.
Here’s a step-by-step explanation of how merge sort works:
1. Divide: Divide the list or array recursively into two halves until it can no more be
divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.


3. Merge: The sorted subarrays are merged back together in sorted order. The process
continues until all elements from both subarrays have been merged.
Illustration of Merge Sort:
Let’s sort the array or list [38, 27, 43, 10] using Merge Sort


Let’s look at the working of above example:


Divide:
• [38, 27, 43, 10] is divided into [38, 27 ] and [43, 10] .
• [38, 27] is divided into [38] and [27] .
• [43, 10] is divided into [43] and [10] .
Conquer:
• [38] is already sorted.
• [27] is already sorted.
• [43] is already sorted.
• [10] is already sorted.
Merge:
• Merge [38] and [27] to get [27, 38] .
• Merge [43] and [10] to get [10,43] .
• Merge [27, 38] and [10,43] to get the final sorted list [10, 27, 38, 43]
Therefore, the sorted list is [10, 27, 38, 43] .
Example:-
// C program for Merge Sort
#include <stdio.h>
#include <stdlib.h>

// Merges two subarrays of arr[].



// First subarray is arr[l..m]


// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
int i, j, k;
int n1 = m - l + 1;
int n2 = r - m;

// Create temp arrays


int L[n1], R[n2];

// Copy data to temp arrays L[] and R[]


for (i = 0; i < n1; i++)
L[i] = arr[l + i];
for (j = 0; j < n2; j++)
R[j] = arr[m + 1 + j];

// Merge the temp arrays back into arr[l..r]


i = 0;
j = 0;
k = l;
while (i < n1 && j < n2) {
if (L[i] <= R[j]) {
arr[k] = L[i];
i++;
}
else {
arr[k] = R[j];
j++;


}
k++;
}

// Copy the remaining elements of L[],


// if there are any
while (i < n1) {
arr[k] = L[i];
i++;
k++;
}

// Copy the remaining elements of R[],


// if there are any
while (j < n2) {
arr[k] = R[j];
j++;
k++;
}
}

// l is for left index and r is right index of the


// sub-array of arr to be sorted
void mergeSort(int arr[], int l, int r)
{
if (l < r) {
int m = l + (r - l) / 2;

// Sort first and second halves


mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);

merge(arr, l, m, r);
}
}

// Function to print an array


void printArray(int A[], int size)
{
int i;
for (i = 0; i < size; i++)
printf("%d ", A[i]);
printf("\n");
}

// Driver code
int main()
{
int arr[] = { 12, 11, 13, 5, 6, 7 };
int arr_size = sizeof(arr) / sizeof(arr[0]);

printf("Given array is \n");


printArray(arr, arr_size);

mergeSort(arr, 0, arr_size - 1);

printf("\nSorted array is \n");


printArray(arr, arr_size);


return 0;
}
Output:-
Given array is
12 11 13 5 6 7

Sorted array is
5 6 7 11 12 13
Complexity Analysis of Merge Sort:
• Time Complexity:
o Best Case: O(n log n), When the array is already sorted or nearly sorted.
o Average Case: O(n log n), When the array is randomly ordered.
o Worst Case: O(n log n), When the array is sorted in reverse order.
• Auxiliary Space: O(n), Additional space is required for the temporary array used
during merging.
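The O(n log n) bound can also be seen from the running-time recurrence: T(n) = 2·T(n/2) + c·n (two recursive halves plus a linear-time merge). Unrolling the recurrence gives roughly log2(n) levels, each doing about c·n work, so T(n) ≈ c·n·log2(n), i.e. O(n log n) in the best, average, and worst case alike.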
Applications of Merge Sort:
• Sorting large datasets
• External sorting (when the dataset is too large to fit in memory)
• Inversion counting
• Merge Sort and its variations are used in the library sort methods of programming languages.
For example, its variation TimSort is used in Python, Java, Android and Swift. The main
reason it is preferred for sorting non-primitive types is stability, which is not there in
QuickSort. For example, Arrays.sort in Java uses QuickSort (for primitives) while Collections.sort
uses MergeSort.
• It is a preferred algorithm for sorting Linked lists.
• It can be easily parallelized as we can independently sort subarrays and then merge.
• The merge function of merge sort can be used to efficiently solve problems like the union and
intersection of two sorted arrays (see the sketch after this list).
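A minimal sketch of that merge-style union, assuming both inputs are sorted in ascending order and contain no repeated values of their own; the unionSorted helper and its signature are illustrative.

#include <stdio.h>

// Merge-style union of two sorted arrays: walk both arrays with two
// pointers, always emitting the smaller head and skipping common values.
// Returns the number of elements written to out[].
int unionSorted(const int a[], int n, const int b[], int m, int out[]) {
    int i = 0, j = 0, k = 0;
    while (i < n && j < m) {
        if (a[i] < b[j])
            out[k++] = a[i++];
        else if (b[j] < a[i])
            out[k++] = b[j++];
        else {                    // equal: take one copy, advance both
            out[k++] = a[i++];
            j++;
        }
    }
    while (i < n) out[k++] = a[i++];   // copy any leftovers
    while (j < m) out[k++] = b[j++];
    return k;
}

int main() {
    int a[] = {1, 3, 5, 7};
    int b[] = {3, 4, 5, 8};
    int out[8];
    int k = unionSorted(a, 4, b, 4, out);
    for (int i = 0; i < k; i++)
        printf("%d ", out[i]);     // prints 1 3 4 5 7 8
    printf("\n");
    return 0;
}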
Advantages of Merge Sort:
• Stability : Merge sort is a stable sorting algorithm, which means it maintains the
relative order of equal elements in the input array.

ASSI.PRO.UPEKSHA CHAUDHRI 40
BZ GROW MORE INSTITUTE OF MSC(CA&IT)

• Guaranteed worst-case performance: Merge sort has a worst-case time complexity
of O(N log N), which means it performs well even on large datasets.
• Simple to implement: The divide-and-conquer approach is straightforward.
• Naturally parallel: subarrays can be sorted and merged independently, which makes it suitable
for parallel processing.
Disadvantages of Merge Sort:
• Space complexity: Merge sort requires additional memory to store the merged sub-
arrays during the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires
additional memory to store the sorted data. This can be a disadvantage in
applications where memory usage is a concern.
• Slower than QuickSort in general. QuickSort is more cache friendly because it works
in-place.

Radix Sort:-
Radix Sort is a linear sorting algorithm that sorts elements by processing them digit by digit.
It is an efficient sorting algorithm for integers or strings with fixed-size keys.
Radix Sort Algorithm
The key idea behind Radix Sort is to exploit the concept of place value. It assumes that
sorting numbers digit by digit will eventually result in a fully sorted list. Radix Sort can be
performed using different variations, such as Least Significant Digit (LSD) Radix Sort or Most
Significant Digit (MSD) Radix Sort.
To perform radix sort on the array [170, 45, 75, 90, 802, 24, 2, 66], we follow these steps:

Step 1: Find the largest element in the array, which is 802. It has three digits, so we will
iterate three times, once for each significant place.


Step 2: Sort the elements based on the unit place digits (X=0). We use a stable sorting
technique, such as counting sort, to sort the digits at each significant place. It is important to
understand that a naive implementation of counting sort is unstable, i.e., equal keys can end up in
a different order than in the input array. To solve this problem, we iterate the input
array in reverse order when building the output array. This strategy keeps equal keys
in the same order as they appear in the input array.
Sorting based on the unit place:
• Perform counting sort on the array based on the unit place digits.
• The sorted array based on the unit place is [170, 90, 802, 2, 24, 45, 75, 66].



Step 3: Sort the elements based on the tens place digits.
Sorting based on the tens place:
• Perform counting sort on the array based on the tens place digits.
• The sorted array based on the tens place is [802, 2, 24, 45, 66, 170, 75, 90].



Step 4: Sort the elements based on the hundreds place digits.
Sorting based on the hundreds place:

• Perform counting sort on the array based on the hundreds place digits.
• The sorted array based on the hundreds place is [2, 24, 45, 66, 75, 90, 170, 802].



Step 5: The array is now sorted in ascending order.
The final sorted array using radix sort is [2, 24, 45, 66, 75, 90, 170, 802].



#include <stdio.h>

// A utility function to get the maximum


// value in arr[]
int getMax(int arr[], int n) {
int mx = arr[0];
for (int i = 1; i < n; i++)
if (arr[i] > mx)
mx = arr[i];
return mx;
}

// A function to do counting sort of arr[]


// according to the digit represented by exp
void countSort(int arr[], int n, int exp) {
int output[n]; // Output array
int count[10] = {0}; // Initialize count array as 0

// Store count of occurrences in count[]


for (int i = 0; i < n; i++)
count[(arr[i] / exp) % 10]++;

// Change count[i] so that count[i] now


// contains actual position of this digit
// in output[]
for (int i = 1; i < 10; i++)
count[i] += count[i - 1];

// Build the output array


for (int i = n - 1; i >= 0; i--) {
output[count[(arr[i] / exp) % 10] - 1] = arr[i];
count[(arr[i] / exp) % 10]--;
}

// Copy the output array to arr[],


// so that arr[] now contains sorted
// numbers according to current digit
for (int i = 0; i < n; i++)
arr[i] = output[i];
}


// The main function to sort arr[] of size


// n using Radix Sort
void radixSort(int arr[], int n) {

// Find the maximum number to know


// the number of digits
int m = getMax(arr, n);

// Do counting sort for every digit


// exp is 10^i where i is the current
// digit number
for (int exp = 1; m / exp > 0; exp *= 10)
countSort(arr, n, exp);
}

// A utility function to print an array


void printArray(int arr[], int n) {
for (int i = 0; i < n; i++)
printf("%d ", arr[i]);
printf("\n");
}

// Driver code
int main() {
int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
int n = sizeof(arr) / sizeof(arr[0]);

// Function call


radixSort(arr, n);
printArray(arr, n);
return 0;
}
Time Complexity:
• Radix sort is a non-comparative integer sorting algorithm that sorts data with integer
keys by grouping the keys by the individual digits which share the same significant
position and value. It has a time complexity of O(d * (n + b)), where d is the number
of digits, n is the number of elements, and b is the base of the number system being
used.
• In practical implementations, radix sort is often faster than comparison-based
sorting algorithms, such as quicksort or merge sort, for large datasets, especially
when the keys have many digits. However, its running time grows linearly with the
number of digits, and so it is not as efficient for small datasets.
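As a worked instance of that bound for the example above: n = 8 elements, the largest key 802 has d = 3 digits, and the base is b = 10, so radix sort performs 3 counting-sort passes of roughly (n + b) = 18 steps each, on the order of d·(n + b) = 3·18 = 54 elementary operations in total.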
