Selection-Heap Sort Algorithm Detail
The skip list is an extended version of the linked list that allows the user to search for, remove,
and insert elements very quickly. It consists of a base list holding the set of elements,
together with a hierarchy of linked lists layered above it that maintain links to subsequent elements.
The lowest layer of the skip list is an ordinary sorted linked list, and the upper layers of the
skip list act like an "express line" in which elements are skipped.
The lower layer is a normal line that links all nodes, and the top layer is an express line
that links only selected nodes, as you can see in the diagram.
Suppose you want to find 47 in this example. You start the search from the first node
of the express line and keep moving along the express line until you find a node that is
equal to 47 or greater than 47.
You can see in the example that 47 does not exist on the express line, so you stop at the
largest express-line node less than 47, which is 40. From 40, you move down to the normal
line and search for 47 there, as shown in the diagram.
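To make the two-lane search concrete, here is a minimal C sketch. The node layout (a fixed MAX_LEVEL array of forward pointers, with forward[0] as the normal line) and the name skip_search are illustrative assumptions, not taken from the text above.

#include <stddef.h>

#define MAX_LEVEL 4                  /* assumed maximum height */

struct Node {
    int key;
    struct Node *forward[MAX_LEVEL]; /* forward[0] = the normal line */
};

/* Start on the topmost express line; drop one level whenever the next
   key would overshoot, and finish the search on the normal line. */
struct Node *skip_search(struct Node *head, int key)
{
    struct Node *x = head;           /* head is a sentinel node */
    for (int i = MAX_LEVEL - 1; i >= 0; i--)
        while (x->forward[i] != NULL && x->forward[i]->key < key)
            x = x->forward[i];       /* run along level i */
    x = x->forward[0];               /* first candidate >= key */
    return (x != NULL && x->key == key) ? x : NULL;
}

For the walkthrough above, searching for 47 runs along the express line to 40, drops down, and then steps along the normal line until it reaches 47.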
Note: Once you reach such a node on the "express line", you follow its pointer down to the
"normal line" and continue the search for the node there.
Example 1: Create a skip list by inserting the following keys into an empty
skip list.
1. 6 with level 1.
2. 29 with level 1.
3. 22 with level 4.
4. 9 with level 3.
5. 17 with level 1.
6. 4 with level 2.
Ans: Inserting the keys in the given order, each key is linked into every level up to its
assigned level. The finished skip list looks like this:
Level 4: 22
Level 3: 9, 22
Level 2: 4, 9, 22
Level 1 (base list): 4, 6, 9, 17, 22, 29
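The construction can also be expressed as a short C program. This is a minimal sketch under stated assumptions: real skip lists choose each node's level at random, whereas here the levels are passed in explicitly to reproduce the example, and the helper names (make_node, skip_insert) are invented for illustration.

#include <stdio.h>
#include <stdlib.h>

#define MAX_LEVEL 4

struct Node {
    int key;
    struct Node *forward[MAX_LEVEL]; /* forward[0] = base list */
};

static struct Node *make_node(int key)
{
    struct Node *n = malloc(sizeof *n);
    n->key = key;
    for (int i = 0; i < MAX_LEVEL; i++)
        n->forward[i] = NULL;
    return n;
}

/* Insert key so that it participates in levels 1..level. */
static void skip_insert(struct Node *head, int key, int level)
{
    struct Node *update[MAX_LEVEL], *x = head;
    for (int i = MAX_LEVEL - 1; i >= 0; i--) {
        while (x->forward[i] != NULL && x->forward[i]->key < key)
            x = x->forward[i];
        update[i] = x;               /* last node before key on level i */
    }
    struct Node *n = make_node(key);
    for (int i = 0; i < level; i++) {
        n->forward[i] = update[i]->forward[i];
        update[i]->forward[i] = n;   /* splice into level i */
    }
}

int main(void)
{
    struct Node *head = make_node(0); /* sentinel head node */
    int keys[]   = { 6, 29, 22, 9, 17, 4 };
    int levels[] = { 1,  1,  4, 3,  1, 2 };
    for (int i = 0; i < 6; i++)
        skip_insert(head, keys[i], levels[i]);
    for (int i = MAX_LEVEL - 1; i >= 0; i--) {
        printf("Level %d:", i + 1);
        for (struct Node *x = head->forward[i]; x != NULL; x = x->forward[i])
            printf(" %d", x->key);
        printf("\n");
    }
    return 0;
}

Running it prints each level from the top down, matching the answer above.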
Advantages of the Skip List
1. Inserting a new node into the skip list is very fast, because no rotations (as in balanced
trees) are needed.
2. The skip list is simple to implement compared to the hash table and the binary search
tree.
3. Finding a node in the list is very simple, because the nodes are stored in sorted order.
4. The skip list algorithm can be extended very easily into more specialized structures, such as
indexable skip lists, trees, or priority queues.
5. The skip list is a robust and reliable structure.
In selection sort, the smallest value among the unsorted elements of the array is selected
in every pass and moved to its appropriate position in the array. It is one of the simplest
sorting algorithms, and it is an in-place comparison sort. In this algorithm, the array is
divided into two parts: the first is the sorted part, and the other is the unsorted part. Initially,
the sorted part of the array is empty, and the unsorted part is the entire given array. The sorted
part is placed at the left, while the unsorted part is placed at the right.
In selection sort, the smallest element is selected from the unsorted array and placed
at the first position. After that, the second smallest element is selected and placed at the
second position. The process continues until the array is entirely sorted.
The average and worst-case complexity of selection sort is O(n^2), where n is the number
of items. Due to this, it is not suitable for large data sets.
Algorithm
SELECTION SORT(arr, n)
Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
Step 2:     CALL SMALLEST(arr, i, n, pos)
Step 3:     SWAP arr[i] with arr[pos]
        [END OF LOOP]
Step 4: EXIT

SMALLEST(arr, i, n, pos)
Step 1: [INITIALIZE] SET SMALL = arr[i]
Step 2: [INITIALIZE] SET pos = i
Step 3: Repeat for j = i+1 to n-1
            if (SMALL > arr[j])
                SET SMALL = arr[j]
                SET pos = j
            [END OF if]
        [END OF LOOP]
Step 4: RETURN pos
To understand the working of the selection sort algorithm, let's take an unsorted array. It
will be easier to understand selection sort via an example.
Now, for the first position in the sorted array, the entire array is scanned sequentially.
At present, 12 is stored at the first position; after searching the entire array, it is found
that 8 is the smallest value.
So, swap 12 with 8. After the first iteration, 8 appears at the first position in the sorted
array.
For the second position, where 29 is currently stored, we again sequentially scan the rest
of the items of the unsorted array. After scanning, we find that 12 is the second lowest
element in the array and should appear at the second position.
Now, swap 29 with 12. After the second iteration, 12 appears at the second position in
the sorted array. So, after two iterations, the two smallest values are placed at the
beginning in sorted order.
The same process is applied to the rest of the array elements. A pictorial representation
of the entire sorting process follows.
Now, the array is completely sorted.
1. Time Complexity
o Selection sort performs the same number of comparisons regardless of the initial order
of the input, so its best-case, average-case, and worst-case time complexity are all O(n^2).
2. Space Complexity
o The space complexity is O(1), since only a single extra variable is needed for swapping.
Note that this simple implementation is not stable, because a long-distance swap can
change the relative order of equal elements.
#include <stdio.h>

void selection(int arr[], int n)
{
    int i, j, small;

    for (i = 0; i < n - 1; i++) // one by one move the boundary of the unsorted subarray
    {
        small = i; // index of the minimum element in the unsorted part

        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[small])
                small = j;

        // Swap the minimum element with the first unsorted element
        int temp = arr[small];
        arr[small] = arr[i];
        arr[i] = temp;
    }
}

void printArr(int a[], int n) /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    selection(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Output:
Before sorting array elements are -
12 31 25 8 32 17
After sorting array elements are -
8 12 17 25 31 32
Merge sort is similar to the quicksort algorithm in that it uses the divide and conquer
approach to sort the elements. It is one of the most popular and efficient sorting
algorithms. It divides the given list into two equal halves, calls itself on the two halves, and
then merges the two sorted halves. We have to define the merge() function to perform
the merging.
The sub-lists are divided again and again into halves until each list cannot be divided
further. Then we combine the pairs of one-element lists into two-element lists, sorting them
in the process. The sorted two-element pairs are merged into four-element lists, and so
on, until we get the fully sorted list.
Algorithm
In the following algorithm, arr is the given array, beg is the index of the first element, and
end is the index of the last element of the array.
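The algorithm listing itself is missing here, so the following is a reconstruction in the same step style used earlier, matching the mergeSort() function in the C program later in this section.

MERGE_SORT(arr, beg, end)
Step 1: if beg < end
Step 2:     SET mid = (beg + end) / 2
Step 3:     CALL MERGE_SORT(arr, beg, mid)
Step 4:     CALL MERGE_SORT(arr, mid + 1, end)
Step 5:     CALL MERGE(arr, beg, mid, end)
        [END OF if]
Step 6: EXIT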
To understand the working of the merge sort algorithm, let's take an unsorted array. It
will be easier to understand merge sort via an example.
According to merge sort, first divide the given array into two equal halves. Merge sort
keeps dividing the list into equal parts until it cannot be divided further.
As there are eight elements in the given array, it is divided into two arrays of size 4.
Now, again divide these two arrays into halves. As they are of size 4, divide them into
new arrays of size 2.
Now, again divide these arrays to get the atomic values that cannot be divided further.
In combining, first compare the elements of each array and then combine them into
another array in sorted order.
So, first compare 12 and 31; both are already in sorted positions. Then compare 25 and 8, and in
the list of two values, put 8 first followed by 25. Then compare 32 and 17, sort them, and
put 17 first followed by 32. After that, compare 40 and 42, and place them sequentially.
In the next iteration of combining, compare the arrays of two data values and
merge them into arrays of four values in sorted order.
Now, there is a final merging of the arrays. After the final merging of the above arrays, the
array will look like this -
1. Time Complexity
o Best Case Complexity - It occurs when no sorting is required, i.e. the array
is already sorted. The best-case time complexity of merge sort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled
order, neither properly ascending nor properly descending. The average
case time complexity of merge sort is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are required to be
sorted in reverse order. That means, suppose you have to sort the array elements
in ascending order, but the elements are in descending order. The worst-case time
complexity of merge sort is O(n*logn).
In every case the running time follows the recurrence T(n) = 2T(n/2) + cn: the list is
halved about log n times, and each level of merging does O(n) work, which gives
O(n*logn) overall.
2. Space Complexity
o The space complexity of merge sort is O(n). This is because merging requires
temporary arrays that together hold up to n elements.
o Merge sort is a stable sorting algorithm: equal elements keep their relative order.
#include <stdio.h>

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; // temporary arrays

    /* copy data to temp arrays */
    for (i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;   /* initial index of first sub-array */
    j = 0;   /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }

    /* copy any remaining elements of the left sub-array */
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    /* copy any remaining elements of the right sub-array */
    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}

void mergeSort(int a[], int beg, int end)
{
    if (beg < end)
    {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}

/* Function to print the array */
void printArray(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17, 40, 42 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArray(a, n);
    mergeSort(a, 0, n - 1);
    printf("After sorting array elements are - \n");
    printArray(a, n);
    return 0;
}
Output:
Before sorting array elements are -
12 31 25 8 32 17 40 42
After sorting array elements are -
8 12 17 25 31 32 40 42
Quick Sort Algorithm
In this section, we will discuss the quicksort algorithm. The working procedure of quicksort
is also simple. This topic is helpful and important for students, as quicksort often appears
as an examination question.
Sorting is a way of arranging items in a systematic manner. Quicksort is a widely used
sorting algorithm that makes n log n comparisons in the average case when sorting an array of
n elements. It is a fast and highly efficient sorting algorithm. This algorithm follows the
divide and conquer approach. Divide and conquer is a technique of breaking a problem
down into subproblems, solving the subproblems, and combining the results
back together to solve the original problem.
Divide: First, pick a pivot element. After that, partition or rearrange the array into
two sub-arrays such that each element in the left sub-array is less than or equal to the
pivot element and each element in the right sub-array is larger than the pivot element.
Conquer: Recursively sort the two sub-arrays with quicksort.
Combine: Because partitioning places every element on the correct side of the pivot, no
extra work is needed to combine the sorted sub-arrays.
Quicksort picks an element as the pivot and then partitions the given array around the
picked pivot element. In quicksort, a large array is divided into two arrays: one
holds values that are smaller than the specified value (the pivot), and the other holds
values that are greater than the pivot.
After that, the left and right sub-arrays are also partitioned using the same approach. This
continues until only a single element remains in each sub-array.
Choosing the pivot
Picking a good pivot is necessary for a fast implementation of quicksort. However, it is
difficult to determine a good pivot in advance. Some of the ways of choosing a pivot are as
follows (a sketch of the randomized option appears after this list) -
o The pivot can be random, i.e. select a random pivot from the given array.
o The pivot can be either the rightmost or the leftmost element of the given array.
o Select the median as the pivot element.
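As a minimal sketch of the first option, the helper below picks a random index, swaps that element into the last position, and then reuses the ordinary last-element partition. The name random_partition is invented here, and the sketch assumes the partition() function defined in the C program later in this section.

#include <stdlib.h>

int partition(int a[], int start, int end); /* defined later in this section */

/* Choose a random pivot position in [start, end], move it to the end,
   then partition as usual around a[end]. */
int random_partition(int a[], int start, int end)
{
    int r = start + rand() % (end - start + 1);
    int t = a[r];
    a[r] = a[end];
    a[end] = t;
    return partition(a, start, end);
}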
Algorithm:
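The pseudocode listing is missing here, so the following is a reconstruction in the same step style used earlier, mirroring the C implementation given later in this section.

QUICKSORT(a, start, end)
Step 1: if start < end
Step 2:     SET p = PARTITION(a, start, end)
Step 3:     CALL QUICKSORT(a, start, p - 1)
Step 4:     CALL QUICKSORT(a, p + 1, end)
        [END OF if]
Step 5: EXIT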
Partition Algorithm:
The partition algorithm rearranges the sub-array in place.
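Again a reconstruction, matching the partition() function in the C program below:

PARTITION(a, start, end)
Step 1: SET pivot = a[end]
Step 2: SET i = start - 1
Step 3: Repeat for j = start to end - 1
            if a[j] < pivot
                SET i = i + 1
                SWAP a[i] with a[j]
            [END OF if]
        [END OF LOOP]
Step 4: SWAP a[i + 1] with a[end]
Step 5: RETURN i + 1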
To understand the working of quicksort, let's take an unsorted array. It will make the
concept clearer and more understandable.
In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
Now, a[pivot] < a[right], so the algorithm moves one position towards the left, i.e. -
Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves
to the right, as -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm
starts from the left and moves to the right.
Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves
one position to the right, as -
Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot]
and a[left]; now the pivot is at the left, i.e. -
Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now,
a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one
position to the left, as -
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot]
and a[right]; now the pivot is at the right, i.e. -
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm
starts from the left and moves to the right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are all pointing
to the same element. This marks the termination of the procedure.
Element 24, which is the pivot element, is placed at its exact position.
The elements to the right of 24 are greater than it, and the elements to the left of 24 are
smaller than it.
Now, in a similar manner, the quicksort algorithm is separately applied to the left and right
sub-arrays. After sorting is done, the array will be -
Quicksort complexity
Now, let's see the time complexity of quicksort in the best case, the average case, and the
worst case. We will also see the space complexity of quicksort.
1. Time Complexity
o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the
middle element or near the middle element. The best-case time complexity of quicksort
is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
neither properly ascending nor properly descending. The average case time complexity
of quicksort is O(n*logn).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is
either the greatest or the smallest element. For example, if the pivot element is always the
last element of the array, the worst case occurs when the given array is already sorted in
ascending or descending order. The worst-case time complexity of quicksort is O(n^2).
Though the worst-case complexity of quicksort is higher than that of sorting algorithms such
as merge sort and heap sort, quicksort is still faster in practice. The worst case rarely
occurs, because the choice of pivot can be varied; it can be largely avoided by choosing the
right pivot element (for example, a randomized pivot, as sketched above).
2. Space Complexity
o The space complexity of quicksort is O(log n) on average, for the recursion stack; in the
worst case the stack can grow to O(n).
o Quicksort is not a stable sorting algorithm.
Implementation of quicksort
Now, let's see a program for quicksort in C.
#include <stdio.h>

/* function that considers the last element as pivot,
   places the pivot at its exact position, and places
   smaller elements to the left of the pivot and greater
   elements to the right of the pivot. */
int partition(int a[], int start, int end)
{
    int pivot = a[end]; // pivot element
    int i = (start - 1);

    for (int j = start; j <= end - 1; j++)
    {
        // If the current element is smaller than the pivot
        if (a[j] < pivot)
        {
            i++; // increment index of smaller element
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }
    // place the pivot right after the last smaller element
    int t = a[i + 1];
    a[i + 1] = a[end];
    a[end] = t;
    return (i + 1);
}

/* function to implement quick sort */
void quick(int a[], int start, int end) /* a[] = array to be sorted, start = starting index, end = ending index */
{
    if (start < end)
    {
        int p = partition(a, start, end); // p is the partitioning index
        quick(a, start, p - 1);
        quick(a, p + 1, end);
    }
}

/* function to print an array */
void printArr(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 24, 9, 29, 14, 19, 27 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    quick(a, 0, n - 1);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Output:
Before sorting array elements are -
24 9 29 14 19 27
After sorting array elements are -
9 14 19 24 27 29
Radix Sort Algorithm
In the example above, the first column is the input. The remaining columns show the list
after successive sorts on increasingly significant digit positions.
Complexity analysis of radix sort: the running time is O(m*n), where m is the key size
(the number of digits) and n is the number of keys.
However, if we look at these two values, the size of the keys is relatively small compared
to the number of keys. For example, with six-digit keys we could have a million different
records.
Here we see that the size of the keys is not significant, and this algorithm is of linear
complexity, O(n).
Algorithm
Radix_sort (list, n)
    shift = 1
    for loop = 1 to keysize do
        for entry = 1 to n do
            bucketnumber = (list[entry].key / shift) mod 10
            append (bucket[bucketnumber], list[entry])
        list = combinebuckets()
        shift = shift * 10
Example
This is a C program that implements radix sort.
#include <stdio.h>

/* Return the largest element; its digit count decides the number of passes. */
int get_max (int a[], int n){
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}

void radix_sort (int a[], int n){
    int bucket[10][10], bucket_cnt[10];
    int i, j, k, r, NOP = 0, divisor = 1, lar, pass;

    /* NOP = number of passes = number of digits in the largest element */
    lar = get_max (a, n);
    while (lar > 0){
        NOP++;
        lar /= 10;
    }
    for (pass = 0; pass < NOP; pass++){
        /* empty all ten buckets */
        for (i = 0; i < 10; i++){
            bucket_cnt[i] = 0;
        }
        /* distribute the items into buckets by the current digit */
        for (i = 0; i < n; i++){
            r = (a[i] / divisor) % 10;
            bucket[r][bucket_cnt[r]] = a[i];
            bucket_cnt[r] += 1;
        }
        /* collect the buckets back into the array, in digit order */
        i = 0;
        for (k = 0; k < 10; k++){
            for (j = 0; j < bucket_cnt[k]; j++){
                a[i] = bucket[k][j];
                i++;
            }
        }
        divisor *= 10; /* move to the next more significant digit */
        printf ("After pass %d : ", pass + 1);
        for (i = 0; i < n; i++)
            printf ("%d ", a[i]);
        printf ("\n");
    }
}

int main (){
    int i, n, a[10]; /* demo limit: at most 10 items */
    printf ("Enter the number of items to be sorted: ");
    scanf ("%d", &n);
    printf ("Enter items: ");
    for (i = 0; i < n; i++){
        scanf ("%d", &a[i]);
    }
    radix_sort (a, n);
    printf ("Sorted items : ");
    for (i = 0; i < n; i++)
        printf ("%d ", a[i]);
    printf ("\n");
    return 0;
}
Output
Enter the number of items to be sorted: 6
Enter items: 567 789 121 212 563 562
After pass 1 : 121 212 562 563 567 789
After pass 2 : 212 121 562 563 567 789
After pass 3 : 121 212 562 563 567 789
Sorted items : 121 212 562 563 567 789
Radix Sort Complexity
Time Complexity
    Best: O(n+k)
    Average: O(n+k)
    Worst: O(n+k)
Stability: Yes
For a radix sort that uses counting sort as the intermediate stable sort, the
time complexity is O(d(n+k)).
Here, d is the number of passes (one per digit) and O(n+k) is the time complexity of each
counting sort pass. For example, sorting the six 3-digit keys above takes d = 3 passes over
n = 6 items with k = 10 possible digit values.
Thus, radix sort has linear time complexity, which is better than the O(n log n) of
comparison-based sorting algorithms.
If we take numbers with very many digits, or keys in other bases such as 32-bit
and 64-bit numbers, radix sort can still run in linear time, but the
intermediate sort then takes a large amount of space.
This makes radix sort space-inefficient, and it is the reason why radix sort is not
often used in software libraries.
Radix Sort Applications
Radix sort is implemented in