
Efficient Sorting Algorithms
TAN TIEN PING
Introduction
Two efficient sorting algorithms will be discussed:
◦ Merge sort
◦ Quick sort

Both of these sorting algorithms use the divide-and-conquer strategy.


Divide and Conquer
A general algorithm design paradigm
◦ Divide the problem into several subproblems that are similar to the original problem, but smaller in size.
◦ Solve the subproblems recursively by further division into smaller subproblems.
◦ Combine these solutions to create a solution to the original problem.

Divide and Conquer
The approach generally requires 2 procedures (see the sketch after this list):
◦ Divide:
◦ input parameter I, and output parameters I1, I2, …, Im
◦ m may depend on I
◦ Combine:
◦ input parameters J1, J2, …, Jm, and output J
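
A minimal Java sketch of this divide/combine structure; the class name, type parameters and helper methods are illustrative placeholders, not part of the original slides:

import java.util.ArrayList;
import java.util.List;

// Generic shape of a divide-and-conquer algorithm.
// P (problem) and R (result) and all helper names are placeholders.
abstract class DivideAndConquer<P, R> {

    R solve(P problem) {
        if (isSmallEnough(problem))
            return solveDirectly(problem);       // base case: no further division
        List<P> subproblems = divide(problem);   // I -> I1, I2, ..., Im
        List<R> subresults = new ArrayList<>();
        for (P sub : subproblems)
            subresults.add(solve(sub));          // solve each subproblem recursively
        return combine(subresults);              // J1, J2, ..., Jm -> J
    }

    abstract boolean isSmallEnough(P problem);
    abstract R solveDirectly(P problem);
    abstract List<P> divide(P problem);
    abstract R combine(List<R> subresults);
}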

Merge Sort
Merge sort of an array S of n elements consists of 3 steps:
◦ Divide: partition the array S into 2 subarrays S1 and S2 of about n/2 elements each.
◦ Recur: recursively sort S1 and S2.
◦ Conquer: merge S1 and S2 into one sorted array.

[7 5 6 2 8 1 3 4]                        S

[7 5 6 2]  [8 1 3 4]                     S1, S2                 mergeSort(S, C)

[7 5] [6 2]  [8 1] [3 4]                 S1, S2 in each half    mergeSort(S, C)

[7] [5] [6] [2]  [8] [1] [3] [4]         single elements        partition() = O(1)

partition() takes constant time, no matter how big the list is.

To be continued…
[7] [5] [6] [2]  [8] [1] [3] [4]

merge(S1, S2):   [5 7] [2 6]  [1 8] [3 4]

merge(S1, S2):   [2 5 6 7]  [1 3 4 8]

merge(S1, S2):   [1 2 3 4 5 6 7 8]

Algorithm merge(S1, S2){
    List result = new List();
    while(!S1.isEmpty() && !S2.isEmpty()){
        if(S1.first() <= S2.first())
            result.append(S1.removeFirst());
        else
            result.append(S2.removeFirst());
    }
    if (S1.isEmpty())
        result.append(S2);   // S1 is exhausted: append the rest of S2
    if (S2.isEmpty())
        result.append(S1);   // S2 is exhausted: append the rest of S1
    return result;
}
Look at how merge(S1, S2) works…

result = [ ] (created and empty)    S1 = [2 5 6 7]    S2 = [1 3 4 8]
S1.first() = 2 > S2.first() = 1, so result.append(S2.removeFirst()):

result = [1]                        S1 = [2 5 6 7]    S2 = [3 4 8]
S1.first() = 2 < S2.first() = 3, so result.append(S1.removeFirst()):

result = [1 2]                      S1 = [5 6 7]      S2 = [3 4 8]
S1.first() = 5 > S2.first() = 3, so result.append(S2.removeFirst()):

result = [1 2 3]                    S1 = [5 6 7]      S2 = [4 8]
S1.first() = 5 > S2.first() = 4, so result.append(S2.removeFirst()):

result = [1 2 3 4]                  S1 = [5 6 7]      S2 = [8]

Complete the rest on your own.
Final result: [1 2 3 4 5 6 7 8]
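
For reference, a runnable Java version of the same merge step; it follows the pseudocode above, with ArrayDeque standing in for the slides' generic list (the class name MergeDemo is illustrative):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class MergeDemo {
    // Merge two already-sorted queues into one sorted list,
    // following the pseudocode above step by step.
    static List<Integer> merge(Deque<Integer> s1, Deque<Integer> s2) {
        List<Integer> result = new ArrayList<>();
        while (!s1.isEmpty() && !s2.isEmpty()) {
            if (s1.peekFirst() <= s2.peekFirst())
                result.add(s1.removeFirst());
            else
                result.add(s2.removeFirst());
        }
        result.addAll(s1);  // at most one of s1, s2 is non-empty here
        result.addAll(s2);
        return result;
    }

    public static void main(String[] args) {
        Deque<Integer> s1 = new ArrayDeque<>(List.of(2, 5, 6, 7));
        Deque<Integer> s2 = new ArrayDeque<>(List.of(1, 3, 4, 8));
        System.out.println(merge(s1, s2)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}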
When the best case and the worst case happen…

They happen in the merge() step. Consider the sublists in scenarios 1 and 2:

Scenario 1 (worst case):  S1 = [5 7], S2 = [2 6]  →  merged result: [2 5 6 7]
- number of comparisons: 3 (5 vs 2, 5 vs 6, 7 vs 6)

Scenario 2 (best case):   S1 = [6 7], S2 = [2 5]  →  merged result: [2 5 6 7]
- number of comparisons: 2 (6 vs 2, 6 vs 5; S2 is then empty, so [6 7] is appended with no further comparisons)


Counting comparisons at each level for 5 7 6 2 8 1 3 4:

Merge step                                       Best case        Worst case
[5] [7] [6] [2] [8] [1] [3] [4] → pairs          4 comparisons    4 comparisons
[5 7] [2 6] [1 8] [3 4] → runs of 4              4 comparisons    6 comparisons
[2 5 6 7] [1 3 4 8] → [1 2 3 4 5 6 7 8]          4 comparisons    7 comparisons
So merge() = O(n), and we know partition() = O(1):
merge() + partition() = O(n) + O(1) = O(n)

General number of comparisons in each level: n/2 (best case), n-1 (worst case).

                 n                         O(n)
            n/2     n/2                    O(n)
         n/4   n/4   n/4   n/4             O(n)     log n levels
         …
         1 1 1 1 1 1 1 1 … 1

Each level is O(n) and there are log n levels, so the complexity is O(n log n).
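
The same count can be written as a recurrence, with c a constant for the per-element merge cost (a standard derivation consistent with the diagram above):

T(n) = 2T(n/2) + cn,  T(1) = c
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = …
     = 2^k · T(n/2^k) + k·cn

Setting k = log2 n gives T(n) = n·T(1) + cn·log2 n = O(n log n).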
Merge Sort
Merge sort consists of partition() and merge().
Complexity of partition() = O(1)
Complexity of merge() = O(n)
Complexity of partition() and merge() together = O(1) + O(n) = O(n) per level
There are log n levels, so the overall complexity is O(n log n).

Merge sort does not sort in place; it requires additional storage.
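
Putting the pieces together, a runnable top-down merge sort in Java (a sketch consistent with the slides; the temporary array in merge() is the additional storage just mentioned):

import java.util.Arrays;

public class MergeSort {
    // Sort a[lo..hi) by recursively sorting both halves and merging them.
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo <= 1) return;          // 0 or 1 element: already sorted
        int mid = (lo + hi) / 2;           // Divide: partition into two halves
        mergeSort(a, lo, mid);             // Recur: sort the left half
        mergeSort(a, mid, hi);             // Recur: sort the right half
        merge(a, lo, mid, hi);             // Conquer: merge the sorted halves
    }

    // Merge the sorted runs a[lo..mid) and a[mid..hi) via a temporary array
    // (the extra storage that keeps merge sort from being in-place).
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];   // leftovers from the left run
        while (j < hi)  tmp[k++] = a[j++];   // leftovers from the right run
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] s = {7, 5, 6, 2, 8, 1, 3, 4};
        mergeSort(s, 0, s.length);
        System.out.println(Arrays.toString(s)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}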


Quick Sort
Proposed by Charles Antony Richard Hoare in 1959, while he was in the Soviet Union.
In Unix, it is the default sorting library function.
In the C standard library: qsort.
Quick Sort
Quick sort works as follows:
◦ Choose a value from the array/list as the pivot.
◦ Lomuto partition scheme: choose the first/last value in the array.
◦ Randomly choose a value from the array.
◦ Hoare partition scheme.
◦ Divide the array/list into 2 parts: left and right. If a value is less than or equal to the pivot, put it to the left of the pivot. If a value is greater than the pivot, put it to the right.
◦ Recursively carry out the steps on each part.
Algorithm quick(Array, left, right){
    if (right - left <= 1)               // fewer than 2 elements: nothing to sort
        return;
    p = partition(Array, left, right);   // p is the pivot's final index
    quick(Array, left, p);               // sort the values left of the pivot
    quick(Array, p+1, right);            // sort the values right of the pivot
}

4 5 6 2 8 1 3 7

4 5 6 2 8 1 3 7   (choose pivot: 4)

3 2 1 4 8 6 5 7   (divide to left/right)

3 2 1 4 8 6 5 7   (choose pivot in the left sublist: 3)

1 2 3 4 8 6 5 7   (divide to left/right)

1 2 3 4 8 6 5 7   (choose pivot in the right sublist)

1 2 3 4 5 6 7 8   (fully sorted after the remaining steps)
Algorithm partition(Array, left, right){
    pivot = Array[left];
    i = left;
    for (j = left+1; j < right; j++){
        if (Array[j] < pivot){
            i = i + 1;
            swap(Array[i], Array[j]);
        }
    }
    swap(Array[i], Array[left]);   // put the pivot between the two parts
    return i;                      // the pivot's final index, used by quick()
}
Look at how the partition method works, with the following array: 4 5 6 2 8 1 3 7

When the algorithm starts, left is assigned the index of the first item, while right is assigned the size of the array: left = 0 and right = 8.
pivot = 4, i = 0, j = 1…7
Main idea: j finds values smaller than the pivot and swaps them into the front part of the array, just after position i.

i=0 j=1:  4 5 6 2 8 1 3 7   (5 ≥ 4: no swap)
i=0 j=2:  4 5 6 2 8 1 3 7   (6 ≥ 4: no swap)
i=1 j=3:  4 2 6 5 8 1 3 7   (2 < 4: i becomes 1, swap(Array[1], Array[3]))
i=1 j=4:  4 2 6 5 8 1 3 7   (8 ≥ 4: no swap)
i=2 j=5:  4 2 1 5 8 6 3 7   (1 < 4: i becomes 2, swap(Array[2], Array[5]))
i=3 j=6:  4 2 1 3 8 6 5 7   (3 < 4: i becomes 3, swap(Array[3], Array[6]))
i=3 j=7:  4 2 1 3 8 6 5 7   (7 ≥ 4: no swap)
After the loop: swap(Array[3], Array[0]) puts the pivot in place → 3 2 1 4 8 6 5 7, and partition returns 3.
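
A runnable Java version of this quick sort with the Lomuto-style first-element pivot (a sketch matching the pseudocode above; the class name QuickSort is illustrative):

import java.util.Arrays;

public class QuickSort {
    // Sort a[left..right); right is exclusive, as on the slides.
    static void quick(int[] a, int left, int right) {
        if (right - left <= 1) return;        // fewer than 2 elements
        int p = partition(a, left, right);    // the pivot's final index
        quick(a, left, p);                    // values smaller than the pivot
        quick(a, p + 1, right);               // values larger than the pivot
    }

    // Lomuto-style partition using the first element as the pivot.
    static int partition(int[] a, int left, int right) {
        int pivot = a[left];
        int i = left;
        for (int j = left + 1; j < right; j++) {
            if (a[j] < pivot) {
                i++;
                swap(a, i, j);     // move the smaller value just after position i
            }
        }
        swap(a, i, left);          // place the pivot between the two parts
        return i;
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] s = {4, 5, 6, 2, 8, 1, 3, 7};
        quick(s, 0, s.length);
        System.out.println(Arrays.toString(s)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}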
Quick Sort (Best Case)
Quick sort's best-case time complexity happens when the pivot divides the array into 2 sublists that are nearly equal in size.
If this happens, the array finishes dividing after about log n levels.

                 n                         O(n)
            n/2     n/2                    O(n)
         n/4   n/4   n/4   n/4             O(n)     log n levels
         …
         1 1 1 1 1 1 1 1 … 1
Quick Sort (Best Case)
Suppose you have an array of size 16.
The first partition makes 15 comparisons and divides the list into 2 sublists with 7 and 8 elements.
◦ Total comparisons = 15

The sublist with 7 elements makes 6 comparisons and divides into 2 sublists of 3 elements each, while the sublist with 8 elements makes 7 comparisons and divides into 2 sublists of 3 and 4 elements.
◦ Total comparisons = 13

Each sublist with 3 elements makes 2 comparisons; the sublist with 4 elements makes 3 comparisons.
◦ Total comparisons = 9 (2 + 2 + 2 + 3)

At each level i there are about n − 2^(i+1) + 1 comparisons (15, 13, 9, … here), so the complexity of partition() at each level is O(n).
There are log n levels.
The best-case complexity of quick sort is O(n log n).
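
Stated as a recurrence for the comparison count C(n) (a standard derivation; the slides give only the level-by-level count):

C(n) = (n − 1) + 2·C((n − 1)/2),  C(0) = C(1) = 0

Each level contributes fewer than n comparisons and there are about log2 n levels, so C(n) < n·log2 n, i.e. O(n log n).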
Quick Sort (Worst Case)
The worst-case complexity happens when the pivot has the minimum or maximum value, so the partition produces 2 sublists: one with n-1 elements, and another that is empty.

           n
         0   n-1
            0   n-2
               0   n-3
                  0   n-4
                       …

Number of comparisons = (n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2

Worst-case complexity = O(n²)
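
As a quick check of the formula (the n = 8 example is mine, not from the slides): for n = 8 the comparisons are 7 + 6 + 5 + 4 + 3 + 2 + 1 = 28 = 8·7/2 = n(n−1)/2, and n(n−1)/2 grows like n²/2, hence O(n²).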


Quick Sort: Pivot Selection
Pivot selection is important for quick sort.
An improper selection of the pivot ends in the worst-case scenario, which gives us O(n²).
Selecting the pivot as the first/last element of the array produces O(n²) when the list is already sorted, reverse sorted, or nearly sorted.
Possible solutions (see the sketch below):
◦ Randomly choose a value.
◦ Choose the median of values taken from the front, middle and end of the array (median-of-three).
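
A minimal median-of-three helper in Java (an illustrative sketch, not from the slides); it moves the median of the front, middle and last values into the first position, so the first-element partition scheme above uses it as the pivot:

// Swap the median of a[left], a[mid], a[right-1] into a[left],
// so a first-element pivot scheme picks the median-of-three as its pivot.
static void medianOfThree(int[] a, int left, int right) {
    int mid = left + (right - left) / 2;
    int last = right - 1;
    if (a[mid] < a[left])  swap(a, mid, left);
    if (a[last] < a[left]) swap(a, last, left);
    if (a[last] < a[mid])  swap(a, last, mid);
    // now a[left] <= a[mid] <= a[last], so the median sits at mid
    swap(a, left, mid);
}

static void swap(int[] a, int i, int j) {
    int t = a[i]; a[i] = a[j]; a[j] = t;
}

Calling medianOfThree(a, left, right) at the top of partition() defends against already-sorted or reverse-sorted input.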
Quick Sort: Conclusions
Most of the work is done in partitioning.
Average-case complexity: O(n log n)
Worst-case complexity: O(n²)
Quick sort sorts in place; it does not require additional storage.
It involves a fair amount of overhead, so it is not worth using if the array to be sorted is small.
Sorting Algorithms Comparisons
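
A summary of the two algorithms discussed above (best/worst cases as derived on the earlier slides):

Algorithm     Best case     Average case    Worst case    In place?
Merge sort    O(n log n)    O(n log n)      O(n log n)    No (needs extra storage)
Quick sort    O(n log n)    O(n log n)      O(n²)         Yes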
