3 Quicksort
Quicksort is the other important sorting algorithm that is based on the divide-and-conquer
approach. Quicksort divides its input elements according to their value. A partition is an
arrangement of the array’s elements so that all the elements to the left of some element A[s]
are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal
to it.
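To make the partition property concrete, here is a small checker with a hypothetical example array (both the helper name and the data are ours, not from the text):

```python
def is_partitioned(A, s):
    """Check the partition property around position s: every element to the
    left of A[s] is <= A[s], and every element to its right is >= A[s]."""
    return (all(x <= A[s] for x in A[:s]) and
            all(x >= A[s] for x in A[s + 1:]))

# Hypothetical example: 5 already sits at its final sorted position s = 4.
A = [3, 1, 4, 2, 5, 8, 6, 7]
print(is_partitioned(A, 4))   # True
print(is_partitioned(A, 2))   # False: 2 < A[2] = 4, yet 2 lies to its right
```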
After a partition is achieved, A[s] will be in its final position in the sorted array, and we can
continue sorting the two subarrays to the left and to the right of A[s] independently (e.g., by
the same method). Here, the entire work happens in the division stage, with no work required
to combine the solutions to the subproblems.
ALGORITHM Quicksort(A[l..r])
    if l < r
        s ← Partition(A[l..r]) //s is a split position
        Quicksort(A[l..s − 1])
        Quicksort(A[s + 1..r])
We start by selecting a pivot—an element with respect to whose value we are going to divide
the subarray. There are several different strategies for selecting a pivot; the simplest strategy
is selecting the subarray’s first element: p = A[l].
We will scan the subarray from both ends, comparing the subarray’s elements to the pivot.
The left-to-right scan is denoted by index pointer i, and starts with the second element. Since
we want elements smaller than the pivot to be in the left part of the subarray, this scan skips
over elements that are smaller than the pivot and stops upon encountering the first element
greater than or equal to the pivot. The right-to-left scan, denoted by index pointer j, starts
with the last element of the subarray. Since we want elements larger than the pivot to be in
the right part of the subarray, this scan skips over elements that are larger than the pivot and
stops on encountering the first element smaller than or equal to the pivot.
After both scans stop, three situations may arise, depending on whether or not the scanning
indices have crossed. If the scanning indices i and j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j,
respectively.
If the scanning indices have crossed over, i.e., i > j, we will have partitioned the
subarray after exchanging the pivot with A[j].
Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, the value
they are pointing to must be equal to p. Thus, we have the subarray partitioned, with the split
position s = i = j.
We can combine the last case with the case of crossed-over indices (i > j ) by exchanging the
pivot with A[j ] whenever i ≥ j .
Pseudocode implementing this partitioning procedure is given below.
ALGORITHM HoarePartition(A[l..r])
    p ← A[l]
    i ← l; j ← r + 1
    repeat
        repeat i ← i + 1 until A[i] ≥ p
        repeat j ← j − 1 until A[j] ≤ p
        swap(A[i], A[j])
    until i ≥ j
    swap(A[i], A[j]) //undo last swap when i ≥ j
    swap(A[l], A[j])
    return j
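A direct Python transcription of the two procedures might look as follows (a sketch; the function names are ours, and the left-to-right scan is guarded by an explicit bounds check rather than a sentinel):

```python
def hoare_partition(A, l, r):
    """Partition A[l..r] around pivot p = A[l]; return the split position."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and A[i] < p:   # left-to-right scan; the i < r guard
            i += 1                  # replaces the out-of-bounds sentinel
        j -= 1
        while A[j] > p:             # right-to-left scan; A[l] = p stops it
            j -= 1
        if i >= j:                  # indices crossed or met: partition done
            break
        A[i], A[j] = A[j], A[i]     # exchange and resume the scans
    A[l], A[j] = A[j], A[l]         # put the pivot at its final position s = j
    return j

def quicksort(A, l=0, r=None):
    """Sort A in place by recursive partitioning."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = hoare_partition(A, l, r)
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)
```

For example, `quicksort([5, 3, 1, 9, 8, 2, 4, 7])` first partitions around the pivot 5, leaving it at split position s = 4, and then sorts the two subarrays on either side.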
Note that index i can go out of the subarray’s bounds in this pseudocode. Rather than
checking for this possibility every time index i is incremented, we can append to array
A[0..n − 1] a “sentinel” that would prevent index i from advancing beyond position n.
Quicksort’s efficiency:
The number of key comparisons made before a partition is achieved is n + 1 if the scanning
indices cross over and n if they coincide. If all the splits happen in the middle of
corresponding subarrays, we will have the best case.
The number of comparisons in the best case then satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, with Cbest(1) = 0. According to the Master Theorem,
Cbest(n) ∈ Θ(n log2 n); solving the recurrence exactly for n = 2^k yields Cbest(n) = n log2 n.
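For n = 2^k, the exact best-case count follows by backward substitution (a standard derivation, spelled out here for completeness):

```latex
\begin{aligned}
C_{best}(n) &= 2\,C_{best}(n/2) + n \quad\text{for } n > 1, \qquad C_{best}(1) = 0,\\
\frac{C_{best}(2^k)}{2^k} &= \frac{C_{best}(2^{k-1})}{2^{k-1}} + 1
  = \cdots = \frac{C_{best}(1)}{2^0} + k = k,\\
C_{best}(n) &= 2^k \cdot k = n \log_2 n .
\end{aligned}
```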
In the worst case, all the splits are skewed to the extreme: one of the two subarrays is empty
while the other is just one element shorter than the subarray being partitioned. This happens,
in particular, for already increasing arrays when A[0] is used as the pivot. After making n + 1
comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm is
left with the strictly increasing array A[1..n − 1] to sort. This sorting of strictly increasing
arrays of diminishing sizes continues until the last one, A[n − 2..n − 1], has been processed.
The total number of key comparisons made will be equal to
Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).
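The closed form can be sanity-checked with a throwaway numeric comparison (the helper name is ours):

```python
def worst_case_comparisons(n):
    """Sum the series (n + 1) + n + ... + 3 term by term."""
    return sum(range(3, n + 2))

# Agreement with the closed form (n + 1)(n + 2)/2 - 3:
for n in range(2, 50):
    assert worst_case_comparisons(n) == (n + 1) * (n + 2) // 2 - 3
print(worst_case_comparisons(10))   # 63
```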
In the average case, with the split position s equally likely to be any of 0, 1, . . . , n − 1, the
number of key comparisons satisfies the recurrence
Cavg(n) = (1/n) Σ s=0..n−1 [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1, Cavg(0) = Cavg(1) = 0.
Its solution, which is much trickier than the worst- and best-case analyses, turns out to be
Cavg(n) ≈ 2n ln n ≈ 1.39 n log2 n.
Thus, on the average, quicksort makes only about 39% more comparisons than in the best case.
Moreover, its innermost loop is so efficient that it usually runs faster than mergesort.
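The ≈ 1.39 n log2 n estimate can also be checked empirically by instrumenting the partitioning procedure to count key comparisons (a rough experiment; names and parameters are illustrative, and the measured ratio for moderate n sits somewhat below the asymptotic 1.39 because of lower-order terms):

```python
import math
import random

def counting_quicksort(A):
    """Hoare-style quicksort (first element as pivot) that sorts A in place
    and returns the number of key comparisons made; an instrument for a
    rough experiment, not a tuned implementation."""
    count = 0

    def sort(l, r):
        nonlocal count
        if l >= r:
            return
        p, i, j = A[l], l, r + 1
        while True:
            i += 1
            while i < r:                 # guarded left-to-right scan
                count += 1
                if A[i] >= p:
                    break
                i += 1
            j -= 1
            while True:                  # right-to-left scan
                count += 1
                if A[j] <= p:
                    break
                j -= 1
            if i >= j:
                break
            A[i], A[j] = A[j], A[i]
        A[l], A[j] = A[j], A[l]
        sort(l, j - 1)
        sort(j + 1, r)

    sort(0, len(A) - 1)
    return count

# Average over random permutations; theory predicts about 1.39 n log2 n.
rng = random.Random(7)
n, trials = 1000, 20
avg = sum(counting_quicksort(rng.sample(range(n), n))
          for _ in range(trials)) / trials
print(round(avg / (n * math.log2(n)), 2))   # ratio; tends toward ~1.39
```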
Because of quicksort’s importance, there have been persistent efforts over the years to refine
the basic algorithm. Among several improvements discovered by researchers are:
Better pivot selection methods, such as randomized quicksort, which uses a random
element as the pivot, or the median-of-three method, which uses the median of the
leftmost, rightmost, and middle elements of the array
Switching to insertion sort on very small subarrays (between 5 and 15 elements for
most computer systems), or not sorting small subarrays at all and finishing the
algorithm with insertion sort applied to the entire nearly sorted array
Modifications of the partitioning algorithm, such as the three-way partition into
segments smaller than, equal to, and larger than the pivot
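Two of these refinements, median-of-three pivoting and the insertion-sort finish for small subarrays, can be sketched together (the function names and the cutoff value 10 are illustrative choices, not prescribed by the text):

```python
CUTOFF = 10  # illustrative threshold; texts suggest 5-15 elements

def median_of_three_index(A, l, r):
    """Index of the median of A[l], A[m], A[r]: a common pivot choice."""
    m = (l + r) // 2
    return sorted([(A[l], l), (A[m], m), (A[r], r)])[1][1]

def insertion_sort(A, l, r):
    """Straight insertion sort on A[l..r]."""
    for k in range(l + 1, r + 1):
        v, i = A[k], k - 1
        while i >= l and A[i] > v:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = v

def refined_quicksort(A):
    """Quicksort with median-of-three pivots; subarrays of at most CUTOFF
    elements are left unsorted, and one final insertion-sort pass over the
    nearly sorted array finishes the job."""
    def partition(l, r):
        m = median_of_three_index(A, l, r)
        A[l], A[m] = A[m], A[l]           # move the chosen pivot to the front
        p, i, j = A[l], l, r + 1
        while True:                       # Hoare-style partition around p
            i += 1
            while i < r and A[i] < p:
                i += 1
            j -= 1
            while A[j] > p:
                j -= 1
            if i >= j:
                break
            A[i], A[j] = A[j], A[i]
        A[l], A[j] = A[j], A[l]
        return j

    def sort(l, r):
        if r - l + 1 <= CUTOFF:
            return                        # defer small subarrays
        s = partition(l, r)
        sort(l, s - 1)
        sort(s + 1, r)

    sort(0, len(A) - 1)
    insertion_sort(A, 0, len(A) - 1)

data = [9, 4, 1, 7, 3, 8, 2, 6, 5, 0, 11, 10]
refined_quicksort(data)
print(data)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

Leaving small subarrays unsorted and finishing with one insertion-sort pass works because insertion sort runs in near-linear time on an array in which every element is within CUTOFF positions of its final place.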
Disadvantages:
Quicksort is not stable: it can reorder equal elements.
Its worst-case efficiency is Θ(n²), reached, for example, on already sorted inputs when the
first element is used as the pivot.
It requires a stack to hold the subarrays still waiting to be sorted; the stack is logarithmic
in size on average but can grow linearly in the worst case.
Despite these weaknesses, quicksort remains highly influential and widely used due to its
efficiency and excellent average-case performance.