3. Linear Search
• Linear search is used to search a key element from multiple elements.
• Algorithm:
• Step 1: Traverse the array.
• Step 2: Match the key element with array element.
• Step 3: If key element is found, return the index position of the array
element.
• Step 4: If key element is not found, return -1.
4. Linear Search
• How Linear Search Works?
• The following steps are followed to search for an element k = 1 in the
list below.
5. Linear Search
• How Linear Search Works?
• Start from the first element, compare k with each element x.
6. Linear Search
• How Linear Search Works?
• If x == k, return the index.
• Else, return not found.
• Time Complexity: O(n)
7. Linear Search
public class LinearSearchExample{
public static int linearSearch(int[] arr, int key){
for(int i=0;i<arr.length;i++){
if(arr[i] == key){
return i; } }
return -1; }
public static void main(String args[]){
int[] a1= {10,20,30,50,70,90};
int key = 50;
System.out.println(key+" is found at index: "+linearSearch(a1, key)); } }
8. Binary Search
• Binary Search is a searching algorithm used in a sorted array by
repeatedly dividing the search interval in half.
• The idea of binary search is to use the fact that the array is
sorted to reduce the time complexity to O(log n).
9. Binary Search
• Algorithm:
• Begin with the mid element of the whole array as a search key.
• If the value of the search key is equal to the item then return an index
of the search key.
• Or if the value of the search key is less than the item in the middle of
the interval, narrow the interval to the lower half.
• Otherwise, narrow it to the upper half.
• Repeatedly check from the second point until the value is found or
the interval is empty.
10. Binary Search
Pseudocode:
binarySearch(arr, x, low, high)
    while low <= high
        mid = (low + high) / 2
        if (x == arr[mid])
            return mid
        else if (x > arr[mid])   // x is on the right side
            low = mid + 1
        else                     // x is on the left side
            high = mid - 1
    return -1                    // interval is empty: x is not in arr
12. Binary Search
• Step-by-step Binary Search Algorithm: We basically ignore half of the
elements just after one comparison.
• Compare x with the middle element.
• If x matches with the middle element, we return the mid index.
• Else If x is greater than the mid element, then x can only lie in the
right half subarray after the mid element. So we recur for the right
half.
• Else (x is smaller) recur for the left half.
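The steps above map directly onto an iterative Java method. This is a minimal sketch in the style of the LinearSearchExample slide; the class and method names are ours, not from the deck:

```java
public class BinarySearchExample {
    // Iterative binary search over a sorted array; returns the index of key, or -1.
    public static int binarySearch(int[] arr, int key) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids overflow of (low + high)
            if (arr[mid] == key) {
                return mid;
            } else if (key > arr[mid]) {
                low = mid + 1;   // key can only be in the right half
            } else {
                high = mid - 1;  // key can only be in the left half
            }
        }
        return -1; // interval became empty: key is not present
    }

    public static void main(String[] args) {
        int[] a1 = {10, 20, 30, 50, 70, 90};
        System.out.println(50 + " is found at index: " + binarySearch(a1, 50));
    }
}
```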
14. Interpolation Search
• Interpolation search is an improvement over binary search for
uniformly distributed data.
• Binary search halves the search space on each step regardless of the
data distribution, so its time complexity is always O(log(n)).
• On the other hand, the time complexity of interpolation search varies
depending on the data distribution.
• It is faster than binary search for uniformly distributed data, with a
time complexity of O(log(log(n))). However, in the worst-case
scenario, it can perform as poorly as O(n).
15. Interpolation Search
• Similar to binary search, interpolation search can only work on a
sorted array.
• It places a probe in a calculated position on each iteration.
• If the probe is right on the item we are looking for, the position will
be returned; otherwise, the search space will be limited to either the
right or the left side of the probe.
16. Interpolation Search
• The probe position calculation is the only difference between binary search and
interpolation search:
probe = lowEnd + ((item – data[lowEnd]) * (highEnd – lowEnd)) / (data[highEnd] – data[lowEnd])
• probe: the new probe position will be assigned to this parameter.
• lowEnd: the index of the leftmost item in the current search space.
• highEnd: the index of the rightmost item in the current search space.
• data[]: the array containing the original search space.
• item: the item that we are looking for.
17. Interpolation Search
• Let's say we want to find the position of 84 in the array below:
• The array's length is 8, so initially highEnd = 7 and lowEnd = 0 (because array's
index starts from 0, not 1).
• In the first step, the probe position formula will result in probe = 5:
18. Interpolation Search
• Because 84 (the item we are looking for) is greater than 73 (the current probe
position item), the next step will abandon the left side of the array by assigning
lowEnd = probe + 1.
• Now the search space consists of only 84 and 101. The probe position formula
will set probe = 6 which is exactly the 84's index:
• Since the item we were looking for is found, position 6 will be returned.
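The walkthrough can be sketched in Java as follows. The sample array is an assumption reconstructed from the example (73 at index 5, 84 at index 6, 101 at index 7); the probe formula is the standard one, and all names are ours:

```java
public class InterpolationSearchExample {
    // Interpolation search on a sorted array; returns the index of item, or -1.
    public static int interpolationSearch(int[] data, int item) {
        int lowEnd = 0, highEnd = data.length - 1;
        while (lowEnd <= highEnd && item >= data[lowEnd] && item <= data[highEnd]) {
            if (data[highEnd] == data[lowEnd]) {
                return (data[lowEnd] == item) ? lowEnd : -1; // avoid division by zero
            }
            // Standard probe formula: position proportional to the item's value.
            int probe = lowEnd + ((item - data[lowEnd]) * (highEnd - lowEnd))
                               / (data[highEnd] - data[lowEnd]);
            if (data[probe] == item) {
                return probe;
            } else if (data[probe] < item) {
                lowEnd = probe + 1;   // abandon the left side
            } else {
                highEnd = probe - 1;  // abandon the right side
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Assumed sample array; it reproduces the slide's walkthrough:
        // first probe = 5 (value 73), second probe = 6 (value 84).
        int[] data = {16, 24, 35, 47, 59, 73, 84, 101};
        System.out.println("84 is found at index: " + interpolationSearch(data, 84));
    }
}
```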
21. MergeSort
Merge sort is a sorting algorithm that follows the divide-and-conquer
approach. It works by recursively dividing the input array into smaller
subarrays and sorting those subarrays then merging them back together to
obtain the sorted array.
In simple terms, we can say that the process of merge sort is to divide the
array into two halves, sort each half, and then merge the sorted halves back
together. This process is repeated until the entire array is sorted.
22. Divide-and-Conquer
• Divide the problem into a number of sub-problems
  • Similar sub-problems of smaller size
• Conquer the sub-problems
  • Solve the sub-problems recursively
  • When a sub-problem is small enough, solve it in a straightforward manner
• Combine the solutions of the sub-problems
  • Obtain the solution for the original problem
23. MergeSort Algorithm
MergeSort is a recursive sorting procedure that
uses at most O(n*log(n)) comparisons.
To sort an array of n elements, we perform the
following steps in sequence:
If n < 2, then the array is already sorted.
Otherwise (n > 1), we perform the following
three steps in sequence:
1. Sort the left half of the array using MergeSort.
2. Sort the right half of the array using MergeSort.
3. Merge the sorted left and right halves.
24. Merge-Sort Tree
An execution of merge-sort is depicted by a binary tree
– each node represents a recursive call of merge-sort and stores the
unsorted sequence before the execution and its partition, and the
sorted sequence at the end of the execution
– the root is the initial call
– the leaves are calls on subsequences of size 0 or 1
Example (unsorted → sorted at each node):
  7 2 9 4 → 2 4 7 9
  7 2 → 2 7        9 4 → 4 9
  7 → 7   2 → 2    9 → 9   4 → 4
35. Complexity Analysis of Merge Sort:
•Time Complexity:
• Best Case: O(n log n), When the array is already sorted or nearly
sorted.
• Average Case: O(n log n), When the array is randomly ordered.
• Worst Case: O(n log n), When the array is sorted in reverse order.
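All three bounds follow from the standard divide-and-conquer recurrence for merge sort (a textbook derivation, not shown on the slides):

```latex
% Two half-size subproblems plus a linear-time merge:
T(n) = 2\,T(n/2) + cn, \qquad T(1) = c
% Unrolling gives \log_2 n levels, each costing cn in total:
T(n) = cn\log_2 n + cn = O(n \log n)
```

Because the split and merge cost the same regardless of the input order, the best, average, and worst cases all share this bound.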
36. Applications, advantages and disadvantages
• Applications of Merge Sort:
• Sorting large datasets
• External sorting (when the dataset is too large to fit in memory)
• Inversion counting
• It is the preferred algorithm for sorting linked lists.
• It can be easily parallelized, as we can independently sort subarrays
and then merge them.
37. Continued
• Advantages of Merge Sort:
• Stability: Merge sort is a stable sorting algorithm, which means it maintains the relative
order of equal elements in the input array.
• Simple to implement: The divide-and-conquer approach is straightforward.
• Disadvantages of Merge Sort:
• Space complexity: Merge sort requires additional memory to store the merged sub-arrays
during the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires
additional memory to store the sorted data. This can be a disadvantage in applications
where memory usage is a concern.
• Slower than QuickSort in general: QuickSort is more cache-friendly because it works
in-place.
38. Merge Algorithm
• The basic merging algorithm takes
• Two input arrays, A[] and B[],
• An output array, C[],
• And three counters, aptr, bptr and cptr (initially set to the beginning of their
respective arrays).
• The smaller of A[aptr] and B[bptr] is copied to the next entry in C, i.e.
C[cptr].
• The appropriate counters are then advanced.
• When either input list is exhausted, the remainder of the other list is
copied to C.
61. Merge - Pseudocode
Alg.: MERGE(A, p, q, r)
1. Compute n1 and n2
2. Copy the first n1 elements into L[1 . . n1 + 1] and the next n2
   elements into R[1 . . n2 + 1]
3. L[n1 + 1] ← ∞; R[n2 + 1] ← ∞
4. i ← 1; j ← 1
5. for k ← p to r
6.     do if L[i] ≤ R[j]
7.         then A[k] ← L[i]
8.              i ← i + 1
9.         else A[k] ← R[j]
10.             j ← j + 1
(Figure: the two halves of A[p..r], containing the elements 7 5 4 2 and
6 3 2 1, are copied into the auxiliary arrays L and R before merging.)
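The MERGE pseudocode can be sketched in Java as follows. Bounds checks on the auxiliary runs replace the ∞ sentinels, the sample array is taken from the figure, and all names are ours:

```java
import java.util.Arrays;

public class MergeSortExample {
    // Recursively sort a[p..r] (inclusive) and merge the sorted halves.
    public static void mergeSort(int[] a, int p, int r) {
        if (p < r) {
            int q = (p + r) / 2;
            mergeSort(a, p, q);
            mergeSort(a, q + 1, r);
            merge(a, p, q, r);
        }
    }

    // Merge the sorted runs a[p..q] and a[q+1..r] back into a[p..r].
    private static void merge(int[] a, int p, int q, int r) {
        int[] left = Arrays.copyOfRange(a, p, q + 1);
        int[] right = Arrays.copyOfRange(a, q + 1, r + 1);
        int i = 0, j = 0;
        for (int k = p; k <= r; k++) {
            // Take from left if right is exhausted, or left's head is smaller.
            if (j >= right.length || (i < left.length && left[i] <= right[j])) {
                a[k] = left[i++];
            } else {
                a[k] = right[j++];
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {7, 5, 4, 2, 6, 3, 2, 1};
        mergeSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));
    }
}
```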
64. Quick Sort
• Fastest known sorting algorithm in practice
• Average case: O(N log N)
• Worst case: O(N^2)
• But the worst case can be made exponentially unlikely.
• Another divide-and-conquer recursive algorithm, like merge sort.
65. QuickSort Design
• Follows the divide-and-conquer paradigm.
• Divide: Partition (separate) the array A[p..r] into two (possibly
empty) subarrays A[p..q–1] and A[q+1..r].
• Each element in A[p..q–1] < A[q].
• A[q] < each element in A[q+1..r].
• Index q is computed as part of the partitioning procedure.
• Conquer: Sort the two subarrays by recursive calls to quicksort.
• Combine: The subarrays are sorted in place – no work is
needed to combine them.
• How do the divide and combine steps of quicksort compare
with those of merge sort?
66. Quicksort
• If the number of elements in S is 0 or 1, then return
(base case).
• Divide step:
• Pick any element (pivot) v in S
• Partition S – {v} into two disjoint groups
S1 = {x ∈ S – {v} | x ≤ v}
S2 = {x ∈ S – {v} | x ≥ v}
• Conquer step: recursively sort S1 and S2
• Combine step: the sorted S1 (by the time returned
from recursion), followed by v, followed by the sorted
S2 (i.e., nothing extra needs to be done)
(Figure: S is split around the pivot v into S1 and S2.)
To simplify, we may assume that we don't have repetitive elements,
so we can ignore the 'equality' case!
69. Pseudocode
Quicksort(A, p, r)
    if p < r then
        q := Partition(A, p, r);
        Quicksort(A, p, q – 1);
        Quicksort(A, q + 1, r)

Partition(A, p, r)
    x, i := A[r], p – 1;
    for j := p to r – 1 do
        if A[j] ≤ x then
            i := i + 1;
            A[i] ↔ A[j]
    A[i + 1] ↔ A[r];
    return i + 1
(Figure: the pivot 5 splits A[p..r] into A[p..q – 1] ≤ 5 and A[q+1..r] ≥ 5.)
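The Quicksort/Partition pseudocode above (the Lomuto scheme, with the last element as pivot) translates to Java as follows; the class and helper names are ours:

```java
import java.util.Arrays;

public class QuickSortExample {
    // Sort a[p..r] (inclusive) by partitioning around a pivot and recursing.
    public static void quicksort(int[] a, int p, int r) {
        if (p < r) {
            int q = partition(a, p, r);
            quicksort(a, p, q - 1);
            quicksort(a, q + 1, r);
        }
    }

    // Lomuto partition around the pivot x = a[r], as in the pseudocode.
    private static int partition(int[] a, int p, int r) {
        int x = a[r];
        int i = p - 1;
        for (int j = p; j <= r - 1; j++) {
            if (a[j] <= x) {
                i++;
                swap(a, i, j);
            }
        }
        swap(a, i + 1, r); // place the pivot between the two partitions
        return i + 1;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] a = {5, 7, 4, 6, 3, 12, 19};
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));
    }
}
```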
70. Issues To Consider
•How to pick the pivot?
• Many methods (discussed later)
•How to partition?
• Several methods exist.
• The one we consider is known to give good results and
to be easy and efficient.
• We discuss the partition strategy first.
71. Partitioning Strategy
• For now, assume that pivot = A[(left+right)/2].
• We want to partition array A[left .. right].
• First, get the pivot element out of the way by swapping it
with the last element (swap pivot and A[right]).
• Let i start at the first element and j start at the next-to-last
element (i = left, j = right – 1).
(Figure: in 5 7 4 6 3 12 19, the pivot 6 is swapped with the last
element 19, then i starts at the left end and j at the next-to-last element.)
72. Partitioning Strategy
• Want to have
• A[k] ≤ pivot, for k < i
• A[k] ≥ pivot, for k > j
• When i < j
• Move i right, skipping over elements smaller than the pivot
• Move j left, skipping over elements greater than the pivot
• When both i and j have stopped
• A[i] ≥ pivot
• A[j] ≤ pivot, so A[i] and A[j] should now be swapped
(Figure: i and j scan toward each other over 5 7 4 6 3 12 19, stopping
at elements that are on the wrong side of the pivot.)
73. Partitioning Strategy (2)
• When i and j have stopped and i is to the left of j (thus legal)
• Swap A[i] and A[j]
• The large element is pushed to the right and the small element is
pushed to the left
• After swapping
• A[i] ≤ pivot
• A[j] ≥ pivot
• Repeat the process until i and j cross
(Figure: swapping A[i] = 7 and A[j] = 3 turns 5 7 4 6 3 12 19 into
5 3 4 6 7 12 19.)
74. Partitioning Strategy (3)
• When i and j have crossed
• Swap A[i] and pivot
• Result:
• A[k] ≤ pivot, for k < i
• A[k] ≥ pivot, for k > i
(Figure: once i and j cross in 5 3 4 6 7 12 19, A[i] is swapped with the
pivot, completing the partition.)
75. Picking the Pivot
• There are several ways to pick a pivot.
• Objective: Choose a pivot so that we will get 2
partitions of (almost) equal size.
76. Picking the Pivot (2)
• Use the first element as pivot
• if the input is random, ok.
• if the input is presorted (or in reverse order)
• all the elements go into S2 (or S1).
• this happens consistently throughout the recursive calls.
• results in O(N^2) behavior (we analyze this case later).
• Choose the pivot randomly
• generally safe,
• but random number generation can be expensive and does not
reduce the running time of the algorithm.
77. Picking the Pivot (3)
• Use the median of the array (ideal pivot)
• The (N/2)-th largest element
• Partitioning always cuts the array into roughly half
• An optimal quick sort (O(N log N))
• However, hard to find the exact median
• Median-of-three partitioning
• eliminates the bad case for sorted input.
78. Median of Three Method
• Compare just three elements: the leftmost, rightmost and
center
• Swap these elements if necessary so that
• A[left] = Smallest
• A[right] = Largest
• A[center] = Median of three
• Pick A[center] as the pivot.
• Swap A[center] and A[right – 1] so that the pivot is at the second last position (why?)
79. Median of Three: Example
• Initial array: 2 5 6 4 13 3 12 19 6
(A[left] = 2, A[center] = 13, A[right] = 6)
• A[right] < A[center], so swap A[center] and A[right]:
2 5 6 4 6 3 12 19 13
• Choose A[center] = 6 as the pivot.
• Swap the pivot and A[right – 1]:
2 5 6 4 19 3 12 6 13
We only need to partition A[ left + 1, …, right – 2 ]. Why?
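The median-of-three selection can be sketched in Java. This is our own illustration of the slide's steps, not code from the deck; it assumes a subarray of at least three elements:

```java
public class MedianOfThreeExample {
    // Order A[left], A[center], A[right], then park the median (the pivot)
    // at A[right - 1], as described on the slides. Returns the pivot value.
    public static int medianOfThree(int[] a, int left, int right) {
        int center = (left + right) / 2;
        if (a[center] < a[left])   swap(a, center, left);
        if (a[right]  < a[left])   swap(a, right, left);
        if (a[right]  < a[center]) swap(a, right, center);
        // Now a[left] <= a[center] <= a[right]; move the pivot to right - 1.
        swap(a, center, right - 1);
        return a[right - 1];
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        // The array from the example slide: pivot should come out as 6.
        int[] a = {2, 5, 6, 4, 13, 3, 12, 19, 6};
        System.out.println("pivot = " + medianOfThree(a, 0, a.length - 1));
    }
}
```

Parking the pivot at A[right – 1] means the partition scan can run over A[left + 1 .. right – 2] only, since A[left] and A[right] already sit on the correct sides.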
80. Quicksort for Small Arrays
• For very small arrays (N ≤ 20), quicksort does not perform
as well as insertion sort
• A good cutoff range is N=10
• Switching to insertion sort for small arrays can save about
15% in the running time
81. Mergesort vs Quicksort
• Both run in O(n*logn)
• Mergesort – always.
• Quicksort – on average.
• Compared with Quicksort, Mergesort performs fewer comparisons
but moves elements more often.
• In Java, an element comparison is expensive but moving elements is
cheap. Therefore, Mergesort is used in the standard Java library for
generic sorting.