
Searching and Sorting

Rishabh Baid
BCA NOTES

Contents
• Searching
o Sequential Search
o Binary Search
 Test Yourself #1
• Sorting
o Selection Sort
 Test Yourself #2
o Insertion Sort
 Test Yourself #3
o Merge Sort
 Test Yourself #4
 Test Yourself #5
o Quick Sort
 Test Yourself #6
o Sorting Summary
• Answers to Self-Study Questions

Searching
Consider searching for a given value v in an array of size N. There are 2 basic approaches:
sequential search and binary search.

Sequential Search
Sequential search involves looking at each value in turn (i.e., start with the value in array[0], then
array[1], etc). The algorithm quits and returns true if the current value is v; it quits and returns
false if it has looked at all of the values in the array without finding v. Here's the code:

public static boolean sequentialSearch(Object[] A, Object v) {
    for (int k = 0; k < A.length; k++) {
        if (A[k].equals(v)) return true;
    }
    return false;
}

If the values are in sorted order, then the algorithm can sometimes quit and return false without
having to look at all of the values in the array: v is not in the array if the current value is greater
than v. Here's the code for this version:

public static boolean sortedSequentialSearch(Comparable[] A, Comparable v) {
    // precondition: A is sorted (in ascending order)
    for (int k = 0; k < A.length; k++) {
        if (A[k].equals(v)) return true;
        if (A[k].compareTo(v) > 0) return false;
    }
    return false;
}

The worst-case time for a sequential search is always O(N).

Binary Search
When the values are in sorted order, a better approach than the one given above is to use binary
search. The algorithm for binary search starts by looking at the middle item x. If x is equal to v,
it quits and returns true. Otherwise, it uses the relative ordering of x and v to eliminate half of the
array (if v is less than x, then it can't be stored to the right of x in the array; similarly, if it is
greater than x, it can't be stored to the left of x). Once half of the array has been eliminated, the
algorithm starts again by looking at the middle item in the remaining half. It quits when it finds v
or when the entire array has been eliminated.

Here's the code for binary search:

public static boolean binarySearch(Comparable[] A, Comparable v) {
    // precondition: A is sorted (in ascending order)
    return binarySearchAux(A, 0, A.length - 1, v);
}

private static boolean binarySearchAux(Comparable[] A, int low, int high, Comparable v) {
    // precondition: A is sorted (in ascending order)
    // postcondition: return true iff v is in an element of A in the range
    //                A[low] to A[high]
    if (low > high) return false;
    int middle = (low + high) / 2;
    if (A[middle].equals(v)) return true;
    if (v.compareTo(A[middle]) < 0) {
        // recursively search the left part of the array
        return binarySearchAux(A, low, middle-1, v);
    }
    else {
        // recursively search the right part of the array
        return binarySearchAux(A, middle+1, high, v);
    }
}

The worst-case time for binary search is proportional to log₂ N: the number of times N can be
divided in half before there is nothing left. Using big-O notation, this is O(log N). Note that
binary search in an array is basically the same as doing a lookup in a perfectly balanced
binary-search tree (the root of a balanced BST is the middle value). In both cases, if the current
value is not the one we're looking for, we can eliminate half of the remaining values.
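
The same halving idea can also be coded without recursion. Here is a minimal iterative sketch
(an illustration added here, not part of the original code above); it keeps the low/high bounds
in a loop instead of passing them to recursive calls:

public static boolean binarySearchIterative(Comparable[] A, Comparable v) {
    // precondition: A is sorted (in ascending order)
    int low = 0;
    int high = A.length - 1;
    while (low <= high) {
        int middle = (low + high) / 2;
        if (A[middle].equals(v)) return true;
        if (v.compareTo(A[middle]) < 0) {
            high = middle - 1;  // v can't be to the right: eliminate the right half
        }
        else {
            low = middle + 1;   // v can't be to the left: eliminate the left half
        }
    }
    return false;  // the whole array has been eliminated
}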

TEST YOURSELF #1

Why isn't it a good idea to use binary search to find a value in a sorted linked list of values?

solution

Sorting
Consider sorting the values in an array A of size N. Most sorting algorithms involve what are
called comparison sorts; i.e., they work by comparing values. Comparison sorts can never have
a worst-case running time less than O(N log N). Simple comparison sorts are usually O(N²); the
more clever ones are O(N log N).

Three interesting issues to consider when thinking about different sorting algorithms are:

• Does an algorithm always take its worst-case time?
• What happens on an already-sorted array?
• How much space (other than the space for the array itself) is required?

We will discuss four comparison-sort algorithms:

1. selection sort
2. insertion sort
3. merge sort
4. quick sort

Selection sort and insertion sort have worst-case time O(N²). Quick sort is also O(N²) in the
worst case, but its expected time is O(N log N). Merge sort is O(N log N) in the worst case.

Selection Sort

The idea behind selection sort is:

1. Find the smallest value in A; put it in A[0].
2. Find the second smallest value in A; put it in A[1].
3. etc.

The approach is as follows:

• Use an outer loop from 0 to N-1 (the loop index, k, tells which position in A to fill next).
• Each time around, use a nested loop (from k+1 to N-1) to find the smallest value (and its
index) in the unsorted part of the array.
• Swap that value with A[k].

Note that after i iterations, A[0] through A[i-1] contain their final values (so after N iterations,
A[0] through A[N-1] contain their final values and we're done!)

Here's the code for selection sort:

public static void selectionSort(Comparable[] A) {
    int j, k, minIndex;
    Comparable min;
    int N = A.length;

    for (k = 0; k < N; k++) {
        min = A[k];
        minIndex = k;
        for (j = k+1; j < N; j++) {
            if (A[j].compareTo(min) < 0) {
                min = A[j];
                minIndex = j;
            }
        }
        A[minIndex] = A[k];
        A[k] = min;
    }
}

And here's a picture illustrating how selection sort works:

What is the time complexity of selection sort? Note that the inner loop executes a different
number of times each time around the outer loop, so we can't just multiply N * (time for inner
loop). However, we can notice that:

• 1st iteration of outer loop: inner executes N - 1 times
• 2nd iteration of outer loop: inner executes N - 2 times
• ...
• Nth iteration of outer loop: inner executes 0 times

This is our old favorite sum:

    N-1 + N-2 + ... + 3 + 2 + 1 + 0 = N(N-1)/2

which we know is O(N²).

What if the array is already sorted when selection sort is called? It is still O(N²); the two loops
still execute the same number of times, regardless of whether the array is sorted or not.

TEST YOURSELF #2

It is not necessary for the outer loop to go all the way from 0 to N-1. Describe a small change to
the code that avoids a small amount of unnecessary work.

Where else might unnecessary work be done using the current code? (Hint: think about what
happens when the array is already sorted initially.) How could the code be changed to avoid that
unnecessary work? Is it a good idea to make that change?

solution

Insertion Sort
The idea behind insertion sort is:

1. Put the first 2 items in correct relative order.
2. Insert the 3rd item in the correct place relative to the first 2.
3. Insert the 4th item in the correct place relative to the first 3.
4. etc.

As for selection sort, a nested loop is used; however, a different invariant holds: after the ith time
around the outer loop, the items in A[0] through A[i-1] are in order relative to each other (but are
not necessarily in their final places). Also, note that in order to insert an item into its place in the
(relatively) sorted part of the array, it is necessary to move some values to the right to make
room.

Here's the code:

public static void insertionSort(Comparable[] A) {
    int k, j;
    Comparable tmp;
    int N = A.length;

    for (k = 1; k < N; k++) {
        tmp = A[k];
        j = k - 1;
        while ((j >= 0) && (A[j].compareTo(tmp) > 0)) {
            A[j+1] = A[j]; // move one value over one place to the right
            j--;
        }
        A[j+1] = tmp; // insert kth value in correct place relative to previous values
    }
}

Here's a picture illustrating how insertion sort works on the same array used above for selection
sort:

What is the time complexity of insertion sort? Again, the inner loop can execute a different
number of times for every iteration of the outer loop. In the worst case:

• 1st iteration of outer loop: inner executes 1 time
• 2nd iteration of outer loop: inner executes 2 times
• 3rd iteration of outer loop: inner executes 3 times
• ...
• N-1st iteration of outer loop: inner executes N-1 times

So we get:

    1 + 2 + ... + N-1 = N(N-1)/2

which is still O(N²).

TEST YOURSELF #3

1. What is the running time for insertion sort when:
   1. the array is already sorted in ascending order?
   2. the array is already sorted in descending order?
2. On each iteration of its outer loop, insertion sort finds the correct place to insert the next
   item, relative to the ones that are already in sorted order. It does this by searching back
   through those items, one at a time. Would insertion sort be sped up if instead it used
   binary search to find the correct place to insert the next item?

Solution

Merge Sort
As mentioned above, merge sort takes time O(N log N), which is quite a bit better than the two
O(N²) sorts described above (for example, when N = 1,000,000, N² = 1,000,000,000,000, while
N log₂ N ≈ 20,000,000; i.e., N² is 50,000 times larger than N log₂ N!).

The key insight behind merge sort is that it is possible to merge two sorted arrays, each
containing N/2 items to form one sorted array containing N items in time O(N). To do this merge,
you just step through the two arrays, always choosing the smaller of the two values to put into
the final array (and only advancing in the array from which you took the smaller value). Here's a
picture illustrating this merge process:
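
In code, that merge step might look like the following sketch (a standalone helper added here for
illustration; the notes' own merge code appears inside mergeAux below):

public static Comparable[] merge(Comparable[] a, Comparable[] b) {
    // precondition: a and b are each sorted (in ascending order)
    Comparable[] result = new Comparable[a.length + b.length];
    int i = 0;    // index into a
    int j = 0;    // index into b
    int pos = 0;  // index into result
    while ((i < a.length) && (j < b.length)) {
        // copy the smaller of the two "current" values, advancing only
        // in the array that value came from
        if (a[i].compareTo(b[j]) <= 0) result[pos++] = a[i++];
        else result[pos++] = b[j++];
    }
    // one array has run out; copy the rest of the other one
    while (i < a.length) result[pos++] = a[i++];
    while (j < b.length) result[pos++] = b[j++];
    return result;
}

Each value is copied exactly once, which is why the merge takes time O(N).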

Now the question is, how do we get the two sorted arrays of size N/2? The answer is to use
recursion; to sort an array of length N:

1. Divide the array into two halves.
2. Recursively, sort the left half.
3. Recursively, sort the right half.
4. Merge the two sorted halves.

The base case for the recursion is when the array to be sorted is of length 1 -- then it is already
sorted, so there is nothing to do. Note that the merge step (step 4) needs to use an auxiliary array
(to avoid overwriting its values). The sorted values are then copied back from the auxiliary array
to the original array.

An outline of the code for merge sort is given below. It uses an auxiliary method with extra
parameters that tell what part of array A each recursive call is responsible for sorting.

public static void mergeSort(Comparable[] A) {
    mergeAux(A, 0, A.length - 1); // call the aux. function to do all the work
}

private static void mergeAux(Comparable[] A, int low, int high) {
    // base case
    if (low == high) return;

    // recursive case
    // Step 1: Find the middle of the array (conceptually, divide it in half)
    int mid = (low + high) / 2;

    // Steps 2 and 3: Sort the 2 halves of A
    mergeAux(A, low, mid);
    mergeAux(A, mid+1, high);

    // Step 4: Merge sorted halves into an auxiliary array
    Comparable[] tmp = new Comparable[high-low+1];
    int left = low;    // index into left half
    int right = mid+1; // index into right half
    int pos = 0;       // index into tmp

    while ((left <= mid) && (right <= high)) {
        // choose the smaller of the two values "pointed to" by left, right
        // copy that value into tmp[pos]
        // increment either left or right as appropriate
        // increment pos
        ...
    }
    // here when one of the two sorted halves has "run out" of values, but
    // there are still some in the other half; copy all the remaining values
    // to tmp
    // Note: only 1 of the next 2 loops will actually execute
    while (left <= mid) { ... }
    while (right <= high) { ... }

    // all values are in tmp; copy them back into A
    System.arraycopy(tmp, 0, A, low, tmp.length);
}

TEST YOURSELF #4

Fill in the missing code in the mergeSort method.

Solution

Algorithms like merge sort -- that work by dividing the problem in two, solving the smaller
versions, and then combining the solutions -- are called divide and conquer algorithms. Below
is a picture illustrating the divide-and-conquer aspect of merge sort using a new example array.
The picture shows the problem being divided up into smaller and smaller pieces (first an array of
size 8, then two halves each of size 4, etc). Then it shows the "combine" steps: the solved
problems of half size are merged to form solutions to the larger problem. (Note that the picture
illustrates the conceptual ideas -- in an actual execution, the small problems would be solved one
after the other, not in parallel. Also, the picture doesn't illustrate the use of auxiliary arrays
during the merge steps.)

To determine the time for merge sort, it is helpful to visualize the calls made to mergeAux as
shown below (each node represents one call, and is labeled with the size of the array to be sorted
by that call):

The height of this tree is O(log N). The total work done at each "level" of the tree (i.e., the work
done by mergeAux excluding the recursive calls) is O(N):

• Step 1 (finding the middle index) is O(1), and this step is performed once in each call; i.e.,
a total of once at the top level, twice at the second level, etc, down to a total of N/2 times
at the second-to-last level (it is not performed at all at the very last level, because there
the base case applies, and mergeAux just returns). So for any one level, the total amount
of work for Step 1 is at most O(N).
• For each individual call, Step 4 (merging the sorted halves) takes time proportional
  to the size of the part of the array to be sorted by that call. So for a whole level, the time
  is proportional to the sum of the sizes at that level. This sum is always N.

Therefore, the time for merge sort involves O(N) work done at each "level" of the tree that
represents the recursive calls. Since there are O(log N) levels, the total worst-case time is O(N
log N).
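
Put another way (a standard restatement, not in the original notes), the running time satisfies the
recurrence T(N) = 2T(N/2) + cN (two half-size recursive calls plus linear-time merging); unrolling
that recurrence across the O(log N) levels of the call tree gives O(N log N).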

TEST YOURSELF #5

What happens when the array is already sorted (what is the running time for merge sort in that
case)?

solution

Quick Sort
Quick sort (like merge sort) is a divide and conquer algorithm: it works by creating two problems
of half size, solving them recursively, then combining the solutions to the small problems to get a
solution to the original problem. However, quick sort does more work than merge sort in the
"divide" part, and is thus able to avoid doing any work at all in the "combine" part!

The idea is to start by partitioning the array: putting all small values in the left half and putting
all large values in the right half. Then the two halves are (recursively) sorted. Once that's done,
there's no need for a "combine" step: the whole array will be sorted! Here's a picture that
illustrates these ideas:

The key question is how to do the partitioning? Ideally, we'd like to put exactly half of the values
in the left part of the array, and the other half in the right part; i.e., we'd like to put all values less
than the median value in the left and all values greater than the median value in the right.
However, that requires first computing the median value (which is too expensive). Instead, we
pick one value to be the pivot, and we put all values less than the pivot to its left, and all values
greater than the pivot to its right (the pivot itself is then in its final place).

Here's the algorithm outline:

1. Choose a pivot value.
2. Partition the array (put all values less than the pivot in the left part of the array, then the
   pivot itself, then all values greater than the pivot).
3. Recursively, sort the values less than the pivot.
4. Recursively, sort the values greater than the pivot.

Note that, as for merge sort, we need an auxiliary method with two extra parameters -- low and
high indexes to indicate which part of the array to sort. Also, although we could "recurse" all the
way down to a single item, in practice, it is better to switch to a sort like insertion sort when the
number of items to be sorted is small (e.g., 20).

Now let's consider how to choose the pivot item. (Our goal is to choose it so that the "left part"
and "right part" of the array have about the same number of items -- otherwise we'll get a bad
runtime).

An easy thing to do is to use the first value -- A[low] -- as the pivot. However, if A is already
sorted this will lead to the worst possible runtime, as illustrated below:

In this case, after partitioning, the left part of the array is empty, and the right part contains all
values except the pivot. This will cause O(N) recursive calls to be made (to sort from 0 to N-1,
then from 1 to N-1, then from 2 to N-1, etc). Therefore, the total time will be O(N²).

Another option is to use a random-number generator to choose a random item as the pivot. This
is OK if you have a good, fast random-number generator.

A simple and effective technique is the "median-of-three": choose the median of the values in
A[low], A[high], and A[(low+high)/2]. Note that this requires that there be at least 3 items in the
array, which is consistent with the note above about using insertion sort when the piece of the
array to be sorted gets small.

Once we've chosen the pivot, we need to do the partitioning. (The following assumes that the
size of the piece of the array to be sorted is at least 3.) The basic idea is to use two "pointers"
(indexes) left and right. They start at opposite ends of the array and move toward each other until
left "points" to an item that is greater than the pivot (so it doesn't belong in the left part of the
array) and right "points" to an item that is smaller than the pivot. Those two "out-of-place" items
are swapped, and we repeat this process until left and right cross:

1. Choose the pivot (using the "median-of-three" technique); also, put the smallest of the 3
values in A[low], put the largest of the 3 values in A[high], and put the pivot in A[high-1].
(Putting the smallest value in A[low] prevents "right" from falling off the end of the array
in the following steps.)
2. Initialize: left = low+1; right = high-2
3. Use a loop with the condition:

while (left <= right)

The loop invariant is:

    all items in A[low] to A[left-1] are <= the pivot
    all items in A[right+1] to A[high] are >= the pivot

Each time around the loop:

    left is incremented until it "points" to a value > the pivot
    right is decremented until it "points" to a value < the pivot
    if left and right have not crossed each other,
        then swap the items they "point" to.

4. Put the pivot into its final place.

Here's the actual code for the partitioning step (the reason for returning a value will be clear
when we look at the code for quick sort itself):
private static int partition(Comparable[] A, int low, int high) {
    // precondition: A.length >= 3
    Comparable pivot = medianOfThree(A, low, high); // this does step 1
    int left = low+1, right = high-2;
    while ( left <= right ) {
        while (A[left].compareTo(pivot) < 0) left++;
        while (A[right].compareTo(pivot) > 0) right--;
        if (left <= right) {
            swap(A, left, right);
            left++;
            right--;
        }
    }
    swap(A, left, high-1); // step 4: put the pivot into its final place
    return right;
}
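
The partition code above calls two helper methods that the notes don't show: medianOfThree and
swap. Here is one possible sketch of them (the bodies are assumptions, written to match the step-1
contract described above: sort the three sampled values, then hide the pivot in A[high-1]):

private static Comparable medianOfThree(Comparable[] A, int low, int high) {
    int mid = (low + high) / 2;
    // sort A[low], A[mid], A[high] in place
    if (A[mid].compareTo(A[low]) < 0) swap(A, low, mid);
    if (A[high].compareTo(A[low]) < 0) swap(A, low, high);
    if (A[high].compareTo(A[mid]) < 0) swap(A, mid, high);
    // now A[low] <= A[mid] <= A[high]; move the pivot (the median)
    // into A[high-1], as described in step 1
    swap(A, mid, high - 1);
    return A[high - 1];
}

private static void swap(Comparable[] A, int i, int j) {
    Comparable tmp = A[i];
    A[i] = A[j];
    A[j] = tmp;
}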
After partitioning, the pivot is in A[right+1], which is its final place; the final task is to sort the
values to the left of the pivot, and to sort the values to the right of the pivot. Here's the code for
quick sort (so that we can illustrate the algorithm, we use insertion sort only when the part of the
array to be sorted has less than 3 items, rather than when it has less than 20 items):
public static void quickSort(Comparable[] A) {
    quickAux(A, 0, A.length-1);
}

private static void quickAux(Comparable[] A, int low, int high) {
    if (high-low < 2) insertionSort(A, low, high);
    else {
        int right = partition(A, low, high);
        quickAux(A, low, right);
        quickAux(A, right+2, high);
    }
}
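
Note that quickAux calls a three-argument version of insertionSort that sorts just the piece A[low]
to A[high]; the notes don't show it, but a sketch adapted from the insertionSort code given earlier
might be:

private static void insertionSort(Comparable[] A, int low, int high) {
    for (int k = low + 1; k <= high; k++) {
        Comparable tmp = A[k];
        int j = k - 1;
        while ((j >= low) && (A[j].compareTo(tmp) > 0)) {
            A[j+1] = A[j];  // move one value over one place to the right
            j--;
        }
        A[j+1] = tmp;
    }
}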

Note: It is important to handle duplicate values efficiently. In particular, it is not a good idea to
put all values strictly less than the pivot into the left part of the array, and all values greater than
or equal to the pivot into the right part of the array. The code given above for partitioning
handles duplicates correctly at the expense of some "extra" swaps when both left and right are
"pointing" to values equal to the pivot.

Here's a picture illustrating quick sort:

What is the time for Quick Sort?

• If the pivot is always the median value, then the calls form a balanced binary tree (like
they do for merge sort).
• In the worst case (the pivot is the smallest or largest value) the calls form a "linear" tree.
• In any case, the total work done at each level of the call tree is O(N) for partitioning.

So the total time is:

• worst-case: O(N²)
• in practice: O(N log N)

Note that quick sort's worst-case time is worse than merge sort's. However, an advantage of
quick sort is that it does not require extra storage, as merge sort does.

TEST YOURSELF #6

What happens when the array is already sorted (what is the running time for quick sort in that
case, assuming that the "median-of-three" method is used to choose the pivot)?

solution

Sorting Summary
• Selection Sort:
  o N passes
    on pass k: find the kth smallest item, put it in its final place
  o always O(N²)
• Insertion Sort:
  o N passes
    on pass k: insert the kth item into its proper position relative to the items to its left
  o worst-case O(N²)
  o given an already-sorted array: O(N)
• Merge Sort:
  o recursively sort the first N/2 items
    recursively sort the last N/2 items
    merge (using an auxiliary array)
  o always O(N log N)
• Quick Sort:
  o choose a pivot value
    partition the array:
        left part has items <= pivot
        right part has items >= pivot
    recursively sort the left part
    recursively sort the right part
  o worst-case O(N²)
  o expected O(N log N)

Radix Sort

Radix Sort is a clever and intuitive little sorting algorithm. Radix Sort puts the elements in
order by comparing the digits of the numbers. I will explain with an example.

RADIX-SORT(A, d)
1  for i = 1 to d
2      do use a stable sort to sort array A on digit i

Consider the following 9 numbers:

493 812 715 710 195 437 582 340 385

We should start sorting by comparing and ordering the one's digits:

Digit Sublist
0 340 710
1
2 812 582
3 493
4
5 715 195 385
6
7 437
8
9

Notice that the numbers were added onto the list in the order that they were found,
which is why the numbers appear to be unsorted in each of the sublists above. Now,
we gather the sublists (in order from the 0 sublist to the 9 sublist) into the main list
again:

340 710 812 582 493 715 195 385 437

Note: The order in which we divide and reassemble the list is extremely
important, as this is one of the foundations of this algorithm. Now, the sublists are
created again, this time based on the ten's digit:

Digit Sublist
0
1 710 812 715
2
3 437
4 340
5
6
7
8 582 385
9 493 195

Now the sublists are gathered in order from 0 to 9:

710 812 715 437 340 582 385 493 195

Finally, the sublists are created according to the hundred's digit:

Digit Sublist
0
1 195
2
3 340 385
4 437 493
5 582
6
7 710 715
8 812
9

At last, the list is gathered up again:

195 340 385 437 493 582 710 715 812

And now we have a fully sorted array! Radix Sort is very simple, and a computer
can do it fast. When it is programmed properly, Radix Sort is in fact one of the
fastest sorting algorithms for numbers or strings of letters.
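
To make the example concrete, here is a sketch of an LSD (least-significant-digit-first) Radix Sort
in Java for non-negative integers (the notes give only the pseudocode above; the class and method
names here are illustrative):

import java.util.ArrayList;
import java.util.List;

public class RadixSortDemo {
    // Sort non-negative integers by repeatedly distributing them into ten
    // sublists (one per digit value, 0-9) and gathering the sublists back
    // in order; the distribution is stable because each sublist keeps
    // arrival order.
    public static void radixSort(int[] a, int digits) {
        int divisor = 1;  // selects the current digit: 1, 10, 100, ...
        for (int d = 0; d < digits; d++) {
            List<List<Integer>> sublists = new ArrayList<>();
            for (int i = 0; i < 10; i++) sublists.add(new ArrayList<>());
            for (int x : a) sublists.get((x / divisor) % 10).add(x);
            int pos = 0;
            for (List<Integer> sub : sublists)   // gather from 0 to 9
                for (int x : sub) a[pos++] = x;
            divisor *= 10;
        }
    }

    public static void main(String[] args) {
        int[] a = {493, 812, 715, 710, 195, 437, 582, 340, 385};
        radixSort(a, 3);  // the example numbers all have 3 digits
        for (int x : a) System.out.print(x + " ");
        // prints: 195 340 385 437 493 582 710 715 812
    }
}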

Disadvantages

Still, there are some tradeoffs for Radix Sort that can make it less preferable than
other sorts. The speed of Radix Sort largely depends on the inner basic operations,
and if the operations are not efficient enough, Radix Sort can be slower than
some other algorithms such as Quick Sort and Merge Sort. These operations
include the insert and delete functions of the sublists and the process of isolating
the digit you want. In the example above, the numbers were all of equal length, but
many times, this is not the case. If the numbers are not of the same length, then a
test is needed to check for additional digits that need sorting. This can be one of
the slowest parts of Radix Sort, and it is one of the hardest to make efficient.

Radix Sort can also take up more space than other sorting algorithms, since in
addition to the array that will be sorted, you need to have a sublist for each of the
possible digits or letters. If you are sorting pure English words, you will need at
least 26 different sub lists, and if you are sorting alphanumeric words or sentences,
you will probably need more than 40 sub lists in all!

Since Radix Sort depends on the digits or letters, Radix Sort is also much less
flexible than other sorts. For every different type of data, Radix Sort needs to be
rewritten, and if the sorting order changes, the sort needs to be rewritten again. In
short, Radix Sort takes more time to write, and it is very difficult to write a general
purpose Radix Sort that can handle all kinds of data.

For many programs that need a fast sort, Radix Sort is a good choice. Still, there are faster sorts,
which is one reason why Radix Sort is not used as much as some other sorts.

Insertion Sort
Advantages

• Simple to implement
• Efficient on (quite) small data sets
• Efficient on data sets which are already substantially sorted
• More efficient in practice than most other simple O(n²) algorithms such as selection sort
  or bubble sort: the average time is n²/4 and it is linear in the best case
• Stable (does not change the relative order of elements with equal keys)
• In-place (only requires a constant amount O(1) of extra memory space)
• It is an online algorithm, in that it can sort a list as it receives it.

Comparisons to other sorts

Insertion sort is very similar to bubble sort. In bubble sort, after k passes through the array, the k
largest elements have bubbled to the top. (Or the k smallest elements have bubbled to the bottom,
depending on which way you do it.) In insertion sort, after k passes through the array, you have a
run of k sorted elements at the bottom of the array. Each pass inserts another element into the
sorted run. So with bubble sort, each pass takes less time than the previous one, but with
insertion sort, each pass may take more time than the previous one.

Some divide-and-conquer algorithms such as quicksort and mergesort sort by recursively
dividing the list into smaller sublists which are then sorted. A useful optimization in practice for
these algorithms is to switch to insertion sort for "small enough" sublists on which insertion sort
outperforms the more complex algorithms. The size of list for which insertion sort has the
advantage varies by environment and implementation, but is typically around 8 to 20 elements.
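
As a sketch (the constant and the method name are illustrative, not from any particular library),
the cutover might look like this, reusing the partition and range-based insertionSort methods from
the quick sort section above:

private static final int CUTOFF = 16;  // typical values are around 8 to 20

private static void hybridQuickAux(Comparable[] A, int low, int high) {
    if (high - low < CUTOFF) {
        insertionSort(A, low, high);  // small piece: insertion sort wins
    }
    else {
        int right = partition(A, low, high);
        hybridQuickAux(A, low, right);
        hybridQuickAux(A, right+2, high);
    }
}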

Answers to Self-Study Questions for Sorting

Test Yourself #1

Binary search relies on being able to access the kth item in a sequence of values in O(1) time.
This is possible when the values are stored in an array, but not when they're stored in a linked list.
So binary search using a linked list would usually be much slower than sequential search.

Test Yourself #2

1. The loop can be changed to go from 0 to N-2. Once the first N-1 values are in their final
positions in the array, the Nth value is guaranteed to be in its final position, too.
2. The given code swaps A[minIndex] with A[k] even when minIndex == k. That could be
avoided by testing for minIndex == k. However, unless the array is close to being sorted,
that may not be a good idea; doing the test every time around the outer loop might require
more time than the time wasted by doing a few unnecessary swaps.

Test Yourself #3

1. (a) When the array is already sorted, the inner loop in the insertion sort code never
   executes, so the time is O(N).

   (b) When the array is in reverse sorted order, the inner loop executes the maximum
   possible number of times, so the running time is as bad as possible (and is O(N²)).

2. Using binary search to find the right place to insert the next item would (usually) speed
   up the sort (as we saw in the film Sorting out Sorting), but would not change the
   complexity, which would still be O(N²) in the worst case: even though binary search finds
   the insertion point quickly, in the worst case (when the item needs to be inserted at the
   very beginning of the array each time) it is still necessary to move O(N) values for each
   insertion, for a total of O(N²) moves.

Test Yourself #4
while ((left <= mid) && (right <= high)) {
    // choose the smaller of the two values "pointed to" by left, right
    // copy that value into tmp[pos]
    // increment either left or right as appropriate
    // increment pos
    if (A[left].compareTo(A[right]) < 0) {
        tmp[pos] = A[left];
        left++;
    }
    else {
        tmp[pos] = A[right];
        right++;
    }
    pos++;
}
// here when one of the two sorted halves has "run out" of values, but
// there are still some in the other half; copy all the remaining values
// to tmp
// Note: only 1 of the next 2 loops will actually execute
while (left <= mid) {
    tmp[pos] = A[left];
    left++;
    pos++;
}
while (right <= high) {
    tmp[pos] = A[right];
    right++;
    pos++;
}

Test Yourself #5

When the array is already sorted, merge sort still takes O(N log N) time, because it still makes
the same recursive calls, and still goes through both half-size arrays to merge the values.

Test Yourself #6

When the array is already sorted, quick sort takes O(N log N) time, assuming that the "median-
of-three" method is used to choose the pivot. This is because the pivot will always be the median
value, so the two recursive calls will be made using arrays of half size, and so the calls will form
a balanced binary tree as illustrated in the notes.
