Unit 1
Introduction:
Let us discuss various case studies to develop a deep understanding of the term Algorithm:
(i) Dictionary: - Find a word in a dictionary. We may apply the following approaches: -
Approach 1: - We open the dictionary (shown in figure 1.1) at the very first page and turn the
pages one by one until we find the desired word. This is the linear searching approach. In general,
searching for a word in a dictionary through this approach may take a long time: in the worst case
the word is on the last page, so we have to turn n-1 pages to find it, and the number of page turns
is approximately equal to the total number of pages.
Approach 2: - Since the dictionary has its words arranged in order, we may open it at the middle
point and compare the desired word with the words on the opened page. If the first letter of the
desired word comes earlier in alphabetical order, we search in the first half; otherwise, we search
in the second half of the dictionary. So the problem size is halved at each step. Doing this
recursively, we can easily find the word, and the number of comparisons will be far fewer than in
Approach 1.
An algorithm is any well-defined computational procedure that takes some value, or set of values,
as input and produces some value, or set of values, as output.
OR
An algorithm is thus a sequence of computational steps that transform the input into the output.
NOTE
An algorithm is a step-by-step procedure to solve a problem. For example, in the above
diagram, the preparation of pizza takes a finite set of steps for completion. In the same manner,
any real-world problem can be solved in a finite number of steps.
In other words, a set of rules/instructions that specify how a task is to be executed step-
by-step in order to achieve the desired results is referred to as an algorithm.
For executing these steps, some cost is incurred, which we calculate in terms of Time and
Space.
A simple definition in terms of the input-output relation is given below: here the inputs are
converted to the final output using some steps, known as an algorithm.
Example 1: Let's understand the example of a person brushing his teeth in the context of an
algorithm. In this example, we have written the algorithm in natural language.
Step 1: Take the brush.
Step 2: Apply paste on it
Step 3: Start Brushing
Step 4: Clean
Step 5: Wash
Step 6: Stop
1.3. How to write an algorithm:
After understanding the meaning of an algorithm, we will now discuss various approaches to
writing an algorithm, by considering an example: the addition of two numbers.
1. Natural Language: We can write the algorithm in natural language, but the problem with this
is that natural language can be ambiguous. For example, consider the statement "Sarah gave a
bath to her dog wearing a pink t-shirt". Ambiguity: is Sarah or the dog wearing the pink t-shirt?
Method 1. Algorithm in Natural language
Step 1: Start
Step 2: Declare variables a, b and sum.
Step 3: Read values for a and b.
Step 4: Add a and b and assign the result to a variable sum.
Step 5: Display sum
Step 6: Stop
2. Flow chart: We can also express an algorithm using flowcharts, but the problem is that
whenever the algorithm changes, we have to modify the flowchart, which may require learning
specialized drawing tools.
Method 2. Algorithm in terms of flowchart.
3. Programming Language: We can write an algorithm directly in a programming language, but
the problem is that we then need to know all the details of the programming language, the
compiler, and the operating system.

#include <stdio.h>

int main()
{
    int a, b, sum;
    printf("Enter two numbers to add\n");
    scanf("%d%d", &a, &b);
    sum = a + b;
    printf("Sum of the numbers = %d\n", sum);
    return 0;
}
4. Pseudo code: This method is the most widely applicable one. It is very simple, as it is free of strict syntax.
After studying the definition of an algorithm as well as its methods of writing, let's focus on the
important characteristics of an algorithm.
Unambiguous − The Algorithm should be written in such a manner that its intent should be clear.
Input – There should be an initial condition that determines the starting position.
Output − There should exist a point where we can reach the end state that is the desired goal.
Feasibility – By feasibility, we mean that it should be solvable within a given set of resources.
Independent − An algorithm should only describe the step-by-step procedure; it need not
discuss programming techniques.
Solving Everyday Problems: We apply various algorithms in our daily life, for example
brushing our teeth, making sandwiches, etc.
Recommendation System: Recommendation algorithms for Facebook and Google search,
and searching large Aadhaar-based data sets, come in this category.
Finding Route / Shortest Path: To move from one place to another in minimum time,
we use a shortest path algorithm, as in GPS navigation.
Recognizing Genetic Matching: To recognize the structure of human DNA, we need to
identify the 100,000 genes of human DNA and determine the sequences of the 3 billion
chemical base pairs that make it up.
E-Commerce: Day-to-day electronic commerce activities depend hugely on algorithms.
When we shop online we provide details such as bank and electronic card information,
and good algorithms are also needed to predict day-to-day changes in users' choices
and to provide them products/services based on those choices.
The factors that we need to consider while determining the quality of a good algorithm are:
Time: The amount of time an algorithm takes to run, as a function of the length of the input,
is known as its time requirement. The length of the input indicates the number of operations
to be done by the algorithm.
Memory: The amount of memory utilised by the algorithm (including the inputs) to execute
and create the output is referred to as its memory requirement.
Accuracy: There can be more than one solution to a given problem, but the one which is
the most optimal is termed accurate.
Again, the quality of the procedure for preparing pizza can be assessed in the following
manner:
The time it takes to prepare pizza for one person should lie between that of a first-time pizza
maker (worst-case time) and that of a highly skilled chef (best-case time).
The amount of space necessary to pack a pizza in a box should be optimised such that it fits
in the box with minimum wastage of space.
Pizza can be prepared in a variety of ways. However, the method that optimises the cook's
motions in the kitchen, using fewer ingredients to produce the appropriate level of quality
pizza with the least amount of money and time, will be the most accurate.
ii) Variable Part: - The variable part consists of the space needed by component
variables whose size depends on the particular problem. This part includes the
space needed by referenced variables and the recursion stack space.
Example:

Algorithm fact(int n)   // memory taken by fact depends on the value of n
{
    if (n == 0)
        return 1;
    return n * fact(n - 1);
}
The time complexity T(P) of a program P is the sum of its compile time and its run
(execution) time. Compile time does not depend on the instance characteristics. Also, we
assume that a compiled program will be run several times without recompilation. That is
why we focus only on the run (execution) time.
Example 1: Given the set of instructions (pseudo code), compute the time complexity.

Statement                      s/e   Frequency   Total
Algorithm sum(a,n)              0        1          0
{                               0        1          0
    s = 0.0;                    0        1          0
    for i = 1 to n do           1       n+1        n+1
        s = s + a[i];           1        n          n
    return s;                   0        1          0
}                               0        1          0

Time Complexity: the for loop executes n times (i = 1 … n), so the total cost of the code is
2n + 1, which is order of (n).
Space Complexity: it takes storage for n, s and i — a fixed number of memory locations, no
matter whether the value of n is 1 or 100 or any other value. Since the space is fixed for any
value of n, the space complexity of the above code is order of (1). Order of (1) is used for
constant complexity.
Method 2:

int sum = 0;
int fun1(int n)
{
    if (n == 1)
        return 1;
    sum = n + fun1(n - 1);
    return sum;
}

Time complexity: the time complexity of the above code is equal to the number of function
calls, which is n for input n — order of (n).
Space Complexity: for the above function, (n + 1) memory locations are required (n for the
function calls and the temporary data generated by each call, and 1 for the value of n). So we
can say that the complexity of the above code is order of (n + 1). In complexity analysis we
ignore constant terms, or we take only the highest-order term.
Space complexity = order of (n).
Method 3:

int fun2(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}

Time complexity: the for loop executes n times (from 1 to n), so the time complexity is order of (n).
Space complexity: space is required only for the values of n, sum and i (constant space), so the
space complexity is order of (1).
Method 4:

int fun3(int n)
{
    return n * (n + 1) / 2;
}

Time complexity: this code executes only one statement (return n * (n + 1) / 2) — no loop, no
function call. No matter what the value of n is, only one statement executes every time, so the
time complexity is order of (1) (constant time execution).
Space complexity: it takes space only for the value of n (constant), so the space complexity of the
above code is order of (1).
Method 5:

int fun4(int arr[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum = sum + arr[i];
    return sum;
}

Time complexity: the for loop executes n times (i = 0 … n-1), so the time complexity is order of (n).
Space complexity: space is required for the value of n, an integer array of size n, sum and i. Since
the array size is variable, the space required is also variable: the space complexity is order of (n).
For some codes, the running time grows as a logarithmic function of the input. Consider the
following loop (assume that n = 25):

for (i = 1; i <= n; i = 2 * i)
{
    printf("%d", i);
}

The loop will execute with the following values of i:

i:   1   2   4   8   16   32 (loop exits)

Now we can see that i increases as 2^1, 2^2, 2^3, … and the loop runs as long as 2^k ≤ n,
where k is the number of iterations:
2^k ≤ n
Taking log on both sides:
k log 2 ≤ log n
k ≤ (log n) / (log 2)
k ≤ log₂ n
Then the complexity of the above example is order of (log₂ n).
c. Linear Function f(n) = n
The linear function grows at a constant rate: it is a function whose graph can be plotted as a
straight line, so doubling the input doubles the running time.
f. Cubic Function and Other Polynomials: This function grows faster than its counterparts, the
linear and quadratic functions. If we double the size of the input, then the function will grow
eight times; hence the rate of growth is cubic.

for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        for (k = 0; k < n; k++)
        {
            printf("%d", k);
        }
    }
}

There are three nested loops. For each value of i (0 to n-1), the second loop runs j = 0, 1, 2, …,
n-1, and for each value of j the third loop runs k = 0, 1, 2, …, n-1. So the first loop executes n
times, the second loop executes n times for each i, and the third loop executes n times for each j.
The total execution count is n * n * n, so the complexity is order of (n³).

g. Factorial Function f(n) = n!
The running time of this function is the worst among all discussed so far. If we increase the
value of n, the function grows as a factorial.
Example: permutations of n elements.
n = 1: n! = 1! = 1; Permutation(a) = {a}
n = 2: n! = 2! = 2·1 = 2; Permutation(ab) = {ab, ba}
n = 3: n! = 3! = 3·2·1 = 6; Permutation(abc) = {abc, acb, bac, bca, cab, cba}
The growth of functions shown in figure 1.6 provides an understanding of an algorithm's
efficiency and performance. The aim is to predict the performance of different algorithms and
their behaviour with changes in input size.
Figure 1.6: Representing the growth of various mathematical functions over input size n
Table 3: With the help of the above diagram, we can see the growth of functions at some values of n.

Function        Remark
Constant (C)    Always order of 1, no matter whether the value of C is 1, 10, 100, 1000 or any other number.
Let's understand the importance for performing the analysis of algorithm with the help of an
example:
Example 1: (Dictionary)
Let us assume that we have a dictionary of 1000 pages and we want to search for the word
"Space" in the dictionary. This word is on page number 800.
First method: Turn the pages one by one; after turning 799 pages we reach the word "Space".
In this method we made 799 comparisons.
Second method: Open the dictionary at the middle (page 500). Now we know that the word
"Space" is in the second half (pages 501 to 1000); then again divide the pages (501–1000) at the
middle (750), and search the word between pages 750 and 1000. Repeat this step until page 800
is reached.
In the accompanying figure, the red line represents the page number and the number written
with the line represents the comparison. With the help of this example, we can say that we need
only 6 comparisons to reach page number 800, whereas in the previous method 799 comparisons
were required.
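The halving idea above is exactly binary search. As an illustration (our own sketch, not part of the original text), here is a C version over a sorted array of numbers:

#include <stdio.h>

/* Binary search over a sorted array: returns the index of key, or -1.
   Each probe halves the search space, so at most about log2(n)
   comparisons are needed -- the "open in the middle" approach. */
int binary_search(int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of (low+high)/2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;   /* key lies in the upper half */
        else
            high = mid - 1;  /* key lies in the lower half */
    }
    return -1;               /* not present */
}

int main(void)
{
    int pages[] = {100, 250, 400, 500, 650, 800, 900, 1000};
    int n = sizeof(pages) / sizeof(pages[0]);
    printf("Found at index %d\n", binary_search(pages, n, 800));
    return 0;
}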
Three cases to analyse any problem:
1. Best Case
2. Average Case
3. Worst Case
Problem 1: Consider a given array of random elements; we have to categorize the best, worst and
average cases. In this example, searching for an element is done.
1. Best Case:
Search for element 5 in the given array, checking the elements one by one.
Start with index 0; the element is found at the first attempt.
What could be better than finding the element at the first attempt? This is the best condition for
searching any element.
2. Average Case:
Search for element 15 in the given array, checking the elements one by one.
Start with index 0: the search at the 0th index fails.
The search at the 1st index fails.
Repeat this step until the element is found.
Element 15 is found at the 4th index.
Size of array = 10 (indices 0 to 9)
Number of comparisons required = 5 (indices 0, 1, 2, 3, 4)
Almost half of the array must be compared to find the element (n/2, where n is the size of the array).
This is the average case, where we find the element after about n/2 comparisons.
3. Worst Case:
Search for element 4 in the given array, checking the elements one by one.
Start with index 0: the search at the 0th index fails.
The search at the 1st index fails.
Repeat this step until the element is found.
Element 4 is found at the last index.
Size of array = 10 (indices 0 to 9)
Number of comparisons required = 10 (indices 0, 1, 2, …, 9)
In this case we have traversed the complete array, and the element was found at the last index.
The maximum number of comparisons that can be made is the size of the array; this is the worst case.
Just as we have the millimetre, centimetre, metre and kilometre to measure distance, we want
units for measuring complexity. Asymptotic notations are the units to measure complexity.
Asymptotic notation is a way of comparing functions that ignores constant factors and small input
sizes.
Let's understand it with the searching techniques. We need to focus on how much time an algorithm
takes as the input size changes. As the input size increases, we can easily find that Binary Search
will provide better results than Linear Search. So here, the size of the input matters when finding
the number of comparisons required in a worst-case scenario.
Analogy: - GPS: - As shown in figure 1.7 below, if GPS only knew about highways or the interstate
highway system, and not about every small or little road, then it would not be able to
find all routes as accurately as it does nowadays. Hence, we can see that the running time of
the function is proportional to the size of the input.
Figure: 1.7 - Google Map
The other consideration is how fast a function grows with an increase in the size of the input. We
need to keep the more essential terms and drop the less important ones.
"There exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0." f(n) is thus O(g(n)).
O(g(n)) = { f(n): there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
The above expression can be read as: a function f(n) belongs to the set O(g(n)) if there exists a
positive constant c such that f(n) lies between 0 and c·g(n) for sufficiently large n.
For any value of n, the running time of an algorithm does not cross the time provided by O(g(n)).
Since it gives the worst-case running time of an algorithm, it is widely used to analyse algorithms,
as we are always interested in the worst-case scenario.
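As a quick worked instance of this definition (our own example, not from the text): to show that 3n + 2 = O(n), note that 3n + 2 ≤ 3n + 2n = 5n for all n ≥ 1. So, choosing c = 5 and n0 = 1, we have 0 ≤ 3n + 2 ≤ c·n for all n ≥ n0, and hence 3n + 2 = O(n).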
For any value of n, the minimum time required by the algorithm is given by Omega, Ω(g(n)).
Analysis: - Consider a loop of the shape sketched below. Its runtime is O(n), because the function
f(i) is called at most n times; the upper bound is n, while the lower bound can be Ω(1) or Ω(log n),
depending on the value of foo(i).
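The original code for this analysis is not shown in the text, so the following is only a hypothetical sketch of such a loop; the names foo and f come from the analysis above, and their bodies are placeholders:

#include <stdio.h>

/* Placeholder stand-ins for the text's foo(i) and f(i). */
static int foo(int i) { return i > 100; }   /* may allow an early exit */
static void f(int i)  { printf("%d ", i); }

void scan(int n)
{
    for (int i = 1; i <= n; i++) {
        if (foo(i))    /* if this fires early, far fewer iterations run */
            break;
        f(i);          /* called at most n times -> upper bound O(n) */
    }
}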
Example: Prove that n³ = Ω(n²).
We need constants C and n0 with 0 ≤ C·n² ≤ n³ for all n ≥ n0.
For C = 1 and n ≥ 1: n² ≤ n³.
Check at n = 2: 2² ≤ 2³, i.e. 4 < 8 (condition is true).
Hence, n³ = Ω(n²), ∀ n ≥ 1.
2.6.3. Theta notation / Tightly Bound:
Theta notation encloses the function from above and below. Since it represents the upper and
the lower bound of the running time of an algorithm, it is used for analysing the average-case
complexity of an algorithm.
"There exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0." f(n) is thus Θ(g(n)).
For a function g(n), Θ(g(n)) is given by the relation:
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
When we say that a function is Θ(n), then we are saying that once n gets large enough, the running
time is at least c1·n and at most c2·n.
For example, n² + 3n + 7 = Θ(n²), ∀ n ≥ 5.
Example 2: Prove that n²/2 − 2n = Θ(n²).
We must find positive constants C1, C2 and n0 such that
C1·n² ≤ n²/2 − 2n ≤ C2·n² for all n ≥ n0.
Case 1: Solving the inequality n²/2 − 2n ≤ C2·n². Dividing throughout by n²:
1/2 − 2/n ≤ C2
For n → ∞, or very large values of n, this requires C2 ≥ 1/2.
Case 2: Solving the inequality C1·n² ≤ n²/2 − 2n. Dividing throughout by n²:
C1 ≤ 1/2 − 2/n
For C1 = 1/4 and C2 = 1/2 and n = 8:
(1/4)·8² ≤ 8²/2 − 2·8 ≤ (1/2)·8²
or, 16 ≤ 32 − 16 ≤ 32
or, 16 ≤ 16 ≤ 32 (true condition)
Hence, proved.
Let's understand the concepts discussed so far with the help of some examples.
Example 1: Compute the running time complexity for the given pseudo code:

Statement                      s/e   Frequency   Total
Algorithm sum(a,n)              0        1          0
{                               0        1          0
    s = 0.0;                    0        1          0
    for i = 1 to n do           1       n+1        n+1
        s = s + a[i];           1        n          n
    return s;                   0        1          0
}                               0        1          0

Time complexity (time taken by the above code) = 0 + 0 + 0 + (n+1) + n + 0 + 0 = 2n + 1 = O(n)
Example 2: Compute the running time complexity for the given pseudo code:

Statement                          s/e   Frequency   Total
Algorithm Add(a,b,c,m,n)            0        1          0
{                                   0        1          0
    c[i, j] = a[i, j] + b[i, j]     1       mn         mn
}                                   0        1          0

The assignment inside the (implicit) nested loops executes mn times, so the time complexity is O(mn).
Example 3: Compute the running time complexity for the given code:

#include <stdio.h>
int main(void)
{
    int n = 100, sum = 0;               /* n is the input size */
    for (int i = 1; i <= n; i = i + 2)  /* condition checked about n/2 + 1 times (n/2 true, 1 false) */
    {
        sum = sum + i;                  /* executes about n/2 times (only for true conditions) */
    }
    return 0;
}

The loop body executes about n/2 times; ignoring the constant factor, the complexity of the above code is O(n).
2.1 Definition of Recursion:
When a function calls itself, it is called recursion.
A recursive problem is generally divided into smaller sub-problems; we then work on those smaller
problems to get the result of the original problem.
It is very important to keep in mind that the recursive solution must terminate or must have
some base condition to end the recursive call.
But if we look more closely at the above solution, we can find that
10! = 10 * 9!, or
10! = 10 * 9 * 8!, and in general
n! = 1,             if n = 0
n! = n * (n − 1)!,  if n > 0
That is how we can write down the solution of a problem in a recursive manner.
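A direct C translation of this recursive definition (a minimal sketch of our own, mirroring the fact algorithm shown earlier):

#include <stdio.h>

/* n! = 1 if n = 0, and n! = n * (n-1)! if n > 0 */
unsigned long long fact(unsigned int n)
{
    if (n == 0)                /* base condition ends the recursion */
        return 1;
    return n * fact(n - 1);    /* smaller sub-problem of size n-1 */
}

int main(void)
{
    printf("10! = %llu\n", fact(10));   /* prints 3628800 */
    return 0;
}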
- Iteration Method
- Recursion Tree Method
- Master Method
This method iterates (expands) the recurrence and expresses the problem as a summation of
terms from the given n down to the base case (base condition).
Let’s take some example to understand how we can solve the problems using this method:
Example 1:
T(n) = 1 + T(n − 1), if n > 1
     = 1,            if n = 1
Solution:
T(n) = 1 + T(n − 1)
     = 1 + 1 + T(n − 2) = 2 + T(n − 2)
     = 2 + 1 + T(n − 3) = 3 + T(n − 3)
…
Now we get a pattern: after k expansions, T(n) = k + T(n − k). Taking k = n − 1:
T(n) = (n − 1) + T(1)
     = (n − 1) + 1
     = n
Therefore, the solution for the given recurrence is O(n).
Example 2:
T(n) = n + T(n − 1), for n > 1
     = 1,            for n = 1
Solution:
T(n) = n + T(n − 1)
     = n + (n − 1) + T(n − 2)
     = n + (n − 1) + (n − 2) + T(n − 3)
…
Now we get a pattern:
T(n) = n + (n − 1) + (n − 2) + … + 2 + T(1)
     = n + (n − 1) + (n − 2) + … + 2 + 1
     = n(n + 1) / 2
     = O(n²)
Therefore, the solution for the given recurrence is O(n²).
Example 3:
T(n) = T(n/2) + c, for n > 1
     = 1,          for n = 1
Solution:
T(n) = T(n/2) + c
     = T(n/4) + 2c
     = T(n/8) + 3c
…
Now we get a pattern: after k expansions, T(n) = T(n/2^k) + kc. The expansion stops when
n/2^k = 1, i.e. n = 2^k:
T(n) = T(1) + kc
     = 1 + kc
But we have to compute the solution in terms of n, so let's take log on both sides of n = 2^k:
log n = k log 2
k = (log n) / (log 2) = log₂ n
Therefore, the solution for the given recurrence is O(log₂ n).
Example 4:
T(n) = 2T(n − 1), if n > 1
     = 1,         if n = 1
Solution:
T(n) = 2T(n − 1)
     = 2[2T(n − 2)] = 2²T(n − 2)
     = 2²[2T(n − 3)] = 2³T(n − 3)
So we get a pattern: after k expansions, T(n) = 2^k T(n − k). Taking k = n − 1:
T(n) = 2^(n−1) T(1)
     = 2^(n−1)
Therefore, the solution for the given recurrence is O(2^n).
Sometimes there are recurrence problems for which the substitution and iteration methods do
not help to find the answer easily. In that case, we have to change the variable to make the
recurrence easy, so that it can be efficiently solved. Sometimes a little algebraic manipulation
can make an unknown recurrence similar to one we have seen before.
Let's see these types of problems and check how we can solve such recurrences:
Example 1: Consider the recurrence T(n) = 2T(√n) + log n. Putting n = 2^m (so that √n = 2^(m/2)) gives
T(2^m) = 2T(2^(m/2)) + m
Letting S(m) = T(2^m), this becomes S(m) = 2S(m/2) + m. By the master theorem, S(m) = O(m log m).
Substituting back m = log n:
Hence, the solution of the given recurrence is O(log n log log n).
Example 2: Consider the recurrence T(n) = 2T(√n) + 1. Putting n = 2^m gives
T(2^m) = 2T(2^(m/2)) + 1
Letting S(m) = T(2^m), this becomes S(m) = 2S(m/2) + 1. By the master theorem,
S(m) = O(m)
Now, substituting the value of m in terms of n (m = log n), we get T(n) = O(log n).
NOTE: From the above examples, we have seen how easily we can solve recurrences which seem
too difficult at first glance because they contain terms with roots.
Master Method:
There are various methods used for solving recurrences. Every method has some advantages
and disadvantages associated with it. One of the most important methods used for solving the
recurrences is the Master method. The Master method provides the running time complexity for
recurrences of the type that arises from the divide and conquer approach.
Solution: In the above recurrence, given a problem of size n, we divide it into a sub-problems,
each of size n/b. This can be interpreted as a recursion tree, with each node in the tree an
instance of one of the recursive calls, and its child nodes the subsequent calls. Each call spawns
sub-functions of size T(n/b); the cost of dividing the function into sub-functions is computed
separately. f(n) represents the cost incurred in dividing the given problem into sub-problems
and then merging the results back.
Recurrence for the above mentioned algorithm will be represented as T (n) = a T (n/b) + f (n).
Step-01: Draw the recursion tree; determine the cost of each node and then the cost at each level.
Step-02: Calculate the total number of levels in the recursion tree, then the number of nodes at
the last level, and further the cost of the last level.
Step-03: The last step is to add the costs obtained at each level of the recursion tree to get the
running time complexity of the recurrence.
Similarly, we can calculate the size of the sub-problem at level i:
Size of sub-problem at level i = n/2^i
Consider any level x (the last level) at which the size of the sub-problem becomes 1. Then:
n/2^x = 1, so 2^x = n
Taking log on both sides, we get: x log 2 = log n, so x = log₂ n
∴ Hence, the total number of levels in the recursion tree = log₂ n + 1
Step-02(c): Calculating the number of nodes in the last level:
Solution-
Step-01: Formulate the recursion tree for the given recurrence, based on the function given and the
cost at each level. The given recurrence works as follows:
A problem of size n is divided into two sub-problems of size n/5 and 4n/5.
Further, on the left side, the sub-problem of size n/5 gets divided into 2 sub-problems of size n/5²
and 4n/5².
On the right side, the sub-problem of size 4n/5 gets divided into 2 sub-problems of size 4n/5² and
4²n/5², and so on.
At the last level, the size of the sub-problems reduces to 1. The T(n)
function is elaborated as follows:
Expansion of T (n) Function to the cost function is as follows:
First, Determine the cost of dividing a problem of size n into its 2 sub-problems of size n/5 and 4n/5
and then merging the solutions back to the size of the problem as n.
Determining the cost of dividing a problem of size n/5 into its 2 sub-problems of size n/25 and 4n/25
and then merging the solutions back to the size of the problem as n/5
It determines the cost of dividing a problem of size 4n/5 into its 2 sub-problems of size 4n/25 and
16n/25 and then merging the solutions back to the size of the problem as 4n/5 and so on.
This is illustrated through the following recursion tree where each node represents the cost of the
corresponding sub-problem-
Cost of level-0 = n
Cost of level-1 = n/5 + 4n/5 = n
Cost of level-2 = n/5² + 4n/5² + 4n/5² + 4²n/5² = n
Step-02(b): Determine the total number of levels in the recursion tree. We consider the rightmost
subtree, as it goes down to the deepest level; its height is log5/4 n (log to base 5/4). Summing the
per-level cost of n over all levels:
T(n) = n·log5/4 n + θ(n^(log5/4 2))
     = θ(n·log5/4 n)
Problem-03: Solve the given recurrence relation using the recursion tree method-
Problem 04: Solve the given recurrence using recursion tree method:
T(n) = T(n/4) + T(n/2) + cn²
For calculating the value of the function T(n), we have to calculate the sum of the costs of the nodes
at every level. After calculating the cost of all the levels, we get the series
T(n) = c(n² + 5n²/16 + 25n²/256 + …)
The series is a geometric progression with a ratio of 5/16.
To get an upper bound, we can apply the formula for the sum of an infinite series. We get the sum as
cn²/(1 − 5/16), which is O(n²).
(b) Stable sorting and Unstable sorting – In a stable sorting algorithm, the order of two objects with equal keys in the
output remains the same after sorting as they appear in the input array to be sorted, e.g. Merge Sort, Insertion Sort,
Bubble Sort, and Binary Tree Sort. If a sorting algorithm, after sorting the contents, changes the sequence in which
similar content appears, it is called unstable sorting, e.g. Quick Sort, Heap Sort, and Selection Sort.
(c) Internal sorting and External sorting- If the input data is such that it can be adjusted in the main memory at
once or, when all data is placed in-memory it is called internal sorting e.g. Bubble Sort, Insertion Sort, Quick
Sort. If the input data is such that it cannot be adjusted in the memory entirely at once, it needs to be stored in
a hard disk, floppy disk, or any other storage device. This is called external sorting. External sorting algorithms
generally fall into two types, distribution sorting, which resembles quick sort, and external merge sort, which
resembles merge sort. The latter typically uses a hybrid sort-merge strategy
(d) Adaptive Sorting and Non-Adaptive Sorting – A sorting algorithm is said to be adaptive if it takes advantage
of already 'sorted' elements in the list that is to be sorted. That is, while sorting, if the source list has some
elements already sorted, an adaptive algorithm will take this into account and will try not to re-order them. A non-
adaptive algorithm is one which does not take into account the elements which are already sorted: it forces
every single element to be re-ordered to confirm its place in the sorted order.
(e) Comparison based and non-comparison based - Algorithms, which sort a list, or an array, based only on the
comparison of pairs of numbers, and not any other information (like what is being sorted, the frequency of
numbers etc.), fall under this category. Elements of an array are compared with each other to find the sorted
array. e.g. Bubble Sort, Selection Sort, Quick Sort, Merge Sort, Insertion Sort. In non-comparison based sorting,
elements of array are not compared with each other to find the sorted array. e.g. Radix Sort, Bucket Sort,
Counting Sort.
Here, in the subsequent portion of this chapter, we will discuss the following sorting techniques and their
variants.
1. Bubble Sort
2. Selection Sort
3. Insertion Sort
4. Heap Sort
5. Merge Sort
6. Quick Sort
7. Counting Sort
8. Radix Sort
9. Bucket Sort
The Sorting Techniques can broadly be categorised (based on time complexity) into
- Order of n2 (Bubble Sort, Selection Sort, Insertion Sorts),
- Order of nlog n (Heap Sort, Quick Sort, Merge Sort)
- Order of n (Counting Sort, Bucket Sort and Radix Sort)
This chapter discusses various cases related to these sorting techniques and strategies to improve their running time.
Consider a situation when we pour some detergent into water and shake it: bubbles are formed,
of different sizes, and the larger bubbles tend to rise to the water surface faster than the
smaller ones.
Bubble sort, which is also known as sinking sort, is a simple Brute force Sorting technique that repeatedly
iterates through the item list to be sorted. The process compares each pair of adjacent items and swaps them if
they are in the wrong order. The algorithm starts at the beginning of the data set. It compares the first two
elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of
adjacent elements to the end of the data set. It then starts again with the first two elements, repeating with
one less comparison than the last pass. Bubble sort can be used to sort a small number of items (where its
asymptotic inefficiency is not a concern).
In the Bubble Sort algorithm, the largest element gets fixed in the first iteration, followed by the second largest, and
so on. In the process, the 1st element is compared with the 2nd, and if the previous element is larger than the next one,
the elements are exchanged with each other. Then the 2nd and 3rd elements are compared, and if the second is larger than
the 3rd, they are exchanged. The process repeats until finally the (n-1)th element is compared with the nth and, if the
previous one is larger than the next one, they are exchanged. This completes the first iteration. As a result, the largest
element occupies its appropriate position in the sorted array, i.e. the last position.
In the next iteration, the same process is repeated but one less comparison is performed than previous iteration
(as largest number is already in place). As a result of second iteration, second largest number reaches to its
appropriate position in the sorted array.
The similar iterations are repeated n-1 times. The result is the sorted array of n elements.
In (unoptimized) bubble sort, the time complexity even for the case when the elements are already sorted is θ(n²),
because a fixed number of iterations has to be performed.
There are two scenarios possible –
Case 1 - When elements are already sorted: Bubble Sort does not perform any swapping.
Case 2 - When elements become sorted after k-1 passes: in the kth pass no swapping occurs.
A small change in the Bubble Sort algorithm above makes it optimized: if no swap happens in some iteration, it means
the elements are sorted and we should stop the comparisons. This can be done by using a flag variable.
ALGORITHM Bubble Sort Optimized (A[ ], N)
BEGIN:
    FOR i = 1 TO N-1 DO
        FLAG = 1
        FOR j = 1 TO N-i DO
            IF A[j] > A[j+1] THEN
                Exchange(A[j], A[j+1])
                FLAG = 0
        IF FLAG == 1 THEN
            RETURN
END;
Optimized Bubble Sort Analysis
If the elements are already sorted, it takes Ω(N) time, because after the first pass the flag remains unchanged,
meaning that no swapping occurred, and subsequent passes are not performed. A total of N-1 comparisons are
performed in the first pass, and that is all.
If the elements become sorted after k-1 passes, the kth pass finds no exchanges; it takes θ(N·k) effort.
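For reference, a C rendering of the optimized pseudocode above (a sketch using 0-based array indices):

/* Optimized bubble sort: the flag detects a pass with no swaps,
   so already-sorted input finishes after a single pass. */
void bubble_sort_opt(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - 1 - i; j++) {
            if (a[j] > a[j + 1]) {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped)    /* no exchange in this pass: array is sorted */
            return;
    }
}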
Optimized Bubble Sort (First Pass)
2.2.3. Sort the link list using optimized Bubble Sort (case study)
Normally we use sorting algorithms for array because it is difficult to convert array algorithms into Linked List.
There are two reasons for this -
1-Linked List cannot be traversed back.
2- Elements cannot be accessed directly (traversal is required to reach to the element).
Bubble Sort Algorithm can be easily modified for Linked List due to two reasons -
1 - In Bubble Sort no random access needed
2 - In Bubble Sort move only forward direction.
ALGORITHM Bubble Sort Link List (START)
BEGIN:
    Q = NULL
    FLAG = TRUE
    WHILE FLAG DO
        FLAG = FALSE
        T = START
        WHILE T->Next != Q DO
            IF T->Info > T->Next->Info THEN
                Exchange(T->Info, T->Next->Info)
                FLAG = TRUE
            T = T->Next
        Q = T
END;
Algorithm Complexity
If the elements are already sorted, it takes Ω(N) time, because after the first pass the flag remains unchanged,
meaning that no swapping occurred, and subsequent passes are not performed. A total of N-1 comparisons are
performed in the first pass, and that is all.
If the elements become sorted after k-1 passes, the kth pass finds no exchanges; it takes θ(N·k) effort.
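A C sketch of this linked-list variant (the node layout is assumed; values are exchanged rather than relinking nodes):

struct Node { int info; struct Node *next; };

/* Bubble sort on a singly linked list, mirroring the pseudocode above:
   forward traversal only, with adjacent info fields exchanged when
   they are out of order. */
void bubble_sort_list(struct Node *start)
{
    struct Node *q = NULL;           /* boundary of the sorted tail */
    int swapped = 1;
    while (swapped) {
        swapped = 0;
        struct Node *t = start;
        while (t != NULL && t->next != q) {
            if (t->info > t->next->info) {
                int tmp = t->info;   /* exchange the values, not nodes */
                t->info = t->next->info;
                t->next->info = tmp;
                swapped = 1;
            }
            t = t->next;
        }
        q = t;                       /* largest element now sits at t */
    }
}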
Selection sort is a variation of Bubble Sort, because in selection sort swapping occurs only once
per pass.
In every pass, we choose the largest or smallest element and swap it with the last or first element. If the smallest
element is chosen, the position of the first element gets fixed. The second pass starts with the second element:
the smallest of the remaining N-1 elements is found and exchanged with the second element. In the third
pass, the smallest element of the remaining N-2 elements (3rd to Nth) is found and exchanged with the third
element, and so on. The same is performed for N-1 passes.
The number of comparisons in this algorithm is just the same as in Bubble Sort, but the number of swaps is at
most N-1 (as compared to N·(N-1)/2 swaps in Bubble Sort in the worst case).
The First Pass
In the first pass:
    Min = 1
    FOR j = 2 TO N DO
        IF A[j] < A[Min] THEN
            Min = j
    Exchange(A[1], A[Min])
In total, N-1 passes of a similar nature are performed.
As selection sort performs only O(N) swaps in the worst case, it is best suited where we need a minimum number
of writes to disk. If the write operation is costly, then selection sort is the obvious choice. In terms of write
operations, the best known algorithm till now is cycle sort, but cycle sort is an unstable algorithm.
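Extending the single pass above to all N-1 passes gives the full algorithm; a C sketch (0-based indices):

/* Selection sort: pass i finds the minimum of a[i..n-1] and
   performs at most one swap, fixing position i. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;             /* remember smallest so far */
        if (min != i) {              /* at most one swap per pass */
            int t = a[i]; a[i] = a[min]; a[min] = t;
        }
    }
}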
2.4. Insertion Sort
Consider a situation where playing cards are lying on the floor in the arbitrary manner. In case we want these
cards to be sorted, we can choose one of the cards and place that in hand. Every time we pick a card from pile,
we can insert that at the right place in the hand. This way we will have sorted cards in the hand and a card
arbitrarily chosen from lot will be inserted at the right place in the cards in hand.
A, 2, 3, 4, 5, 6, 7 | K, 10, J, 8, 9, Q
Red colored card set is sorted, Black Colored card set unsorted. Try inserting a card from unsorted set in the
sorted set.
A, 2, 3, 4, 5, 6, 7, 8 | K, 10, J, 9, Q
Red colored card set is sorted, Black Colored card set unsorted. Try inserting a card from unsorted set in the
sorted set.
A, 2, 3, 4, 5, 6, 7, 8, 9 | K, 10, J, Q
Red colored card set is sorted, Black Colored card set unsorted. Try inserting a card from unsorted set in the
sorted set.
The process like this continues and finally we can have all the cards sorted in the hand.
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists.
It considers two parts in the same array: sorted and unsorted. To start with, only one element is taken in the
sorted part (the first element) and N-1 elements are in the unsorted part (the 2nd to Nth elements). It works by taking
elements from the unsorted part one by one and inserting them at their correct position in the sorted part. See the
example below, where we have taken a small set of numbers: 23, 1, 10, 5, 2. In the first pass we
consider the sorted part to contain only 23 and the unsorted part to contain 1, 10, 5, 2. A number (1) is picked
from the unsorted part and inserted in the sorted part at the right place. Now the sorted part contains 2 elements.
The next number from the unsorted part (10) is picked and inserted in the sorted part, and so forth, until all the
elements from the unsorted part have been picked and inserted in the sorted part. The diagram below shows the
step-by-step process:
Recursive Approach
ALGORITHM Insertion Sort(A[ ], N)
BEGIN:
    IF N <= 1 THEN
        RETURN
    Insertion Sort(A, N-1)
    Key = A[N]
    j = N - 1
    WHILE j >= 1 AND A[j] > Key DO
        A[j+1] = A[j]
        j = j - 1
    A[j+1] = Key
END;
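The iterative counterpart of this recursive pseudocode, as a C sketch (0-based indices):

/* Iterative insertion sort: a[0..i-1] is the sorted part; the key
   a[i] is inserted at its right place within it. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];    /* shift larger elements right */
            j--;
        }
        a[j + 1] = key;
    }
}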
Shell Sort
Shell sort is a variation of insertion sort. It improves the complexity of insertion sort by dividing the array into a
number of interleaved parts and then applying insertion sort to each.
It works on two facts about insertion sort -
1 - It works better for a small number of elements.
2 - It works better when elements are a short distance from their final positions.
Commonly used gap sequences:
1 - Hibbard's gap sequence (2^k − 1: …, 31, 15, 7, 3, 1), suggested by Thomas Hibbard, is among the most widely
used; it improves the O(n²) of Donald Shell's original sequence to about O(n^1.5) = O(n√n).
2 - Donald Knuth's gap sequence: (1, 4, 13, 40, …).
3 - A gap sequence like (64, 32, 16, …) is not good: with gaps that are all powers of two, elements at odd and even
positions are not compared until the very last pass, which increases the time complexity.
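A C sketch of shell sort using the Knuth gap sequence mentioned above:

/* Shell sort with Knuth gaps (1, 4, 13, 40, ...): each pass is an
   insertion sort over elements that are `gap` positions apart. */
void shell_sort(int a[], int n)
{
    int gap = 1;
    while (gap < n / 3)
        gap = 3 * gap + 1;           /* largest Knuth gap below n */
    for (; gap >= 1; gap /= 3) {
        for (int i = gap; i < n; i++) {
            int key = a[i], j = i;
            while (j >= gap && a[j - gap] > key) {
                a[j] = a[j - gap];   /* gap-wise shift */
                j -= gap;
            }
            a[j] = key;
        }
    }
}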
There is a given Binary Search Tree in which any two of the nodes of the Binary Search Tree are exchanged.
Your task is to fix these nodes of the binary search tree.
Note: The BST will not have duplicates.
Examples:
Input Tree:
        15
       /  \
     12    10
    /  \
   4    35
In the above tree, nodes 35 and 10 must be swapped to fix the tree.
Following is the output tree:
        15
       /  \
     12    35
    /  \
   4    10
Input format: Given nodes of binary search tree
Input:
N=5
arr[] = { 1,3,2,4,5}
Output: 1 2 3 4 5
Example 2:
Input:
N=7
arr[] = {7,6,5,4,3,2,1}
Output: 7 6 5 4 3 2 1
Task: You need not read input or print anything. Your task is to complete the
functions insert() and insertionSort(), where insert() takes the array, its size and an index i,
and insertionSort() uses the insert function to sort the array in ascending order using the insertion sort algorithm.
Expected Time Complexity: O(Nlog n).
Expected Auxiliary Space: O(1).
Constraints:
1 <= N <= 1000
1 <= arr[i] <= 1000
A knockout tournament is organized where the first round is the quarter final, in which 8 teams (CSK, Mumbai Indians,
Delhi Capitals, Kolkata Knight Riders, Punjab Kings, Rajasthan Royals, RCB, Sunrisers Hyderabad) participate.
After the first round, four teams reach the semi-final round, where two semi-finals are played.
From the semi-final round, two teams reach the final, and the winner of the final takes the trophy.
This analogy resembles the concept of a Heap (Max-Heap): the winner at each level sits above the teams it defeated.
A complete binary tree is a type of binary tree in which all levels are completely filled except possibly the last level.
The last level might or might not be filled completely; if the last level is not full, then all its nodes should be filled from the
left.
2.5.3. Heap:
A binary heap is a complete binary tree, which makes it suitable to be implemented using an array.
A binary heap is categorized as either a Min-Heap or a Max-Heap.
In a Min binary heap, the value at the root node must be the smallest among all the values present in the heap. This
property of the Min-Heap must hold recursively for all nodes in the binary tree.
In a Max binary heap, the value at the root node must be the largest among all the values present in the heap. This property
of the Max-Heap must hold recursively for all nodes in the binary tree.
As a heap is a complete binary tree, the height of a heap having N nodes will always be O(log N).
You can access a parent node or the child nodes in the array using the indices below:
Parent node:  parent(i) = i / 2
Left child:   left(i) = 2 * i
Right child:  right(i) = 2 * i + 1
When you look at the node at index 4, the relation of nodes in the tree corresponds to the indices of the array: if i = 4,
the left child will be at position 2*4 = 8 and the right child at position 2*4 + 1 = 9.
Also, if the index of a left child is 4, then the index of its parent will be 4/2 = 2.
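These index relations translate directly into small C helpers (a sketch for the 1-indexed layout used here):

/* 1-indexed heap stored in an array: index arithmetic for
   navigating between a node and its parent/children. */
static int parent(int i)      { return i / 2; }
static int left_child(int i)  { return 2 * i; }
static int right_child(int i) { return 2 * i + 1; }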
Following two operations are used to construct a heap from an arbitrary array:
a) MaxHeapify – In a given complete binary tree, if the node at index k does not fulfil the max-heap property while its left
and right subtrees are max-heaps, MaxHeapify rearranges node k and its subtrees so as to satisfy the max-heap property.
b) BuildMaxHeap – This method builds a heap from the given array. So BuildMaxHeap uses the MaxHeapify function
repeatedly on all internal nodes to build a heap from the given array.
MaxHeapify(A,i,N) is a subroutine.
When it is called, two subtrees rooted at Left(i) and Right(i) are max-heaps, but A[i] may not satisfy the max-heap
property.
MaxHeapify(A,i,N) makes the subtree rooted at A[i] become a max-heap by letting A[i] “float down”.
Example:
c) Start from the first non-leaf node, whose index is given by n/2, where n is the size of the array.
d) Set the current element, having index k, as largest.
e) The left child index will be 2*k and the right child index will be 2*k + 1 (when the array index starts from 1).
If the left child's value is greater than the current element (i.e. the element at index k), set leftChildIndex as largest.
If the right child's value is greater than the element at the largest index, set rightChildIndex as largest.
f) If largest is no longer k, exchange the element at index k with the element at index largest.
g) Repeat steps (c) to (f) until all the subtrees are also heapified.
ALGORITHM MaxHeapify(A[ ], k, N)
BEGIN:
    L = Left(k)                          // Left(k) = 2*k
    R = Right(k)                         // Right(k) = 2*k+1
    IF L <= N AND A[L] > A[k] THEN
        largest = L
    ELSE
        largest = k
    IF R <= N AND A[R] > A[largest] THEN
        largest = R
    IF largest != k THEN
        Exchange(A[k], A[largest])
        MaxHeapify(A, largest, N)
END;
ALGORITHM BuildMaxHeap(A[ ], N)
BEGIN:
    FOR i = N/2 DOWNTO 1 DO
        MaxHeapify(A, i, N)
END;
Analysis
As we know that time complexity of Heapify operation depends on the height of the tree i.e. H and H should be log N
when there are N nodes in the heap.
The height ’h’ increases as we move upwards along the tree. Build-Heap runs a loop from the index of the last internal
node N/2 with height=1, to the index of root(1) with height = lg(n). Hence, Heapify takes different time for each node,
which is θ(h).
Finally, we have got the max-heap in fig.(f).
2. Insert operation:
a) First update the size of the heap by adding 1 to the given size.
b) Then insert the new element at the end of the tree.
c) Now perform a heapify operation to place the new element at its correct position in the tree, keeping the tree
either a max-heap or a min-heap.
Algorithm: InsertHeap(A[ ], N, item)
BEGIN:
    N = N + 1
    A[N] = item
    ReHeapifyUp(A, N, N)
END;
Algorithm: ReHeapifyUp(A[ ], N, k)
BEGIN:
    parentindex = k / 2
    IF parentindex > 0 THEN
        IF A[k] > A[parentindex] THEN
            Exchange(A[k], A[parentindex])
            ReHeapifyUp(A, N, parentindex)
END;
Analysis: The new element moves up along at most the height of the heap, so an insertion takes O(log N) time.
3. Delete Operation:
a) First exchange the root element with the last element in the heap.
b) Then remove the last element by decreasing the size of the tree by 1.
c) Now perform a Heapify operation to make the tree either a max-heap or a min-heap again.
Method-2, deleting an element at any position from a Max Heap, follows the same idea: replace that element with
the last one, shrink the heap, and heapify.
Following are the steps to delete the maximum element from a Max Heap:
Algorithm: DelHeap(A[ ], N)
BEGIN:
    item = A[1]
    A[1] = A[N]
    N = N - 1
    MaxHeapify(A, 1, N)
    RETURN item
END;
Analysis: The cost is dominated by MaxHeapify, so a deletion takes O(log N) time.
Heap Sort is a popular and efficient sorting algorithm to sort the elements.
Step 1: Build a Max-Heap; in the Max-Heap the largest item is always stored at the root node.
Step 2: Now exchange the root element with the last element in the heap.
Step 3: Reduce the heap size by 1; the last position now holds an element in its final sorted place.
Step 4: Now perform the Max-Heapify operation so that the highest remaining element arrives at the root.
Repeat Steps 2 to 4 until all the items are sorted.
Example: the accompanying figures walk through exactly these steps — replace A[1] with A[last],
exchange the root with the last element, and Max-Heapify the root.
ALGORITHM HeapSort(A[ ], N)
BEGIN:
    BuildMaxHeap(A, N)
    FOR j = N TO 2 STEP -1 DO
        Exchange(A[1], A[j])
        MaxHeapify(A, 1, j-1)
END;
Analysis
Heap Sort has θ(n log n) time complexity for all the cases (best case, average case, and worst case).
As a heap is a complete binary tree, the height of a complete binary tree containing n elements is log n. During the sorting
step, we exchange the root element with the last element and heapify the root element. For each element, this again takes
log n time in the worst case, because we might have to bring the element all the way from the root to a leaf. Since we repeat
this n times, the heap-sort step also takes n log n.
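Putting BuildMaxHeap, MaxHeapify and the extraction loop together, a self-contained C sketch (0-indexed, so left = 2i+1 and right = 2i+2, unlike the 1-indexed pseudocode above):

/* Restore the max-heap property for the subtree rooted at k. */
static void max_heapify(int a[], int n, int k)
{
    int l = 2 * k + 1, r = 2 * k + 2, largest = k;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != k) {
        int t = a[k]; a[k] = a[largest]; a[largest] = t;
        max_heapify(a, n, largest);    /* let the element float down */
    }
}

void heap_sort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)   /* BuildMaxHeap */
        max_heapify(a, n, i);
    for (int j = n - 1; j >= 1; j--) {     /* extract the max repeatedly */
        int t = a[0]; a[0] = a[j]; a[j] = t;
        max_heapify(a, j, 0);              /* heapify the reduced heap */
    }
}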
Merge sort is based on the divide and conquer approach. It keeps dividing the elements present in the array until they
cannot be divided further, and then combines the sorted pieces to get all the elements in sorted order.
Analogy 1: A king conquering an entire territory: Suppose you are a king and you wish to conquer an entire
territory, and your army is awaiting your orders. A feasible approach, rather than attacking the entire territory
at once, is to find the weaker territories and conquer them one by one. Ultimately we take a large land and split it
up into small problems: capturing smaller territories is like solving small problems, and together they capture the whole.
Figure 1: Elaboration about the King Conquering the Territory.
Analogy 2: Searching for a page number in a book: Suppose we want to go straight to a page of a book, say page
423, knowing that the pages are numbered from 1 to 710. Initially, we just open the
book at a random location. Suppose page 678 opens: we know our page won't be found there, so the search is
limited to the other half of the book. Now, if we pick any page from that half, say 295, that portion
of the book is also eliminated, and we are left with pages 296–677. In this way we keep reducing our search
space. If brute force were used instead, navigating from page 1 to the end would take many iterations. So,
this gives an idea of how merge sort can be built from the same concept of splitting into smaller sub-problems.
Analogy 3: Given a loaf of bread, you need to divide it into 1/8th pieces, without using any
measuring tape.
Conclusion: The manner in which the bread loaf is broken down into subparts is termed the Divide procedure; the
moment when it cannot be further subdivided is termed the conquer step; and when we join the pieces together in order to
recover the original piece, it is known as the merging procedure. Merge Sort works on exactly this Divide and Conquer
technique.
In the Divide and Conquer approach, we break a problem into sub-problems. When the solution to each sub-
problem is ready, we merge the results obtained from these sub-problems to get the solution to the main
problem. Let's take the example of sorting an array A. Here, each sub-problem is to sort a part of the array after
dividing it into two. If q is the mid index, then the array will be recursively divided into two halves, that is,
A[p…………q] and then A[q+1…………r]. We can then merge these and combine the lists in order to get the data
in sorted order.
In various comparison-based sorting algorithms, the worst-case complexity is O(n²). This is because an element
is compared multiple times in different passes. For example, in insertion sort, the while loop runs the maximum
number of times in the worst case: if the array is sorted in decreasing order, the while loop runs 2 + 3 + 4 + … + n
times over the iterations. Hence, we can say that the running time complexity of insertion sort in the worst case is
O(n²).
How Merge Sort Works: The Merge Sort algorithm has two procedures: the recursive calling of the MergeSort function
and the merging procedure. Let us understand the merging procedure before going on to the calling procedure of
Merge Sort.
2.6.5 Algorithmic View of Merge Sort along with the Example Flow:
The first step is to divide the array into two halves: left and right sub-arrays.
Repeated recursive calls are made in the second step.
This division is done until the size of each sub-array is 1.
As soon as each sub-array contains 1 element, the merge procedure is called.
The merge procedure combines these sorted sub-arrays to produce the final sorted array.
Pseudo code for performing the merge operation on two sorted arrays is as follows:
ALGORITHM Merge(A[ ], p, q, r)
BEGIN:
1. n1 = q-p+1
2. n2 = r-q
3. Create Arrays L[n1+1], R[n2+1]
4. FOR i = 1 TO n1 DO
5. L[i] = A[p+i-1]
6. FOR j = 1 TO n2 DO
7. R[j] = A[q+j]
8. L[n1+1] = ∞
9. R[n2+1] = ∞
10. i = 1
11. j = 1
12. FOR k=p TO r DO
13. IF L[i] <= R[j] THEN
14. A[k]=L[i]
15. i=i+1
16. ELSE A[k] = R[j]
17. j=j+1
END;
Steps 1 and 2 are used to compute the sizes of the sub-arrays. In Step 3, we create the left and right sub-arrays
along with their memory allocation. For the first three steps, the running time complexity is constant, θ(1). In Steps 4
and 5, an independent for loop is used for assigning array elements to the left sub-array. In Steps 6 and 7, another
independent for loop is used for assigning array elements to the right sub-array. The running time complexity of
Steps 4 to 7 is θ(n1 + n2). Steps 8 and 9 are used for storing a sentinel value (∞) at the end of the left and right
sub-arrays. Steps 10 and 11 are used to initialize the variables i and j to 1. Steps 12 to 17 are used to compare the
elements of the left and right sub-arrays, keeping track of the i and j variables. After each comparison, a left or right
sub-array element is stored back in the original array. Overall, the running time complexity of the Merge procedure
is θ(n).
Let's take an example where we sort an array using this algorithm. There would be different steps,
which we have shown later; here we use one trace to show how the merging procedure takes place:
Step-01: Two sub-arrays, Left and Right, are created. The Left sub-array contains q − p + 1 elements and the
Right sub-array contains r − q elements; the index i runs over the Left sub-array and the index j over the Right sub-array.
Step-02: Initially i, j and k will be 0. As the element in the right array R[0] is 2, which is less than L[0], that is 5,
A[k] will hold 2.
Step-03: Now j = 1, i = 0 and k = 1. R[j] will point to 3 and L[i] will point to 5; since 3 is less than 5, A[k]
will hold 3.
Step-04: Now j = 2, k = 2, i = 0. R[j] will point to 4 and L[i] is still at 5, so A[2] will now contain 4.
Step-05: Now i = 1, j points to the sentinel which stores infinity, and k = 3. Now A[k] will be 5. Thus, we have-
Step-06: Now i = 2, j points to the sentinel which stores infinity, and k is at the 4th index. Now A[k] will hold 7.
Thus, we have-
Step-07: Now i = 3, j points to the sentinel which stores infinity, and k is at the 5th index. Now A[k] will hold 8.
Thus, we have-
To sort the entire sequence A = ⟨A[1], A[2], …, A[n]⟩, we make the initial call MERGESORT(A, 1, length[A]), where
once again length[A] = n.
Here, p is the first index of the array and r is the last index. In Step 1, we check the
number of elements in the array: if the number of elements is 1, it can't be divided further; if the number of
elements is greater than 1, it can be further divided. q is the variable which specifies where the array
is to be divided. MergeSort(A, p, r) is divided into two sub-calls, MergeSort(A, p, q) and
MergeSort(A, q+1, r): a problem of size n is divided into two sub-problems of size n/2. This recursive
calling is done, and then the merge operation is performed in order to unite the two sorted arrays into a single array.
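The calling procedure itself is not reproduced above, so here is a sketch in the same pseudocode convention as the Merge procedure (a standard divide and conquer formulation):

ALGORITHM MergeSort(A[ ], p, r)
BEGIN:
    IF p < r THEN
        q = (p + r) / 2            // middle index: divide step
        MergeSort(A, p, q)         // sort the left half
        MergeSort(A, q + 1, r)     // sort the right half
        Merge(A, p, q, r)          // combine the two sorted halves
END;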
2.6.6. Complexity Analysis of Merge Sort along with Diagrammatic Flow:
The complexity of merge sort can be analyzed using the piece of information given below:-
• Divide: We find the middle index of the given array in each recursive call, so finding the middle index
each time takes D(n) = Θ(1) operations.
• Conquer: We recursively solve two sub-problems, each of size n/2.
• Combine: Here we merge the n elements present in the array, so C(n) = Θ(n).
Hence the recurrence for the worst-case running time T(n) of merge sort will be:
T(n) = c                 if n = 1
     = 2T(n/2) + cn      if n > 1
Where, the constant c represents the time required to solve problems of size 1 as well as the time per array
element of the divide and combine steps.
By applying Case 2 of the master method, we simply say that the running time complexity of Merge Sort is O (n
log n).
Here, the recursion tree method is used to calculate the complexity of Merge Sort. In the above diagrammatic
representation, we can see how the array is divided into two sub-arrays. The cost at each level is calculated, and it
is the same, cn. There are (log n + 1) levels in the binary tree, and at each and every level the cost is the same, cn.
Therefore, Total Cost = cost at each level * number of levels in the binary tree
= cn * (log n + 1)
= cn log n + cn
= θ(n log n)
Hence, the running time complexity is θ(n log n).
The time complexity of Merge Sort is order of (n*Log n) in all the 3 cases (worst, average and best) as the merge
sort always divides the array into two halves and takes linear time to merge two halves.
2.6.7. Comparison of Merge Sort Running Time Complexity with other sorting algorithm:
1. Comparison of various sorting algorithms on the basis of Time and Space Complexity:
2. Comparison of various sorting algorithms on the basis of its stability and in place
sorting:
2.6.11. Competitive Coding Problems on Merge Sort: Solved
Coding Problem 1: Count of larger elements on the right side of each element in an array
Step 1: Problem Statement: Given an array arr[] consisting of N integers, the task is to count the
number of greater elements on the right side of each array element.
Step 3: Naive Approach: The simplest approach is to iterate over all the array elements using two loops: for
each array element, count the number of elements greater than it on its right side, and then print it. (See the
sketch after this paragraph.)
Time Complexity: O(N²) and Auxiliary Space: O(1)
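A C sketch of this naive approach (the function and parameter names are ours):

/* For each arr[i], count the elements to its right that are greater. */
void count_greater_right(const int arr[], int n, int count[])
{
    for (int i = 0; i < n; i++) {
        count[i] = 0;
        for (int j = i + 1; j < n; j++)
            if (arr[j] > arr[i])
                count[i]++;      /* one more greater element on the right */
    }
}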
Optimized Approach: The problem can be solved using the concept of merge sort in descending order. Follow
the step-by-step algorithm to solve the problem:
1. Initialize an array count[], where count[i] stores the respective count of greater elements on the right
for every arr[i].
2. Perform merge sort on the array, counting during each merge step as follows.
3. If the element at the higher index is greater than the element at the lower index, then that higher-index
element is greater than all the elements after that lower index.
4. Since the left part is already sorted, add the count of elements after the lower-index element to the
count[] array for the lower index.
Coding Problem 2: Count the number of pairs (i, j) such that i < j and arr[i] > K * arr[j]
Step 1: Problem Statement: Given an array arr[] of length N and an integer K, the task is to count the number
of pairs (i, j) such that i < j and arr[i] > K * arr[j].
Step 3: Naive Approach: The simplest approach to solve the problem is to traverse the array and, for
every index, find numbers at greater indices such that the element there, when multiplied by K, is less
than the element at the current index.
1. Initialize a variable, say count, with 0 to count the total number of required pairs.
2. Traverse the array from left to right.
3. For each possible index, say i, traverse the indices i + 1 to N - 1 and increase the value of count by 1 whenever
an element, say arr[j], is found such that arr[j] * K is less than arr[i].
4. After complete traversal of the array, print count as the required count of pairs.
Optimized Approach: The idea is to use the concept of merge sort and then count pairs according to the given
conditions.
1. Initialize a variable, say answer, to count the number of pairs satisfying the given condition.
2. Repeatedly partition the array into two equal halves or almost equal halves until one element is left
in each partition.
3. Call a recursive function that counts the number of times the condition arr[i] > K * arr[j] and i < j is
satisfied after merging the two partitions.
4. Perform it by initializing two variables, say i and j, for the indices of the first and second half
respectively.
5. Increment j till arr[i] > K * arr[j] and j < size of the second half. Add (j – (mid + 1)) to the answer
and increment i.
6. After completing the above steps, print the value of answer as the required number of pairs.
Time Complexity: O(N log N) and Auxiliary Space: O(N)
Coding Problem 3: Count sub-sequences for every array element in which it is the
maximum
Step 1: Problem Statement: Given an array arr[] consisting of N unique elements, the task is to generate an
array B[] of length N such that B[i] is the number of subsequences in which arr[i] is the maximum element.
Input: arr[] = {2, 3, 1}
Output: {2, 4, 1}
Explanation: Subsequences in which arr[0] ( = 2) is maximum are {2}, {2, 1}.
Subsequences in which arr[1] ( = 3) is maximum are {3}, {1, 3, 2}, {2, 3}, {1, 3}.
The subsequence in which arr[2] ( = 1) is maximum is {1}.
Step 3: Optimized Approach: The problem can be solved by observing that every subsequence in which an
element arr[i] appears as the maximum consists of arr[i] together with some subset of the elements less than
arr[i]. Therefore, the total number of such subsequences is 2^(number of elements less than arr[i]).
1. Sort the array indices with respect to their corresponding values in the given array and
store those indices in an array indices[], where arr[indices[i]] < arr[indices[i+1]].
2. Initialize an integer, subsequence, with 1 to store the number of possible subsequences.
3. Iterate N times with a pointer over the range [0, N-1] using a variable i:
a. B[indices[i]] is the number of subsequences in which arr[indices[i]] is the maximum, i.e., 2^i, as there
will be i elements less than arr[indices[i]].
b. Store the answer for B[indices[i]] as B[indices[i]] = subsequence.
c. Update subsequence by multiplying it by 2.
4. Print the elements of the array B[] as the answer (a C sketch is given below).
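A minimal C sketch of this index-sorting approach, using the text's sample {2, 3, 1}. Note that for large N the powers of 2 overflow a 64-bit integer and would need modular arithmetic, which the text does not specify.

#include <stdio.h>
#include <stdlib.h>

static const int *base;  /* lets the qsort comparator see the values */

static int cmp_by_value(const void *p, const void *q) {
    int i = *(const int *)p, j = *(const int *)q;
    return (base[i] > base[j]) - (base[i] < base[j]);
}

int main(void) {
    int arr[] = {2, 3, 1};                 /* sample from the text */
    int n = sizeof arr / sizeof arr[0];
    int idx[3];
    long long B[3], subsequence = 1;       /* subsequence = 2^i; overflows for large n */

    for (int i = 0; i < n; i++) idx[i] = i;
    base = arr;
    qsort(idx, n, sizeof idx[0], cmp_by_value);

    /* After sorting indices by value, arr[idx[i]] has exactly i smaller
       elements, so it is the maximum of 2^i subsequences. */
    for (int i = 0; i < n; i++) {
        B[idx[i]] = subsequence;
        subsequence *= 2;
    }
    for (int i = 0; i < n; i++) printf("%lld ", B[i]);   /* prints: 2 4 1 */
    printf("\n");
    return 0;
}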
Coding Problem 1: Count subsequences which contain both the maximum and minimum
array element
Difficulty Level: Moderate
Problem Statement: Given an array arr[] consisting of N integers, the task is to find the number of subsequences
which contain the maximum as well as the minimum element present in the given array.
Coding Problem 2: Partition array into two sub-arrays with every element in the right sub-
array strictly greater than every element in the left sub-array
Difficulty Level: Moderate
Problem Statement: Given an array arr[] consisting of N integers, the task is to partition the array into two non-
empty sub-arrays such that every element present in the right sub-array is strictly greater than every element
present in the left sub-array. If it is possible to do so, then print the two resultant sub-arrays. Otherwise, print
“Impossible”.
Coding Problem 3: Count ways to divide C into two parts and add to A and B to make A strictly
greater than B
Difficulty Level: Moderate
Problem Statement: Given three integers A, B and C, the task is to count the number of ways to divide C into
two parts and add to A and B such that A is strictly greater than B.
Quick Sort is another sorting algorithm following the approach of Divide and Conquer. Another name for quick sort
is Partition-exchange sort, because it selects a pivot element and partitions the array elements
around that pivot, placing elements smaller than the pivot to its left and elements greater than the pivot to its right.
Quick Sort is also known as Selection-exchange sort: in selection sort we select a position and find the
element for that position, whereas in Quick Sort we select an element and find its position. Compared to
Merge Sort, quick sort runs faster when the size of the input is small.
Applications of Quick Sort:
1. Commercial applications generally prefer quick sort, since it runs fast and requires no additional memory.
2. Medical monitoring.
3. Monitoring & control in industrial & research plants handling dangerous material.
4. Search for information.
5. Operations research.
6. Event-driven simulation.
7. Numerical computations.
8. Combinatorial search.
9. When there is a limitation on memory, randomised quicksort can be used; merge sort is not an in-place
sorting algorithm and requires extra space.
10. Quicksort is part of the standard C library (qsort) and is among the most frequently used tools, especially
for arrays. Example: spreadsheets and database programs.
Generally, in merge sort we divide the array into two halves, but in quick sort the division is done on the basis
of a pivot element, which could be the first element, the last element, or any random element.
Steps to divide an array into sub-arrays on the basis of the Pivot Element:
Divide: Initially, we divide array A into two sub-arrays A[p…q-1] and A[q+1…r]. The partition procedure
returns the index q, and A[q] is already in its final sorted position. Every element in A[p…q-1] is less than or
equal to A[q], and every element in A[q+1…r] is greater than A[q].
Conquer: Recursively call QuickSort on the two sub-arrays.
Combine: Nothing is done in the combine step, as the array is sorted in place.
Partitioning Algorithm:
ALGORITHM Partition(A[ ], p, r)
BEGIN:
    x = A[r]
    i = p - 1
    FOR j = p TO r-1 DO
        IF A[j] <= x THEN
            i = i + 1
            Exchange A[i] with A[j]
    Exchange A[i+1] with A[r]
    RETURN i+1
END;
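A minimal runnable C version of this partition scheme, together with the QuickSort driver it supports; the driver and the sample input are illustrative assumptions, as the text gives only the partition routine.

#include <stdio.h>

static void exchange(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Lomuto partition: pivot x = A[r]; returns the pivot's final position. */
static int partition(int A[], int p, int r) {
    int x = A[r], i = p - 1;
    for (int j = p; j <= r - 1; j++) {
        if (A[j] <= x) {
            i++;
            exchange(&A[i], &A[j]);
        }
    }
    exchange(&A[i + 1], &A[r]);
    return i + 1;
}

static void quicksort(int A[], int p, int r) {
    if (p < r) {
        int q = partition(A, p, r);   /* A[q] is now in its sorted position */
        quicksort(A, p, q - 1);
        quicksort(A, q + 1, r);
    }
}

int main(void) {
    int A[] = {29, 10, 14, 37, 13};
    int n = sizeof A / sizeof A[0];
    quicksort(A, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", A[i]);  /* 10 13 14 29 37 */
    printf("\n");
    return 0;
}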
Case 1: a) When the input array is sorted in ascending order. Such a case experiences an
unbalanced split, with (n-1) elements on one side and the pivot element alone in its sorted
position.
Case 1: b) When the input array is sorted in descending order, the split is unbalanced in the same way.
Here, worst case complexity of QuickSort can be explained either by the Recurrence relation or by iterative
approach.
Step 1: Recurrence Method: The following recurrence is obtained for the unbalanced partitioning which occurs
when the array is in either ascending or descending order: T(n) = T(n-1) + O(n). From this
recurrence we can conclude that the running time complexity will be O(n²).
Step 2: Iteration Method: In this method, we compute the cost at each level of the recursion tree. For example, at the
0th level the cost is n, at the 1st level the cost is (n-1), at the 2nd level the cost is (n-2), and so on. On this basis we
derive the total cost over all the levels of the tree, which is equal to [n + (n-1) + (n-2) + (n-3) + … + 2] = n(n+1)/2 -
1 = O(n²), the worst-case complexity.
Case 2: When the input array splits from the middle and partition occurs evenly from the
middle at each level.
Suppose we have an array of 15 numbers A[1:15]. The best case occurs if, at each and every level, the input array is
divided into two equal halves. We can then say that the cost at each level of the binary tree is n and the total number
of levels is log n + 1. Hence, the running time in the best case of quick sort equals the cost at each level multiplied by
the number of levels, i.e., n * (log n + 1), and we can conclude the complexity will be Ω(n log n). The recurrence
derived from the above-mentioned approach is as follows: T(n) = 2T(n/2) + Θ(n), where 2T(n/2) is the cost of sorting
the two sub-arrays and Θ(n) is the cost of partitioning the array around the pivot. We can solve this recurrence either
by the Master Method or by the Recursion Tree Method. Applying either method, we obtain the best-case complexity
O(n log n).
On similar lines, we can generate a recurrence for the average case, such as T(n) = T(n/3) + T(2n/3) + θ(n), which results in a
complexity of order (n log n). This is only one of the recurrences that can be generated for computing the average-case
complexity. There may be various other recurrences which can symbolise the average-case complexity of quick sort, such as
T(n) = T(n/4) + T(3n/4) + θ(n), T(n) = T(n/5) + T(4n/5) + θ(n), T(n) = T(n/6) + T(5n/6) + θ(n), and so on.
Let us suppose the problem size is n = 36. On applying the QuickSort algorithm, these 36 elements can be partitioned in three
ways, corresponding to the best case, worst case and average case. In the best case, an equal number of elements appears on
both sides of the pivot. In the worst case, the maximum number of elements appears on one side, whereas in the average case,
the number of elements that appear on the left side may range from 8 to 16 and the number on the right from 20 to 28, and
vice versa. On the basis of this division of elements on the left- and right-hand sides we can derive various types of recurrences
which reflect the average-case complexity of Quick Sort.
The auxiliary space required by the Quick Sort Algorithm is O (n) for the call stack in the worst case.
1. Three-Way Quick Sort: In Three-Way Quick Sort, array A[1…n] is divided into 3 parts: a) sub-array
[1…i] with elements less than the pivot; b) sub-array [i+1…j] with elements equal to the pivot; c) sub-array
[j+1…r] with elements greater than the pivot. The partitioning step runs in θ(n) time using θ(1) extra space.
Partitioning Algorithm of Three Way Quick Sort:
ALGORITHM Partition (arr[ ], left, right, i, j)    // i and j are output parameters
BEGIN:
    IF right - left <= 1 THEN
        IF arr[right] < arr[left] THEN
            Swap arr[right] and arr[left]
        i = left
        j = right
        RETURN
    mid = left
    pivot = arr[right]
    WHILE mid <= right DO
        IF arr[mid] < pivot THEN
            Swap arr[left] and arr[mid]
            left = left + 1
            mid = mid + 1
        ELSE IF arr[mid] = pivot THEN
            mid = mid + 1
        ELSE
            Swap arr[mid] and arr[right]
            right = right - 1
    i = left - 1
    j = mid
END;
Three Way Quick Sort Algorithm:
ALGORITHM QuickSort (arr[ ], left, right)
BEGIN:
    IF left >= right THEN
        RETURN
    Define i and j
    Partition(arr, left, right, i, j)
    QuickSort(arr, left, i)
    QuickSort(arr, j, right)
END;
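A runnable C rendering of the two routines above; the output parameters i and j are passed by pointer, and the sample array is an illustrative assumption.

#include <stdio.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Dutch-national-flag partition: on return, [left..*i] holds keys smaller
   than the pivot, [*i+1 .. *j-1] keys equal to it, [*j..right] larger keys. */
static void partition3(int arr[], int left, int right, int *i, int *j) {
    if (right - left <= 1) {
        if (arr[right] < arr[left]) swap_int(&arr[right], &arr[left]);
        *i = left;
        *j = right;
        return;
    }
    int mid = left, pivot = arr[right];
    while (mid <= right) {
        if (arr[mid] < pivot)
            swap_int(&arr[left++], &arr[mid++]);
        else if (arr[mid] == pivot)
            mid++;
        else
            swap_int(&arr[mid], &arr[right--]);
    }
    *i = left - 1;
    *j = mid;
}

static void quicksort3(int arr[], int left, int right) {
    if (left >= right) return;
    int i, j;
    partition3(arr, left, right, &i, &j);
    quicksort3(arr, left, i);     /* elements smaller than the pivot */
    quicksort3(arr, j, right);    /* elements greater than the pivot */
}

int main(void) {
    int a[] = {4, 9, 4, 4, 1, 9, 4, 4, 9, 1};
    int n = sizeof a / sizeof a[0];
    quicksort3(a, 0, n - 1);
    for (int k = 0; k < n; k++) printf("%d ", a[k]);  /* 1 1 4 4 4 4 4 9 9 9 */
    printf("\n");
    return 0;
}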
1. Eating apples:
Problem: You are staying at point (1, 1) in a 10^9 × 10^9 matrix. You can travel by following these steps:
1. If the row where you are staying has 1 or more apples, then you can go to the other end of the row and
then go up.
2. Otherwise, you can go up.
The matrix contains N apples. The ith apple is at point (xi, yi). You can eat these apples while traveling. For
each of the apples, your task is to determine the number of apples that have been eaten before it.
Practice Problem (hackerearth.com)
2. Specialty of a sequence:
Problem: You are given a sequence A of length n and a number k. A number A[l] is special if there exists a
contiguous sub-array that contains exactly k numbers that are strictly greater than A[l]. The specialty of a
sequence is the sum of special numbers that are available in the sequence. Your task is to determine the
specialty of the provided sequence.
Practice Problem (hackerearth.com)
3. Noor and his pond:
Problem: Noor is going into fish farming. There are N types of fish. Each type of fish has a size (S) and an eating
factor (E). A fish with an eating factor of E will eat all the fish of size ≤ E. Help Noor to select a set of fish such that
the size of the set is maximized and the selected fish do not eat each other.
Practice Problem (hackerearth.com)
4. Card game:
Problem: Two friends decided to play a very exciting online card game. At the beginning of this game, each
player gets a deck of cards, in which each card has some strength assigned. After that, each player picks a random
card from his deck and the strengths of the picked cards are compared. The player who picked the card with the
larger strength wins. There is no winner in case both players picked cards with equal strength. The first friend got
a deck with n cards; the strength of his i-th card is ai. The second friend got a deck with m cards; the strength of
his i-th card is bi.
The first friend wants to win very much, so he decided to improve his cards. He can increase the strength of any
card by 1 for 1 dollar, and any card can be improved as many times as he wants. The second friend can't improve
his cards because he doesn't know about this possibility.
What is the minimum amount of money which the first player needs to guarantee a victory for himself?
5. Missing Number Problem: You are given an array A. You can decrement any element of the array by 1,
and this operation can be repeated any number of times. A number is said to be missing if it is the smallest
positive multiple of 2 that is not present in the array A. You have to find the maximum missing number over
all possible decrements of the elements.
Problem Statement: Given an array of integers in the range 1 to 10, find which of the elements are repeated and
which are not: {4, 3, 1, 2, 5, 7, 1, 6, 3, 2, 4, 1, 8, 10}.
To solve this problem, let us take a direct-address table of size 10, with each entry initialized to zero:
Index: 1  2  3  4  5  6  7  8  9  10
Count: 0  0  0  0  0  0  0  0  0  0
For input key 4, we increase the count at index 4:
Index: 1  2  3  4  5  6  7  8  9  10
Count: 0  0  0  1  0  0  0  0  0  0
For input key 3, we increase the count at index 3:
Index: 1  2  3  4  5  6  7  8  9  10
Count: 0  0  1  1  0  0  0  0  0  0
For input key 1, we increase the count at index 1:
Index: 1  2  3  4  5  6  7  8  9  10
Count: 1  0  1  1  0  0  0  0  0  0
For input key 2, we increase the count at index 2:
Index: 1  2  3  4  5  6  7  8  9  10
Count: 1  1  1  1  0  0  0  0  0  0
Processing the remaining keys in the same way, we get the final array:
Index: 1  2  3  4  5  6  7  8  9  10
Count: 3  2  2  2  1  1  1  1  0  1
The indices whose entries in the output array are greater than 1 are the elements with frequency more than one.
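The whole trace above collapses to a few lines of C; here is a sketch using the same keys.

#include <stdio.h>

int main(void) {
    int keys[] = {4, 3, 1, 2, 5, 7, 1, 6, 3, 2, 4, 1, 8, 10};
    int n = sizeof keys / sizeof keys[0];
    int count[11] = {0};                 /* direct-address table, indices 1..10 */

    for (int i = 0; i < n; i++)
        count[keys[i]]++;                /* one O(1) update per key */

    for (int v = 1; v <= 10; v++) {
        if (count[v] > 1)
            printf("%d is repeated (%d times)\n", v, count[v]);
        else if (count[v] == 0)
            printf("%d is absent\n", v);
    }
    return 0;
}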
So far, all the sorting methods we have talked about work on the principle of comparing two numbers to determine
whether they are equal, smaller or larger. Counting sort, by contrast, relies on a non-comparison approach: it works
on the principle of counting the occurrences of the elements to be sorted. This sorting algorithm runs in linear time
without making any comparison between two values.
It is assumed that the numbers we want to sort are in the range from 1 to k, where the value of k is small. The main
idea is to find the rank of each value in the final sorted array. Counting sort is not used frequently, because some
limitations make it impractical in many applications. However, if the input data lies in a small range, then this
algorithm has a distinct advantage, as it sorts the elements in Θ(n) time. It is also a stable sorting algorithm. Many
comparison sorts run in quadratic time (θ(n²)); the exceptions, Merge, Heap and Quick Sort, sort elements in
(θ(n log n)) time. We rarely use Counting Sort, but when its requirements are fulfilled, it proves to be the best choice
of algorithm. A sketch in C is given below.
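A minimal C sketch of stable counting sort, assuming keys in the range 1..k; the sample array and the value of k are illustrative.

#include <stdio.h>
#include <string.h>

#define K 10   /* keys assumed to lie in 1..K */

/* Stable counting sort of A[0..n-1] into B[0..n-1]. */
static void counting_sort(const int A[], int B[], int n) {
    int C[K + 1];
    memset(C, 0, sizeof C);
    for (int i = 0; i < n; i++) C[A[i]]++;           /* frequencies */
    for (int v = 1; v <= K; v++) C[v] += C[v - 1];   /* C[v] = number of keys <= v */
    for (int i = n - 1; i >= 0; i--)                 /* backwards keeps it stable */
        B[--C[A[i]]] = A[i];
}

int main(void) {
    int A[] = {2, 5, 3, 1, 2, 3, 1, 3};
    int n = sizeof A / sizeof A[0];
    int B[8];
    counting_sort(A, B, n);
    for (int i = 0; i < n; i++) printf("%d ", B[i]);  /* 1 1 2 2 3 3 3 5 */
    printf("\n");
    return 0;
}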
2.8.3. Limitations:
Imagine a situation where the heights of students are given. Your task is to find a positive number n, if one exists,
such that all the given heights can be made equal using any of the following operations:
1) Add the number n to a given height (not necessarily to all heights)
2) Subtract the number n from a given height (not necessarily from all heights)
3) Perform no operation on a given height
Example:
12 5 12 19 19 5 12
Let us suppose that the value of n = 7.
If the addition operation is performed on 5, then 5 + 7 = 12.
If the subtraction operation is performed on 19, then 19 - 7 = 12.
Perform no operation on 12.
Now the heights become:
12 12 12 12 12 12 12
Input format
The first line contains N, the second line contains the heights of N students, and the third line contains the integer n.
Output format
Print a single line: "Heights become equal" or "Heights do not become equal".
1 - Identify the problem statement
Read the story and try to convert it into technical form. This problem can be reframed as:
store the heights in an array, read a number n, and try to perform the given operations.
2 - Identify the problem constraints
1 < N < 100
Sample Input:
7
12 5 12 19 19 5 12
7
Sample Output:
Heights become equal
3 - Design the logic (a C sketch of this logic is given below)
1 - Take an auxiliary array and store all the unique heights in this array.
2 - Count the unique heights and store the count in a variable c.
IF c == 1 THEN
    WRITE("Heights become equal")
    RETURN            // any n works
IF c == 2 THEN        // two unique heights h1 and h2 with h1 > h2
    WRITE("Heights become equal")
    RETURN h1 - h2
IF c == 3 THEN        // three unique heights h1 < h2 < h3
    IF h3 - h2 == h2 - h1 THEN
        WRITE("Heights become equal")
        RETURN h3 - h2
    ELSE
        WRITE("Heights do not become equal")
        RETURN
Time Complexity: θ(n)
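A compact C sketch of this logic follows. For brevity it collects the unique heights by sorting (O(n log n)) rather than with the auxiliary counting array assumed by the θ(n) bound; the sample data comes from the example above.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *p, const void *q) {
    return (*(const int *)p - *(const int *)q);
}

int main(void) {
    int h[] = {12, 5, 12, 19, 19, 5, 12};          /* sample from the text */
    int N = sizeof h / sizeof h[0];
    int u[7], c = 0;                               /* unique heights and their count */

    qsort(h, N, sizeof h[0], cmp);
    for (int i = 0; i < N; i++)                    /* collect unique heights in order */
        if (c == 0 || h[i] != u[c - 1]) u[c++] = h[i];

    if (c == 1)
        printf("Heights become equal (any n works)\n");
    else if (c == 2)
        printf("Heights become equal with n = %d\n", u[1] - u[0]);
    else if (c == 3 && u[2] - u[1] == u[1] - u[0])
        printf("Heights become equal with n = %d\n", u[2] - u[1]);
    else
        printf("Heights do not become equal\n");
    return 0;
}

On the sample, the unique heights are {5, 12, 19} with equal gaps of 7, so the program reports n = 7.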
1. https://www.hackerearth.com/practice/algorithms/sorting/counting-sort/practice-
problems/algorithm/shil-and-birthday-present/
2. https://www.hackerearth.com/practice/algorithms/sorting/counting-sort/practice-
problems/algorithm/finding-pairs-4/
2.9. RADIX SORT
The major problem with counting sort is that when the range of key values is very large it does not work
efficiently, since we have to increase the size of the auxiliary array and the sorting time grows. For such input,
Radix Sort proves to be the better choice for sorting elements in linear time. In Radix Sort we sort on every digit
in turn, hence the complexity is O(n·d), where d is the number of digits. Among linear-time sorting algorithms it
is one of the fastest and most efficient, and it was basically developed to sort integers over a large range.
2.9.1. Analogy: If you want to sort, say, 32-bit numbers, one of the most efficient algorithms is
Radix Sort.
Observe the example discussed below carefully and try to visualize the concept of this algorithm.
Detailed Discussion on the above example:
1. In the first iteration, the least significant digit, i.e. the rightmost digit, is sorted by applying counting sort.
Notice that 435 appears below 835; the least significant digit of both is equal, so in the second list 435 stays
below 835.
2. In the second iteration, the sorting is done on the next digit (10s place) using counting sort. Here we can
see that 608 is below 704 because 608 appeared below 704 in the previous list, and likewise for
(835, 435) and (752, 453).
3. In the third iteration, the sorting is done on the basis of the most significant digit (100s place) using
counting sort. Here we can see that 435 is below 453 because 435 occurred below 453 in the previous list,
and similarly for (608, 690) and (704, 752).
1. Problem Statement:
We have already studied many sorting techniques such as Insertion sort, Bubble sort, Selection sort, Quick
sort, Merge sort, Heap sort, etc. Here we discuss a different type of sorting technique, called
"Radix Sort", which is among the best sorting techniques as far as time complexity is concerned.
The operations performed in Radix Sort are as follows:
1) Do the following for each digit i, where i varies from the least significant digit to the most significant digit:
a) Sort the input array using counting sort (or any stable sort) according to the ith digit.
Example
Original unsorted list: 170, 45, 75, 90, 802, 24, 2, 66
Sorting by the least significant digit (1s place) gives:
[Notice that we keep 802 before 2, because 802 occurred before 2 in the original list, and similarly for the pairs
170 & 90 and 45 & 75.]
170, 90, 802, 2, 24, 45, 75, 66
Sorting by the next digit (10s place) gives: [Notice that 802 again comes before 2, as 802 comes before 2 in the
previous list.]
802, 2, 24, 45, 66, 170, 75, 90
Sorting by the most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802
Hence we get a sorted sequence for the corresponding random sequence. A C sketch of this procedure is given below.
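The following is a compact C sketch of LSD radix sort on the example list, using one stable counting-sort pass per decimal digit; the helper names are our own.

#include <stdio.h>
#include <string.h>

/* One stable counting-sort pass on the digit at `place` (1, 10, 100, ...). */
static void counting_pass(int a[], int n, int place) {
    int out[n], cnt[10] = {0};
    for (int i = 0; i < n; i++) cnt[(a[i] / place) % 10]++;
    for (int d = 1; d < 10; d++) cnt[d] += cnt[d - 1];
    for (int i = n - 1; i >= 0; i--)          /* backwards keeps it stable */
        out[--cnt[(a[i] / place) % 10]] = a[i];
    memcpy(a, out, n * sizeof a[0]);
}

static void radix_sort(int a[], int n) {
    int max = a[0];
    for (int i = 1; i < n; i++) if (a[i] > max) max = a[i];
    for (int place = 1; max / place > 0; place *= 10)
        counting_pass(a, n, place);           /* d passes, O(n·d) total */
}

int main(void) {
    int a[] = {170, 45, 75, 90, 802, 24, 2, 66};   /* example from the text */
    int n = sizeof a / sizeof a[0];
    radix_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]); /* 2 24 45 66 75 90 170 802 */
    printf("\n");
    return 0;
}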
For a given set of N random numbers, generate a sorted (non-decreasing) sequence using the above discussed
technique.
2. Radix Sort | CodeChef: Given n strings, output their order after k phases of the radix sort
Problem
You are given n strings. Output their order after k phases of the radix sort.
Input
The first line of the input file contains three integer numbers: n, the number of strings, m, the length of each
string, and k, the number of phases of the radix sort
(1 ≤ n ≤ 10^6, 1 ≤ k ≤ m ≤ 10^6, n*m ≤ 5*10^7).
Then the description of the strings follows. The format is not trivial: the i-th string (1 ≤ i ≤ n) is written as the
i-th symbols of the second, …, (m+1)-th lines of the input file. In simple words, the strings are written vertically.
This is done intentionally to reduce the running time of your programs. If you construct the strings from the
input lines in your program, you are doing the wrong thing.
The strings consist of small Latin letters, from "a" to "z" inclusive. In the ASCII table, all these letters appear in a
row in alphabetical order. The ASCII code of "a" is 97, of "z", 122.
Output
Print the indices of the strings in the order in which these strings appear after k phases of the radix sort.
Example:
Input
3 3 1
bab
bba
baa
Output
2 3 1
In this example the following strings are given:
"bbb", with index 1;
"aba", with index 2;
"baa", with index 3.
Consider this example. The first phase of the radix sort will sort the strings using their last symbol. As a
result, the first string will be "aba" (index 2), then "baa" (index 3), then "bbb" (index 1). The answer is thus
"2 3 1".
(Source: Computer Science Stack Exchange, "Given n strings, how to output their order after k phases of the
radix sort (huge constraints)?")
3. Descending Weights
Problem
You have been given an array A of size N and an integer K. This array consists of N integers ranging
from 1 to 10^7. Each element in this array is said to have a Special Weight. The special weight of an
element a[i] is a[i] % K.
You now need to sort this array in non-increasing order of the weight of each element, i.e. the element with the
highest weight should appear first, then the element with the second highest weight, and so on. In case two
elements have the same weight, the one with the lower value should appear in the output first.
Input Format:
The first line consists of two space separated integers N and K. The next line consists of N space
separated integers denoting the elements of array A.
Output Format:
Print N space separated integers denoting the elements of the array in the order in which they are
required.
Constraints:
1 ≤ N ≤ 10^5
1 ≤ A[i] ≤ 10^7
1 ≤ K ≤ 10^7
Note: You need to print the value of each element and not its weight.
Sample Input
5 2
1 2 3 4 5
Sample Output
1 3 5 2 4
2.10.2. BUCKET SORT:
Bucket sort or Bin Sort is a sorting algorithm that works by distributing the elements of an array uniformly into
different buckets. After distribution, each bucket is sorted independently, using any sorting algorithm or by
recursively applying bucket sort on each bucket.
Why Bucket Sort?
1. After distributing the elements into buckets, each bucket is smaller than the original array and can be
solved independently.
2. Each bucket can be processed in parallel.
3. Bucket sort handles fractional numbers efficiently.
4. Note, however, that bucket sort is not an in-place sorting algorithm.
ALGORITHM BucketSort (arr[ ], n)    // n is the length of the array; keys assumed in [0, 1)
BEGIN:
    Create n empty buckets
    FOR i = 0 TO n-1 DO                          // scan from the first element to the last
        Insert arr[i] into bucket[⌊n * arr[i]⌋]  // place the element into its bucket
    Sort each bucket using insertion sort or any other sort
    Concatenate all sorted buckets
END;
The concatenation of the sorted buckets is the sorted array.
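Below is a minimal runnable C sketch of the algorithm just given, under its assumption that the keys lie in [0, 1); the fixed bucket capacity and the sample data are illustrative assumptions.

#include <stdio.h>

#define MAXB 16

int main(void) {
    double a[] = {0.57, 0.23, 0.54, 0.12, 0.45, 0.62,
                  0.28, 0.16, 0.49, 0.35, 0.01, 0.9};
    int n = sizeof a / sizeof a[0];
    double bucket[12][MAXB];   /* n buckets; MAXB caps a bucket's size */
    int size[12] = {0};

    /* Distribute: a key in [0,1) goes to bucket floor(n * key). */
    for (int i = 0; i < n; i++) {
        int b = (int)(n * a[i]);
        bucket[b][size[b]++] = a[i];
    }
    /* Insertion-sort each bucket, then concatenate back into a[]. */
    int k = 0;
    for (int b = 0; b < n; b++) {
        for (int i = 1; i < size[b]; i++) {
            double key = bucket[b][i];
            int j = i - 1;
            while (j >= 0 && bucket[b][j] > key) { bucket[b][j + 1] = bucket[b][j]; j--; }
            bucket[b][j + 1] = key;
        }
        for (int i = 0; i < size[b]; i++) a[k++] = bucket[b][i];
    }
    for (int i = 0; i < n; i++) printf("%.2f ", a[i]);  /* 0.01 0.12 ... 0.90 */
    printf("\n");
    return 0;
}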
Complexity:
1. Time Complexity:
a. Average case complexity is θ(n)
b. Best case complexity is Ω(n)
c. Worst case time complexity is O(n²)
2. Space complexity of bucket sort is θ(n + k), where n is the number of elements and k is the number of buckets.
Cases:
1. Average case & best case: when the distribution of elements over the buckets is uniform.
2. Worst case: when all (or nearly all) n elements land in one bucket; the sort then degenerates into
insertion sort.
Example 1. Input array:
0.57 0.23 0.54 0.12 0.45 0.62 0.28 0.16 0.49 0.35 0.01 0.9
[Figure omitted: the keys are distributed into buckets, each bucket is sorted, and the sorted buckets are
concatenated one by one, building up 0.01; then 0.01 0.12; and so on.]
Final sorted output:
0.01 0.12 0.16 0.23 0.28 0.35 0.45 0.49 0.54 0.57 0.62 0.9
Algorithm (general range):
1. Find the max and min elements of the array.
2. Calculate the range of each bucket:
Range = (max - min) / n    // n is the number of buckets
3. Create n buckets of the calculated range.
4. Distribute the elements into the buckets using:
Bucket index = (arr[i] - min) / range    // clamp the max element into the last bucket
5. Sort each bucket individually.
6. Concatenate the sorted elements from the buckets back into the original array.
Example:
Input array: 9.6, 0.5, 10.5, 3.04, 1.2, 5.4, 8.6, 2.47, 3.24, 1.28, with number of buckets = 5.
Max = 10.5
Min = 0.5
Range = (10.5 - 0.5) / 5 = 2
Using bucket index = (arr[i] - min) / range, the elements 0.5, 1.2, 2.47 and 1.28 fall into bucket 0; 3.04 and 3.24
into bucket 1; 5.4 into bucket 2; and 9.6, 8.6 and 10.5 into bucket 4 (10.5 is clamped into the last bucket).
Sorting each bucket and concatenating gives: 0.5, 1.2, 1.28, 2.47, 3.04, 3.24, 5.4, 8.6, 9.6, 10.5.
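The following C sketch mirrors the worked example above (min/max scan, range computation, clamping of the maximum into the last bucket); the bucket capacity and the output formatting are illustrative assumptions.

#include <stdio.h>

#define NB 5      /* number of buckets, as in the example */
#define CAP 16

int main(void) {
    double a[] = {9.6, 0.5, 10.5, 3.04, 1.2, 5.4, 8.6, 2.47, 3.24, 1.28};
    int n = sizeof a / sizeof a[0];
    double min = a[0], max = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] < min) min = a[i];
        if (a[i] > max) max = a[i];
    }
    double range = (max - min) / NB;            /* (10.5 - 0.5) / 5 = 2 */

    double bucket[NB][CAP];
    int size[NB] = {0};
    for (int i = 0; i < n; i++) {
        int b = (int)((a[i] - min) / range);
        if (b == NB) b = NB - 1;                /* clamp the max into the last bucket */
        bucket[b][size[b]++] = a[i];
    }
    /* Insertion-sort each bucket and concatenate back into a[]. */
    int k = 0;
    for (int b = 0; b < NB; b++) {
        for (int i = 1; i < size[b]; i++) {
            double key = bucket[b][i];
            int j = i - 1;
            while (j >= 0 && bucket[b][j] > key) { bucket[b][j + 1] = bucket[b][j]; j--; }
            bucket[b][j + 1] = key;
        }
        for (int i = 0; i < size[b]; i++) a[k++] = bucket[b][i];
    }
    for (int i = 0; i < n; i++) printf("%.2f ", a[i]);  /* 0.50 1.20 ... 10.50 */
    printf("\n");
    return 0;
}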
University Questions:
Q1. What is a recurrence relation? How is a recurrence solved using the master theorem? (5 marks)
Q2. What is asymptotic notation? Explain the omega (Ω) notation. (5 marks)
Q3. Solve the recurrence T(n) = 4T(n/2) + n². (2 marks)
Q4. Explain how the performance of an algorithm is analyzed. (2 marks)
Q5. Write an algorithm for counting sort. Illustrate the operation of counting sort on the following array:
A = {2, 5, 3, 0, 2, 3, 0, 3}. (7 marks)
Q6. Solve the following recurrence relations: (10 marks)
i. T(n) = T(n-1) + n⁴
ii. T(n) = T(n/4) + T(n/2) + n²
Q7. Write an algorithm for insertion sort. Find the time complexity of insertion sort in all cases. (5 marks)
Q8. Write the Merge Sort algorithm and sort the following sequence {23, 11, 5, 15, 68, 31, 4, 17} using merge
sort. (10 marks)
Q9. What do you understand by stable and unstable sorting? Sort the following sequence {25, 57, 48,
36, 12, 91, 86, 32} by using heap sort. (10 marks)
Q10. Discuss the basic steps in the complete development of an algorithm. (2 marks)
Q11. Explain and compare the best and worst time complexity of Quick Sort. (5 marks)
Q12. Rank the following by growth rate: (2 marks)
Q15. (7 marks)
Q16. The recurrence T(n) = 7T(n/2) + n² describes the running time of an algorithm A. A competing
algorithm A′ has a running time of T′(n) = aT′(n/4) + n². What is the largest integer value of a for which A′ is
asymptotically faster than A? (7 marks)
Q17. Write an algorithm for bubble sort. Illustrate the operation of bubble sort on the following array:
{20, 14, 12, 15, 25, 30, 6}. Also explain the condition under which the time complexity of bubble sort will be
O(n). (Write pseudocode.) (10 marks)
Q18. What is the advantage of binary search over linear search? Also state the limitations of binary search.
(5 marks)
Q19. Write an algorithm for shell sort. Also explain why it is called an extension of insertion sort, using an
example. (5 marks)
Q20. Explain growth of functions. Mention all the asymptotic notations. How are the running time and
complexity of an algorithm related to each other? Elucidate with the help of asymptotic notations.
(7 marks)
Q21. Explain and write bucket sort. Write complexity and draw step by step execution with appropriate
data structure to illustrate the operation of BUCKETSORT on the array A = {.79, .13, .16, .64,
.39, .20, .89, .53, .71, .42}. (10 marks)
Q22. Explain and write radix sort. Write complexity and draw step by step execution with appropriate
data structure to illustrate the operation of RADIXSORT on the list of English words {COW,DOG,
SEA, RUG, ROW, MOB, BOX, TAB, BAR, EAR, TAR,DIG, BIG, TEA, NOW, FOX}. (10 marks)
Q24. Let f(n) and g(n) be asymptotically positive functions. Prove or disprove the following conjecture:
b. f(n) + g(n) = Θ(min(f(n), g(n))) (5 marks)
Q25. Write the Master method for solving recurrence relations of different types. (2 marks)
Q26. Prove that building of MAX HEAP takes linear time. (5 marks)
Q27. What is the effect of calling Max-Heapify(A, i) when the element A[i] is larger than its children?
(5 marks)
Q28. Which of the following sorting algorithms are stable: insertion sort, merge sort, heap sort, and
quick sort? Argue that any comparison-based sorting algorithm can be made stable without
affecting the running time by more than a constant factor. (10 marks)
Q29. Why do we want the loop index i in line 2 of BUILD-MAX-HEAP to decrease from ⌊length[A]/2⌋ to
1 rather than increase from 1 to ⌊length[A]/2⌋? (10 marks)
Q30. Suppose that the splits at every level of quicksort are in the proportion 1 - α to α, where 0 < α ≤
1/2 is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately
-lg n / lg α and the maximum depth is approximately -lg n / lg(1 - α). (Don't worry about integer
round-off.) (10 marks)
Q31. Suppose that the for loop header in line 9 of the COUNTING-SORT procedure is rewritten as
for j ← 1 to length[A]. Show that the algorithm still works properly. Is the modified algorithm stable?
(10 marks)