
Unit:1

Foundation of Analysis of
Algorithm 4 Hrs
Instructor: Tekendra Nath Yogi
[email protected]
College Of Applied Business And Technology
Contents
• Introduction to algorithm

• Properties of algorithms

• Design and Analysis of algorithm

• Time and Space Complexity

• Best, average and worst case complexity

• Space-time tradeoff

• Types of Analysis

• RAM model

• Detailed analysis of algorithms

7/30/2021 By: Tekendra Nath Yogi 2


Algorithm
• Solving problems

– Get me from home to college

– Find out simple interest based on time, principal and interest rate.

– Graduate from TU

• To solve problems we have procedures, recipes, process descriptions – in one word: algorithms



Algorithm
• An algorithm is a sequence of unambiguous instructions for solving a
problem in a finite amount of time.

problem

algorithm

input “computer” output

Figure: The notion of algorithm

• Program = Algorithm + Data structure



Algorithm-- Properties
1. Input(s)/output(s): There must be some inputs from the standard set of inputs, and the algorithm's execution must produce output(s).

2. Definiteness: Each step must be clear and unambiguous.

3. Finiteness: The algorithm must terminate after a finite number of steps.

4. Correctness: The correct set of output values must be produced from each correct set of inputs.

5. Effectiveness: Each step must be carried out in finite time.



Algorithm--Design and analysis
• Designing:
– Creating a set of computational steps for problem solving.
– Only studying existing algorithms limits us to those algorithms and to selection among them.
– Knowledge of design helps to improve performance by using different design techniques (e.g., divide and conquer, greedy, dynamic programming, etc.).

• Analysis:
– Predicting the resources (memory, time) required to run the algorithm.
– Multiple algorithms exist for solving the same problem, e.g., sorting.
– Analysis helps us determine which of them is efficient in terms of time and space consumed.
– Choose the most efficient algorithm for problem solving.



Algorithm--Design and analysis
• Example Problem: compute the product of two positive integers.

Algorithm 1: Product(m, n)
• Input: m > 0 and n > 0
• Output: the product of m and n
• Steps:
  1. product = 0
  2. For (i = 1; i <= m; i++) do
         product = product + n
  3. Return product

Algorithm 2: Product(m, n)
  1. If m > n then swap(m, n)
  2. product = 0
  3. For (i = 1; i <= m; i++) do
         product = product + n
  4. Return product

• Here algorithm 1 always makes m additions, whatever the value of n, whereas algorithm 2 always makes min(m, n) additions.

• So we can say that algorithm 2 is more efficient than algorithm 1, and we choose algorithm 2 to solve our problem of multiplying two integers.
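The comparison above can be sketched in runnable form (Python used for illustration; the function names are ours), instrumenting each algorithm to count its additions:

```python
def product1(m, n):
    """Algorithm 1: always performs m additions."""
    product, additions = 0, 0
    for _ in range(m):
        product += n
        additions += 1
    return product, additions

def product2(m, n):
    """Algorithm 2: swaps first, so the loop runs min(m, n) times."""
    if m > n:
        m, n = n, m
    product, additions = 0, 0
    for _ in range(m):
        product += n
        additions += 1
    return product, additions
```

For m = 1000, n = 3 both return 3000, but algorithm 1 uses 1000 additions while algorithm 2 uses only 3.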
Algorithm Analysis

• There are two dimensions along which to analyze the performance of an algorithm:

1. Space complexity
– How much space is required.

2. Time complexity
– How much time does the computer take to run the algorithm?



Algorithm Analysis-- Space complexity
• The space complexity of an algorithm is the number of elementary objects that the algorithm needs to store during its execution.
• Example: Array_Max(a[1..n], n)
– Max = a[1]
– For i = 2 to n do
  • If a[i] > Max, then
  • Max = a[i]
– Return Max
• This algorithm finds and returns the largest element in the given array of size n.
• Here the storage requirements are the n elements of the array (a), the array size (n), the array index (i), the variable Max, and the instruction space.
• Therefore, the total required space is (n + 3 + instruction space).
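The Array_Max routine above can be written as a runnable sketch (Python used for illustration):

```python
def array_max(a):
    """Return the largest element of a non-empty list a.

    Besides the input list itself, only the loop index and the
    running maximum are stored, so the extra space is constant.
    """
    maximum = a[0]
    for i in range(1, len(a)):
        if a[i] > maximum:
            maximum = a[i]
    return maximum
```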



Algorithm Analysis-- Time complexity
• The time required to solve a problem.

• Often more important than space complexity:
– space available for computer programs tends to be larger and larger
– time is still a problem for all of us

• Time complexity is measured in the following three different cases:
– Worst case,
– Best case, and
– Average case.



Algorithm Analysis-- Time complexity
• Best case complexity:
– Minimum amount of time (number of steps) an algorithm takes to finish the task.
– Gives a lower bound on the running time of the algorithm for any instance of input(s).
– This indicates that the algorithm can never have a lower running time than the best case for a particular class of problems.

• Average case complexity:
– Average amount of time (number of steps) required by the algorithm over all instances of the input(s).

• Worst case complexity:
– The maximum amount of time (number of steps) an algorithm takes to finish the task.
– Gives an upper bound on the running time of the algorithm for all instances of the input(s).
– This ensures that no input can exceed the running time limit posed by the worst case complexity.
– Generally, we seek upper bounds on the running time, because everybody likes a guarantee.



Algorithm Analysis-- Time complexity

Figure: Graphical representation of Best case, worst case and average case



Algorithm Analysis-- Time complexity
• Example: to illustrate best, worst and average case
• Algorithm: SequentialSearch(A[0..n-1], search key k)
1. i = 0
2. while ((i < n) && (A[i] != k)) do
       i = i + 1
3. if (i < n) then return i
4. else return -1

• The above algorithm searches for a given value k in a given array A[0..n-1] and returns the index of the first element in A that matches k, or returns -1 if there is no matching element.

• Worst case: item found at the end (or not found at all): n comparisons
• Best case: item found at the beginning: one comparison
• Average case: item may be found at index 0, or 1, or 2, ..., or n - 1
– Average number of comparisons is: (1 + 2 + ... + n) / n = (n + 1) / 2
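The three cases can be checked concretely with a runnable sketch (Python used for illustration) that also counts the key comparisons:

```python
def sequential_search(a, k):
    """Return (index of first match or -1, number of key comparisons)."""
    comparisons = 0
    for i in range(len(a)):
        comparisons += 1          # one comparison of A[i] with k
        if a[i] == k:
            return i, comparisons
    return -1, comparisons
```

Searching for a key sitting at index 0 costs one comparison (best case); a key at the last index, or a missing key, costs n comparisons (worst case).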
Algorithm Analysis-- Space time tradeoff
• We may be able to reduce space requirements by increasing the
run time, or conversely reduce time requirements by increasing
memory space.

• This situation is referred to as the space time Tradeoff.



Algorithm Analysis-- Space time tradeoff
• Example: swapping of two numbers

Algorithm 1: using a temp variable
swap()
{
    int a = 10, b = 20, temp;
    output a and b;
    temp = a;
    a = b;
    b = temp;
    output a and b;
}

Algorithm 2: without a temp variable
swap()
{
    int a = 10, b = 20;
    output a and b;
    a = a + b;
    b = a - b;
    a = a - b;
    output a and b;
}

In the above algorithms for swapping two numbers, algorithm 1 uses three integer variables, so (assuming 2-byte integers) it takes 3 * 2 = 6 bytes of memory. Algorithm 2 uses two integer variables a and b, so it takes 2 * 2 = 4 bytes of memory.
But the time complexity of algorithm 2 is higher than that of algorithm 1: algorithm 1 performs just five assignment operations, while algorithm 2 performs the same five assignments plus three additional arithmetic operations. This shows the space-time tradeoff.
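Both swap strategies can be sketched in runnable form (Python used for illustration; the function names are ours):

```python
def swap_with_temp(a, b):
    temp = a      # extra variable: more space, fewer operations
    a = b
    b = temp
    return a, b

def swap_arithmetic(a, b):
    a = a + b     # no extra variable: less space,
    b = a - b     # but three extra arithmetic operations
    a = a - b
    return a, b
```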
Algorithm Analysis-- Types of analysis

• The efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation:

– Prior analysis, and

– Posterior analysis



Algorithm Analysis-- Posterior Analysis
• It is also known as experimental analysis. It is performed as follows:
– Write a program implementing the algorithm.
– Run the program with inputs of varying size and composition.
– Get an accurate measure of the actual running time.
– Plot the results.

Figure: plot of actual running time (ms) against input size

• Limitations:
– It is necessary to implement the algorithm, which may be difficult.
– Results may not be indicative of the running time on other inputs not included in the experiment.
– In order to compare two algorithms, the same hardware and software environments must be used.

So, experimental (posterior) analysis is not good!
Algorithm Analysis-- Prior(Theoretical) Analysis
• Means doing an analysis of an algorithm before running it on the system.
– Uses a high-level description of the algorithm instead of an implementation.
– Characterizes running time as a function of the input size, n.
– Takes into account all possible inputs.
– Allows us to evaluate the performance of an algorithm independent of the hardware/software environment.
– So, it is a good choice for the analysis of the algorithm!

• How to perform prior analysis:
– By counting the number of operations (steps) performed by the algorithm, or
– By asymptotic analysis
Algorithm Analysis– RAM Model
• The RAM model is the base model for analyzing any algorithm, allowing design and analysis in a machine-independent scenario.
• This model assumes:
– Every algorithm is implemented on a machine with random-access memory and a single processor. So, in the RAM model, instructions are executed one after another, with no concurrent operations.
– Each basic operation (+, -, *, =, etc.) takes 1 step; loops and subroutines are not basic operations.
– Each memory reference is 1 step.

• We measure the run time of an algorithm by counting the steps.



Algorithm Analysis– RAM Model
• Drawbacks:
– Poor assumption: we assumed that each basic operation takes constant time, i.e., the model allows adding, multiplying, comparing, etc. of two numbers of any length in constant time.
– Addition of two numbers takes a unit time!
  • Not good, because numbers may be arbitrarily long.
– Addition and multiplication both take unit time!
  • Again a very bad assumption.

• But with all these weaknesses, the RAM model is not so bad, because we have to give
– a comparative, not an absolute, analysis of any algorithm, and
– we have to deal with large inputs, not small ones.

• The model seems to work well, describing the computational power of modern nonparallel machines.
Prior analysis by Counting Operations(steps)
• Instead of measuring the actual timing, we count the number of operations
– Operations: arithmetic, assignment, comparison, etc.

• Counting an algorithm's operations is a way to assess its efficiency:
– An algorithm's execution time is related to the number of operations it requires.

• How many operations are required?



Prior analysis by Counting Operations(steps)
• Knowing the number of operations required by the algorithm, we can state that
– Algorithm X takes 2n² + 100n operations to solve a problem of size n.

• If the time t needed for one operation is known, then we can state
– Algorithm X takes (2n² + 100n)·t time units.

• However, time t is directly dependent on factors such as the language, compiler and computer.

• Instead of tying the analysis to the actual time t, we can state
– Algorithm X takes time that is proportional to 2n² + 100n for solving a problem of size n.



Prior analysis by Approximation of Analysis Results
• Suppose the time complexity of
– Algorithm A is 3n² + 2n + log n + (¼)n
– Algorithm B is 0.39n³ + n

• Intuitively, we know Algorithm A will outperform B.

• When solving a larger problem, i.e., larger n, the dominating terms 3n² and 0.39n³ tell us approximately how the algorithms perform.

• The terms n² and n³ are even simpler and preferred.

• These terms can be obtained through asymptotic analysis.



Asymptotic Analysis
• The asymptotic analysis of algorithms is a means of comparing the relative performance of algorithms.

• Asymptotic analysis of algorithms focuses on:
– Analyzing problems of large input size (n).
– Considering only the leading term of the formula.

Examples: leading terms:

• a(n) = ½ n + 4
– Leading term: ½ n

• b(n) = 240n + 0.001n²
– Leading term: 0.001n²

– Ignore the coefficient of the leading term.


Asymptotic Analysis
• Why Choose the Leading Term?

– Lower order terms contribute less to the overall cost as the input grows larger.

– Example:
  • f(n) = 2n² + 100n
  • f(1000) = 2(1000)² + 100(1000) = 2,000,000 + 100,000
  • f(100000) = 2(100000)² + 100(100000) = 20,000,000,000 + 10,000,000

– Hence, lower order terms can be ignored.
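A quick numeric sketch (Python used for illustration; names are ours) of how the lower-order term's share of the total shrinks:

```python
def f(n):
    return 2 * n**2 + 100 * n

# fraction of the total contributed by the lower-order term 100n
share_1000 = (100 * 1000) / f(1000)          # about 4.8%
share_100000 = (100 * 100000) / f(100000)    # about 0.05%
```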



Asymptotic Analysis
• Why Ignore the Coefficient of the Leading Term?
– Suppose two algorithms have 2n² and 30n² as their leading terms, respectively.

– Although the actual times will differ due to the different constants, the growth rates of the running times are the same.

– Compared with another algorithm whose leading term is n³, the difference in growth rate is a much more dominating factor.

– Hence, we can drop the coefficient of the leading term when studying algorithm complexity.



Analysis of algorithm– Example1
• Analyze the following algorithm to compute the average of n numbers.

• Algorithm:
1. n = read input size
2. Sum = 0
3. i = 0
4. While (i < n) repeat steps 5 to 7
5.     Num = read input
6.     Sum = Sum + Num
7.     i = i + 1
8. Avg = Sum / n
9. Print Avg

Number of steps required to execute this algorithm:

Step    Execution time
1       1
2       1
3       1
4       n + 1
5       1 * n
6       2 * n
7       2 * n
8       2
9       1

Total steps = 6n + 7
Therefore, T(n) = O(n)
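A runnable Python sketch of the averaging algorithm above (the table tallies the steps; the code shows the single loop responsible for the linear growth):

```python
def average(values):
    n = len(values)       # read input size
    total = 0             # Sum = 0
    for v in values:      # loop body runs n times
        total += v        # Sum = Sum + Num
    return total / n      # Avg = Sum / n
```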
Analysis of algorithm– Example2
• Analyze the following algorithm to find the smallest element in an array.

• Algorithm:                              Time
1. Set min = A[0]                         1
2. min_index = 0                          1
3. For (i = 1; i < n; i++)                (1 + n + (n - 1))
   1. If (A[i] < A[min_index])            n - 1
   2.     Set min_index = i               n - 1
4. Return A[min_index]                    1

T(n) = 4n + 1
     = O(n)



Analysis of algorithm– Example3

• Make the analysis of the following code.

• Statements                              Time (repetitions)
1. Sum = 0                                1
2. For (int i = 1; i <= n; i++)           (1 + (n + 1) + n)
3.     Sum = Sum + (i * i * i)            4 * n
4. Return Sum                             1

Total time T(n) = 6n + 4
               = O(n)



Analysis of algorithm– Example 4
• Analyze the following code.

• Statements                              Time
void main()                               -
{                                         -
    x = y + z;                            2
    for (i = 1; i <= n; i++)              (1 + (n + 1) + n)
    {                                     -
        x = y + z;                        2 * n
    }                                     -
    for (i = 1; i <= n; i++)              (1 + (n + 1) + n)
    {                                     -
        for (j = 1; j <= n; j++)          n * (1 + (n + 1) + n)
        {                                 -
            x = y + z;                    2 * n * n
        }                                 -
    }                                     -
}                                         -

Total running time T(n) = 4n² + 8n + 6

Therefore, asymptotic complexity = O(n²)
Analysis of algorithm– Example 5

• Analyze the following code.

• Steps                                   Repetitions
1. Sum = 0                                1
2. For (i = 1; i <= n; i++)               (1 + (n + 1) + n)
3.     For (j = 1; j <= n; j++)           n * (1 + (n + 1) + n)
4.         Sum++                          1 * (n * n)

Total steps T(n) = 3n² + 4n + 3

Therefore, time complexity = O(n²)



Analysis of algorithm– Example 6
• Find the detailed analysis of the following factorial algorithm.

factorial()                               Steps
{
    n = read a number                     1
    fact = 1                              1
    for (i = 1; i <= n; i++)              (1 + (n + 1) + n)
    {
        fact = fact * i                   2 * n
    }
    output fact                           1
}

Time complexity T(n) = 1 + 1 + 1 + (n + 1) + n + 2n + 1 = 4n + 5 = O(n)
Space complexity = total memory references = 3 = O(1)
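The factorial algorithm above, as a runnable Python sketch:

```python
def factorial(n):
    fact = 1
    for i in range(1, n + 1):  # loop runs n times
        fact *= i              # one multiplication + one assignment per pass
    return fact
```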



Guidelines for Asymptotic Analysis
• Several different algorithms may solve the same problem.

• Decide which one is best suited for your application.

• An essential tool for this purpose is the analysis of algorithms.

• Only after you have determined the efficiency of the various algorithms will you be able to make a well-informed decision.

• But there is no magic formula for analyzing the efficiency of algorithms. It is largely a matter of judgment, intuition and experience.

• Nevertheless, there are some techniques that are often useful, such as the following:



Guidelines for Asymptotic Analysis--For loop
• We know that the for loop is used to execute the same set of instructions a fixed number of times.
• So, the running time of a for loop is, at most, the running time of the statements inside the loop multiplied by the number of iterations.
• If the set of instructions inside the for loop takes T1 time and the instructions are executed n times, then the total execution time of the for loop is T1 * n.

Example:

// executes n times
for (i = 1; i <= n; i++)
{
    m = m + 2; // constant time, c
}

Total time = a constant c * n = cn = O(n)



Guidelines for Asymptotic Analysis-- Nested For loops
• The total running time of a nested for loop is equal to the "execution time of the statements inside the loops * the product of the sizes of all the loops".
• If the set of statements inside the loops takes T1 time, the inner loop executes n times, and the outer loop executes m times, then the total execution time of the nested for loops is T1 * n * m.
• Therefore, the total running time is proportional to the product of the sizes of all the nested loops.
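The rule above can be sketched concretely (Python used for illustration; names are ours), counting how many times a constant-time body runs inside two nested loops:

```python
def nested_loop_steps(m, n):
    steps = 0
    for _ in range(m):        # outer loop: m iterations
        for _ in range(n):    # inner loop: n iterations per outer pass
            steps += 1        # constant-time body
    return steps              # body runs m * n times in total
```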



Guidelines for Asymptotic Analysis-- Sequencing(Consecutive statements)
• Let p1 and p2 be two fragments of an algorithm.
• Let T1 and T2 be the times taken by p1 and p2 respectively.
• The sequencing rule says that the time required to compute "p1; p2", that is, first p1 and then p2, is simply T1 + T2.
• By the maximum rule, this time is in O(max(T1, T2)).

• Example:

x = x + 1;                   // constant time
// loop executed n times
for (i = 1; i <= n; i++)
{
    m = m + 2;               // constant time
}
// outer loop executed n times
for (i = 1; i <= n; i++)
{
    // inner loop executed n times
    for (j = 1; j <= n; j++)
    {
        k = k + 1;           // constant time
    }
}

• Total time T(n) = 1 + n + n²
• Using the maximum rule, asymptotic time complexity = O(n²)

Presented By: Tekendra Nath Yogi 36


Guidelines for Asymptotic Analysis-- If – Else statement
• In such statements one segment of the algorithm is executed depending on the validity of the condition.

• If the condition of the if-else statement is true, then the if part is executed; otherwise, the else part is executed.

• Therefore, running time = T(condition check) + max(T(if), T(else))

Example:
if (condition)                 // T1 = O(1)
{
    // set of statements       // T2 = O(n)
}
else
{
    // set of statements       // T3 = O(n²)
}
Total execution time = T1 + max(T2, T3) = O(n²)


Guidelines for Asymptotic Analysis– Logarithmic complexity

• Note: similarly, for the loop below, the worst case rate of growth is also O(log n). That means the same discussion holds good for a decreasing sequence too:

for (i = n; i >= 1;)
    i = i / 2;
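A minimal runnable sketch of this decreasing sequence (Python used for illustration), counting how many halvings it takes to reach 1; the count grows as log₂ n:

```python
def halving_iterations(n):
    """Count iterations of: for (i = n; i >= 1; i = i / 2)."""
    count = 0
    i = n
    while i >= 1:
        count += 1
        i //= 2       # integer halving
    return count      # roughly log2(n) + 1 for n >= 1
```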



Homework

• What is an algorithm? Explain the various characteristics of an algorithm.

• What is the main difference between algorithms and programs?

• Why do we study design and analysis of algorithms?

• Explain the worst case, best case and average case of algorithm analysis with an example.

• Describe the best case and worst case complexity of an algorithm. Write an algorithm for insertion sort and estimate its best and worst case complexity.

• Why design and analysis of algorithms? Write an iterative algorithm for Fibonacci number calculation and analyze the algorithm.



Thank You !



Unit:1.2
Mathematical background

Instructor: Tekendra Nath Yogi


[email protected]
College Of Applied Business And Technology
Contents
• Asymptotic Notations:
– Big Oh notations

– Big omega notations, and

– Big theta notations

• Solving recurrences:
– Substitution method

– Iterative expansion method

– Recursion tree method

– Master method



Asymptotic notations
• Asymptotic notations are a shorthand way to express the efficiency of an algorithm.

• Why?
– They give a simple characterization of an algorithm's efficiency.
– They allow the comparison of the performance of various algorithms.

• The following 3 asymptotic notations are mostly used to represent the efficiency of algorithms:

1) Big O notation

2) Big Ω notation

3) Big Θ notation



Asymptotic notations--Big –oh notation(O)
• Used to express the worst case complexity (maximum resource that can be utilized by the algorithm) of algorithms.

• i.e., used to express the upper bound on the growth rate of an algorithm's efficiency.

• The growth rate of an algorithm is the efficiency of the algorithm in relation to the input size of the problem.



Asymptotic notations--Big –oh notation(O)
• Let f(n) be the growth rate of the algorithm's efficiency and g(n) be an arbitrary function, called the bounding function, that bounds the set of complexities.

• Then the worst case complexity of the algorithm can be expressed as:

f(n) = O(g(n))  (read as "f(n) is big oh of g(n)")

• This means that f(n) belongs to O(g(n)), i.e., the growth rate of the algorithm belongs to the set of growth rates which are bounded above by c*g(n).



Asymptotic notations--Big –oh notation(O)
• Formally, for given functions f(n) and g(n):

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }

• i.e., the efficiency of an algorithm is O(g(n)) if, whenever the input size equals or exceeds some threshold n0, its efficiency can be bounded above by c*g(n).
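The definition can be illustrated numerically (not a proof; Python used for illustration). For f(n) = 2n² + 100n and g(n) = n², the witnesses c = 3 and n0 = 100 are our own choice, justified by 100n <= n² for n >= 100:

```python
def f(n):
    return 2 * n**2 + 100 * n

def g(n):
    return n**2

def witnesses_hold(c, n0, upto=5000):
    """Check 0 <= f(n) <= c*g(n) for every n in [n0, upto)."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, upto))
```

With c = 3 the bound fails for small n (e.g. n = 1) but holds from n0 = 100 onward, which is exactly what the threshold n0 in the definition allows.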



Asymptotic notations--Big –oh notation(O)
• Graphical representation of Big- O notation: Figure below shows the basic
intuition behind the big-O notation



Asymptotic notations--Big-omega notation(Ω)
• Used to express the best case complexity (minimum resource that can be taken by the algorithm) of algorithms.

• i.e., used to express the lower bound on the growth rate of an algorithm's running time.

• Let f(n) be the growth rate of the algorithm's efficiency and g(n) be an arbitrary function.

• Then the best case complexity of the algorithm can be expressed as:

f(n) = Ω(g(n))  (read as "f(n) is big omega of g(n)")

• This means that f(n) belongs to Ω(g(n)), i.e., the growth rate of the algorithm belongs to the set of growth rates which are bounded below by c*g(n).



Asymptotic notations--Big-omega notation(Ω)
• Formally, for given functions f(n) and g(n):

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }

• i.e., the efficiency of an algorithm is Ω(g(n)) if, whenever the input size equals or exceeds some threshold n0, its running time can be bounded below by c*g(n).



Asymptotic notations--Big-omega notation(Ω)
• Figure below shows the basic intuition behind the big-omega notation

• From the figure we can say that for all n at or to the right of n0, the value of
f(n ) is on or above cg(n).



Asymptotic notations--Big- theta notation( Θ)
• Big theta notation is used here to express the average case complexity of the algorithm; more precisely, it expresses a tight bound, bounding the growth rate both above and below.

• Let f(n) be the growth rate of the algorithm's efficiency and g(n) be an arbitrary function.

• Then the average case complexity of the algorithm can be expressed as:

f(n) = Θ(g(n))  (read as "f(n) is big theta of g(n)")

• This means that f(n) belongs to Θ(g(n)), i.e., the growth rate of the algorithm belongs to the set of growth rates which are bounded above by c2*g(n) and below by c1*g(n).



Asymptotic notations--Big- theta notation( Θ)
• Formally, for given functions f(n) and g(n):

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }

• i.e., the efficiency of an algorithm is Θ(g(n)) if, whenever the input size equals or exceeds some threshold n0, its running time can be bounded above by c2*g(n) and below by c1*g(n).



Asymptotic notations--Big- theta notation( Θ)
• Graphical representation of big theta notation: the following figure shows the basic intuition behind the big theta notation.



Comparison of functions
• Many of the relational properties of real numbers apply to asymptotic comparisons as well.
• For the following, assume that f(n) and g(n) are asymptotically positive.
• Transitivity: if f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n)); the same holds for O and Ω.

• Analogy between asymptotic notations and the comparison of two real numbers a and b: f(n) = O(g(n)) is like a <= b, f(n) = Ω(g(n)) is like a >= b, and f(n) = Θ(g(n)) is like a = b.



Mathematical Foundation

• Mathematics can provide a clear view of an algorithm.

• Understanding the concepts of mathematics is an aid in the design and analysis of good algorithms.

• Some of the mathematical concepts that are helpful in our study are as follows.



Mathematical Foundation
• Polynomials:
– Given a nonnegative integer d, a polynomial in n of degree d is a function p(n) of the form

      p(n) = a_d*n^d + a_(d-1)*n^(d-1) + ... + a_1*n + a_0

– where the constants a_0, a_1, ..., a_d are the coefficients of the polynomial and a_d != 0.

– A polynomial is asymptotically positive if and only if a_d > 0.

– Example: p(n) = 100 - 20n + 2n² is an asymptotically positive function, and p(n) = 100 - 20n - 2n² is an asymptotically negative function.



Mathematical Foundation
• Exponentials:
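The content of this slide is not preserved in this copy; as an assumed reconstruction, the standard exponent identities such a summary lists (for real a > 0 and real m, n) are:

```latex
a^0 = 1, \qquad a^1 = a, \qquad a^{-1} = \frac{1}{a}, \qquad
(a^m)^n = a^{mn}, \qquad a^m a^n = a^{m+n}
```

A useful growth fact: for all real constants a > 1 and b, n^b = o(a^n), i.e., every exponential with base greater than 1 eventually grows faster than every polynomial.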



Mathematical Foundation
• Logarithms:
– We shall use the following notations:
– lg n = log₂ n (binary logarithm), ln n = log_e n (natural logarithm), where e denotes 2.71828..., the base of the natural logarithm function.



Mathematical Foundation
• Logarithms:

• where, in each equation above, the logarithm base is not 10. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts.



Mathematical Foundation
• Arithmetic Series:
– A series such as 3 + 7 + 11 + 15 + ··· + 99 or 10 + 20 + 30 + ··· + 1000, which has a constant difference between terms.

– The first term is a1, the common difference is d, and the number of terms is n. The sum of an arithmetic series is found by multiplying the number of terms by the average of the first and last terms.

– Formula: S_n = n*(a1 + a_n)/2, or equivalently S_n = (n/2)*[2*a1 + (n - 1)*d]

– Example: 3 + 7 + 11 + 15 + ··· + 99 has a1 = 3 and d = 4.

– To find n, use the explicit formula for an arithmetic sequence: we solve 3 + (n - 1)*4 = 99 to get n = 25.

– So S_25 = 25*(3 + 99)/2 = 1275.
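The formula can be checked with a short sketch (Python used for illustration; the function name is ours):

```python
def arithmetic_series_sum(a1, d, n):
    """Sum of n terms of an arithmetic series: n * (first + last) / 2."""
    last = a1 + (n - 1) * d
    return n * (a1 + last) // 2
```

For the example series 3 + 7 + ... + 99 this gives 25 * (3 + 99) / 2 = 1275.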
Mathematical Foundation
• Geometric Series:
– A series such as 3 + 1 + 1/3 + 1/9 + 1/27 + 1/81, which has a constant ratio between terms. The first term is a1, the common ratio is r, and the number of terms is n.
– The sum of the first n terms (for r != 1) is S_n = a1*(1 - r^n)/(1 - r).


Mathematical Foundation
• Series:



Unit:1.3
Recurrence Relations

Instructor: Tekendra Nath Yogi


[email protected]
College Of Applied Business And Technology
Recurrence Relation
• A recurrence is an equation or inequality that describes a function in terms of itself (its value on smaller inputs).

• Recurrence relations are used to describe the complexity of algorithms when an algorithm contains a recursive call to itself (particularly divide and conquer algorithms).

• By solving the recurrence relation, we can determine the complexity (time or space) of the algorithm.


Forms of recurrences

• Recurrence relation as an equation:

– E.g., T(n) = 2T(n/2) + Θ(n)

– When a recurrence is expressed as an equation, we generally use big theta notation to express the solution of the recurrence (the complexity of the algorithm).

– We can also express the solution in big oh and big omega notation, because big theta is the combination of both.



Forms of recurrences

• Recurrences as inequalities:
– T(n) <= 2T(n/2) + O(n)
  • Because such a recurrence relation states only an upper bound on T(n), we express its solution using big oh notation.

– T(n) >= 2T(n/2) + O(n)
  • Because this recurrence gives only a lower bound on T(n), we express its solution using big omega notation.



How to write recurrence relation for recursive algorithms
1. Recurrence relation for an algorithm that divides the problem into sub-problems of equal size: for such algorithms a recurrence relation can be expressed in the form
   T(n) = aT(n/b) + f(n)
   where
   n = size of the original instance of the problem
   a = number of sub-problems
   n/b = size of each sub-problem
   f(n) = function of n that expresses the time needed to divide and combine
   E.g., T(n) = 2T(n/2) + O(n)


How to write recurrence relation for recursive algorithms

2. Recurrence relation for an algorithm that divides the problem into sub-problems of unequal sizes:

   E.g., a recursive algorithm that divides the problem into 2/3 and 1/3 of the original size, where the divide and combine steps take linear time, has a running time expressed by the following recurrence relation:

   T(n) = T(2n/3) + T(n/3) + O(n)



How to write recurrence relation for recursive algorithms

Note: we neglect the following technical details when we state and solve recurrences:

1. Omit floors and ceilings

   Actual: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n)

   After omitting: T(n) = 2T(n/2) + Θ(n)

2. Omit the boundary conditions



Solving Recurrence Relations

• There is no single method that solves all types of recurrences, so solving recurrence relations is an art.

• There are many methods to solve such relations, based on the nature of the recurrence relation. Some of them are:
– Iteration method

– Recursion tree method

– Substitution method

– Master method



Solving Recurrence Relations-- Iterative expansion method

• In this method, the given recurrence relation is expanded iteratively up to the boundary condition (base case).

• The series obtained by the expansion is summed to obtain the big oh estimate of the recurrence.
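As a worked sketch of the method, consider the familiar recurrence T(n) = 2T(n/2) + n with T(1) = 1, assuming n is a power of 2:

```latex
T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = \cdots
     = 2^k\,T(n/2^k) + k\,n
```

The expansion stops at the base case when n/2^k = 1, i.e., k = log₂ n, giving T(n) = n·T(1) + n·log₂ n = O(n log n).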



Solving Recurrence Relations-- Recursion Tree method

• The recursion tree is a pictorial(tree) representation of iterative method.


• In a recursion tree, each node represents the cost of a single sub-problem
somewhere in the set of recursive function invocations.
• How to solve recurrence?
– In this method, the cost term of the given recurrence relation is expanded into a
tree, and the node costs at each level are summed.
– The total over all levels gives the solution (big-oh
estimate) of the given recurrence relation.
• It is usually used to generate a good guess, which can then be verified by the substitution method.
• However, if the recursion tree is drawn carefully and the summation is determined
properly, the recursion tree can serve as a direct proof of a solution to a
recurrence.
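For instance, the level-by-level sums for T(n) = 2T(n/2) + cn (an assumed illustration) are:

```latex
\begin{aligned}
\text{level } 0 &: cn\\
\text{level } 1 &: 2\cdot c\,\tfrac{n}{2} = cn\\
\text{level } i &: 2^{i}\cdot c\,\tfrac{n}{2^{i}} = cn\\
\text{total over } \log_2 n + 1 \text{ levels} &: cn\,(\log_2 n + 1) = O(n\log n).
\end{aligned}
```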





Solving Recurrence Relations-- Substitution Method
• To solve recurrence relation by using substitution method:

1. First, we guess the form of the solution of the recurrence relation. For

this we can use our experience with recurrences.
this we can use our experience with recurrence.

2. After guessing the solution we must prove that the guessed


solution is correct. For this we can use the mathematical
proof techniques.

– This method is powerful, but it can be applied only in cases


when it is easy to guess the form of the solution/answer.
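A minimal sketch of the inductive step, assuming the guess T(n) ≤ c·n·lg n for the recurrence T(n) = 2T(n/2) + n (an assumed illustration):

```latex
\begin{aligned}
T(n) &\le 2\,c\,\frac{n}{2}\lg\frac{n}{2} + n
      = cn\lg n - cn\lg 2 + n\\
     &= cn\lg n - cn + n
      \le cn\lg n \quad\text{for } c \ge 1,
\end{aligned}
```

so the guess T(n) = O(n lg n) holds by induction.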



Solving Recurrence Relations-- Substitution Method

• Making a good Guess of solution to recurrences:


Unfortunately, there is no general way to guess the correct solutions to
recurrences. Guessing a solution takes experience and creativity. Fortunately,
though, there are some heuristics that can help you become a good guesser.

1. If a recurrence is similar to one we have seen before, then


guessing a similar solution is reasonable.
• For example: T(n)= 2T(n/2 + 17)+ n which looks difficult because of added 17
in the argument to T, but this additional term cannot substantially affect the
solution to the recurrence. Because, when n is large, the difference between
T(n/2) and T(n/2 + 17) is not large and both cut n nearly evenly in half. Hence
we make the guess that T(n) = O(n log n).



Solving Recurrence Relations-- Substitution Method

2. Another way to make a good guess is to prove loose upper and lower
bounds on the recurrence and then reduce the range of uncertainty.

– For example, we might start with a lower bound of T(n) = Ω(n) for
the recurrence,
– since we have the term n in the recurrence, and we can prove an initial upper bound of
T(n) = O(n²).

– Then, we can gradually lower the upper bound and raise the lower bound until we
converge on the correct, asymptotically tight solution of T(n) = Θ(n log n).



Solving Recurrence Relations-- Substitution Method

• There are times when we can correctly guess at an asymptotic bound on the
solution of recurrence, but somehow the math does not seem to work out in
the induction.

• When we hit such a situation revising the guess by subtracting a lower


order term often permits the math to go through.



Solving Recurrence Relations-- Substitution Method
• Changing variables: Sometimes a little algebraic manipulation can make
an unknown recurrence similar to one we have seen before.


Solving Recurrence Relations-- Master Method
• The master method provides a “cookbook” method for solving recurrences
of the form:
T (n) = aT (n/b) + f (n) ,
– where a ≥ 1 and b > 1 are constants and

– f (n) is an asymptotically positive function.

• The recurrence above describes the running time of an algorithm that


divides a problem of size n into a sub-problems, each of size n/b. The a
sub-problems are solved recursively, each in time T (n/b).

• The cost of dividing the problem and combining the results of the sub-
problems is described by the function f (n).

• The master method depends on the following theorem called the master
theorem.
Solving Recurrence Relations-- Master Method
• The master theorem: Let a ≥ 1 and b > 1 be constants, let f(n) be an
asymptotically positive function, and let T(n) = aT(n/b) + f(n). Then:
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for
some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).


Solving Recurrence Relations-- Master Method
• Examples 1–3: (worked applications of the master theorem, shown as figures in the original slides)
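The following are standard applications of the master theorem, one per case (assumed illustrations, since the original worked examples were figures):

```latex
\begin{aligned}
&T(n) = 8T(n/2) + n^{2}: && a=8,\ b=2,\ n^{\log_2 8} = n^{3};\ f(n)=n^{2}=O(n^{3-\varepsilon})
  \ \Rightarrow\ T(n)=\Theta(n^{3}) \quad\text{(case 1)}\\
&T(n) = 2T(n/2) + n: && a=2,\ b=2,\ n^{\log_2 2} = n;\ f(n)=\Theta(n)
  \ \Rightarrow\ T(n)=\Theta(n\lg n) \quad\text{(case 2)}\\
&T(n) = 2T(n/2) + n^{2}: && a=2,\ b=2,\ n^{\log_2 2} = n;\ f(n)=n^{2}=\Omega(n^{1+\varepsilon})
  \ \Rightarrow\ T(n)=\Theta(n^{2}) \quad\text{(case 3)}
\end{aligned}
```

In the case-3 example the regularity condition holds, since a·f(n/b) = 2(n/2)² = n²/2 ≤ c·n² with c = 1/2.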


Limitations of asymptotic analysis
• It considers only large input sizes, so it is not appropriate for small

inputs.

• It considers only the growth rate and eliminates constants and lower-order terms,

which may contribute substantially to actual performance.

• It assumes infinite memory, but memory is still a constraint in practice.

11/29/2021 By: Tekendra Nath Yogi 48


Amortized analysis
• Amortized analysis is any strategy for analyzing a sequence of operations
to show that the average cost per operation is small, even though a single
operation within the sequence might be expensive. It refers to finding the
average running time per operation over a worst case sequence of
operations.

• Amortized analysis differs from average-case analysis in that

probability is not involved; amortized analysis guarantees the average time per
operation even in the worst case.

• Three main techniques:


– Aggregate analysis

– Accounting method, and

– Potential method.



Contd..
• Aggregate analysis determines the upper bound T (n) on the total cost of a
sequence of n operations, then calculates the average cost to be T(n)/n.

• Accounting method determines the individual cost of each operation,


combining its immediate execution time and its influence on the running
time of future operations.

• Potential method is like the accounting method, but overcharges


operations early to compensate for undercharges later.
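As a small illustration of aggregate analysis (a hypothetical example, not from the slides): incrementing a binary counter n times costs fewer than 2n bit flips in total, so the amortized cost per increment is O(1) even though a single increment can flip many bits.

```python
def count_bit_flips(n, bits=16):
    """Increment a binary counter n times and count every bit flip."""
    counter = [0] * bits  # counter[0] is the least significant bit
    flips = 0
    for _ in range(n):
        i = 0
        while i < bits and counter[i] == 1:  # carry: each 1 -> 0 is a flip
            counter[i] = 0
            flips += 1
            i += 1
        if i < bits:                         # final 0 -> 1 flip
            counter[i] = 1
            flips += 1
    return flips
```

The total number of flips over n increments stays below 2n, so the average (amortized) cost per increment is less than 2 flips.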



Homework
• Write iterative and recursive algorithms to find the factorial of an input number n
and analyze both.
• Write iterative and recursive algorithms to find the nth Fibonacci number and
analyze both.
• What is asymptotic analysis of an algorithm? Why is it important? Explain.
• What do you mean by asymptotic notation? Explain the different notations used for
asymptotic analysis of algorithms.
• What is amortized analysis? How does it differ from asymptotic analysis?
• What is a recurrence? Why is it important in the context of design and analysis of
algorithms?
• What is the master theorem? How is it used to solve recurrence relations? Explain
with a suitable example.



Thank You !



Unit:2
Iterative Algorithms

Instructor: Tekendra Nath Yogi


[email protected]
College Of Applied Business And Technology
Contents
• 2.1. Basic Algorithms: Algorithm for GCD, Fibonacci Number and analysis

of their time and space complexity .

• 2.2. Searching Algorithms: Sequential Search and its analysis .

• 2.3. Sorting Algorithms: Bubble, Selection, and Insertion Sort and their

Analysis

12/5/2021 By: Tekendra Nath Yogi 2


Introduction
• In problem solving, if repetition is required we can use either an
Iterative algorithm or a Recursive algorithm.

• An iterative algorithm uses looping statements such as a for loop,
while loop, or do-while loop to repeat the same steps, while a
recursive algorithm calls itself again and again until the base
condition (stopping condition) is satisfied. However, both
successfully accomplish the same task.

• An iterative algorithm is typically faster than the equivalent
recursive algorithm because recursion has overheads such as repeated
function calls and stack handling.

• Many times recursive algorithms are not efficient, as they take more
space and time.

• Iterative:
while (true)
{
// Iterating
}

• Recursive:
recursion()
{
if (true)
recursion();
else return;
}

Algorithm for GCD
• The GCD of two numbers is the largest number that divides both numbers
without leaving a remainder. E.g. GCD (15,20) = 5.

• The most efficient iterative algorithm to find GCD of two natural numbers
is Euclid’s algorithm.

• Euclid's algorithm works by continually computing remainders until 0 is


reached. The last nonzero remainder is the final GCD of two input
numbers.



Algorithm for GCD
• Algorithm:
– Input: Two numbers A and B where A>B
– Output: GCD of A and B
– Method: GCD( A , B)
{
if(A==0) then return B as GCD of A and B
else if(B ==0) then return A as GCD of A and B
else
{
While (B!=0)
{
R=A%B
A=B
B=R
}
Return A as GCD of A and B
}
}
Algorithm for GCD
• Example: Find the GCD of 270 and 192

– Solution:

Iteration 0: A= 270, B=192

Here, A != 0 and

B != 0

R= 270%192 =78

Iteration 1: A= B= 192

B= R= 78

R = A % B =192% 78 =36



Algorithm for GCD
Iteration 2: A= B= 78

B= R= 36

R = A % B =78% 36 =6

Iteration 3: A= B= 36

B= R= 6

R = A % B =36% 6 =0

Iteration 4: A= B= 6

B= R= 0

Since B = =0 so GCD (270, 192) =A =6

GCD of number 270 and 192 is 6
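The loop above can be sketched directly in Python (a transcription of Euclid's algorithm as given on the slides):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a % b)."""
    if a == 0:
        return b
    while b != 0:
        a, b = b, a % b  # r = a % b; a = b; b = r
    return a
```

Running gcd(270, 192) follows exactly the iteration trace in the example and yields 6.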



Algorithm for GCD
• Analysis:
– The worst case of Euclid’s Algorithm is when the remainders are the
biggest possible at each step, i.e. for two consecutive terms of the
Fibonacci sequence.

– When m and n are the number of digits of input number A and B,


assuming m>= n, the algorithm uses O(n) divisions.

– Note that complexities are always given in terms of the sizes of inputs,
in this case the number of digits.



Algorithm for Fibonacci Number
• The Fibonacci numbers are a sequence of integers in
which the first two elements are 0 & 1, and each following
elements are the sum of the two preceding elements.

i.e., The nth Fibonacci number is:

Fn = Fn-1 + Fn-2

• Example: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 ……….



Algorithm for Fibonacci Number
• Algorithm:
– Input : term of Fibonacci number ( n)
– Output: nth Fibonacci number
– Method:
Fibonacci(n)
{
- set first = 0, second = 1 and i = 3
- while(i <= n)
{
- temp = first + second
- first = second
- second = temp
- i++
}
return temp // which is the nth Fibonacci number
}

Analysis: the while loop executes at most n times, so the time complexity of
this algorithm is T(n) = O(n).
Algorithm for Fibonacci Number
• Example: Find the 9th Fibonacci number
– Iteration 1: first = 0 and second =1 and i = 3
temp = first + second = 0 + 1 = 1

– Iteration 2: first = 1 and second =1 and i = 4


temp = first + second = 1 + 1 = 2

– Iteration 3: first = 1 and second =2 and i = 5


temp = first + second = 1 + 2 = 3

– Iteration 4: first =2 and second =3 and i = 6


temp = first + second = 2 + 3 = 5
Algorithm for Fibonacci Number
– Iteration 5: first = 3 and second =5 and i = 7

temp = first + second = 3 + 5 = 8

– Iteration 6: first = 5 and second =8 and i = 8

temp = first + second = 5 + 8 = 13

– Iteration 7: first = 8 and second =13 and i = 9

temp = first + second = 8 + 13 = 21

Thus 21 is the 9th number of Fibonacci series.
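A Python sketch of the same iteration, using the slides' 1-based convention (F₁ = 0, F₂ = 1):

```python
def fibonacci(n):
    """Return the nth Fibonacci number with F(1) = 0 and F(2) = 1."""
    if n == 1:
        return 0
    if n == 2:
        return 1
    first, second = 0, 1
    for _ in range(3, n + 1):  # the loop runs at most n times: O(n)
        first, second = second, first + second
    return second
```

With this convention fibonacci(9) reproduces the trace above and returns 21.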



Sequential Search algorithm
• Searching is the process of finding an element present in a given set of
elements.

• Why searching?
– We know that today's computers store a lot of information.

– To retrieve this information efficiently we need very efficient searching


algorithms.

• Sequential search algorithm searches for a given value called key in a given
array left to right and return the index of the element, if found. Otherwise
return “ Not Found ”.
Sequential Search algorithm
• Algorithm:
– Input: array A [0.. n-1], array size n and search key k

– Output: The index of the first element in A that matches k or -1 if there


are no matching elements.

– Method: SequentialSearch(A [0.. n-1], n, key)


{
for(i=0;i<n; i++)
{
if(A[i] == key)
return i;
}
return -1;//-1 indicates unsuccessful search
}



Sequential Search algorithm
• Example: Find the value 2 in 6 element array A= { 5, 9, 10, 2, 90, 4 }.



Sequential Search algorithm
• Analysis of Sequential search algorithm :
– For a list with n items, the best case is when the value is equal to the first
element of the list, in which case only one comparison is needed.

– The worst case is when the value is not in the list (or occurs only once at the
end of the list), in which case n comparisons are needed.

– So, Time complexity T(n) = O(n)
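The algorithm above can be sketched in Python:

```python
def sequential_search(A, key):
    """Scan left to right; return the index of the first match, or -1."""
    for i in range(len(A)):
        if A[i] == key:
            return i
    return -1  # unsuccessful search
```

On the example array, sequential_search([5, 9, 10, 2, 90, 4], 2) returns index 3.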



Sorting Algorithms
• Sorting is the process of arranging data in an increasing or decreasing
order.
– i.e., if A is an array of N elements, then sorting arranges the elements of A in such
a way that A[0] ≤ A[1] ≤ A[2] ≤ ...... ≤ A[N-1] or A[0] ≥ A[1] ≥ A[2]
≥ ...... ≥ A[N-1]
– For example, if we have an array A[] = {21, 34, 11, 9, 1, 0, 22}, Then
the sorted array (ascending order) can be given as:
A[] = {0, 1, 9, 11, 21, 22, 34};
• Sorting is important because it is often the first step in more complex
algorithms to improve the performance.

– E.g., searching a sorted array is much easier and efficient than


searching an unsorted array.

Comparison Sorting Algorithms

• Most common comparison sorting algorithms are:

– Bubble sort,

– Selection sort, and

– Insertion sort



Bubble sort--Idea
• Idea: In bubble sorting,

– consecutive adjacent pairs of elements in the array are compared with

each other.

– If the element at the lower index is greater than the element at the

higher index, the two elements are interchanged.

– This process will continue till the list of unsorted elements exhausts.

• For example,



Bubble sort-- Algorithm
• Algorithm:
– Input: Array A and size of array N
– Output: A sorted form of input array A
– Method: BUBBLE_SORT(A, N)
Step 1: Repeat Step 2 For I=0 to N-1
Step 2: Repeat For J =1 to N – 1-I
Step 3: IF A[J-1] > A[J ]
SWAP A[J-1] and A[J]
[END OF INNER LOOP]
[END OF OUTER LOOP]
Step 4: EXIT



Bubble sort --Example
• Sort the array A[ ] = {30, 52, 29, 87, 63, 27, 19, 54} by using Bubble sort
algorithm.
– Pass 1:
(a) Compare 30 and 52. Since 30 < 52, no swapping is done.
(b) Compare 52 and 29. Since 52 > 29, swapping is done. 30, 29, 52,
87, 63, 27, 19, 54
(c) Compare 52 and 87. Since 52 < 87, no swapping is done.
(d) Compare 87 and 63. Since 87 > 63, swapping is done. 30, 29, 52,
63, 87, 27, 19, 54
(e) Compare 87 and 27. Since 87 > 27, swapping is done. 30, 29, 52,
63, 27, 87, 19, 54
(f) Compare 87 and 19. Since 87 > 19, swapping is done. 30, 29, 52,
63, 27, 19, 87, 54
(g) Compare 87 and 54. Since 87 > 54, swapping is done. 30, 29, 52,
63, 27, 19, 54, 87.
– after the end of the first pass, the largest element is placed at the highest
index of the array. All the other elements are still unsorted.
Bubble sort --Example
• Pass 2:
(a) Compare 30 and 29. Since 30 > 29, swapping is done. 29, 30, 52, 63, 27,
19, 54, 87
(b) Compare 30 and 52. Since 30 < 52, no swapping is done.
(c) Compare 52 and 63. Since 52 < 63, no swapping is done.
(d) Compare 63 and 27. Since 63 > 27, swapping is done. 29, 30, 52, 27, 63,
19, 54, 87
(e) Compare 63 and 19. Since 63 > 19, swapping is done. 29, 30, 52, 27, 19,
63, 54, 87
(f) Compare 63 and 54. Since 63 > 54, swapping is done. 29, 30, 52, 27, 19,
54, 63, 87.
After the end of the second pass, the second largest element is placed at the
second highest index of the array. All the other elements are still unsorted.
Bubble sort --Example
• Pass 3:

(a) Compare 29 and 30. Since 29 < 30, no swapping is done.

(b) Compare 30 and 52. Since 30 < 52, no swapping is done.

(c) Compare 52 and 27. Since 52 > 27, swapping is done. 29, 30, 27, 52,
19, 54, 63, 87

(d) Compare 52 and 19. Since 52 > 19, swapping is done. 29, 30, 27, 19,
52, 54, 63, 87

(e) Compare 52 and 54. Since 52 < 54, no swapping is done.

After the end of the third pass, the third largest element is placed at the
third highest index of the array. All the other elements are still unsorted.



Bubble sort --Example
• Pass 4:

(a) Compare 29 and 30. Since 29 < 30, no swapping is done.

(b) Compare 30 and 27. Since 30 > 27, swapping is done. 29, 27, 30, 19,
52, 54, 63, 87

(c) Compare 30 and 19. Since 30 > 19, swapping is done. 29, 27, 19, 30,
52, 54, 63, 87

(d) Compare 30 and 52. Since 30 < 52, no swapping is done.

• After the end of the fourth pass, the fourth largest element is placed at the
fourth highest index of the array. All the other elements are still unsorted.



Bubble sort --Example
• Pass 5:

(a) Compare 29 and 27. Since 29 > 27, swapping is done. 27, 29, 19, 30,
52, 54, 63, 87

(b) Compare 29 and 19. Since 29 > 19, swapping is done. 27, 19, 29, 30,
52, 54, 63, 87

(c) Compare 29 and 30. Since 29 < 30, no swapping is done.

• After the end of the fifth pass, the fifth largest element is placed at the fifth
highest index of the array. All the other elements are still unsorted.



Bubble sort --Example
• Pass 6:

(a) Compare 27 and 19. Since 27 > 19, swapping is done. 19, 27, 29, 30,
52, 54, 63, 87

(b) Compare 27 and 29. Since 27 < 29, no swapping is done.

• After the end of the sixth pass, the sixth largest element is placed at the
sixth largest index of the array. All the other elements are still unsorted.

• Pass 7:

(a) Compare 19 and 27. Since 19 < 27, no swapping is done.

19, 27, 29, 30, 52, 54, 63, 87

After the end of the seventh pass, the array is completely sorted.
Bubble sort --Analysis
• Analysis:
– The complexity of any sorting algorithm depends upon the number of
comparisons. In bubble sort, there are N–1 passes in total. In the first
pass, N–1 comparisons are made to place the highest element in its
correct position. Then, in Pass 2, there are N–2 comparisons and the
second highest element is placed in its position. Therefore, to compute
the complexity of bubble sort, we need to calculate the total number of
comparisons. It can be given as:
T(n) = (n – 1) + (n – 2) + (n – 3) + ..... + 3 + 2 + 1
T(n) = n(n – 1)/2
T(n) = n²/2 – n/2 = O(n²)
– Therefore, the complexity of the bubble sort algorithm is O(n²).
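A runnable sketch of the BUBBLE_SORT steps above (pass i bubbles the i-th largest element into place):

```python
def bubble_sort(A):
    """In-place bubble sort: compare adjacent pairs and swap out-of-order ones."""
    n = len(A)
    for i in range(n - 1):            # passes 0 .. n-2
        for j in range(1, n - i):     # compare A[j-1] and A[j]
            if A[j - 1] > A[j]:
                A[j - 1], A[j] = A[j], A[j - 1]
    return A
```

Applied to the example array {30, 52, 29, 87, 63, 27, 19, 54}, it produces the sorted result shown after pass 7.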
Insertion Sort--Idea
• Idea: Insertion sort inserts each item into its proper place in the final sorted
list as follows.

– The array of values to be sorted is divided into two sets.


• One that stores sorted values and another that contains unsorted values.

• Initially, the element with index 0 is in the sorted set.

• Rest of the elements are in the unsorted set.

– During each iteration of the algorithm, the first element in the unsorted
set is picked up and inserted into the correct position in the sorted set.

– Repeat until there are elements in the unsorted set.

• For example,



Insertion Sort--Algorithm
• Algorithm:
– Input: Array A and its size N
– Output: Sorted of input array A
– Method: INSERTION-SORT (A, N)
Step 1: Repeat Steps 2 to 5 for K = 1 to N – 1
Step 2: SET TEMP = A[K]
Step 3: SET J = K - 1
Step 4: Repeat while (J >= 0 && TEMP < A[J])
SET A[J + 1] = A[J]
SET J = J - 1
[END OF INNER LOOP]
Step 5: SET A[J + 1] = TEMP
[END OF LOOP]
Step 6: EXIT



Insertion Sort--Example
• Sort the input array A [ ] = { 39, 9, 45, 63, 18, 81, 108, 54, 72,36 } of size
N= 10.



Insertion Sort--Analysis
• Analysis:
– In insertion sort, the first element of the unsorted set has to be
compared with almost every element in the sorted set.

– Furthermore, every iteration of the inner loop will have to shift the
elements of the sorted set of the array before inserting the next element.

– Therefore, in the worst case, insertion sort has a quadratic running time
(i.e., O(n²)).
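The INSERTION-SORT steps can be sketched in Python:

```python
def insertion_sort(A):
    """Insert each A[k] into its place among the already-sorted A[0..k-1]."""
    for k in range(1, len(A)):
        temp = A[k]
        j = k - 1
        while j >= 0 and temp < A[j]:  # shift larger elements one slot right
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = temp                # drop temp into the gap
    return A
```

Applied to the example array {39, 9, 45, 63, 18, 81, 108, 54, 72, 36}, it yields the sorted list.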



Selection sort--Idea
• Idea: Selection sort

– First find the smallest value in the array and place it in the first
position.

– Then, find the second smallest value in the array and place it in the
second position.

– Repeat this procedure until the entire array is sorted.

• For example,

• Note: Red is current min, Yellow is sorted list and Blue is current item.
Selection sort-- Algorithm
• Algorithm:
– Input: Array A and its size N

– Output: Sorted of input array A

– Method:



Selection sort--Example
• Example: sort the input array ARR = { 39, 9, 81, 45, 90, 27, 72, 18} of
size N= 8.



Selection sort-- Analysis
• Analysis:
– In Pass 1, selecting the element with the smallest value calls for
scanning all n elements; thus, n–1 comparisons are required in the first
pass. Then, the smallest value is swapped with the element in the first
position.

– In Pass 2, selecting the second smallest value requires scanning the


remaining n – 1 elements and (n-2) comparisons and so on.

– Therefore,

• T(n) = (n – 1) + (n – 2) + ... + 2 + 1

• T(n) = n(n – 1)/2 = O(n²) comparisons
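The slides' algorithm box was a figure; the usual selection-sort loop it describes can be sketched as:

```python
def selection_sort(A):
    """Find the smallest remaining element and swap it into position i."""
    n = len(A)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):      # scan the unsorted suffix
            if A[j] < A[smallest]:
                smallest = j
        A[i], A[smallest] = A[smallest], A[i]
    return A
```

Applied to the example array ARR = {39, 9, 81, 45, 90, 27, 72, 18}, it yields the sorted result.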



Homework
• What is Iterative algorithm? How it works? Differentiate between Iterative
and recursive algorithms.

• What is searching? Explain sequential search algorithm with suitable


example.

• Trace the bubble sort algorithm for following list of data items: A = { 4, 55,
3, 2, 88, 1, 98, 43, 66,11,93,12, 4,76}.

• Trace the Insertion sort algorithm for following list of data items: A = {3, 4,
55, 11, 2, 34, 33, 23}.

• Trace the Selection sort algorithm for following list of data items: A = {1,
23, 21, 66, 22, 14, 98, 45, 78}.



Thank You !



Unit 3: Divide and Conquer
Algorithms (8)

Instructor: Tekendra Nath Yogi


[email protected]
College Of Applied Business And Technology
Contents
• 3.1. Searching Algorithms: Binary Search, Min-Max Finding and their
Analysis

• 3.2. Sorting Algorithms: Merge Sort and Analysis, Quick Sort and Analysis
(Best Case, Worst Case and Average Case), Heap Sort (Heapify, Build
Heap and Heap Sort Algorithms and their Analysis), Randomized Quick
sort and its Analysis.

• 3.3. Order Statistics: Selection in Expected Linear Time, Selection in Worst


Case Linear Time and their Analysis.

12/6/2021 By: Tekendra Nath Yogi 2


Divide-and-Conquer approach
• The general concept of divide and conquer
approach is:

• Divide the original problem into a number of sub-problems. These
sub-problems are similar to the original problem but smaller in size.

• Conquer the sub-problems: solve the sub-problems recursively. If a
sub-problem's size becomes small enough, solve it in a
straightforward manner.

• Combine the solutions of the sub-problems to obtain the solution of
the original problem.



Divide-and-Conquer approach
• Divide and Conquer approach is an efficient algorithm design
approach:
– Illustration: For example, calculate a^n
• Simple algorithm: a^n = a * a * a * … * a
– This algorithm requires (n - 1) multiplications to calculate the
value of a^n
– E.g., 2^4 = 2*2*2*2 (requires 3 multiplications)
– Therefore, we can say that the time complexity of this algorithm is
T(n) = O(n).

• We can improve running time by using algorithm designed using


divide and conquer approach!



Divide-and-Conquer approach
• Fast exponentiation algorithm:
– A divide and conquer algorithm to calculate a^n.

– To calculate a^n, this algorithm breaks n into two parts i and j such that
n = i + j.

– So we can write a^n = a^i * a^j

– In this algorithm we assume that i = ⌊n/2⌋

– Based on this assumption we can write:

• If n is even then a^n = a^i * a^i

• If n is odd then a^n = a^i * a^i * a



Divide-and-Conquer approach
Algorithm:
FastExp(a, n)
{
if (n==1)
Return(a)
else
{
x = FastExp(a, n/2) // x = a^i where i = ⌊n/2⌋
if (n is odd)
Return(x*x*a)
else
Return(x*x)
}
}



Divide-and-Conquer approach
• Time complexity of fast exponentiation algorithm:
– Above algorithm divides the problem into approximately half part.
– The cost of dividing and combining the problem and its solution is
constant.
– So, time complexity of fast exponentiation algorithm can be expressed as:
T(n) = T(n/2) + O(1)
– Now solving this recurrence relation by using master method.
• Here a = 1, b = 2 and f(n) = 1, so n^(log_b a) = n^0 = 1
• f(n) is also Θ(1).
• This satisfies case 2 of the master theorem, so the solution of this recurrence relation is:
T(n) = O(log₂ n)

– Therefore, from this illustration we can say that divide and conquer
approach is an efficient algorithm design approach.



Searching
• What is searching?
– Searching is the process of finding an element present in a given set of
elements.

• Why searching?
– We know that today's computers store a lot of information.

– To retrieve this information efficiently we need very efficient searching


algorithms.

• There are number of techniques available for searching. Two of


them are:
– Sequential search
– Binary Search
Searching -- Binary search
• For searching an element K in the list of n element arranged in
some order( here we assume that non-decreasing order):
– Divide the list into two half and get the index of the element at mid position.

– Compare the element K with the element at mid position. If it is same, the
searching element is found so, return the index of that element.

– Otherwise, if the element K is smaller than the mid element then repeat the
divide and search process on the left half of the list otherwise on the right half
of the list.

– This process continues until either the element match or the list size becomes
1(list with single element).



Searching -- Binary Search Algorithm
BinarySearch(A, low, high, key)
{
if(low > high) // empty range: key not present
return (-1);
if(high == low) // when array has a single element
{
if(key == A[low])
return (low);
else return (-1);
}
else {
mid = (low + high) / 2; // integer division
if(key == A[mid])
return (mid);
else if(key < A[mid])
return BinarySearch(A, low, mid-1, key);
else
return BinarySearch(A, mid+1, high, key);
}
}
Searching -- Binary Search Example
• Example1: find the element 2 in the following array by using
binary search algorithm.

• A[] = {2 , 5 , 7, 9 ,18, 45 ,53, 59, 67, 72, 88, 95, 101, 104}

• For key=2
Low High Mid Condition testing

0 13 6 Key<A[6]

0 5 2 Key<A[2]

0 1 0 A[0]==Key , terminating condition satisfied

• Searching is successful, so return the index, i.e., return 0


Searching -- Analysis of Binary search algorithm

• From the above algorithm we can say that the running time of
the algorithm is:

• T(n) = T(n/2) + O(1)

– Now solving this recurrence relation by using master


method.
• Here a = 1, b = 2 and f(n) = 1, so n^(log_b a) = n^0 = 1

• f(n) is also Θ(1).

• This satisfies case 2 of the master theorem, so the solution of this
recurrence relation is: T(n) = O(log₂ n).
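A Python sketch of the recursive binary search, returning -1 when the key is absent:

```python
def binary_search(A, low, high, key):
    """Search sorted A[low..high] for key; return its index or -1."""
    if low > high:                 # empty range: key not present
        return -1
    mid = (low + high) // 2        # integer division
    if A[mid] == key:
        return mid
    if key < A[mid]:               # search the left half
        return binary_search(A, low, mid - 1, key)
    return binary_search(A, mid + 1, high, key)  # search the right half
```

On the example array, searching for 2 returns index 0, matching the trace above.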



Max and Min Finding
• Find the minimum and maximum items in a set of n elements.

• Min-Max Finding Idea:
– If the number of elements is 1 or 2, then max and min are obtained trivially.
– Otherwise, split the input array into approximately equal parts and solve each
part recursively.

• Algorithm:
MinMax(l, r) {
if(l == r)
max = min = A[l];
else if(l == r-1) {
if(A[l] < A[r]) {
max = A[r];
min = A[l];
}
else {
max = A[l];
min = A[r];
}
}
else { //Divide the problem
mid = (l + r)/2; //integer division
//solve the subproblems
{min, max} = MinMax(l, mid);
{min1, max1} = MinMax(mid+1, r);
//Combine the solutions
if(max1 > max) max = max1;
if(min1 < min) min = min1;
}
}
Max and Min Finding – Algorithm Analysis
• Analysis:
– Here we analyze in terms of number of comparisons as cost.
Now we can give recurrence relation as below for MinMax
algorithm in terms of number of comparisons.
• T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + O(1) , if n > 2
• T(n) = 1 , if n =2
• T(n) = 0 , if n =1

– We can simplify above relation to


• T(n) = 2T(n/2) + O(1).

– Solving the recurrence by using master method complexity is


(case 1) :
• T(n)= O(n).
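The MinMax idea can be sketched in Python as a function returning the pair (min, max):

```python
def min_max(A, l, r):
    """Divide-and-conquer minimum and maximum of A[l..r]."""
    if l == r:                       # one element: trivial
        return A[l], A[l]
    if l == r - 1:                   # two elements: one comparison
        return (A[l], A[r]) if A[l] < A[r] else (A[r], A[l])
    mid = (l + r) // 2
    min1, max1 = min_max(A, l, mid)          # solve left half
    min2, max2 = min_max(A, mid + 1, r)      # solve right half
    # combine: two comparisons
    return (min1 if min1 < min2 else min2,
            max1 if max1 > max2 else max2)
```

For example, min_max([5, 9, 10, 2, 90, 4], 0, 5) returns (2, 90).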
Sorting
• Sorting is the process of arranging data in an increasing or decreasing
order.
• Sorting is important because it is often the first step in more complex
algorithms to improve the performance.
– E.g., searching a sorted array is much easier and efficient than searching an unsorted
array.

• Sorting can be performed any one of the following methods:


– Bubble sort
– Selection sort
– Insertion sort
– Merge sort
– Quick sort
– randomized quick sort
– Heap sort
Merge sort
• Merge sort is an efficient sorting techniques which is based on the divide
and conquer approach. So, to sort an array A[l . . r]:
• Divide
– Divide the n-element sequence to be sorted into two subsequences of n/2
elements each

• Conquer
– Sort the subsequences recursively using merge sort
– When the size of the sequences is 1 there is nothing more to do because, they
are sorted in themselves.

• Combine
– Merge the two sorted subsequences into a single sorted sequence. So, requires
extra storage for temporary hold the merged sequence.



Merge Sort -- Algorithm
MERGE-SORT(A, l, r)
{
if( l < r) // Check for base case
{
then m = (l + r)/2 //Divide

//Conquer i.e., sort A[l…m]


MERGE-SORT(A, l, m)

//Conquer i.e., sort A[m+1..r]


MERGE-SORT(A, m + 1, r)

//Combine i.e., merge two sorted arrays


MERGE(A, l, m, r)
}
}



Merge Sort -- Examples 1 and 2: (trace diagrams shown as figures in the original slides)


Merge Sort--Analysis
• Time complexity of merge sort:
– To compute the time complexity of the merge sort algorithm, let us consider the
input array size = n.

– If statement takes O(1) time

– Divide operation( mid point calculation ) takes O(1)

– 1st recursive call to MERGESORT( )takes T(n/2) time

– 2nd recursive call to MERGESORT( ) also takes T(n/2) time

– The complexity of merge operation is O(n). Since it compares at most n times for
merging n elements

– Therefore, T(n) = O(1) + O(1)+ T(n/2) + T(n/2) + O(n)

= 2T(n/2) + O(n)

By solving this recurrence relation we get T(n) = O(n log₂ n)
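A compact Python sketch of merge sort, with the MERGE step written out; it builds a temporary list, reflecting the extra storage noted above:

```python
def merge_sort(A):
    """Return a sorted copy of A using divide and conquer."""
    if len(A) <= 1:                  # base case: already sorted
        return A
    mid = len(A) // 2                # divide
    left = merge_sort(A[:mid])       # conquer left half
    right = merge_sort(A[mid:])      # conquer right half
    merged, i, j = [], 0, 0          # combine: merge two sorted lists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The merge loop performs at most n comparisons per level, matching the O(n) combine cost in the analysis.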



Quick Sort Algorithm
• The quick sort algorithm was developed by C.A.R. Hoare.

• Based on the divide and conquer approach.


– Divide

– Conquer, and

– Combine

• The quick sort algorithm works as follows:

1. Select an element pivot from the array elements.

2. Partitioning around the pivot: Rearrange the elements in the array so,
that pivot is placed in its final position.

3. Recursively sort the two sub-arrays thus obtained.



Quick Sort Algorithm
• Partitioning process:

1. Take first element of array as pivot element.

2. Set left marker L at beginning of array and right marker R at end of array
i.e., L = 0 and R = (n-1) for array of n element.

3. While (L < R) repeat:
• Increment L (move the left marker right) while the element at L is less than or
equal to the pivot element.

• Decrement R (move the right marker left) while the element at R is greater
than the pivot element.

• If (L < R) then swap(A[L], A[R])

4. When (L >= R), swap the pivot element with A[R] and return R

Quick Sort Algorithm
Partitioning Algorithm :
• Quick sort Algorithm:
Partition(A, i, j)
QuickSort(A, i, j) {
L= i
{
R= j
If(i< j) Pivot = A[i]
{ While(L<R)
{
p = partition(A, i, j); While(A[L]<= pivot && L<= j)
QuickSort(A, i, p-1); {
L++
QuickSort(A, P+1, j);
}
While(A[R]> pivot)
} {
R--
} }

if(L<R)
Swap(A[L], A[R])
}
swap(A[R], pivot)
Return R
12/6/2021 } 25
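The QuickSort and Partition routines can be combined into a short runnable sketch. Python is used here for brevity, following the slides' first-element-pivot scheme; the names are illustrative:

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[lo]; return the pivot's final index."""
    pivot = a[lo]
    left, right = lo, hi
    while left < right:
        # move the left marker right while elements are <= pivot
        while left <= hi and a[left] <= pivot:
            left += 1
        # move the right marker left while elements are > pivot
        while a[right] > pivot:
            right -= 1
        if left < right:
            a[left], a[right] = a[right], a[left]
    a[lo], a[right] = a[right], a[lo]   # place pivot in its final position
    return right

def quick_sort(a, lo, hi):
    if lo < hi:
        p = partition(a, lo, hi)
        quick_sort(a, lo, p - 1)   # recursively sort the left sub-array
        quick_sort(a, p + 1, hi)   # recursively sort the right sub-array

data = [55, 31, 26, 20, 17, 44, 77, 93]
quick_sort(data, 0, len(data) - 1)
print(data)  # [17, 20, 26, 31, 44, 55, 77, 93]
```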
Quick Sort -- Example
After the first partition, the left sub-array containing 31, 26, 20, 17, and 44 and the right sub-array containing 77, 55, and 93 are sorted in the same manner.
Quick Sort Algorithm-- Analysis
• The running time of quick sort depends on whether the partitioning
is balanced or unbalanced, and this in turn depends on which
elements are used for partitioning.
• If the partitioning is balanced, the algorithm runs fast. If the
partitioning is unbalanced it can run slowly.
• complexity of partitioning is O(n) because outer while loop executes
(c*n) times.
• Thus the quick sort algorithm's time complexity can be written as the recurrence relation:
T(n) = T(k-1) + T(n-k) + O(n).
• Where k is the result of Partition(A,1,n)
Quick Sort Algorithm-- Analysis
• Best Case:
– Occurs when the division is as balanced as possible every time.
– So, we have
T(n) = 2T(n/2) + O(n).
– Solving the above recurrence we get T(n) = O(n log₂ n)
Quick Sort Algorithm-- Analysis
• Worst Case:
– The worst case occurs if the partition gives the pivot as the first or last element
every time, i.e., k = 1 or k = n.
– This happens when the elements are already completely sorted, so we have
T(n) = T(n-1) + O(n).
– Expanding this recurrence: 1 + 2 + 3 + … + n = n(n+1)/2, therefore T(n) = O(n²)

• Quick Sort -- Pros and Cons:
It is faster in practice than other algorithms such as bubble sort, selection sort, and
insertion sort, and quick sort can be used to sort arrays of small, medium, or large
size. On the flip side, quick sort is complex and massively recursive.
Quick sort
• Example1: Trace out the quick sort algorithm for the following array
• A[]={16,7,15,14,18,25,55,32}

• Example2: Trace out the quick sort algorithm for the following array
• A[]={40,66,93,8,12,44,6,88,60}

• Example3: Trace out the quick sort algorithm for the following array
• A[]={16,7,6,13,14,1,8,25,55,32,45,37}

• Example4: Trace out the quick sort algorithm for the following array
• A[]={10,20,30,40,50,60,70,80,90, 100}

• Example5:Trace the quick sort algorithm for the following array


• A[]={5,3,2,6,4,1,3,7}

• Example6:Trace the quick sort algorithm for the following array


• A[]={2,8,7,1,3,5,6,4}



Randomized Algorithm
• An algorithm that uses random numbers to decide what to do
next anywhere in its logic is called Randomized Algorithm.

• For example, in Randomized Quick Sort, we use random


number to pick the next pivot.

• The beauty of the randomized algorithm is that no particular


input can produce worst-case behavior of an algorithm.



Randomized quick sort
• Quick sort uses normal partitioning. But with normal partitioning, if the array is
already sorted, the pivot (taken as the first item) remains at the same position, so
the sub-array on one side is empty and the one on the other side has one element
fewer than the array before the partition.

• This is the worst case for quick sort. So, to improve the worst-case performance,
randomized partitioning can be applied.

• For random partitioning, instead of taking first element as pivot, the


pivot is selected randomly between the lower and upper index of the
array of each step.



Randomized quick sort

• Idea and its advantages:


– Partition around a random element. Running time is independent
of the input order.

– No assumptions need to be made about the input distribution.


No specific input elicits the worst-case behavior.

– The worst case is determined only by the output of a random-


number generator.

– Randomization cannot eliminate the worst-case but it can make


it less likely!



Randomized quick sort
• Algorithm:
RandQuickSort (A, i, j)
{
if( i<j )
{
P = RandPartition(A,i,j);
RandQuickSort(A,i,P-1);
RandQuickSort(A,P+1,j);
}
}



Randomized quick sort
• Contd..
RandPartition(A,i,j)

k = random(i, j); //generates random number between i and j including both.

swap(A[i],A[k]);

return Partition(A,i,j); // normal partitioning



Randomized quick sort
• Contd…
Partition(A, i, j)// normal partitioning algorithm
{
L=i; R =j ; pivot = A[i];
while(L<R)
{
while(A[L] <= pivot and L<= j)
{
L++;
}
while(A[R] >=pivot)
{
R--;
}
if(L<R)
swap(A[L],A[R]);
}
Swap(A[R], pivot);
return R; //return position of pivot
}
Randomized quick sort
• Time complexity:
– As we know that, the randomization in randomized quick sort
algorithm cannot eliminate the worst-case but it can make it less likely.

– Therefore the time complexity of randomized quick sort algorithm is


also:

– In the worst case: O(n²), and

– In the best case: O(n log₂ n)
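The RandQuickSort/RandPartition pseudocode above can be sketched in runnable Python. This is an illustrative sketch (names are not from the slides); `random.randint` plays the role of `random(i, j)` and, like it, includes both endpoints:

```python
import random

def rand_partition(a, lo, hi):
    """Pick a random pivot index, move it to the front, then partition normally."""
    k = random.randint(lo, hi)          # random index in [lo, hi], inclusive
    a[lo], a[k] = a[k], a[lo]
    # --- normal partitioning around a[lo] ---
    pivot = a[lo]
    left, right = lo, hi
    while left < right:
        while left <= hi and a[left] <= pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left < right:
            a[left], a[right] = a[right], a[left]
    a[lo], a[right] = a[right], a[lo]   # place pivot in its final position
    return right

def rand_quick_sort(a, lo, hi):
    if lo < hi:
        p = rand_partition(a, lo, hi)
        rand_quick_sort(a, lo, p - 1)
        rand_quick_sort(a, p + 1, hi)

data = [10, 20, 30, 40, 50, 60, 70]     # already sorted: worst case for plain quick sort
rand_quick_sort(data, 0, len(data) - 1)
print(data)  # [10, 20, 30, 40, 50, 60, 70]
```

Because the pivot index is random, no particular input (such as this sorted one) reliably triggers the worst-case behavior.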



Randomized quick sort
• Example1: Trace out the randomized quick sort algorithm for the following
array
• A[]={16,7,15,14,18,25,55,32}

• Example2: Trace out the randomized quick sort algorithm for the following
array
• A[]={40,66,93,8,12,44,6,88,60}

• Example3: Trace out the randomized quick sort algorithm for the following
array
• A[]={16,7,6,13,14,1,8,25,55,32,45,37}

• Example4: Trace out the randomized quick sort algorithm for the following
array
• A[]={10,20,30,40,50,60,70,80,90, 100}



Heap Sort
• A heap is a complete binary tree with the following two properties:
– Structural property: all levels are full, except possibly the last one, which is
filled from left to right. E.g.,

– Order (heap) property:


• Max-heap property:
– for any node x, Parent(x) ≥ x
– Such a heap is called max heap.

• Min heap property:


– For any node x parent(x) <= x.
– Such a heap is called min-heap.
Heap Sort
• Is the sequence 99, 45, 88, 32, 37, 56, 76, 15, 30, 44 a max
heap?
– For a given sequence to be a max heap, it should satisfy two properties:
• Structural property and

• Max heap property



Heap Sort
• Array Representation of Heaps:

• A heap can be stored as an array A as follows.


– Root of the tree is A[1]

– Left child of A[i] = A[2i]

– Right child of A[i] = A[2i + 1]

– Parent of A[i] = A[⌊i/2⌋]

– heapsize[A] ≤ length[A]

– The elements in the sub-array A[(⌊n/2⌋+1) .. n] are leaves

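This index arithmetic can be checked with a tiny Python sketch. The 1-indexed storage of the slides is simulated by leaving A[0] unused; the helper names are illustrative:

```python
# 1-indexed heap stored in a 0-indexed Python list: keep A[0] unused.
A = [None, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
n = len(A) - 1                    # heapsize = 10

def left(i):   return 2 * i
def right(i):  return 2 * i + 1
def parent(i): return i // 2      # integer division gives floor(i/2)

assert A[parent(10)] == 7         # parent of A[10]=1 is A[5]=7
assert A[left(2)] == 8 and A[right(2)] == 7
leaves = list(range(n // 2 + 1, n + 1))   # indices floor(n/2)+1 .. n are leaves
print(leaves)  # [6, 7, 8, 9, 10]
```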


Heap Sort
• Example1: Given heap is:

– Array representation of given heap is:



Heap Sort
• Example2: Given heap is:

– Array representation of given heap is:



Heap Sort
• Operations on Heaps:
– Maintain/Restore the heap property
• MAX-HEAPIFY(used in Heap sort): Maintain/Restore the max-
heap property

• MIN- HEAPIFY: Maintain/Restore the min heap property

– Create a max-heap from an unordered array


• BUILD-MAX-HEAP

– Sort an array in place


• HEAPSORT



Heap Sort
• MAX-HEAPIFY:
– The process of Maintaining(restoring) the MAX-Heap Property in a
given heap is called MAX-HEAPIFY.

– Suppose a node i is smaller than a child,

• and the left and right sub-trees of i are max-heaps.

• Here, a violation of the max-heap property occurs!

– To eliminate the violation:


• Exchange the node with larger child

• Move down the tree

• Continue until node is not smaller than children



Heap Sort
Example: MAX-HEAPIFY(A, 2, 10)

– A[2] violates the max-heap property, so exchange A[2] ↔ A[4].

– Now A[4] violates the max-heap property, so exchange A[4] ↔ A[9].

– Heap property restored.
Heap Sort
• Algorithm:
– Max-Heapify(A, i, n) // n is heapSize(A), and i is the current element
{
l = Left(i)
r = Right(i)
largest=i;
If( l ≤ n and A[l] > A[largest])
largest = l
If( r ≤ n and A[r] > A[largest])
largest = r
If( largest != i)
exchange (A[i] , A[largest])
Max-Heapify(A, largest, n)
}
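A runnable Python version of this pseudocode might look like the following sketch (1-indexed with A[0] unused, mirroring the slides' array convention):

```python
def max_heapify(A, i, n):
    """Restore the max-heap property at index i (1-indexed, A[0] unused)."""
    l, r = 2 * i, 2 * i + 1
    largest = i
    if l <= n and A[l] > A[largest]:
        largest = l
    if r <= n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]   # exchange with the larger child
        max_heapify(A, largest, n)            # continue moving down the tree

A = [None, 99, 15, 88, 32, 37, 56, 76, 17, 30, 33]  # A[2]=15 violates the property
max_heapify(A, 2, 10)
print(A[1:])  # [99, 37, 88, 32, 33, 56, 76, 17, 30, 15]
```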
Heap Sort
• Analysis of MAX-Heapify:

– Intuitively:
• It traces a path from the root to a leaf (longest path length = h).

• At each level, it makes exactly 2 comparisons.

• Total number of comparisons is 2h.

• So running time is O(h) or O(logn)


– Since the height of the heap is logn



Heap Sort
• Example of Heapify for heapsize[A] = 10:
– A[10] ={99, 15, 88, 32, 37, 56,76, 17, 30, 33}
– Heap of given input array is:

– In this max heap, node at index i= 2 does not hold max heap property
– So we need to perform heapify(A, 2, 10)



Heap Sort

Now node at i= 5 does not hold max heap property, so we need to perform recursive call
Heapify(A, 5, 10)



Heap Sort

Now perform Heapify(A, 10, 10); this yields no change in the max heap, since A[10] is a leaf.



Heap Sort
• Building a Heap: To build a heap
– Convert an array A[1 … n] into a max-heap (n = length[A])
– The elements in the sub-array A[(n/2+1) .. n] are leaves
– So, Apply MAX-HEAPIFY on elements between 1 and n/2

Algorithm: BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← ⌊n/2⌋ downto 1
3.     do MAX-HEAPIFY(A, i, n)

Example array: 4 1 3 2 16 9 10 14 8 7



Heap Sort
• Example1: Build a heap from the following array: 4 1 3 2 16 9 10 14 8 7
– Converting the given array into a heap
– Here heap size (n) = 10, so n/2 = 5
– Leaf node indices therefore run from 6 to 10
– Now apply max heapify() on the elements from i = 5 down to 1
• For i = 5: Heapify(A, 5, 10). A[5] = 16 is not smaller than its only child A[10] = 7, so no change.


Heap Sort
• For i = 4: Heapify(A, 4, 10). A[4] = 2 is exchanged with its larger child A[8] = 14; the array becomes 4 1 3 14 16 9 10 2 8 7.


Heap Sort
• For i = 3: Heapify(A, 3, 10). A[3] = 3 is exchanged with its larger child A[7] = 10; the array becomes 4 1 10 14 16 9 3 2 8 7.


Heap Sort
• For i = 2: Heapify(A, 2, 10). A[2] = 1 is exchanged with its larger child A[5] = 16, then with A[10] = 7; the array becomes 4 16 10 14 7 9 3 2 8 1.


Heap Sort
• For i = 1: Heapify(A, 1, 10). A[1] = 4 moves down: it is exchanged with 16, then with 14, then with 8; the array becomes 16 14 10 8 7 9 3 2 4 1.


Heap Sort
• This is our final desired max-heap of the given array: 16 14 10 8 7 9 3 2 4 1.


Heap Sort
• Example2 of BuildHeap: A[] = {10, 12, 53, 34, 23, 77, 59, 66, 5, 8}
– Converting the given array into a heap
– Here n = 10, so n/2 = 5; leaf node indices run from 6 to 10
– Now apply max heapify() on the elements from i = 5 down to 1



Heap Sort

Analysis:
BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← ⌊n/2⌋ downto 1          // O(n) iterations
3.     do MAX-HEAPIFY(A, i, n)     // O(log n) each

⇒ Running time: O(n log n)

• This is not an asymptotically tight upper bound.
• A tighter analysis shows that the running time of BUILD-MAX-HEAP is T(n) = O(n).


Heap Sort
• Goal:
– Sort an array using heap representations

• Idea:
– Build a max-heap from the array

– Swap the root (the largest element) with the last element in the array

– “Discard” this last node by decreasing the heap size

– Call MAX-HEAPIFY on the new root

– Repeat this process until only one node remains



Heap Sort
• Example: Trace the heap sort algorithm for the following array: A = {7, 4, 3, 1, 2}
(The trace figure shows the successive calls MAX-HEAPIFY(A, 1, 5), MAX-HEAPIFY(A, 1, 4), MAX-HEAPIFY(A, 1, 3), and MAX-HEAPIFY(A, 1, 2).)


Heap Sort
• Algorithm:
HEAPSORT(A)
{
1. BUILD-MAX-HEAP(A)                 // O(n)
2. for i ← length[A] downto 2        // n-1 iterations
3.     do exchange A[1] ↔ A[i]
4.     MAX-HEAPIFY(A, 1, i - 1)      // O(log n)
}

• Running time: O(n log n)

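The HEAPSORT pseudocode above can be sketched as runnable Python (1-indexed with A[0] unused, as in the slides' array convention; this is an illustrative sketch, not the slides' exact code):

```python
def max_heapify(A, i, n):
    """Restore the max-heap property at index i within heap size n."""
    l, r, largest = 2 * i, 2 * i + 1, i
    if l <= n and A[l] > A[largest]:
        largest = l
    if r <= n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, n)

def build_max_heap(A, n):
    for i in range(n // 2, 0, -1):    # leaves A[n//2+1 .. n] are skipped
        max_heapify(A, i, n)

def heap_sort(A):
    """In-place heap sort; A is 1-indexed (A[0] unused)."""
    n = len(A) - 1
    build_max_heap(A, n)              # O(n)
    for i in range(n, 1, -1):         # n-1 iterations
        A[1], A[i] = A[i], A[1]       # swap the root (max) with the last element
        max_heapify(A, 1, i - 1)      # "discard" A[i] and restore the heap: O(log n)

A = [None, 7, 4, 3, 1, 2]
heap_sort(A)
print(A[1:])  # [1, 2, 3, 4, 7]
```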


Heap Sort
• Example1: Trace the heap sort algorithm for given data
– A[] = {5,3,17,10,84,19,6,22,9}

• Example2: Trace the heap sort algorithm for given data


– A[] = {7,3,25,11,4,2,9,6,8,19}

• Example3: Trace the heap sort algorithm for given data


• A[] = {5, 8, 10, 12, 23, 34, 53, 59, 66, 77}

• Example4: is the given array represents a heap? If not construct the min
heap and sort using heap sort.
– A[] ={ 23,17, 14, 6,13,10,1, 5, 7,12}



Selection Problem and Order Statistics
• Selection Problem is defined as “select the ith order statistic
from a set of n distinct number”.

– Formally :
• Input: A set of n distinct elements and number i, with 1 <= i <= n.

• Output: The element from the set of elements, that is larger than
exactly (i-1) other elements of the given set.



Selection Problem and Order Statistics
• Order Statistics:

– The ith order statistic of a set of n elements is the ith smallest element.
• If i = 1, it is called the 1st order statistic and gives the minimum.
• If i = n, it is called the nth (last) order statistic and gives the maximum.
– Median order statistic: if the ith order statistic of a set of n distinct
elements gives the median, then it is called the median order statistic. The
ith order statistic gives the median only if i = (n+1)/2 for odd n, and
i = n/2 or i = n/2 + 1 for even n.



Selection Problem -- Naive algorithm
• Algorithms for solving selection problem:
– Naive algorithm:
• First sort the n distinct elements.
• Then pick the ith element.
– This algorithm takes O(n log n) time: sorting the elements costs O(n log n),
and picking the ith item from the sorted array takes constant time.

– We can improve on the time by using the following


algorithms:
• Selection in expected linear time( Randomized Section algorithm)
• Selection in worst case linear time
Selection Problem -- Selection in expected linear time

• Selection in expected linear time (Randomized selection


algorithm ):
– The expected linear time algorithm for general selection problem is
based on the divide and conquer approach that uses the randomized
partitioning (as in the randomized quick sort).



Selection Problem -- Selection in expected linear time
• Algorithm:
RandomizedSelect(A, p, r, i) // A is input array, p is 1st element position,
{// r is last element position, i the position of the element to be selected
if(p = =r )
return A[p];
q = RandomPartition(A, p, r);
k = q – p + 1;
if(i <= k)
return RandomizedSelect(A, p, q, i);
else
return RandomizedSelect(A, q+1, r, i - k);
}
Selection Problem -- Selection in expected linear time

RandomPartition (A, p, r)

n= random(p, r); //generates random number between p and r including both.

swap(A[p],A[n]);

return Partition(A, p, r); // normal partitioning



Selection Problem -- Selection in expected linear time
Partition(A, p, r)// normal partitioning algorithm
{
L=p; R =r ; pivot = A[p];
while(L<R)
{
while(A[L] <= pivot and L<= r)
{
L++;
}
while(A[R] >pivot)
{
R--;
}
if(L<R)
swap(A[L],A[R]);
}
Swap(A[R], pivot);
return R; //return position of pivot
}
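The three routines can be combined into one runnable Python sketch of the slides' randomized selection (names are illustrative; i is the 1-based rank, i.e., the ith order statistic / ith smallest element):

```python
import random

def rand_select(a, p, r, i):
    """Return the ith smallest element of a[p..r] (i is 1-based within the range)."""
    if p == r:
        return a[p]
    # randomized partition: pick a random pivot, move it to the front
    k = random.randint(p, r)
    a[p], a[k] = a[k], a[p]
    pivot, left, right = a[p], p, r
    while left < right:
        while left <= r and a[left] <= pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left < right:
            a[left], a[right] = a[right], a[left]
    a[p], a[right] = a[right], a[p]
    q = right                        # pivot's final position
    k = q - p + 1                    # number of elements in a[p..q]
    if i <= k:
        return rand_select(a, p, q, i)      # answer lies in the lower part
    return rand_select(a, q + 1, r, i - k)  # answer lies in the upper part

data = [70, 10, 20, 5, 40]
print(rand_select(data, 0, len(data) - 1, 3))  # 3rd order statistic -> 20
```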
Selection Problem -- Selection in expected linear time
• Example: Find the 3rd largest element in the following input array by using
randomized selection algorithm. A[]={70, 10, 20, 5, 40}



Selection Problem -- Selection in expected linear time
• Analysis:
– The randomized selection algorithm partitions the input array like quick sort does, but
unlike quick sort, which recurses on both sides of the partition, randomized selection
recurses on only one side, like binary search. This difference shows up in the analysis:
quick sort has an expected running time of O(n log n), but the expected running time of
randomized selection is O(n).
– An unlucky partition may place the pivot at the end of the list every time, so one side
of the partition is empty and the other side contains (n-1) elements at each step.
– So, like the worst case of quick sort, the time complexity of this algorithm is:
T(n) = O(n) + O(n-1) + O(n-2) + … + O(2)
     = O(n(n+1)/2 - 1)
     = O(n²)



Selection Problem -- Selection in expected linear time
• But if the partition falls at any point other than the end of the list, the algorithm's
expected running time is much less than this quadratic function.
• For example, in a very lucky case the list is partitioned 1/10 : 9/10 at each step, i.e.,
1/10 of the elements fall on one side and 9/10 on the other, and the selection proceeds
into the 9/10 section.
• So T(n) = T(9n/10) + O(n).
• Solving the recurrence relation by using the master method:
• a = 1, b = 10/9, and f(n) = O(n)
• n^(log_b a) = n⁰ = 1 < f(n)
• This satisfies case 3 of the master theorem, so T(n) = O(n). (Expected linear time means
the running time is linear with high probability, but not always.)
Selection Problem -- Selection in expected linear time
• Summary of randomized selection:

• Works fast: linear expected time.

• Excellent algorithm in practice.

• But, the worst case is very bad: O(n²)

Question: Is there an algorithm that runs in linear time in the worst case?

Answer: Yes, due to Blum, Floyd, Pratt, Rivest, and Tarjan [1973].

• IDEA: Generate a good pivot recursively.


Selection Problem -- Selection in worst-case linear time

• Selection in worst-case linear time:


– In randomized selection, a good partition is not guaranteed, so in the worst case it
runs in O(n²).

– If some algorithm guarantees a good partition recursively, then selection always runs
in linear time.

– So, to guarantee a linear running time in the worst case, the algorithm computes the
median of each small group, takes the median of those group medians, and uses that
median of medians as the partitioning pivot.

– The algorithm can be outlined as:



Worst-case linear-time selection Algorithm
SELECT(A, p, r, i)
1. Divide the n elements into groups of 5. Find the median of each 5-element group.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).
4. if i = k then return x
5. elseif i < k then recursively SELECT the ith smallest element in the lower part
6. else recursively SELECT the (i-k)th smallest element in the upper part
(Steps 5 and 6 are the same as in RAND-SELECT.)

Choosing the pivot:
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
Analysis:
• At least half the group medians are ≥ x, which is at least ⌊⌊n/5⌋/2⌋ = ⌊n/10⌋ group medians.
• Therefore, at least 3⌊n/10⌋ elements are ≥ x. (Assume all elements are distinct.)
• Similarly, at least 3⌊n/10⌋ elements are ≤ x.
• We need "at most" bounds for the worst-case runtime:
– At least 3⌊n/10⌋ elements are ≥ x, so at most n - 3⌊n/10⌋ elements are ≤ x.
– At least 3⌊n/10⌋ elements are ≤ x, so at most n - 3⌊n/10⌋ elements are ≥ x.
• The recursive call to SELECT in Steps 5 and 6 is therefore executed on at most
n - 3⌊n/10⌋ (roughly 7n/10) elements.
Developing the recurrence for SELECT(A, p, r, i):
1. Divide the n elements into groups of 5. Find the median of each 5-element group.   // O(n)
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.        // T(n/5)
3. Partition around the pivot x. Let k = rank(x).                                     // O(n)
4. if i = k then return x
5. elseif i < k then recursively SELECT the ith smallest element in the lower part    // T(7n/10)
6. else recursively SELECT the (i-k)th smallest element in the upper part
Solving the recurrence:
T(n) ≤ T(n/5) + T(7n/10) + Θ(n)

Solving the above recurrence relation by the substitution method: our expected time for
selection is linear, so guess T(n) = O(n), i.e., T(n) ≤ c·n for n > 1.

T(n) ≤ c(n/5) + c(7n/10) + n
     = cn/5 + 7cn/10 + n
     = 9cn/10 + n
     = cn - (cn/10 - n)
     ≤ cn        if c ≥ 10 and n ≥ 50

Therefore, the running time of the worst-case linear-time selection algorithm is T(n) = O(n).
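A compact Python sketch of SELECT follows. It uses new lists rather than in-place partitioning for clarity, and assumes distinct elements, as the analysis above does; the names are illustrative:

```python
def select(a, i):
    """Return the ith smallest element (1-based) using median of medians."""
    if len(a) <= 5:
        return sorted(a)[i - 1]                 # small base case: sort by rote
    # 1. divide into groups of 5 and find each group's median
    groups = [a[j:j + 5] for j in range(0, len(a), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    # 2. recursively SELECT the median x of the group medians as the pivot
    x = select(medians, (len(medians) + 1) // 2)
    # 3. partition around x; k = rank(x) (assumes distinct elements)
    lower = [v for v in a if v < x]
    upper = [v for v in a if v > x]
    k = len(lower) + 1
    if i == k:
        return x                                # 4. the pivot is the answer
    if i < k:
        return select(lower, i)                 # 5. recurse in the lower part
    return select(upper, i - k)                 # 6. recurse in the upper part

data = [12, 3, 45, 7, 21, 9, 30, 1, 18, 6, 27, 14]
print(select(data, 6))  # 6th smallest -> 12
```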
Matrix Multiplication
• Given two n-by-n matrices A and B, our aim is to find their product C, which is
also an n-by-n matrix.

• We can find this by using the relation: C[i][j] = Σ (k from 1 to n) A[i][k] · B[k][j]



Matrix Multiplication

• Algorithm:
MatrixMultiply(A, B)
{
    for(i = 0; i < n; i++)
    {
        for(j = 0; j < n; j++)
        {
            C[i][j] = 0;    // initialize the accumulator
            for(k = 0; k < n; k++)
            {
                C[i][j] = C[i][j] + A[i][k] * B[k][j];
            }
        }
    }
}



Matrix Multiplication
• Analysis:
– Using the above formula, we need O(n) time to compute each entry C(i, j).

– There are n² elements in C, hence the time required for matrix
multiplication is O(n³).

– We can improve this complexity by using the divide and conquer strategy.



Matrix Multiplication
• Divide and Conquer Algorithm:
– Divide the n x n square matrix into four matrices of size n/2 x n/2. The
basic calculation is done for matrix of size 2 x 2 .



Matrix Multiplication

• Analysis:

– Now, we can write the recurrence relation for this as:

– T(n) = b               ; if n ≤ 2

– T(n) = 8T(n/2) + cn²   ; if n > 2

– Solving this we get T(n) = O(n³)



Matrix Multiplication -- Strassens’s Matrix Multiplication
• Strassens’s Matrix Multiplication:
– Strassen showed that 2x2 matrix multiplication can be accomplished in

7 multiplication and 18 additions or subtractions.

– The basic calculation is done for matrix of size 2 x 2 .





Matrix Multiplication -- Strassens’s Matrix Multiplication

• Analysis:
– Now, we can write the recurrence relation for this as:
• T(n) = 1               ; if n ≤ 2

• T(n) = 7T(n/2) + cn²   ; if n > 2

• Solving this we get T(n) = O(n^log₂7) ≈ O(n^2.81)
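For the 2x2 base case, one common formulation of Strassen's seven products can be written and checked as follows (the slides' figure may label the products differently; this is a sketch of that scheme, not the slides' exact formulas):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications (Strassen's scheme)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # the seven products (7 multiplications instead of 8)
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # combine into the result matrix C
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]]
```

For larger matrices, the same seven-product scheme is applied recursively to n/2 x n/2 sub-matrices, giving the T(n) = 7T(n/2) + cn² recurrence above.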



Homework
• How do divide and conquer based algorithms work? What kinds of problems can be
solved using such algorithms? Explain with a suitable example.

• What is the min-max problem? State and explain the divide and conquer based
algorithm for finding the min and max of an input array with a suitable example.

• What is searching? State and explain the binary search algorithm with a suitable
example. Also analyze the binary search algorithm in detail.

• What is sorting? Explain the merge sort algorithm with a suitable example.
Analyze the merge sort algorithm.

• How does the quick sort algorithm work? Explain with a suitable example. Analyze
the best-case, average-case, and worst-case complexity of the quick sort algorithm.



Contd…
• What is a heap? How does the heap sort algorithm work? Explain with a suitable
example. Analyze the worst-case complexity of the heap sort algorithm.
• What is the ith order statistic? State and explain the naive algorithm to find the
ith largest element with a suitable example. Analyze the worst-case complexity of
this algorithm.
• What is the ith order statistic? State and explain the selection-in-expected-linear-time
algorithm to find the ith largest element with a suitable example. Analyze the
worst-case complexity of this algorithm.
• What is the ith order statistic? State and explain the worst-case linear-time
selection algorithm to find the ith largest element with a suitable example.
Analyze the worst-case complexity of this algorithm.
• State and explain Strassen's matrix multiplication algorithm with a suitable
example. Also analyze the worst-case complexity of this algorithm.
Thank You !
