Module 1: Introduction, Iterative Algorithms & Recurrence

The document outlines the course structure for CSC 402: Analysis of Algorithms, detailing the marks distribution, syllabus modules, and course objectives. It covers various algorithmic approaches including Divide and Conquer, Greedy, Dynamic Programming, Backtracking, and String Matching, along with their complexities and applications. Additionally, it provides information on the lab component, assessment criteria, and recommended textbooks.


Analysis of Algorithms
COURSE CODE: CSC 402
SECOND YEAR SEMESTER IV
COMPUTER ENGINEERING
Marks Distribution

Total: 150 Marks

CSC 402: Analysis of Algorithms

• End Semester Exam: 80 Marks


• Internal Assessment (I & II): 20 Marks

CSL 401: Analysis of Algorithms Lab

• Termwork: 25 Marks
• Practical: 25 Marks
Syllabus

Module 1: Introduction (8 Hours)
• Performance analysis, space and time complexity, growth of functions, asymptotic (Big-Oh, Omega, Theta) notation, mathematical background for algorithm analysis, analysis of selection sort and insertion sort.
• Recurrences: the substitution method, recursion tree method, Master Method.

Module 2: Divide and Conquer Approach (6 Hours)
• General method, merge sort, quick sort, finding minimum and maximum algorithms and their analysis, analysis of binary search.

Module 3: Greedy Method Approach (6 Hours)
• General method, single-source shortest path: Dijkstra's algorithm, fractional knapsack problem, job sequencing with deadlines, minimum cost spanning trees: Kruskal's and Prim's algorithms.
Syllabus

Module 4: Dynamic Programming Approach (9 Hours)
• General method, multistage graphs, single-source shortest path: Bellman-Ford algorithm, all-pairs shortest path: Floyd-Warshall algorithm, assembly-line scheduling problem, 0/1 knapsack problem, travelling salesperson problem, longest common subsequence.

Module 5: Backtracking and Branch and Bound (6 Hours)
• General method, backtracking: N-queen problem, sum of subsets, graph coloring.

Module 6: String Matching Algorithms (4 Hours)
• The naïve string-matching algorithm, the Rabin-Karp algorithm, the Knuth-Morris-Pratt algorithm.
Books

Textbook:
• T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, "Introduction to Algorithms", 2nd Edition, PHI Publication, 2005.
• Ellis Horowitz, Sartaj Sahni, S. Rajsekaran, "Fundamentals of Computer Algorithms", University Press.

Reference Books:
• Sanjoy Dasgupta, Christos Papadimitriou, Umesh Vazirani, "Algorithms", Tata McGraw-Hill Edition.
• S. K. Basu, "Design Methods and Analysis of Algorithms", PHI.
Analysis of Algorithms Lab

 Term work should consist of 10 experiments.


 Journal must include at least 2 assignments on content of theory and practical of
“Analysis of Algorithms”.
 Total 25 Marks (Experiments: 15-marks, Attendance Theory & Practical: 05-marks,
Assignments: 05-marks)
 Experiment (15 Marks)
 Execution (10 Marks)
 Viva (3 Marks)
 Timely Submission (2 Marks)
Course Objectives

 Analyze the running time and space complexity of algorithms.
 Analyze the complexity of the divide and conquer strategy.
 Analyze the complexity of the greedy strategy.
 Analyze the complexity of the dynamic programming strategy.
 Apply backtracking and branch and bound techniques to deal with some hard problems.
 Apply string matching techniques to deal with pattern-searching problems.
Module 1 : Introduction

• Performance analysis
• Space and time complexity
• Growth of functions
• Asymptotic notation
  • Big-Oh notation
  • Omega notation
  • Theta notation
• Mathematical background for algorithm analysis
• Analysis of selection sort and insertion sort
• Recurrences
  • Substitution Method
  • Recursion Tree Method
  • Master Method
Learning Objective

• To identify the different types of algorithms
• To define the measuring units of an algorithm
Algorithm

 An algorithm is
  a sequence of finite instructions
  which converts a given input
  into the desired output
  to solve a well-formulated problem.

Input → Algorithm → Output
Analysis

Consider you want to travel from location A to B.

• Bus: takes 2 hours and Rs. 40, but you will find a moderate rush.
• Train: takes 1 hour and Rs. 10, but you will find the maximum rush.
• Car: takes 1.25 hours and Rs. 200, with maximum comfort.

Which to select?
• Train, if you don't want to spend too much money but are ready to lose the comfort.
• Bus, if you can spend some more money and want some comfort too.
• Car, if you can spend any amount of money for the comfort.
Example

 Sort the following 10 numbers:
1, 3, 8, 2, 7, 6, 4, 9, 0, 5

There are different ways: Selection Sort, Insertion Sort, Quick Sort, Merge Sort, Radix Sort, Shell Sort … and so on.
Types of Algorithm

• Iterative Algorithm
• Recursive Algorithm
• Divide & Conquer Algorithm
• Greedy Method
• Dynamic Programming
• Backtracking
• Branch & Bound
• String Algorithm
Iterative Algorithm

 An iterative algorithm executes its steps in iterations. It aims to find successive approximations in sequence to reach a solution. Such algorithms are most often used in linear programming, where large numbers of variables are involved.
 Example
 Selection Sort
 Insertion Sort
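A minimal Python sketch of an iterative algorithm — selection sort run on the slide's own ten numbers (illustrative code, not taken from the slides):

```python
def selection_sort(a):
    """Iteratively place the minimum of the unsorted suffix at position i."""
    a = list(a)  # work on a copy
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):  # scan the unsorted suffix for its minimum
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # swap it into place
    return a

print(selection_sort([1, 3, 8, 2, 7, 6, 4, 9, 0, 5]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```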
Recursive Algorithm

 A recursive algorithm is a method of simplification that divides the problem into sub-problems of the same nature. The result of one recursion is the input for the next recursion. The repetition is in a self-similar fashion. The algorithm calls itself with smaller input values and obtains the result by simply performing the operations on these smaller values.
 Example
 Fibonacci Series
 Factorial
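Both examples above can be sketched in a few lines of Python (illustrative, not from the slides); each function calls itself on a smaller input until a base case stops the recursion:

```python
def factorial(n):
    # Base case stops the recursion; each call shrinks the input by 1.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def fib(n):
    # Naive recursion: each call spawns two smaller self-similar subproblems.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(factorial(5))  # 120
print(fib(10))       # 55
```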
Divide & Conquer Algorithm

 A divide-and-conquer algorithm recursively breaks down a problem into


two or more sub-problems of the same or related type, until these
become simple enough to be solved directly. The solutions to the sub-
problems are then combined to give a solution to the original problem.
 Example
 Quick Sort
 Merge Sort
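Merge sort is the textbook instance of this pattern; a compact Python sketch (illustrative, not from the slides) showing the divide and combine steps:

```python
def merge_sort(a):
    # Divide: split in half until the piece is trivially sorted.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 0]))  # [0, 1, 2, 5, 5, 9]
```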
Greedy Algorithm

 Among all the algorithmic approaches, the simplest and most straightforward is the Greedy method. In this approach, the decision is taken on the basis of currently available information, without worrying about the effect of the current decision in the future.
 Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an immediate benefit. This approach never reconsiders the choices taken previously. It is mainly used to solve optimization problems. The Greedy method is easy to implement and quite efficient in most cases. Hence, we can say that a Greedy algorithm is an algorithmic paradigm based on a heuristic that follows the locally optimal choice at each step with the hope of finding a globally optimal solution.
 In many problems, it does not produce an optimal solution, though it gives an approximate (near-optimal) solution in a reasonable time.
Optimization Problem

 An optimization problem is the problem of finding the best solution from all feasible
solutions.
 Optimization problems can be divided into two categories, depending on whether the
variables are continuous or discrete:
 An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation, or graph must be found from a countable set.
 A problem with continuous variables is known as a continuous optimization problem, in which an optimal value of a continuous function must be found. These can include constrained problems and multimodal problems.
Maxima Minima

 The point at which a function takes its minimum value is called the global minimum.
 Points that appear to be minima but are not where the function takes its minimum value are called local minima.
 The point at which a function takes its maximum value is called the global maximum.
 Points that appear to be maxima but are not where the function takes its maximum value are called local maxima.
Greedy Method

Sr. No   Weight   Profit
1        10       20
2        5        30
3        6        18

Capacity = 20, maximum profit (fractional) = 66
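The slide's profit of 66 can be reproduced by the greedy fractional-knapsack rule — take items in decreasing profit/weight ratio, splitting the last one (a Python sketch, not from the slides):

```python
def fractional_knapsack(items, capacity):
    """items: list of (weight, profit) pairs. Greedy by profit/weight ratio."""
    total = 0.0
    for w, p in sorted(items, key=lambda wp: wp[1] / wp[0], reverse=True):
        if capacity == 0:
            break
        take = min(w, capacity)      # take the whole item, or the fraction left
        total += p * take / w
        capacity -= take
    return total

# The slide's instance: capacity 20, items given as (weight, profit)
print(fractional_knapsack([(10, 20), (5, 30), (6, 18)], 20))  # 66.0
```

Item 2 (ratio 6) and item 3 (ratio 3) fit whole; 9 of item 1's 10 units fill the rest, so the profit is 30 + 18 + 18 = 66.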
Example

(Figure: a weighted graph on vertices A, B, C, D, E, used as a shortest-path example; edge weights include 10, 20, 30, and 60.)
Dynamic Programming

 Dynamic programming is basically recursion plus common sense. Recursion allows you to express the value of a function in terms of other values of that function, while the common sense tells you that if you implement your function in a way that the recursive calls are done in advance and stored for easy access, it will make your program faster.
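The "store the recursive calls" idea can be sketched with memoized Fibonacci (illustrative Python, not from the slides); caching makes each subproblem get solved once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem is computed once and cached: recursion plus common sense.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 -- instant; naive recursion would take hours
```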
Backtracking

 Backtracking is a general algorithm for finding all


(or some) solutions to some computational
problems, notably constraint satisfaction
problems, that incrementally builds candidates to
the solutions, and abandons a candidate
("backtracks") as soon as it determines that the
candidate cannot possibly be completed to a valid
solution.
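The "abandon a candidate as soon as it cannot be completed" step can be sketched on the sum-of-subsets problem from the syllabus (a Python illustration with assumed example numbers, not from the slides):

```python
def subset_sums(nums, target):
    """Return all subsets of nums that sum to target, via backtracking."""
    solutions = []

    def extend(i, chosen, remaining):
        if remaining == 0:
            solutions.append(list(chosen))   # candidate completed to a solution
            return
        if i == len(nums) or remaining < 0:
            return                           # abandon: cannot be completed
        chosen.append(nums[i])
        extend(i + 1, chosen, remaining - nums[i])  # include nums[i]
        chosen.pop()                                # backtrack
        extend(i + 1, chosen, remaining)            # exclude nums[i]

    extend(0, [], target)
    return solutions

print(subset_sums([3, 34, 4, 12, 5, 2], 9))  # [[3, 4, 2], [4, 5]]
```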
Branch & Bound

 Branch and bound algorithms are used to find the optimal solution for combinatorial, discrete, and general mathematical optimization problems. In general, given an NP-hard problem, a branch and bound algorithm explores the entire search space of possible solutions and provides an optimal solution.
String Algorithm

 In computer science, string-searching algorithms, sometimes


called string-matching algorithms, are an important class of string
algorithms that try to find a place where one or several strings
(also called patterns) are found within a larger string or text.
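The simplest member of this class, the naïve string-matching algorithm from Module 6, can be sketched in a few lines (illustrative Python, not from the slides):

```python
def naive_match(text, pattern):
    """Return all shifts s where pattern occurs in text (naive O(n*m) scan)."""
    n, m = len(text), len(pattern)
    # Try every valid shift and compare the window character by character.
    return [s for s in range(n - m + 1) if text[s:s + m] == pattern]

print(naive_match("abcabcab", "abc"))  # [0, 3]
```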
Space & Time Complexity

 The space complexity of an algorithm or a computer


program is the amount of memory space required to
solve an instance of the computational problem as a
function of characteristics of the input. It is the
memory required by an algorithm to execute a
program and produce output.
 The time complexity is the computational complexity
that describes the amount of time it takes to run an
algorithm. Time complexity is commonly estimated
by counting the number of elementary operations
performed by the algorithm, supposing that each
elementary operation takes a fixed amount of time to
perform.
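Counting elementary operations can be made concrete: this sketch (illustrative, not from the slides) counts the comparisons performed by a selection-sort-style nested scan, which come to n(n−1)/2 regardless of input order:

```python
def count_comparisons(a):
    """Count element comparisons made by a selection-sort-style nested scan."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1  # one elementary comparison per inner step
    return comparisons

# n(n-1)/2 comparisons: the count grows quadratically with the input size.
print(count_comparisons(list(range(10))))  # 45
```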
Growth of Function

 The relationship between input size and the performance of the


algorithm is called order of growth.
 Few efficiency classes:

Efficiency Class   Order of growth rate
Constant           1
Logarithmic        log n
Linear             n
Quadratic          n²
Cubic              n³
Exponential        2ⁿ
Factorial          n!
Asymptotic Notation

Asymptotic notations are used to represent the complexities of algorithms for asymptotic analysis. These notations are mathematical tools to represent the complexities. There are three notations that are commonly used:

• Big-Oh Notation (O): The Upper Bound
• Big-Omega Notation (Ω): The Lower Bound
• Big-Theta Notation (Θ): The Tight Bound
Big Oh (O) Notation

Let f(n) and g(n) be two non-negative functions indicating the time complexity of two algorithms. We can say that g(n) is an upper bound of f(n) if there exist some positive constants c and n₀ such that, to the right of n₀, the value of f(n) always lies on or below the value of c·g(n). It is denoted as f(n) = O(g(n)).
Big Omega (Ω) Notation

Let f(n) and g(n) be two non-negative functions indicating the time complexity of two algorithms. We can say that g(n) is a lower bound of f(n) if there exist some positive constants c and n₀ such that, to the right of n₀, the value of f(n) always lies on or above the value of c·g(n). It is denoted as f(n) = Ω(g(n)).
Big Theta (Θ) Notation

Let f(n) and g(n) be two non-negative functions indicating the time complexity of two algorithms. We can say that g(n) is a tight bound of f(n) if there exist some positive constants c₁, c₂ and n₀ such that, to the right of n₀, the value of f(n) always lies between c₁·g(n) and c₂·g(n) inclusively. It is denoted as f(n) = Θ(g(n)).
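These definitions can be sanity-checked numerically. This sketch uses f(n) = 3n + 2 as an assumed example (not from the slides) and verifies witness constants for each bound:

```python
# For f(n) = 3n + 2 and g(n) = n, the constants c = 4 and n0 = 2 witness
# f(n) = O(g(n)):  3n + 2 <= 4n  for all n >= 2.
def f(n):
    return 3 * n + 2

assert all(f(n) <= 4 * n for n in range(2, 1000))   # upper bound: f(n) = O(n)
assert all(f(n) >= 3 * n for n in range(1, 1000))   # lower bound: f(n) = Ω(n)
# Both bounds together witness the tight bound f(n) = Θ(n).
print("bounds verified")
```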
Asymptotic Notation

Summary — let f(n) and g(n) be two non-negative functions indicating the time complexity of two algorithms:

• f(n) = O(g(n)): there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀ (g(n) is an upper bound of f(n)).
• f(n) = Ω(g(n)): there exist positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀ (g(n) is a lower bound of f(n)).
• f(n) = Θ(g(n)): there exist positive constants c₁, c₂ and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ (g(n) is a tight bound of f(n)).
Mathematic Background

∑_{i=1}^{n} 1 = 1 + 1 + 1 + … + 1 = n = O(n)

∑_{i=1}^{n} i = 1 + 2 + 3 + … + n = n(n + 1)/2 = (n² + n)/2 = O(n²)

∑_{i=1}^{n} iᵏ = 1 + 2ᵏ + 3ᵏ + … + nᵏ ≈ n^(k+1)/(k + 1) = O(n^(k+1))

∑_{i=1}^{n} kⁱ = k + k² + k³ + … + kⁿ = (k^(n+1) − k)/(k − 1) = O(kⁿ)
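The closed forms above can be checked numerically (a small illustrative script; the choices n = 100 and k = 3 are arbitrary):

```python
# Numeric check of the summation identities for n = 100, k = 3.
n, k = 100, 3
assert sum(1 for i in range(1, n + 1)) == n                               # sum of 1's
assert sum(i for i in range(1, n + 1)) == n * (n + 1) // 2                # sum of i
assert sum(k**i for i in range(1, n + 1)) == (k**(n + 1) - k) // (k - 1)  # geometric
print("identities hold")
```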
3 cases of Analysis

• Best Case: the case in which the time complexity is minimum. This is the most favorable condition or scenario.
• Worst Case: the case in which the time complexity is maximum. This is the most unwanted condition or scenario.
• Average Case: the case in which the time complexity lies between the best and the worst. This is the case of maximum interest.
Selection Sort

Step   Frequency
1      n
2      n − 1
3      (n − 1) · n
4      (n − 1) · (n − 1)
5      (n − 1) · (n − 1) · p(T)
6      (n − 1) · (n − 1)
7      (n − 1) · (n − 1)
8      (n − 1) · (n − 1)
Total  O(n²)
Insertion Sort

Algorithm                        Cost   Frequency
for j ← 2 to length[A]           c1     n
    key ← A[j]                   c2     n − 1
    i ← j − 1                    c3     n − 1
    while i > 0 and A[i] > key   c4     ∑_{j=2..n} t_j
        A[i+1] ← A[i]            c5     ∑_{j=2..n} (t_j − 1)
        i ← i − 1                c6     ∑_{j=2..n} (t_j − 1)
    A[i+1] ← key                 c7     n − 1

(Here t_j is the number of times the while-loop test is executed for that value of j.)


Insertion Sort

Best Case:
In the best case, the while-loop test is executed once per iteration of the for loop, i.e., (n − 1) times in total, whereas the statements within the while loop are not executed (each t_j = 1). Hence, we can say that

T(n) = c1·n + (c2 + c3 + c4 + c7)·(n − 1)

Hence the above equation can be re-written as T(n) = a·n + b for constants a and b, which is linear in n, i.e., T(n) = O(n).
Insertion Sort

Worst Case and Average Case:
In the average case and worst case, the while-loop test is executed for each of the (n − 1) iterations of the for loop, and the statements within the while loop are also executed (t_j up to j). Hence, we can say that

T(n) = c1·n + (c2 + c3 + c7)·(n − 1) + c4·∑_{j=2..n} t_j + (c5 + c6)·∑_{j=2..n} (t_j − 1)

With t_j proportional to j, the sums are quadratic in n, so the above equation can be re-written as T(n) = a·n² + b·n + c, i.e., T(n) = O(n²).
Recurrence Relation

 An equation or inequality that describes a function in terms of its value


on smaller inputs.

 Recurrences arise when an algorithm contains recursive calls to itself


 What is the actual running time of the algorithm?
 We need to solve the recurrence:
  Find an explicit formula for the expression, or
  Bound the recurrence by an expression that involves n.
Example Recurrences

 Recursive algorithm that loops through the input to eliminate one item:
T(n) = T(n − 1) + n → O(n²)
 Recursive algorithm that halves the input in one step:
T(n) = T(n/2) + c → O(log n)
 Recursive algorithm that halves the input but must examine every item in the input:
T(n) = T(n/2) + n → O(n)
 Recursive algorithm that splits the input into 2 halves and does a constant amount of other work:
T(n) = 2T(n/2) + 1 → O(n)
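One way to build intuition for these bounds is to evaluate a recurrence numerically. This sketch (illustrative, assuming T(1) = 1 and integer halving) unrolls T(n) = T(n/2) + n and shows the total staying close to 2n:

```python
def unroll(step, n):
    """Evaluate a recurrence numerically by direct recursion, with T(1) = 1."""
    def T(m):
        return 1 if m <= 1 else step(T, m)
    return T(n)

# T(n) = T(n/2) + n: the halving terms n + n/2 + n/4 + ... sum to about 2n.
halve_scan = lambda T, n: T(n // 2) + n
print(unroll(halve_scan, 1024))  # 2047, roughly 2 * 1024 -- linear, as claimed
```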
Example Recurrences

 Binary Search: T(n) = T(n/2) + c

 Merge Sort: T(n) = 2T(n/2) + c·n

 Fibonacci Series: T(n) = T(n − 1) + T(n − 2) + c
Methods for Solving Recurrences

• Iteration Method
• Substitution Method
• Recursion Tree Method
• Master Method
Function Growth Rate
(Chart comparing the growth rates of the common efficiency classes.)
Substitution Method

 The Substitution Method consists of two main steps:
  Guess the solution.
  Use mathematical induction to find the boundary condition and show that the guess is correct.

Substitution Method - Example

Question:
Solve the recurrence T(n) = T(n/2) + 1 by the Substitution Method.

Answer:
We must show that it is asymptotically bounded by O(log n), i.e., that for some constant c,
T(n) ≤ c log n.
Put this in the given recurrence equation: the inductive step holds if c ≥ 1, hence our assumption is true if c ≥ 1.

Now if we put n = 1 we get T(1) = 1, but c log(1) = 0, so the bound fails at n = 1.
If we put n = 2 we get T(2) = T(1) + 1 = 1 + 1 = 2 and c log(2) = c, so the condition holds if c ≥ 2.
Hence, we can say our assumption is true for n ≥ 2 with c ≥ 2.
Substitution Method - Example

Consider T(n) = 2T(n/2) + n, where the guessed solution is O(n log n).
Recursion Tree Method

1. The Recursion Tree Method is a pictorial representation of a recursive method in the form of a tree, where at each level the nodes are expanded.
2. In general, we consider the second term in the recurrence as the root.
3. It is useful when a divide & conquer algorithm is used.
4. It is sometimes difficult to produce a good guess. In a recursion tree, each root and child represents the cost of a single subproblem.
5. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all per-level costs to determine the total cost of all levels of the recursion.
6. A recursion tree is best used to generate a good guess, which can then be verified by the Substitution Method.
Recursion Tree Method - Example

Consider the recurrence relation T(n) = 2T(n/2) + n.

The recursion tree for the above recurrence is shown on the following slides.
Recursion Tree Method - Example
(Recursion tree diagram expanded level by level, with the per-level costs summed on the right and the leaves contributing constant cost each.)
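The per-level sums can be checked numerically for T(n) = 2T(n/2) + n (a sketch assuming T(1) = 1 and exact halving at powers of two): each of the log₂ n levels costs n, so T(n) should track n log₂ n.

```python
import math

def T(n):
    # T(n) = 2 T(n/2) + n with T(1) = 1 -- the merge-sort recurrence.
    return 1 if n <= 1 else 2 * T(n // 2) + n

# Each of the log2(n) tree levels costs n, plus n leaves of cost 1.
n = 1024
print(T(n), n * math.log2(n))  # 11264 vs 10240.0 -- same order of growth
```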
Recursion Tree Method - Example

Consider the recurrence relation T(n) = 4T(n/2) + n.
Recursion Tree Method - Example

Consider the recurrence relation T(n) = T(n/3) + T(2n/3) + n.
Recursion Tree Method - Example

Consider the recurrence relation T(n) = T(n/4) + n.
Master Method Theorem

 The Master Method provides a way to solve recurrence equations of the form

T(n) = a·T(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. The Master Method requires memorization of three cases, but then the solution of many recurrences can be determined quite easily.
 The above recurrence describes the running time of an algorithm that divides a problem of size n into a subproblems, each of size n/b, where a and b are constants. The subproblems are solved recursively, each in time T(n/b). The cost of dividing the problem and combining the results of the subproblems is described by the function f(n).
 n/b can be interpreted as either the floor or the ceiling value.
Master Method Theorem Statement

Let a ≥ 1 and b > 1 be constants, let f(n) be a non-negative function, and let T(n) be defined on the non-negative integers by the recurrence

T(n) = a·T(n/b) + f(n)

where we interpret n/b as either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Example

Question:
Solve the recurrence equation T(n) = 4T(n/2) + n using the Master Method.

Answer:
Comparing the above recurrence equation with T(n) = a·T(n/b) + f(n), we get
a = 4, b = 2 and f(n) = n.

n^(log_b a) = n^(log₂ 4) = n², and f(n) = n = O(n^(2 − ε)) for ε = 1.

Applying case 1 of the Master Method theorem, we can say T(n) = Θ(n²).
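The case-1 answer can be cross-checked numerically (a sketch assuming T(1) = 1 and n a power of two): for quadratic growth, doubling n should roughly quadruple T(n).

```python
def T(n):
    # T(n) = 4 T(n/2) + n with T(1) = 1; Master Method case 1 gives Theta(n^2).
    return 1 if n <= 1 else 4 * T(n // 2) + n

# Doubling n roughly quadruples T(n) -- the signature of quadratic growth.
print(T(512), T(1024), T(1024) / T(512))  # ratio is approximately 4
```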


Example

Question:
Solve the recurrence equation T(n) = 2T(n/2) + n using the Master Method.

Answer:
Comparing the above recurrence equation with T(n) = a·T(n/b) + f(n), we get
a = 2, b = 2 and f(n) = n.

n^(log_b a) = n^(log₂ 2) = n, and f(n) = n = Θ(n^(log_b a)).

Applying case 2 of the Master Method theorem, we can say T(n) = Θ(n log n).
Example

Question:
Solve the recurrence equation T(n) = 3T(n/4) + n log n using the Master Method.

Answer:
Comparing the above recurrence equation with T(n) = a·T(n/b) + f(n), we get
a = 3, b = 4 and f(n) = n log n.

n^(log_b a) = n^(log₄ 3) ≈ n^0.793, and f(n) = n log n = Ω(n^(log₄ 3 + ε)).

For sufficiently large values of n, the regularity condition also holds:
a·f(n/b) = 3·(n/4)·log(n/4) ≤ (3/4)·n log n = c·f(n), with c = 3/4 < 1.

Applying case 3 of the Master Method theorem, we can say T(n) = Θ(n log n).