
18AIC209T - Design and Analysis

of Algorithms

Mr. K. Jeya Ganesh Kumar


Assistant Professor
Department of Artificial Intelligence and Data Science
M.Kumarasamy College of Engineering
UNIT – I
Introduction
 Characteristics of Algorithm. Analysis of Algorithm:
 Performance Measurements of Algorithm, Time
and Space Trade-Offs
 Asymptotic analysis of Complexity Bounds – Best,
Average and Worst-Case behaviour
 Analysis of Recursive Algorithms through
Recurrence Relations
 Substitution Method
 Recursion Tree Method
 Master’s Theorem
Day – 1
Characteristics of Algorithm.
Analysis of Algorithm
Software Development
 Requirements
 Analysis: bottom-up vs. top-down
 Design: data objects and operations
 Refinement and Coding
 Verification
– Program Proving
– Testing
– Debugging
Algorithm
 Definition
An algorithm is a finite set of instructions that
accomplishes a particular task.
Design and Analysis of Algorithm
 An Algorithm is a sequence of steps to solve a problem.

 The design and analysis of algorithms is central to
constructing algorithms that solve the many kinds of
problems arising in computer science and information
technology.
Design and Analysis of Algorithms

• Analysis: predict the cost of an algorithm in
terms of resources and performance

• Design: design algorithms which minimize the
cost

Algorithm - Characteristics
The criteria that any set of instructions must satisfy to be an
algorithm are as follows:

 Input : Zero or more quantities are externally supplied.
 Output : At least one quantity is produced.
 Definiteness : Each instruction is clear and
unambiguous.
 Finiteness : The algorithm terminates after a finite
number of steps for all cases.
 Effectiveness : Each instruction is basic enough for a
person to carry out using pen and paper. That is, each
instruction must be not only definite but also feasible.
Advantages of an Algorithm
Designing an algorithm has the following
advantages:

 Effective Communication: Since an algorithm is written in
an English-like language, the step-by-step solution of a
problem is simple to understand.
 Easy Debugging: A well-designed algorithm makes
debugging easier, so logical errors in the program can be
identified.
 Easy and Efficient Coding: An algorithm acts as a
blueprint of a program and helps during program
development.
 Independent of Programming Language: An algorithm is
independent of programming languages and can be easily
coded using any high level language.
Analysis of algorithms
 Analysis of algorithms is the determination of the
amount of time and space resources required to
execute an algorithm.
 Usually, the efficiency or running time of an
algorithm is stated as a function relating the input
length to the number of steps (time complexity) or
to the volume of memory (space complexity).
Need for Analysis of Algorithms

 Algorithm analysis is an important part of
computational complexity theory, which provides
theoretical estimates of the resources an
algorithm requires to solve a specific
computational problem.
Performance Measurement of
Algorithms
 Performance analysis of an algorithm is done to understand
how efficient that algorithm is compared to another
algorithm that solves the same computational problem.
 The two most common measures are speed and memory
usage
Measurements
 Criteria
– Is it correct?
– Is it readable?
 Performance Analysis (machine independent)
– space complexity: storage requirement
– time complexity: computing time
 Performance Measurement (machine
dependent)
Time Complexity
 Time complexity is defined as the amount of
time taken by an algorithm to run, as a
function of the length of the input.
 It measures the time taken to execute each
statement of code in an algorithm.
Time Complexity
T(P) = C + T_P(I)
 Compile time (C)
independent of instance characteristics
 Run (execution) time T_P(I)
 Definition
A program step is a syntactically or semantically meaningful program segment whose execution time is
independent of the instance characteristics.
 Example
– abc = a + b + b * c + (a + b - c) / (a + b) + 4.0
– abc = a + b + c
T_P(n) = c_a·ADD(n) + c_s·SUB(n) + c_l·LDA(n) + c_st·STA(n)
Each step is regarded as one unit, which makes the
count machine independent.
Space Complexity
 The space complexity of an algorithm or a
computer program is the amount of memory
space required to solve an instance of the
computational problem as a function of
characteristics of the input.
 It is the memory required by an algorithm until it
executes completely.
 Space complexity includes both Auxiliary space
and space used by input.
Space Complexity
S(P) = C + S_P(I)
 Fixed Space Requirements (C)
Independent of the characteristics of the inputs and outputs
– instruction space
– space for simple variables, fixed-size structured variables, constants
 Variable Space Requirements (S_P(I))
depend on the instance characteristic I
– number, size, and values of inputs and outputs associated with I
– recursive stack space, formal parameters, local variables, return address
Asymptotic analysis
 It is a technique of representing limiting behavior.
 The methodology has applications across the
sciences.
 It can be used to analyze the performance of an
algorithm for some large data set.
Example
 In computer science in the analysis of algorithms,
considering the performance of algorithms when applied to
very large input datasets
 The simplest example is the function f(n) = n^2 + 3n; the
term 3n becomes insignificant compared to n^2 when n is
very large. The function f(n) is said to be "asymptotically
equivalent to n^2 as n → ∞", written symbolically as
f(n) ~ n^2.
Asymptotic notations
 Asymptotic notations are used to write fastest and
slowest possible running time for an algorithm. These
are also referred to as 'best case' and 'worst case'
scenarios respectively.
 "In asymptotic notations, we derive the complexity
concerning the size of the input. (Example in terms of
n)"
 "These notations are important because, without
computing the exact cost of running the algorithm, we
can estimate its complexity."
Why is Asymptotic Notation
Important?
 They give simple characteristics of an algorithm's
efficiency.
 They allow the comparisons of the performances
of various algorithms.
Asymptotic Notations:
 Asymptotic Notation is a way of comparing function
that ignores constant factors and small input sizes.
 The following notations are used to express the
running-time complexity of an algorithm:
– Big-oh notation (O)
– Omega Notation (Ω)
– Theta Notation (Θ)
– Little ‘Oh’ Notation (o)
– Little Omega (ω)
Common asymptotic functions
Function   Name
1          Constant
log n      Logarithmic
n          Linear
n log n    n log n
n^2        Quadratic
n^3        Cubic
2^n        Exponential
n!         Factorial
1. Big-oh notation(O):
 Big-oh is the formal method of expressing an upper bound
on an algorithm's running time. It is a measure of the
longest amount of time the algorithm can take. The
function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"]
if and only if there exist positive constants c and n0 such
that
 f(n) ≤ c·g(n) for all n ≥ n0
 Hence the function g(n) is an upper bound for f(n), as
g(n) grows at least as fast as f(n) beyond n0.
Linear Functions
 f(n) = 3n + 2
 General form is f(n) ≤ cg(n)
 When n ≥ 2, 3n + 2 ≤ 3n + n = 4n
 Hence f(n) = O(n), here c = 4 and n0 = 2
 When n ≥ 1, 3n + 2 ≤ 3n + 2n = 5n
 Hence f(n) = O(n), here c = 5 and n0 = 1
 Hence different (c, n0) pairs can satisfy the
definition for a given function.
Example:

 f(n) = 3n + 3
 When n ≥ 3, 3n + 3 ≤ 3n + n = 4n
 Hence f(n) = O(n), here c = 4 and n0 = 3
Example
 f(n) = 100n + 6
 When n ≥ 6, 100n + 6 ≤ 100n + n = 101n
 Hence f(n) = O(n), here c = 101 and n0 = 6
Quadratic Functions
 Example
 f(n) = 10n^2 + 4n + 2
 When n ≥ 2, 10n^2 + 4n + 2 ≤ 10n^2 + 5n
 When n ≥ 5, 5n ≤ n^2, so 10n^2 + 4n + 2 ≤
10n^2 + n^2 = 11n^2
 Hence f(n) = O(n^2), here c = 11 and n0 = 5
Example:
 f(n) = 1000n^2 + 100n - 6
 f(n) ≤ 1000n^2 + 100n for all values of n.
 When n ≥ 100, 100n ≤ n^2, so f(n) ≤ 1000n^2 +
n^2 = 1001n^2
 Hence f(n) = O(n^2), here c = 1001 and n0 =
100
Exponential Functions
 Example
 f(n) = 6·2^n + n^2
 When n ≥ 4, n^2 ≤ 2^n
 So f(n) ≤ 6·2^n + 2^n = 7·2^n
 Hence f(n) = O(2^n), here c = 7 and n0 = 4
Constant Functions
 Example
 f(n) = 10
 f(n) = O(1), because f(n) ≤ 10*1
Omega Notation (Ω)
 Ω(g(n)) = { f(n) : there exist positive constants c
and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
 It is the lower bound of any function; hence it
denotes the best-case complexity of an
algorithm.
Example:
 f(n) = 3n + 2
 3n + 2 > 3n for all n.
 Hence f(n) = Ω(n)
 Similarly we can solve all the examples
specified under Big ‘Oh’.
Theta Notation (Θ)
 Θ(g(n)) = { f(n) : there exist positive constants c1, c2
and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
 If f(n) = Θ(g(n)), then for all n to the right of n0, f(n)
lies on or above c1·g(n) and on or below c2·g(n). Hence
Θ is an asymptotically tight bound for f(n).
Example 1.14
 f(n) = 3n + 2
 f(n) = Θ(n), because 3n ≤ 3n + 2 ≤ 4n for n ≥ 2;
here c1 = 3, c2 = 4, and n0 = 2.
 Similarly we can solve all examples
specified under Big ‘Oh’.
Little ‘Oh’ Notation (o)
 o(g(n)) = { f(n) : for every positive constant
c > 0, there exists n0 > 0 such that 0 ≤ f(n) <
c·g(n) for all n ≥ n0 }
 It defines a strict (not tight) asymptotic upper
bound. The main difference from Big Oh is that
Big Oh requires the bound to hold for some
constant c, while Little Oh requires it to hold
for every constant c.
Little Omega (ω)
 ω(g(n)) = { f(n) : for every positive constant
c > 0, there exists n0 > 0 such that 0 ≤ c·g(n) <
f(n) for all n ≥ n0 }
 It defines a strict (not tight) asymptotic lower
bound. The main difference from Ω is that Ω
requires the bound to hold for some constant c,
while ω requires it to hold for every constant c.
Analyze an Algorithm:
 Best Case : The function which performs the
minimum number of steps on input data of n
elements.
 Worst Case : The function which performs the
maximum number of steps on input data of n
elements.
 Average Case : The function which performs an
average number of steps on input data of size n.
Example: In linear search,
 Best-case complexity is O(1) where the
element is found at the first index.
 Worst-case complexity is O(n) where the
element is found at the last index or element
is not present in the array.
 Average-case complexity of linear search is
about n/2 comparisons, which is still O(n).
Example: In binary search,
 Best-case complexity is O(1) where the
element is found at the middle index.
 The worst-case complexity is O(log2 n)
Let’s assume that we have an array of size 10,000.

 In a linear search,
– best case: 1 comparison and
– worst case: 10,000 comparisons.
 In a binary search,
– best case: 1 comparison and
– worst case: log2(10000) ≈ 13.3, i.e. at most
14 comparisons.
Data Type
 Data Type
A data type is a collection of objects and a set of operations that act on those
objects.
 Abstract Data Type
An abstract data type(ADT) is a data type that is organized in such a way that the
specification of the objects and the operations on the objects is separated from the
representation of the objects and the implementation of the operations.
Specification vs. Implementation
 Operation specification
– function name
– the types of arguments
– the type of the results
 Implementation independent
*Structure 1.1:Abstract data type Natural_Number (p.17)
structure Natural_Number is
objects: an ordered subrange of the integers starting at zero and ending

at the maximum integer (INT_MAX) on the computer


functions:
for all x, y ∈ Nat_Number; TRUE, FALSE ∈ Boolean
and where +, -, <, and == are the usual integer operations.
Nat_No Zero ( ) ::= 0
Boolean Is_Zero(x) ::= if (x) return FALSE
else return TRUE
Nat_No Add(x, y) ::= if ((x+y) <= INT_MAX) return x+y
else return INT_MAX
Boolean Equal(x,y) ::= if (x== y) return TRUE
else return FALSE
Nat_No Successor(x) ::= if (x == INT_MAX) return x
else return x+1
Nat_No Subtract(x,y) ::= if (x<y) return 0
else return x-y
::= is defined as
end Natural_Number
*Program 1.9: Simple arithmetic function (p.19)
float abc(float a, float b, float c)
{
return a + b + b * c + (a + b - c) / (a + b) + 4.00;
}
S_abc(I) = 0

*Program 1.10: Iterative function for summing a list of numbers (p.20)

float sum(float list[ ], int n)
{
    float tempsum = 0;
    int i;
    for (i = 0; i < n; i++)
        tempsum += list[i];
    return tempsum;
}
S_sum(I) = 0
Recall: the address of the first element of the array is
passed, while n is passed by value.
*Program 1.11: Recursive function for summing a list of numbers (p.20)
float rsum(float list[ ], int n)
{
    if (n) return rsum(list, n-1) + list[n-1];
    return 0;
}
S_rsum(I) = S_rsum(n) = 6n

Assumptions:
*Figure 1.1: Space needed for one recursive call of Program 1.11 (p.21)

Type Name Number of bytes


parameter: float list [ ] 2
parameter: integer n 2
return address:(used internally) 2(unless a far address)
TOTAL per recursive call 6
Methods to compute the step count

 Introduce variable count into programs


 Tabular method
– Determine the total number of steps contributed by each statement:
steps per execution × frequency
– add up the contributions of all statements
Iterative summing of a list of numbers
*Program 1.12: Program 1.10 with count statements (p.23)

float sum(float list[ ], int n)
{
    float tempsum = 0; count++;      /* for assignment */
    int i;
    for (i = 0; i < n; i++) {
        count++;                     /* for the for loop */
        tempsum += list[i]; count++; /* for assignment */
    }
    count++;                         /* last execution of for */
    count++;                         /* for return */
    return tempsum;
}
2n + 3 steps
*Program 1.13: Simplified version of Program 1.12 (p.23)

float sum(float list[ ], int n)
{
    float tempsum = 0;
    int i;
    for (i = 0; i < n; i++)
        count += 2;
    count += 3;
    return 0;
}
2n + 3 steps
Recursive summing of a list of numbers
*Program 1.14: Program 1.11 with count statements added (p.24)

float rsum(float list[ ], int n)
{
    count++;            /* for if conditional */
    if (n) {
        count++;        /* for return and rsum invocation */
        return rsum(list, n-1) + list[n-1];
    }
    count++;            /* for return */
    return list[0];
}
2n + 2 steps
Matrix addition

*Program 1.15: Matrix addition (p.25)

void add(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE],
         int c[ ][MAX_SIZE], int rows, int cols)
{
    int i, j;
    for (i = 0; i < rows; i++)
        for (j = 0; j < cols; j++)
            c[i][j] = a[i][j] + b[i][j];
}
*Program 1.16: Matrix addition with count statements (p.25)

void add(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE],
         int c[ ][MAX_SIZE], int rows, int cols)
{
    int i, j;
    for (i = 0; i < rows; i++) {
        count++;             /* for i for loop */
        for (j = 0; j < cols; j++) {
            count++;         /* for j for loop */
            c[i][j] = a[i][j] + b[i][j];
            count++;         /* for assignment statement */
        }
        count++;             /* last time of j for loop */
    }
    count++;                 /* last time of i for loop */
}
2·rows·cols + 2·rows + 1 steps
*Program 1.17: Simplification of Program 1.16 (p.26)

void add(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE],
         int c[ ][MAX_SIZE], int rows, int cols)
{
    int i, j;
    for (i = 0; i < rows; i++) {
        for (j = 0; j < cols; j++)
            count += 2;
        count += 2;
    }
    count++;
}
2·rows·cols + 2·rows + 1 steps
Suggestion: Interchange the loops when rows >> cols
Tabular Method
*Figure 1.2: Step count table for Program 1.10 (p.26)
Iterative function to sum a list of numbers
steps/execution
Statement s/e Frequency Total steps
float sum(float list[ ], int n) 0 0 0
{ 0 0 0
float tempsum = 0; 1 1 1
int i; 0 0 0
for(i=0; i <n; i++) 1 n+1 n+1
tempsum += list[i]; 1 n n
return tempsum; 1 1 1
} 0 0 0
Total 2n+3
Recursive Function to sum of a list of numbers
*Figure 1.3: Step count table for recursive summing function (p.27)

Statement s/e Frequency Total steps


float rsum(float list[ ], int n) 0 0 0
{ 0 0 0
if (n) 1 n+1 n+1
return rsum(list, n-1)+list[n-1]; 1 n n
return list[0]; 1 1 1
} 0 0 0
Total 2n+2
Matrix Addition
*Figure 1.4: Step count table for matrix addition (p.27)

Statement s/e Frequency Total steps

Void add (int a[ ][MAX_SIZE]‧‧‧) 0 0 0


{ 0 0 0
int i, j; 0 0 0
for (i = 0; i < rows; i++) 1 rows+1 rows+1
for (j=0; j< cols; j++) 1 rows‧(cols+1) rows‧cols+rows
c[i][j] = a[i][j] + b[i][j]; 1 rows‧cols rows‧cols
} 0 0 0

Total 2rows‧cols+2rows+1
Exercise 1

*Program 1.18: Printing out a matrix (p.28)

void print_matrix(int matrix[ ][MAX_SIZE], int rows, int cols)
{
    int i, j;
    for (i = 0; i < rows; i++) {
        for (j = 0; j < cols; j++)
            printf("%d ", matrix[i][j]);
        printf("\n");
    }
}
Exercise 2

*Program 1.19:Matrix multiplication function(p.28)

void mult(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE], int c[ ][MAX_SIZE])
{
    int i, j, k;
    for (i = 0; i < MAX_SIZE; i++)
        for (j = 0; j < MAX_SIZE; j++) {
            c[i][j] = 0;
            for (k = 0; k < MAX_SIZE; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}
Exercise 3
*Program 1.20:Matrix product function(p.29)

void prod(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE], int c[ ][MAX_SIZE],
          int rowsa, int colsb, int colsa)
{
    int i, j, k;
    for (i = 0; i < rowsa; i++)
        for (j = 0; j < colsb; j++) {
            c[i][j] = 0;
            for (k = 0; k < colsa; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}
Exercise 4

*Program 1.21:Matrix transposition function (p.29)

void transpose(int a[ ][MAX_SIZE])
{
    int i, j, temp;
    for (i = 0; i < MAX_SIZE-1; i++)
        for (j = i+1; j < MAX_SIZE; j++)
            SWAP(a[i][j], a[j][i], temp);  /* SWAP is the usual swap macro */
}
Asymptotic Notation (O)
 Definition
f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
 Examples
– 3n+2 = O(n) /* 3n+2 ≤ 4n for n ≥ 2 */
– 3n+3 = O(n) /* 3n+3 ≤ 4n for n ≥ 3 */
– 100n+6 = O(n) /* 100n+6 ≤ 101n for n ≥ 6 */
– 10n^2+4n+2 = O(n^2) /* 10n^2+4n+2 ≤ 11n^2 for n ≥ 5 */
– 6·2^n+n^2 = O(2^n) /* 6·2^n+n^2 ≤ 7·2^n for n ≥ 4 */
Example
 Complexity of c1·n^2 + c2·n versus c3·n
– for sufficiently large values of n, c3·n is faster than
c1·n^2 + c2·n
– for small values of n, either could be faster
• c1=1, c2=2, c3=100 --> c1·n^2 + c2·n ≤ c3·n for n ≤ 98
• c1=1, c2=2, c3=1000 --> c1·n^2 + c2·n ≤ c3·n for n ≤ 998
– break-even point
• no matter what the values of c1, c2, and c3, there is an n
beyond which c3·n is always faster than c1·n^2 + c2·n
 O(1): constant
 O(n): linear
 O(n2): quadratic
 O(n3): cubic
 O(2n): exponential
 O(log n)
 O(n log n)
*Figure 1.7: Function values (p.38)
*Figure 1.8: Plot of function values (p.39)
*Figure 1.9: Times on a 1-billion-instructions-per-second computer (p.40)
