
Asymptotic Analysis of Algorithms (Growth of Functions)
Resources for an algorithm are usually expressed as a function of the input size. Often this function is messy and complicated to work with. To study the growth of a function efficiently, we reduce the function down to its important part.

Let f(n) = an² + bn + c

In this function, the an² term dominates the function when n gets sufficiently large.

The dominant term is what we are interested in when reducing a function: we ignore all constants and coefficients and look at the highest-order term in n.

Asymptotic notation:

The word asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).

Why is Asymptotic Notation Important?

1. They give simple characteristics of an algorithm's efficiency.

2. They allow the comparison of the performance of various algorithms.
Asymptotic Notations

Asymptotic notations are the mathematical notations used to describe the running time of an algorithm when the input tends towards a particular value or a limiting value.

For example: in bubble sort, when the input array is already sorted, the time taken by the algorithm is linear, i.e., the best case.

But when the input array is in reverse order, the algorithm takes the maximum (quadratic) time to sort the elements, i.e., the worst case.

When the input array is neither sorted nor in reverse order, it takes average time. These durations are denoted using asymptotic notations.
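To make this concrete, here is a minimal sketch (our own illustration, not from the source) of bubble sort with an early-exit flag, which is what makes the best case linear:

def bubble_sort(arr):
    # Bubble sort with an early-exit flag.
    # Best case (already sorted): one pass with no swaps -> linear time.
    # Worst case (reverse order): n - 1 full passes -> quadratic time.
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps made: the array is already sorted
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]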

There are mainly three asymptotic notations: Theta notation, Omega notation and Big-O notation.

Theta Notation (Θ-notation)

Theta notation encloses the function from above and below. Since it represents the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.

For a function g(n), Θ(g(n)) is given by the relation:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that f(n) can be sandwiched between c1g(n) and c2g(n) for sufficiently large n.

If a function f(n) lies anywhere in between c1g(n) and c2g(n) for all n ≥ n0, then g(n) is said to be an asymptotically tight bound for f(n).


Example:
1) f(n) = 30
To satisfy the theta condition, the function can be expressed as
29·1 ≤ f(n) ≤ 30·1 for all n ≥ 1,
where c1 = 29, c2 = 30 and n0 = 1
f(n) = Θ(1)

2) f(n) = 30n + 7
For n ≥ 7, the constant 7 can be replaced by n:
30n + 7 ≤ 30n + n = 31n (c2 = 31, n0 = 7)
Also 30n < 30n + 7 for all n, so c1 = 30
30n < 30n + 7 ≤ 31n, where c1 = 30, c2 = 31, n0 = 7
f(n) = Θ(n)

3) f(n) = 5n² + 15n
For n ≥ 15, the term 15n can be replaced by n·n:
5n² + 15n ≤ 5n² + n·n = 6n² (c2 = 6, n0 = 15)
Also 5n² < 5n² + 15n for all n, so c1 = 5
5n² < 5n² + 15n ≤ 6n², where c1 = 5, c2 = 6 and n0 = 15
f(n) = Θ(n²)

4) f(n) = 5n² + 5n + 6
For n ≥ 6: 5n² + 5n + 6 ≤ 5n² + 5n + n = 5n² + 6n
For n ≥ 6, n² ≥ 6n, so 5n² + 6n ≤ 5n² + n² = 6n² (c2 = 6, n0 = 6)
Also 5n² < 5n² + 5n + 6 for all n, so c1 = 5
f(n) = Θ(n²)

5) f(n) = 5n³ + n² + 3n + 2
For n ≥ 2: 5n³ + n² + 3n + 2 ≤ 5n³ + n² + 3n + n = 5n³ + n² + 4n
For n ≥ 4, n² ≥ 4n: 5n³ + n² + 4n ≤ 5n³ + n² + n² = 5n³ + 2n²
For n ≥ 2, n³ ≥ 2n²: 5n³ + 2n² ≤ 5n³ + n³ = 6n³, so c2 = 6 and n0 = 4 (the largest threshold used)
Also 5n³ < 5n³ + n² + 3n + 2 for all n, so c1 = 5
5n³ < 5n³ + n² + 3n + 2 ≤ 6n³ for all n ≥ 4
f(n) = Θ(n³)
Big-O Notation (O-notation)

Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.

Big-O gives the upper bound of a function:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set O(g(n)) if there exists a positive constant c such that f(n) lies between 0 and cg(n) for sufficiently large n.

For any value of n ≥ n0, the running time of the algorithm does not exceed the time given by cg(n).

Since it gives the worst-case running time of an algorithm, it is widely used to analyze algorithms, as we are usually most interested in the worst-case scenario.

Example
1) f(n) = 20
To satisfy the big-O condition:
f(n) ≤ 20·1 for all n ≥ 1, where c = 20, n0 = 1
f(n) = O(1)

2) f(n) = 30n + 7
For n ≥ 7: 30n + 7 ≤ 30n + n = 31n, where c = 31, n0 = 7
f(n) = O(n)
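In the same spirit (again our own sketch, with a hypothetical helper name), the upper bound from example 2 can be checked numerically:

def big_o_holds(f, g, c, n0, n_max=10000):
    # Check 0 <= f(n) <= c*g(n) for all n0 <= n <= n_max.
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

print(big_o_holds(lambda n: 30 * n + 7, lambda n: n, c=31, n0=7))  # True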

Omega Notation (Ω-notation)

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.

Omega gives the lower bound of a function:

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set Ω(g(n)) if there exists a positive constant c such that f(n) lies above cg(n) for sufficiently large n.

For any value of n ≥ n0, the minimum time required by the algorithm is given by Ω(g(n)).

Example
1) f(n) = 20n + 7
20n < 20n + 7 for all n; c = 20, n0 = 1
f(n) = Ω(n)

2) f(n) = 5n² + 5n + 6
5n² < 5n² + 5n + 6 for all n; c = 5, n0 = 1
f(n) = Ω(n²)

Asymptotic analysis

It is a technique for representing limiting behavior. The methodology has applications across the sciences. It can be used to analyze the performance of an algorithm on some large data set.

1. In computer science, in the analysis of algorithms: considering the performance of algorithms when applied to very large input datasets.

The simplest example is the function ƒ(n) = n² + 3n: the term 3n becomes insignificant compared to n² when n is very large. The function ƒ(n) is said to be "asymptotically equivalent to n² as n → ∞", written symbolically as ƒ(n) ~ n².
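A short numeric sketch (ours, not from the text) makes the equivalence visible: the ratio ƒ(n)/n² tends to 1 as n grows.

f = lambda n: n**2 + 3 * n
for n in (10, 100, 1000, 10000):
    print(n, f(n) / n**2)  # 1.3, 1.03, 1.003, 1.0003 -> the ratio approaches 1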

Asymptotic notations are used to express the fastest and slowest possible running times of an algorithm. These are also referred to as the 'best case' and 'worst case' scenarios, respectively.

"In asymptotic notations, we derive the complexity


concerning the size of the input. (Example in terms of
n)"

"These notations are important because without


expanding the cost of running the algorithm, we can
estimate the complexity of the algorithms."
Analyzing Algorithm Control Structure
To analyze a programming code or algorithm, we must notice that each instruction affects the overall performance of the algorithm; therefore, each instruction must be analyzed separately. However, there are some algorithm control structures which are present in every programming code and have a specific asymptotic analysis.

Some Algorithm Control Structures are:

1. Sequencing
2. If-then-else
3. For loop
4. While loop

1. Sequencing:

Suppose our algorithm consists of two parts, A and B, where A takes time tA and B takes time tB. The total computation time is tA + tB according to the sequence rule; according to the maximum rule, this computation time is Θ(max(tA, tB)).

Example:
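As an illustration (a sketch of ours, with hypothetical part names), consider an algorithm built from two parts run in sequence:

def part_a(arr):
    # Part A: one pass over the input, time tA = O(n).
    return sum(arr)

def part_b(arr):
    # Part B: a nested pass, time tB = O(n^2).
    return sum(x * y for x in arr for y in arr)

def algorithm(arr):
    # Sequencing rule: total time tA + tB = O(n) + O(n^2),
    # which by the maximum rule is O(max(n, n^2)) = O(n^2).
    return part_a(arr) + part_b(arr)

print(algorithm([1, 2, 3]))  # 6 + 36 = 42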
2. If-then-else:

The total computation time of an "if-then-else" follows the condition rule. According to the maximum rule, this computation time is max(tA, tB), where tA and tB are the computation times of the two branches.

Example:
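As an illustration (our own sketch): since we cannot know in advance which branch will run, the worst case of the conditional is the cost of the more expensive branch.

def conditional(arr, fast_path):
    # Maximum rule for if-then-else:
    # worst case = max(t_then, t_else) = max(O(1), O(n log n)) = O(n log n).
    if fast_path:
        return arr[0]       # "then" branch: O(1)
    return sorted(arr)      # "else" branch: O(n log n)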
3. For loop:

The general format of a for loop is:

for (initialization; condition; update)
    statement(s);
Complexity of a for loop:
The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N × M times, so the total complexity of the two loops is O(N × M); when M = N, this is O(N²).
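A small sketch (ours) that counts the inner-statement executions of such a pair of loops:

def count_inner(N, M):
    count = 0
    for i in range(N):        # outer loop: N iterations
        for j in range(M):    # inner loop: M iterations per outer iteration
            count += 1        # the innermost statement
    return count

print(count_inner(10, 20))    # 200 = N * M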

Consider the following loop:

for i ← 1 to n
{
    P(i)
}

If the computation time ti of P(i) varies as a function of i, then the total computation time for the loop is given not by a multiplication but by a sum; i.e., the loop above takes time t1 + t2 + ... + tn, the sum of ti over i = 1 to n.
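For instance (our own sketch), if P(i) performs i units of work on iteration i, the loop costs 1 + 2 + ... + n = n(n + 1)/2 units, a sum rather than a simple product:

def loop_cost(n):
    total = 0
    for i in range(1, n + 1):
        total += i            # P(i) does i units of work
    return total

n = 100
print(loop_cost(n), n * (n + 1) // 2)  # both print 5050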

If the algorithms consist of nested "for" loops, then the


total computation time is
For i ← 1 to n
{

For j ← 1 to n
{
P (ij)
}
}

Example:

Consider the following "for" loop and calculate its total computation time:

For i ← 2 to n-1
{
    For j ← 3 to i
    {
        Sum ← Sum + A[i][j]
    }
}

Solution:
For each value of i, the inner loop body executes i - 2 times (zero times when i = 2). The total computation time is therefore
Σ (i = 2 to n-1) of (i - 2) = 0 + 1 + ... + (n - 3) = (n - 3)(n - 2)/2 = O(n²).
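A quick check of this formula (our own sketch), counting the actual inner-loop executions:

def count_executions(n):
    count = 0
    for i in range(2, n):           # i = 2 to n-1
        for j in range(3, i + 1):   # j = 3 to i
            count += 1              # one execution of the loop body
    return count

n = 50
print(count_executions(n), (n - 3) * (n - 2) // 2)  # both print 1128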
4. While loop:

The simple technique for analyzing a while loop is to find a function of the variables involved whose value decreases each time around the loop; for the loop to terminate, this value must remain a positive integer. By keeping track of how many times the value of the function decreases, one can obtain the number of repetitions of the loop. The other approach for analyzing while loops is to treat them as recursive algorithms.

Algorithm (find the location LOC and the value MAX of the largest element of DATA[1..N]):

1. [Initialize] Set K := 1, LOC := 1 and MAX := DATA[1]
2. Repeat steps 3 and 4 while K ≤ N
3. If MAX < DATA[K], then: Set LOC := K and MAX := DATA[K]
4. Set K := K + 1
   [End of step 2 loop]
5. Write: LOC, MAX
6. EXIT
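A direct translation into Python (a sketch of ours; the original one-based indexing is simulated on a zero-based list):

def find_max(data):
    # Returns the 1-based location LOC and value MAX of the largest element.
    loc, largest = 1, data[0]      # [Initialize]
    k = 1
    while k <= len(data):          # Repeat while K <= N
        if largest < data[k - 1]:  # If MAX < DATA[K], then ...
            loc, largest = k, data[k - 1]
        k += 1                     # Set K := K + 1
    return loc, largest

print(find_max([3, 9, 2, 9, 5]))   # (2, 9)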

Example:

Show that the running time of the algorithm arrayMax, which computes the maximum element in an array of n integers, is O(n).

Solution:
arrayMax(A, n)
1. currentMax ← A[0]
2. For i ← 1 to n-1
3.     do if currentMax < A[i]
4.         then currentMax ← A[i]
5. return currentMax

The number of primitive operations t(n) executed by this algorithm is:

at least 2 + 1 + n + 4(n-1) + 1 = 5n
at most 2 + 1 + n + 6(n-1) + 1 = 7n - 2

The best case T(n) = 5n occurs when A[0] is the maximum element, so that currentMax is never reassigned. The worst case T(n) = 7n - 2 occurs when the elements are sorted in increasing order, so that currentMax is reassigned at every iteration.

We may, therefore, apply the big-O definition with c = 7 and n0 = 1 and conclude that the running time of this algorithm is O(n).
