Introduction-DAA
• When the statements to count are introduced into Algorithm 1.6, the result is Algorithm 1.8. The change in the value of count by the time the program terminates is the number of steps executed by Algorithm 1.6. If count is zero to start with, then it will be 2n + 3 on termination.
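• A minimal sketch of the same idea in Python (assuming Algorithm 1.6 is the usual array-sum algorithm of the text; the function name and the exact counting convention here are illustrative, not taken from the text):

    def sum_with_count(a, n):
        # Sum the first n elements of a, incrementing count once per step executed.
        count = 0
        s = 0.0
        count += 1              # step for the assignment s := 0.0
        for i in range(n):
            count += 1          # step for each successful loop test
            s += a[i]
            count += 1          # step for the addition inside the loop
        count += 1              # step for the final (failing) loop test
        count += 1              # step for the return
        return s, count         # count == 2n + 3

    # For n = 5 the count is 2*5 + 3 = 13:
    print(sum_with_count([1, 2, 3, 4, 5], 5))   # (15.0, 13)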
Summary of Time Complexity
• The time complexity of an algorithm is given by the number of
steps taken by the algorithm.
• The number of steps is itself a function of the instance characteristics.
• Any specific instance may have several characteristics (e.g., number of inputs, number of outputs, etc.). Usually, the number of steps is computed as a function of some subset of these.
• Once the relevant characteristics (n, m, p, q, r, …) are selected, we can define what a step is. A step is any computation unit that is independent of the characteristics (n, m, p, q, r, …).
• Thus, 10 additions can be one step and 100 multiplications can also be one step; but n additions cannot, nor can m/2 additions, p + q subtractions, and so on.
… Summary of Time Complexity
• For many algorithms, the time complexity is not dependent solely on the
number of inputs or outputs or some other easily specified characteristic.
• For example, a searching algorithm may terminate in one step, or it may take two steps, and so on, depending on where the sought item appears in the input (see the sketch after this list).
• Consequently, when the chosen parameters are not adequate to
determine the step count uniquely, we can define three kinds of step
counts: best case, worst case, and average.
• Our motivation to determine step counts is to be able to compare the
time complexities of two algorithms that compute the same function and
also to predict the growth in run time as the instance characteristics
change.
• But determining the exact step count (best case, worst case and average)
of an algorithm can be difficult. Determining the step count exactly is not
a worthwhile endeavor, since the notion of step is itself inexact.
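• A hedged sketch of these differing step counts for a sequential search (counting one step per comparison is an illustrative convention, not taken from the text):

    def sequential_search(a, x):
        # Return (index of x in a, number of comparison steps); index -1 if absent.
        steps = 0
        for i, item in enumerate(a):
            steps += 1              # one step per element examined
            if item == x:
                return i, steps     # best case: x is first, steps == 1
        return -1, steps            # worst case: x is absent, steps == len(a)

    print(sequential_search([7, 3, 9, 1], 7))   # (0, 1)   best case
    print(sequential_search([7, 3, 9, 1], 4))   # (-1, 4)  worst case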
… Summary of Time Complexity
• Because of the inexactness of what a step stands for, the exact step count is not useful for comparative purposes.
• For most situations, it is adequate to be able to make a statement like c1n² ≤ tP(n) ≤ c2n² or tQ(n, m) = c1n + c2m, where c1 and c2 are non-negative constants.
• If we have two algorithms with complexities c1n² + c2n and c3n respectively, then we know that the one with complexity c3n will be faster than the one with complexity c1n² + c2n for sufficiently large values of n. For small values of n, either algorithm could be faster (depending on c1, c2, and c3).
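• A small numeric illustration of that crossover (the constants c1 = 1, c2 = 2, c3 = 100 below are arbitrary choices, not values from the text):

    c1, c2, c3 = 1, 2, 100
    for n in (10, 50, 97, 100, 200):
        quadratic = c1 * n * n + c2 * n      # c1*n^2 + c2*n
        linear = c3 * n                      # c3*n
        cheaper = "quadratic" if quadratic < linear else "linear"
        print(f"n={n:4d}  quadratic={quadratic:6d}  linear={linear:6d}  cheaper: {cheaper}")
    # With these constants the quadratic-cost algorithm is cheaper only while
    # n^2 + 2n < 100n, i.e. n < 98; for larger n the linear algorithm wins.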
Order of Growth
• When algorithms are compared with respect to their behavior for large input sizes, the measure used is known as the order of growth.
• The order of growth can be estimated by considering the dominant term of the running-time expression. The dominant term outweighs the values of the other terms when the input size is large.
For example:
• Let us consider T(n) = an + b where a > 0. If the input size n is multiplied by k, then the dominant term in T(n), i.e. an, is also multiplied by k. This produces T(kn) = k(an) + b, and we can say that T(n) has a linear order of growth.
• Let us consider T(n) = an² + bn + c where a > 0. If the input size n is multiplied by k, this produces T(kn) = (k²a)n² + (kb)n + c; thus the dominant term is multiplied by k², and we can say that T(n) has a quadratic order of growth, and so on for cubic, logarithmic, etc.
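• A quick numeric check of the quadratic case (the coefficients a = 2, b = 5, c = 7 are arbitrary illustrative values): doubling n multiplies T(n) by a factor that approaches k² = 4, because the a·n² term dominates.

    a, b, c = 2, 5, 7
    T = lambda n: a * n * n + b * n + c     # T(n) = a*n^2 + b*n + c
    for n in (10, 100, 10_000):
        print(n, T(2 * n) / T(n))           # ratios ~3.53, ~3.95, ~3.9995 -> 4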
Asymptotic Notation (O, Ω, θ)
• In asymptotic analysis, an algorithm A1 is considered better than an algorithm A2 if the order of growth of the running time of A1 is lower than that of A2.
• Asymptotic efficiency is therefore concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.
• Asymptotic Notations: for the efficiency analysis of any algorithm, we consider three basic cases:
– Best Case
– Average Case
– Worst Case
Asymptotic Notation (O, Ω, θ)
• There are 5 asymptotic notations: O (big oh), Ω (big omega), Θ (theta), o (little oh), and ω (little omega).
Big O Notation
• Big O notation is used to describe an asymptotic upper bound for the magnitude of a function in terms of another, simpler function.
• It is the formal method of expressing the upper
bound of an algorithm’s running time.
• It’s a measure of the longest amount of time it
could possibly take for the algorithm to complete.
• The function f(n) = O(g(n)) (read as “f of n is big oh of g of n”) iff there exist positive constants c and n0 such that f(n) ≤ c*g(n) for all n ≥ n0.
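• A hedged example of applying the definition (a standard textbook-style illustration, not taken from these slides): take f(n) = 3n + 2 and g(n) = n; then f(n) ≤ 4n for all n ≥ 2, so f(n) = O(n) with witnesses c = 4 and n0 = 2. A quick numeric check in Python:

    f = lambda n: 3 * n + 2        # f(n) = 3n + 2
    g = lambda n: n                # g(n) = n
    c, n0 = 4, 2                   # witnesses for the definition
    assert all(f(n) <= c * g(n) for n in range(n0, 10_000))   # holds for every n >= n0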
Omega Notation
• Just as big O notation provides an asymptotic upper bound on a function, Omega notation provides an asymptotic lower bound.
• Since Omega notation describes a lower bound, we use it to bound the best-case running time of an algorithm.
• The function f(n) = Ω(g(n)) (read as “f of n is omega of g of n”) iff there exist positive constants c and n0 such that f(n) ≥ c*g(n) for all n ≥ n0.
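• For example (an illustrative worked case, not from these slides): f(n) = 3n + 2 ≥ 3n for every n ≥ 1, so 3n + 2 = Ω(n) with c = 3 and n0 = 1.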
Theta Notation
• The theta notation is used when the function f can be bounded both from above
and below by the same function g.
• The function f(n) = Θ(g(n)) (read as “f of n is theta of g of n”) iff there exist positive constants c1, c2 and n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
• When we write f(n) = Θ(g(n)), we are saying that f lies between c1 times the function g and c2 times the function g, except possibly when n is smaller than n0.
• Here c1 and c2 are positive constants. Thus g is both a lower and upper bound on
the value of f for suitably large n.
• Another way to view the theta notation is that it says f(n) is both Ω(g(n)) and
O(g(n)).
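• For example (illustrative): since 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, we have 3n + 2 = Θ(n) with c1 = 3, c2 = 4 and n0 = 2.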
Little o Notation
• Little o notation is used to describe an upper bound that is not asymptotically tight; in other words, a loose upper bound on f(n).
• Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c*g(n) for every integer n ≥ n0.
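• For example (illustrative): n = o(n²), because for any c > 0 we have n < c*n² whenever n > 1/c; but 3n + 2 is not o(n), since for c = 1 there is no n0 beyond which 3n + 2 < n.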
Little ω Notation
• We use ω to denote a lower bound that is not asymptotically tight.
• Definition : Let f(n) and g(n) be functions that map positive integers to
positive real numbers. We say that f(n) is ω(g(n)) if for any real constant c >
0, there exists an integer constant n0 ≥ 1 such that f(n) > c * g(n) ≥ 0 for
every integer n ≥ n0.
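• For example (illustrative): n² = ω(n), because for any c > 0 we have n² > c*n whenever n > c; but n² is not ω(n²), since the required inequality fails for any c ≥ 1.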