SOME PRE-REQUISITES AND ASYMPTOTIC BOUNDS
Structure

2.0 Introduction
2.1 Objectives
2.2 Some Useful Mathematical Functions & Notations
    2.2.1 Functions & Notations
    2.2.2 Modular Arithmetic/Mod Function
2.3 Mathematical Expectation
2.4 Principle of Mathematical Induction
2.5 Concept of Efficiency of an Algorithm
2.6 Well Known Asymptotic Functions & Notations
    2.6.1 Enumerate the Five Well-Known Approximation Functions and How These are Pronounced
    2.6.2 The Notation O
    2.6.3 The Ω Notation
    2.6.4 The Notation Θ
    2.6.5 The Notation o
    2.6.6 The Notation ω
2.7 Summary
2.8 Solutions/Answers
2.9 Further Readings
2.0 INTRODUCTION
We have already mentioned that there may be more than one algorithm that solves a given problem. In Section 3.3, we shall discuss eight algorithms to sort a given list of numbers, each algorithm having its own merits and demerits. Analysis of algorithms, the basics of which we study in Unit 3, is an essential tool for making well-informed decisions in order to choose the most suitable algorithm, out of the available ones if any, for the problem or application under consideration.

A number of mathematical and statistical tools, techniques and notations form an essential part of the baggage for the analysis of algorithms. We discuss some of these tools and techniques and introduce some notations in Section 2.2. However, for a detailed discussion of some of these topics, one should refer to the course material of MCS-013.

Also, in this unit, we will study a number of well-known approximation functions. These approximation functions, which calculate approximate values of the quantities under consideration, prove quite useful in many situations where some of the involved quantities are calculated just for comparison with each other, and where the correct result of comparing the quantities can be obtained even with approximate values. In such situations, the advantage is that the approximate values may be calculated much more efficiently than the actual values.

"The understanding of the theory of a routine may be greatly aided by providing, at the time of construction, one or two statements concerning the state of the machine at well chosen points… In the extreme form of the theoretical method a watertight mathematical proof is provided for the assertions. In the extreme form of the experimental method the routine is tried out on the machine with a variety of initial conditions and is pronounced fit if the assertions hold in each case. Both methods have their weaknesses."

— A.M. Turing, Ferranti Mark 1 Programming Manual (1950)
2.1 OBJECTIVES
After going through this Unit, you should be able to:
2.2 SOME USEFUL MATHEMATICAL FUNCTIONS & NOTATIONS

Unless mentioned otherwise, we use the letters N, I and R in the following sense:

N = {1, 2, 3, …}
I = {…, −2, −1, 0, 1, 2, …}
R = set of Real numbers.
(i) Summation:

The expression
a₁ + a₂ + … + aᵢ + … + aₙ
may be denoted in shorthand as

∑ᵢ₌₁ⁿ aᵢ

(ii) Product:

The expression
a₁ × a₂ × … × aᵢ × … × aₙ
may be denoted in shorthand as

∏ᵢ₌₁ⁿ aᵢ
Definition 2.2.1.2:

Function:
For two given sets A and B (which need not be distinct, i.e., A may be the same as B)
a rule f which associates with each element of A, a unique element of B, is called a
function from A to B. If f is a function from a set A to a set B then we denote the fact
by f: A → B. Also, for x ∈ A, f(x) is called the image of x in B. Then, A is called the domain of f and B is called the codomain of f.
Example 2.2.1.3:
Let f: I → I be defined such that
f(x) = x² for all x ∈ I
Then
f maps −4 to 16
f maps 0 to 0
f maps 5 to 25
Remark 2.2.1.4:
We may note the following:
(i) If f: X → Y is a function, then there may be more than one element, say x₁ and x₂, such that
f(x₁) = f(x₂).
For example, in Example 2.2.1.3,
f(2) = f(−2) = 4.

(ii) For each element x ∈ X, there must be at least one element y ∈ Y such that f(x) = y. However, it is not necessary that for each element y ∈ Y there must be an element x ∈ X such that f(x) = y. For example, for y = −3 ∈ Y there is no x ∈ X such that f(x) = x² = −3.

By putting the restriction on a function f that for each y ∈ Y there must be at least one element x of X such that f(x) = y, we get special functions, called onto or surjective functions, which shall be defined soon.
Definition 2.2.1.5:
We have already seen that the function defined in Example 2.2.1.3 is not 1-1∗. However, by changing the domain, though defined by the same rule, f becomes a 1-1 function.
Example 2.2.1.6:
In this particular case, if we change the domain from I to N = {1, 2, 3, …}, then we can easily check that the function
∗ Some authors write 1-to-1 instead of 1-1. However, other authors call a function 1-to-1 if f is both 1-1 and onto (to be defined in a short while).
f: N → I defined as
f(x) = x², for all x ∈ N,
is 1-1.
Because, in this case, for each x ∈ N its negative −x ∉ N. Hence f(x) = f(y) implies x = y. For example, if f(x) = 4 then there is only one value of x, viz., x = 2, such that f(2) = 4.
Definition 2.2.1.7:
We have already seen that the function defined in Example 2.2.1.3 is not onto. However, in this case, either by changing the codomain Y or by changing the rule (or both), we can make f onto.
Definition 2.2.1.10:
Monotonic Functions: For the definition of monotonic functions, we consider
only functions
f: R → R
where R is the set of real numbers∗.

∗ Monotonic functions f: X → Y may be defined even when each of X and Y, instead of being R, may be any ordered set. But such a general definition is not required for our purpose.
A function f: R → R is said to be monotonically increasing if for x, y ∈ R and x ≤ y we have f(x) ≤ f(y).
In other words, as x increases, the value of its image f(x) also increases for a
monotonically increasing function.
Further, f is said to be strictly monotonically increasing if x < y implies f(x) < f(y).
Example 2.2.1.11:
We will discuss, after a short while, the useful functions called the floor and ceiling functions, which are monotonic but not strictly monotonic.
Similarly, f is said to be monotonically decreasing if x ≤ y implies f(x) ≥ f(y), and strictly monotonically decreasing if x < y implies f(x) > f(y).
Example 2.2.1.12:
Let f: R → R be defined as
f(x) = −x + 3.
If x₁ ≥ x₂ then −x₁ ≤ −x₂, implying −x₁ + 3 ≤ −x₂ + 3,
which further implies f(x₁) ≤ f(x₂).
Hence, f is monotonically decreasing.
Next, we define Floor and Ceiling functions which map every real number to an
integer.
Definition 2.2.1.13:
Floor Function: maps each real number x to the integer which is the greatest of all integers less than or equal to x. The image of x is denoted by ⌊x⌋.
Definition 2.2.1.14:
Ceiling Function: maps each real number x to the integer which is the least of all integers greater than or equal to x. The image of x is denoted by ⌈x⌉.

For any real number x, we have
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.
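Both functions are available in Python's math module; a quick check (our own sketch, not part of the original text) of the chain of inequalities above:

```python
import math

# Verify x - 1 < floor(x) <= x <= ceil(x) < x + 1 for a few sample values
for x in [3.7, -3.7, 5.0]:
    lo, hi = math.floor(x), math.ceil(x)
    assert x - 1 < lo <= x <= hi < x + 1
    print(x, lo, hi)
```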
Example 2.2.1.15:
Each of the floor function and ceiling function is a monotonically increasing function, but not strictly monotonically increasing. Because, for real numbers x and y, if x ≤ y then y = x + k for some k ≥ 0.

Similarly,
⌈y⌉ = ⌈x + k⌉ = least integer greater than or equal to x + k ≥ least integer greater than or equal to x = ⌈x⌉.

But, each of the floor and ceiling functions is not strictly increasing, because
(ii) If it is the 5th day (i.e., Friday) of a week, then after 4 days it will be the 2nd day (i.e., Tuesday), of course of another week (whenever the number of the day exceeds 7, we subtract n = 7 from the number; we are taking here Sunday as the 7th day, instead of the 0th day).

(iii) If it is the 6th month (i.e., June) of a year, then after 8 months it will be the 2nd month (i.e., February) of, of course, another year (whenever the number of the month exceeds 12, we subtract n = 12).
Definition 2.2.2.1:
b mod n: if n is a given positive integer and b is any integer, then b mod n is the remainder r, with 0 ≤ r < n, obtained on dividing b by n.

If b = −42 and n = 11 then
b mod n = −42 mod 11 = 2 (∵ −42 = (−4) × 11 + 2).

The mod function can also be expressed in terms of the floor function as follows:
b mod n = b − ⌊b/n⌋ × n
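The floor-based formula translates directly into code; a minimal Python sketch of ours (the function name floor_mod is our own):

```python
import math

def floor_mod(b: int, n: int) -> int:
    # b mod n = b - floor(b/n) * n, as in Definition 2.2.2.1
    return b - math.floor(b / n) * n

print(floor_mod(-42, 11))  # 2, since -42 = (-4) * 11 + 2
print(-42 % 11)            # Python's built-in % agrees: 2
```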
Definition 2.2.2.2:
Factorial: For N = {1, 2, 3, …}, the factorial function
factorial: N ∪ {0} → N ∪ {0}
given by
factorial (0) = 1
factorial (n) = n × factorial (n − 1) for n ≥ 1
has already been discussed in detail in Section 1.6.3.2.
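A direct transcription of this recursive definition into Python (our own sketch; the text's detailed treatment is in Section 1.6.3.2):

```python
def factorial(n: int) -> int:
    # factorial(0) = 1; factorial(n) = n * factorial(n - 1) for n >= 1
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```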
Definition 2.2.2.3:
Exponentiation Function Exp: is a function of two variables x and n, where x is any non-negative real number and n is an integer (though n can be taken as a non-integer also, we restrict ourselves to integers only).

For n = 0:
Exp (x, 0) = x⁰ = 1

For n > 0:
Exp (x, n) = x × Exp (x, n − 1)
i.e.,
xⁿ = x × xⁿ⁻¹

For n < 0, let n = −m for m > 0; then
xⁿ = x⁻ᵐ = 1/xᵐ

In xⁿ, n is also called the exponent/power of x.

For example: if x = 1.5, n = 3, then x³ = (1.5)³ = 3.375.
For two integers m and n and a real number b, the following identities hold:
(bᵐ)ⁿ = bᵐⁿ
(bᵐ)ⁿ = (bⁿ)ᵐ
bᵐ · bⁿ = bᵐ⁺ⁿ
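The recursive definition above also translates directly into code; a minimal Python sketch of ours:

```python
def exp(x: float, n: int) -> float:
    # Exp(x, 0) = 1; Exp(x, n) = x * Exp(x, n - 1) for n > 0;
    # for n < 0, x**n = 1 / x**(-n)
    if n == 0:
        return 1.0
    if n < 0:
        return 1.0 / exp(x, -n)
    return x * exp(x, n - 1)

print(exp(1.5, 3))   # 3.375
print(exp(2.0, -2))  # 0.25
```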
Definition 2.2.2.4:
Polynomial: A polynomial in n of degree k, where k is a non-negative integer, over R, the set of real numbers, denoted by P(n), is of the form
P(n) = aₖnᵏ + aₖ₋₁nᵏ⁻¹ + … + a₁n + a₀, with aₖ ≠ 0.

We may note that P(n) = nᵏ = 1·nᵏ, for any k, is a single-term polynomial. If k ≥ 0 then P(n) = nᵏ is monotonically increasing. Further, if k ≤ 0 then P(n) = nᵏ is monotonically decreasing.
lim (n→∞) nᶜ/bⁿ = 0

The result, in non-mathematical terms, states that for any given constants b and c, but with b > 1, the terms in the sequence
1ᶜ/b¹, 2ᶜ/b², 3ᶜ/b³, …, kᶜ/bᵏ, …
gradually decrease and approach zero. This further means that for constants b and c, and integer variable n, the exponential term bⁿ, for b > 1, increases at a much faster rate than the polynomial term nᶜ.
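A quick numerical illustration of the result (our own sketch, taking b = 2 and c = 3):

```python
# n**c / b**n with b = 2, c = 3: the ratio may rise at first,
# but eventually decays towards zero
for n in [1, 5, 10, 20, 40]:
    print(n, n**3 / 2**n)
```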
Definition 2.2.2.6:
The letter e is used to denote the quantity
1 + 1/1! + 1/2! + 1/3! + …,
and is taken as the base of the natural logarithm function. Then for all real numbers x,
eˣ = 1 + x + x²/2! + x³/3! + … = ∑ᵢ₌₀^∞ xⁱ/i!
Definition 2.2.2.8:
Logarithm: The concept of logarithm is defined indirectly through the definition of the exponential defined earlier. If a > 0, b > 0 and c > 0 are three real numbers such that
c = aᵇ
then b is said to be the logarithm of c to the base a, written as b = log_a c.

The following important properties of logarithms can be derived from the properties of exponents. However, we just state the properties without proof.
Result 2.2.2.9:
For n, a natural number, and real numbers a, b and c all greater than 0, the following identities are true:

(i) log_a(bc) = log_a b + log_a c
(ii) log_a(bⁿ) = n log_a b
(iii) log_b a = log_c a / log_c b
(iv) log_a(1/b) = −log_a b
(v) log_a b = 1/(log_b a)
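These identities are easy to spot-check numerically; a small Python sketch of ours, using math.log(x, base):

```python
import math

a, b, c, n = 2.0, 8.0, 5.0, 3

print(math.isclose(math.log(b * c, a), math.log(b, a) + math.log(c, a)))  # (i)
print(math.isclose(math.log(b**n, a), n * math.log(b, a)))                # (ii)
print(math.isclose(math.log(a, b), math.log(a, c) / math.log(b, c)))     # (iii)
print(math.isclose(math.log(1 / b, a), -math.log(b, a)))                 # (iv)
print(math.isclose(math.log(b, a), 1 / math.log(a, b)))                  # (v)
```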
2.3 MATHEMATICAL EXPECTATION

Example 2.1: Suppose the students of MCA who completed all the courses in the year 2005 had the following distribution of marks:
% marks          Number of students
0% to 20%        08
20% to 40%       20
40% to 60%       57
60% to 80%       09
80% to 100%      06
If a student is picked up randomly from the set of students under consideration, what
is the % of marks expected of such a student? After scanning the table given above,
we intuitively expect the student to score around the 40% to 60% class, because, more
than half of the students have scored marks in and around this class.
Assuming that marks within a class are uniformly scored by the students in the class,
the above table may be approximated by the following more concise table:
% marks      Percentage of students scoring the marks
10∗             08
30              20
50              57
70              09
90              06

Thus, we assign weight (8/100) to the score 10% (∵ 8 out of 100 students score, on the average, 10% marks); (20/100) to the score 30%; and so on.
Thus
Expected % of marks = 10 × (8/100) + 30 × (20/100) + 50 × (57/100) + 70 × (9/100) + 90 × (6/100) = 47.

The calculated expected value of 47 roughly matches our intuition that the expected marks should be around 50.
We generalize and formalize these ideas in the form of the following definition.
Mathematical Expectation
For a given set S of items, let each item be associated with one of the n values, say, v₁, v₂, …, vₙ. Let the probability of the occurrence of an item with value vᵢ be pᵢ. If an item is picked up at random, then its expected value E(v) is given by

E(v) = ∑ᵢ₌₁ⁿ pᵢvᵢ = p₁·v₁ + p₂·v₂ + … + pₙ·vₙ

∗ 10 is the average of the class boundaries 0 and 20.
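The computation of Example 2.1 can be reproduced in a few lines (our own Python sketch):

```python
# Expected value E(v) = sum of p_i * v_i, using the marks table above
values = [10, 30, 50, 70, 90]   # class mid-points (% marks)
counts = [8, 20, 57, 9, 6]      # students per class, out of 100
total = sum(counts)

expected = sum(v * c / total for v, c in zip(values, counts))
print(expected)  # 47.0
```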
2.4 PRINCIPLE OF MATHEMATICAL INDUCTION

Let us consider the following sequence in which the nth term S(n) is the sum of the first n powers of 2 (i.e., of 2⁰, 2¹, …, 2ⁿ⁻¹), e.g.,

S(1) = 2⁰ = 2¹ − 1
S(2) = 2⁰ + 2¹ = 2² − 1
S(3) = 2⁰ + 2¹ + 2² = 2³ − 1

We establish, by induction, that S(n) = 2ⁿ − 1 for all n ≥ 1:

(i) Base Step: For n = 1, S(1) = 2⁰ = 1 = 2¹ − 1; hence the statement holds for the base value n = 1.
(ii) Induction Hypothesis: Assume, for some k > base-value (=1, in this case)
that
S(k) = 2k ─ 1.
(iii) Induction Step: Using (i) & (ii) establish that (in this case)
S(k+1) = 2k+1 ─ 1
In order to establish
S(k+1) = 2k+1 ─ 1, (A)
we use the definition of S(n) and Steps (i) and (ii) above
By definition
S(k+1) = 2⁰ + 2¹ + … + 2⁽ᵏ⁺¹⁾⁻¹
= (2⁰ + 2¹ + … + 2ᵏ⁻¹) + 2ᵏ (B)
But by definition
2⁰ + 2¹ + … + 2ᵏ⁻¹ = S(k). (C)
Using (C) and the induction hypothesis in (B),
S(k+1) = (2ᵏ − 1) + 2ᵏ
∴ S(k+1) = 2·2ᵏ − 1 = 2ᵏ⁺¹ − 1,
which establishes (A).
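The identity is also easy to sanity-check numerically (our own throwaway sketch):

```python
# Numerically confirm S(n) = 2**n - 1 for the first few n
for n in range(1, 11):
    s_n = sum(2**i for i in range(n))   # 2**0 + ... + 2**(n-1)
    assert s_n == 2**n - 1
print("S(n) = 2**n - 1 holds for n = 1..10")
```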
Ex.2) Let us assume that we have an unlimited supply of postage stamps of Rs. 5 and Rs. 6. Then:

(i) through direct calculations, find what amounts can be realized in terms of only these stamps;

(ii) prove, using the Principle of Mathematical Induction, the result of your efforts in part (i) above.
2.5 CONCEPT OF EFFICIENCY OF AN ALGORITHM

Mainly, the two computer resources taken into consideration for efficiency measures are the time and space requirements for executing the program corresponding to the solution/algorithm. Unless mentioned otherwise, we will restrict ourselves to the time complexities of algorithms only.
It is easy to realize that, given an algorithm for multiplying two n × n matrices, the time required by the algorithm for finding the product of two 2 × 2 matrices is expected to be much less than the time taken by the same algorithm for multiplying, say, two 100 × 100 matrices. This explains intuitively the notion of the size of an instance of a problem and also the role of size in determining the (time) complexity of an algorithm. If the size (to be considered formally later) of a general instance is n, then the time complexity of the algorithm solving the problem (not just the instance) under consideration is some function of n.
In view of the above explanation, the notion of size of an instance of a problem plays
an important role in determining the complexity of an algorithm for solving the
problem under consideration. However, it is difficult to define precisely the concept
of size in general, for all problems that may be attempted for algorithmic solutions.
Formally, one of the definitions of the size of an instance of a problem may be taken
as the number of bits required in representing the instance.
However, this does not serve the purpose properly for all types of problems. Hence different measures of the size of an instance are used for different types of problems. For example,
(i) In sorting and searching problems, the number of elements, which are to be
sorted or are considered for searching, is taken as the size of the instance of
the problem of sorting/searching.
(ii) In the case of solving polynomial equations, or while dealing with the algebra of polynomials, the degrees of the polynomial instances may be taken as the sizes of the corresponding instances.
There are two approaches for determining the complexity (or time required) for executing an algorithm, viz.,

(i) empirical (or a posteriori), and
(ii) theoretical (or a priori).
The theoretical approach has a number of advantages over the empirical approach
including the ones enumerated below:
(i) The approach does not depend on the programming language in which the algorithm is coded, nor on how it is coded in the language;
(ii) The approach does not depend on the computer system used for executing (a
programmed version of) the algorithm.
(iii) In case of a comparatively inefficient algorithm, which ultimately is to be
rejected, the computer resources and programming efforts which otherwise
would have been required and wasted, will be saved.
(iv) Instead of applying the algorithm to many different-sized instances, the approach can be applied for a general size, say n, of an arbitrary instance of the problem under consideration. In the case of the theoretical approach, the size n may be arbitrarily large. However, in the empirical approach, because of practical considerations, only instances of moderate sizes may be considered.
Remark 2.5.1:
In view of the advantages of the theoretical approach, we are going to use it as
the only approach for computing complexities of algorithms. As mentioned earlier,
in the approach, no particular computer is taken into consideration for calculating time
complexity. But different computers have different execution speeds. However, the
speed of one computer is generally some constant multiple of the speed of the other.
Therefore, this fact of differences in the speeds of computers by constant
multiples is taken care of, in the complexity functions t for general instance sizes
n, by writing the complexity function as c.t(n) where c is an arbitrary constant.
An important consequence of the above discussion is that if the time taken by one
machine in executing a solution of a problem is a polynomial (or exponential)
function in the size of the problem, then time taken by every machine is a polynomial
(or exponential) function respectively, in the size of the problem. Thus, functions
differing from each other by constant factors, when treated as time complexities
should not be treated as different, i.e., should be treated as complexity-wise
equivalent.
Remark 2.5.2:
Asymptotic Considerations:
Computers are generally used to solve problems involving complex solutions. The
complexity of solutions may be either because of the large number of involved
computational steps and/or because of large size of input data. The plausibility of the
claim apparently follows from the fact that, when required, computers are used
generally not to find the product of two 2 × 2 matrices but to find the product of two
n × n matrices for large n, running into hundreds or even thousands.
Similarly, computers, when required, are generally used not only to find roots of
quadratic equations but for finding roots of complex equations including polynomial
equations of degrees more than hundreds or sometimes even thousands.
The above discussion leads to the conclusion that when considering time complexities
f1(n) and f2(n) of (computer) solutions of a problem of size n, we need to consider and
compare the behaviours of the two functions only for large values of n. If the relative
behaviours of two functions for smaller values conflict with the relative behaviours
for larger values, then we may ignore the conflicting behaviour for smaller values.
For example, if the earlier considered two functions represent time complexities of two solutions of a problem of size n, then despite the fact that
f₁(n) ≥ f₂(n) for n ≤ 14,
we would still prefer the solution having f₁(n) as time complexity, because
This explains the reason for the presence of the phrase ‘n ≥ k’ in the definitions
of the various measures of complexities and approximation functions, discussed
below:
Remark 2.5.3:

Comparative Efficiencies of Algorithms: Linear, Quadratic, Polynomial, Exponential
Suppose, for a given problem P, we have two algorithms say A1 and A2 which solve
the given problem P. Further, assume that we also know time-complexities T1(n) and
T2 (n) of the two algorithms for problem size n. How do we know which of the two
algorithms A1 and A2 is better?
The difficulty in answering the question arises from the difficulty in comparing time
complexities T1(n) and T2(n).
More explicitly
The issue will be discussed in more detail in Unit 3. However, here we may mention that, in view of the fact that we generally use computers to solve problems of large sizes, in the above case the algorithm A1 with time-complexity T1(n) = 1000n² is preferred over the algorithm A2 with time-complexity T2(n) = 5n⁴, because
T1(n) ≤ T2(n) for all n ≥ 15.
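A small numeric confirmation (our own sketch) of the claimed crossover at n = 15:

```python
# Compare T1(n) = 1000 n^2 and T2(n) = 5 n^4 around the crossover
for n in [10, 14, 15, 20]:
    t1, t2 = 1000 * n**2, 5 * n**4
    print(n, t1, t2, "A1 preferred" if t1 <= t2 else "A2 preferred")
# 1000 n^2 <= 5 n^4  iff  n^2 >= 200  iff  n >= 15 (for integer n)
```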
BT₁(n) = aₖnᵏ + aₖ₋₁nᵏ⁻¹ + … + aᵢnⁱ + … + a₁n + a₀,
for some k ≥ 0, with the aᵢ's real numbers and aₖ > 0; and

(ii) If, again, a problem is solved by two algorithms D1 and D2 with respectively polynomial time complexities DT1 and DT2, then if
then the algorithm D1 is assumed to be more efficient and is preferred over D2.

Similarly, the terms 'quadratic' and 'polynomial time' complexity functions and algorithms are used when the involved complexity functions are respectively of the forms c·n² and c₁nᵏ + … + cₖ.
Remark 2.5.4:
For all practical purposes, the use of c, in (c t(n)) as time complexity measure, offsets
properly the effect of differences in the speeds of computers. However, we need to be
on the guard, because in some rarely occurring situations, neglecting the effect of c
may be misleading.
For example, suppose two algorithms A1 and A2 respectively take n² days and n³ seconds for executing an instance of size n of a particular problem. But a 'day' is a constant multiple of a 'second'. Therefore, as per our conventions, we may take the two complexities as C₂n² and C₃n³ for some constants C₂ and C₃. As we will discuss later, the algorithm A1 taking C₂n² time is theoretically preferred over the algorithm A2 with time complexity C₃n³. The preference is based on the asymptotic behaviour of the complexity functions of the algorithms. However, in this case, only for instances requiring millions of years does the algorithm A1 requiring C₂n² time outperform the algorithm A2 requiring C₃n³ time.
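To see where the 'millions of years' figure comes from (our own back-of-the-envelope sketch): n² days ≤ n³ seconds exactly when n ≥ 86,400, the number of seconds in a day:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

# n^2 days <= n^3 seconds  iff  86400 * n^2 <= n^3  iff  n >= 86400
n = SECONDS_PER_DAY
secs_algo = n**3                          # A2's time in seconds at the crossover
years = secs_algo / (365 * SECONDS_PER_DAY)
print(f"crossover at n = {n}, about {years:.2e} years")  # roughly 2e7 years
```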
Remark 2.5.5:

Unit of Size for Space Complexity: Most of the literature discusses the complexity of an algorithm only in terms of the expected time of execution, generally neglecting the space complexity. However, space complexity has one big advantage over time complexity.
Ex.3) For a given problem P, two algorithms A1 and A2 have respectively time complexities T1(n) and T2(n) in terms of size n, where
T1(n) = 4n⁵ + 3n and
T2(n) = 2500n³ + 4n.
Find the range for n, the size of an instance of the given problem, for which A1 is more efficient than A2.
2.6 WELL KNOWN ASYMPTOTIC FUNCTIONS & NOTATIONS

f: N → N
g: N → N
Solution:
Part (i)
Consider
∴ there exist C = 6 and k = 1 such that
f(x) ≤ C·x³ for all x ≥ k.
Thus we have found the required constants C and k. Hence f(x) is O(x³).
Part (ii)
As above, we can show that
However, we may also, by computing some values of f(x) and x⁴, find C and k as follows:
Part (iii)
for C = 1 and k = 1 we get
x³ ≤ C(2x³ + 3x² + 1) for all x ≥ k.
Part (iv)
We prove the result by contradiction. Let there exist positive constants C and k
such that
Part (v)
Again we establish the result by contradiction.
Let, if possible, 2x³ + 3x² + 1 = O(x²), i.e., let there exist positive constants C and k such that
2x³ + 3x² + 1 ≤ C·x² for all x ≥ k,
implying
x ≤ C for x ≥ k.

Again, for x = max{C + 1, k},
Example 2.6.2.2:
The big-oh notation can be used to estimate Sₙ, the sum of the first n positive integers.
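For instance, one such estimate: since each of the n summands is at most n,
Sₙ = 1 + 2 + … + n ≤ n + n + … + n = n·n = n²,
so Sₙ = O(n²) (with C = 1 and k = 1 in the definition).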
Remark 2.6.2.2:
It can be easily seen that for given functions f(x) and g(x), if there exists one pair of C and k with f(x) ≤ C·g(x) for all x ≥ k, then there exist infinitely many pairs (Cᵢ, kᵢ) which satisfy
f(x) ≤ Cᵢ·g(x) for all x ≥ kᵢ,
because for any Cᵢ ≥ C and any kᵢ ≥ k, the above inequality is true if f(x) ≤ C·g(x) for all x ≥ k.
Example 2.6.3.1:
For the functions
(iv) x³ = Ω (h(x))
(v) x² ≠ Ω (h(x))
Solutions:
Part (i)
Part (ii)

h(x) = 2x³ − 3x² + 2
Let C and k > 0 be such that
2x³ − 3x² + 2 ≥ C·x³ for all x ≥ k,
i.e., (2 − C)x³ − 3x² + 2 ≥ 0 for all x ≥ k.
Part (iii)
2x³ − 3x² + 2 = Ω (x²)
It can be easily seen that the smaller the value of C, the better the chances of the above inequality being true. So, to begin with, let us take C = 1 and try to find a value of k such that
2x³ − 4x² + 2 ≥ 0.
For x ≥ 2, the above inequality holds.
Part (iv)
Let the equality
x³ = Ω (2x³ − 3x² + 2)
hold, i.e., let C and k be positive constants such that
x³ ≥ C(2(x³ − (3/2)x² + 1)) for all x ≥ k.
Part (v)
We prove the result by contradiction.
(2C + 1)/C ≥ x for all x ≥ k.

But for any x ≥ 2(2C + 1)/C,
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers. Then f(x) is said to be Θ (g(x)) (pronounced as
big-theta of g of x) if, there exist positive constants C1, C2 and k such that
C2 g(x) ≤ f(x) ≤ C1 g(x) for all x ≥ k.
(Note the last inequalities represent two conditions to be satisfied simultaneously viz.,
C2 g(x) ≤ f(x) and f(x) ≤ C1 g(x))
We state the following theorem without proof, which relates the three notations
O, Ω and Θ.
Theorem: For any two functions f(x) and g(x), f(x) = Θ (g(x)) if and only if
f(x) = O (g(x)) and f(x) = Ω (g(x)).
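As an illustration (our own example, in the spirit of the ones above): for f(x) = 2x³ + 3x² + 1 we have x³ ≤ 2x³ + 3x² + 1 ≤ 6x³ for all x ≥ 1, so with C₂ = 1, C₁ = 6 and k = 1 the definition gives f(x) = Θ(x³). Equivalently, by the theorem, f(x) = O(x³) together with f(x) = Ω(x³) yields f(x) = Θ(x³).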
Solutions
Part (i)
for C1 = 3, C2 = 1 and k = 4
Part (ii)
We can show by contradiction that no C1 exists.
Let, if possible, for some positive integers k and C₁, we have 2x³ + 3x² + 1 ≤ C₁·x² for all x ≥ k.
Then
i.e.,
But for
x = max{C₁ + 1, k},
f(x) ≠ Θ (x⁴): for this there would have to exist a positive constant C₂ with
C₂·x⁴ ≤ (2x³ + 3x² + 1).
If such a C₂ exists for some k, then C₂·x⁴ ≤ 2x³ + 3x² + 1 ≤ 6x³ for all x ≥ k ≥ 1, implying
C₂·x ≤ 6 for all x ≥ k.
But for x = (6/C₂ + 1),
Then for f(x) = O(x³), though there exist C and k such that
yet there may also be some values for which the following equality also holds:
However, if we consider
f(x) = O(x⁴),
the case of f(x) = O(x⁴) provides an example for the next notation of small-oh.
The Notation o
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers.
Further, let C > 0 be any number. Then f(x) = o(g(x)) (pronounced as little-oh of g of x) if there exists a natural number k satisfying
f(x) < C·g(x) for all x ≥ k … (B)
Here we may note the following points:

(i) In the case of little-oh the constant C does not depend on the two functions f(x) and g(x). Rather, we can arbitrarily choose C > 0.
(ii) The inequality (B) is strict whereas the inequality (A) of big-oh is not
necessarily strict.
Solutions:
Part (i)
Let C > 0 be given; we are to find k satisfying the requirement of little-oh.

Consider the case when n = 4:
2 + 3/x + 1/x³ < C·x

If we take k = max{7/C, 1},
then
therefore
We prove the result by contradiction. Let, if possible, f(x) = o(xⁿ) for n ≤ 3. Then there exists k such that
2 + 3/x + 1/x³ < C·xⁿ⁻³ for n ≤ 3 and x ≥ k.

As C is arbitrary, we take C = 1, so that
2 + 3/x + 1/x³ < xⁿ⁻³ for n ≤ 3 and x ≥ k ≥ 1.

Also, it can be easily seen that xⁿ⁻³ ≤ 1 for n ≤ 3 and x ≥ 1.
∴ 2 + 3/x + 1/x³ ≤ 1 for n ≤ 3.
However, the last inequality is not true, since the left hand side exceeds 2. This completes the proof by contradiction.
We state (without proof) below two results which can be useful in finding small-oh
upper bound for a given function.
Theorem 2.6.5.3: Let f(x) and g(x) be functions as in the definition of the small-oh notation. Then f(x) = o(g(x)) if
lim (x→∞) f(x)/g(x) = 0.
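Theorem 2.6.5.3 suggests a quick numerical sanity check (our own sketch) that 2x³ + 3x² + 1 = o(x⁴):

```python
# f(x)/g(x) -> 0 suggests f(x) = o(g(x)); here f(x) = 2x^3 + 3x^2 + 1, g(x) = x^4
f = lambda x: 2 * x**3 + 3 * x**2 + 1
g = lambda x: x**4

for x in [10, 100, 1000, 10_000]:
    print(x, f(x) / g(x))   # the ratio shrinks roughly like 2/x
```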
Next, we introduce the last asymptotic notation, namely, small-omega. The relation of
small-omega to big-omega is similar to what is the relation of small-oh to big-oh.
Further, f(x) = ω (g(x)) if, for every C > 0, there exists a natural number k such that f(x) > C·g(x) for all x ≥ k.

For example, to see that 2x³ + 3x² + 1 = ω(x), consider
2x³ + 3x² + 1 > C·x
2x² + 3x + 1/x > C (dividing throughout by x)
Let k be an integer with k ≥ C + 1.
More generally:

Theorem 2.6.6.3: Let f(x) and g(x) be functions as in the definition of little-omega. Then f(x) = ω (g(x)) if and only if
lim (x→∞) f(x)/g(x) = ∞,
i.e., if and only if
lim (x→∞) g(x)/f(x) = 0.
2.7 SUMMARY
In this unit, first of all, a number of mathematical concepts are defined. We defined
the concepts of function, 1-1 function, onto function, ceiling and floor functions, mod
function, exponentiation function and log function. Also, we introduced some
mathematical notations.
The concept of mathematical expectation was also discussed, where
E(v) = ∑ᵢ₌₁ⁿ pᵢvᵢ = p₁·v₁ + p₂·v₂ + … + pₙ·vₙ
Next, the five well-known asymptotic growth rate functions are defined and the corresponding notations are introduced. Some important results involving these are stated and/or proved.
Let f(x) and g(x) be two functions, each from the set of natural numbers or set of
positive real numbers to positive real numbers.
The Notation Θ
Provides simultaneously both asymptotic lower bound and asymptotic upper bound
for a given function.
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers. Then f(x) is said to be Θ (g(x)) (pronounced as
big-theta of g of x) if, there exist positive constants C1, C2 and k such that
C2 g(x) ≤ f(x) ≤ C1 g(x) for all x ≥ k.
The Notation o
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers.
Further, let C > 0 be any number. Then f(x) = o(g(x)) (pronounced as little-oh of g of x) if there exists a natural number k satisfying
f(x) < C·g(x) for all x ≥ k.
The Notation ω
Again the asymptotic lower bound Ω may or may not be tight. However, the
asymptotic bound ω cannot be tight. The formal definition of ω is as follows:
Let f(x) and g(x) be two functions each from the set of natural numbers or the set of
positive real numbers to set of positive real numbers.
Further, f(x) = ω (g(x)) if, for every C > 0, there exists a natural number k such that
f(x) > C·g(x) for all x ≥ k.
2.8 SOLUTIONS/ANSWERS
Ex. 1) We follow the three-step method explained earlier.
Let S(n) be the statement: 6 divides n³ − n.

For n = 0, n³ − n = 0. But 0 = 6 × 0. Therefore, 6 divides 0. Hence S(0) is correct.
Ex. 2) Part (i): With stamps of Rs. 5 and Rs. 6, we can make the following amounts:

Using 1 stamp:
5 = 1 × 5 + 0 × 6
6 = 0 × 5 + 1 × 6

Using 2 stamps:
10 = 2 × 5 + 0 × 6
11 = 1 × 5 + 1 × 6
12 = 0 × 5 + 2 × 6

Using 3 stamps:
15 = 3 × 5 + 0 × 6
16 = 2 × 5 + 1 × 6
17 = 1 × 5 + 2 × 6
18 = 0 × 5 + 3 × 6

19 is not possible.

Using 4 stamps:
20 = 4 × 5 + 0 × 6
21 = 3 × 5 + 1 × 6
22 = 2 × 5 + 2 × 6
23 = 1 × 5 + 3 × 6
24 = 0 × 5 + 4 × 6

Using 5 stamps:
25 = 5 × 5 + 0 × 6
26 = 4 × 5 + 1 × 6
27 = 3 × 5 + 2 × 6
28 = 2 × 5 + 3 × 6
29 = 1 × 5 + 4 × 6
30 = 0 × 5 + 5 × 6

It appears that any amount A ≥ 20 can be realized through stamps of only Rs. 5 and Rs. 6.
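Part (i) is easy to confirm by brute force; a small sketch of ours:

```python
# Which amounts up to 40 are expressible as 5a + 6b with a, b >= 0?
expressible = {5 * a + 6 * b
               for a in range(9) for b in range(8)
               if a + b > 0}
print(sorted(x for x in expressible if x <= 40))
# Every amount >= 20 appears; 19 does not.
```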
Ex. 3) Algorithm A1 is more efficient than A2 for those values of n for which
4n⁵ + 3n = T1(n) ≤ T2(n) = 2500n³ + 4n
i.e., (dividing throughout by n)
4n⁴ + 3 ≤ 2500n² + 4
i.e., 4n² ≤ 2500 + 1/n², which holds
for n ≤ 25.
Next, consider n ≥ 26:
4n² − 2500 ≥ 4(26)² − 2500 = 2704 − 2500 = 204 > 1 > 1/(26)² ≥ 1/n²,
so the required inequality fails for all n ≥ 26. Hence A1 is more efficient than A2 exactly for n ≤ 25.

Ex. 4) Each factor on the right hand side is less than or equal to 1 for all values of n. Hence, the right hand side expression is always less than one.
Therefore, n!/nⁿ ≤ 1
or, n! ≤ nⁿ.
Therefore, n! = O(nⁿ).
Ex. 5)

2.9 FURTHER READINGS

1. Discrete Mathematics and Its Applications (Fifth Edition), K.H. Rosen: Tata McGraw-Hill (2003).