1 Analysis of Algorithms
ENGR 352 class notes

Analysis of Algorithms

Dr. Hakim Mellah


Department of Computer Science & Software Engineering
Concordia University, Montreal, Canada

These slides have been extracted, modified and updated from the original slides of:
• Data Structures and Algorithms in Java, 5th edition. John Wiley & Sons, 2010. ISBN 978-0-470-38326-1.
• Dr. Hanna’s slides (http://aimanhanna.com/concordia/comp352/index.htm)

Copyright © 2010 Michael T. Goodrich, Roberto Tamassia


All rights reserved

Analysis of Algorithms 1
How to Estimate Efficiency?
❑ Correctness of a method depends solely on whether
the algorithm performs what it is supposed to do.
❑ Clearly, efficiency is hence different from
correctness.
❑ Efficiency can be measured in many ways; e.g.:
◼ Less memory utilization
◼ Faster execution time
◼ Quicker release of allocated resources
◼ etc.
❑ How can efficiency then be measured?
◼ Measurement should be independent of the software used (i.e.
compiler, language, etc.) and hardware (CPU speed,
memory size, etc.)
◼ Particularly, run-time analysis can have serious weaknesses
Analysis of Algorithms 2
Experimental Studies
❑ Write a program implementing the algorithm.
❑ Run the program with inputs of varying size and
composition.
❑ Use a method like System.currentTimeMillis() to
get an accurate measure of the actual running time.
❑ Plot the results.
[Scatter plot: Time (ms) versus Input Size]
Analysis of Algorithms 3
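In practice, such an experiment can be scripted; below is a minimal
Java timing harness along these lines (the run() method and the
input sizes are placeholders for the algorithm under test):

// Illustrative timing harness; run() stands in for the algorithm
// being measured.
import java.util.Random;

public class TimingExperiment {
    static void run(int[] input) {
        // ... algorithm under test ...
    }

    public static void main(String[] args) {
        Random rand = new Random();
        for (int n = 1000; n <= 100000; n *= 10) {
            int[] input = rand.ints(n).toArray();   // input of varying size
            long start = System.currentTimeMillis();
            run(input);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(n + "\t" + elapsed + " ms"); // plot these pairs
        }
    }
}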
Limitations of Experiments
❑ It is necessary to implement the algorithm, which may
be difficult/costly.
❑ Results may not be indicative of the running time on
other inputs not included in the experiment.
❑ In order to compare two algorithms, the same
hardware and software environments must be used.
❑ In some multiprogramming environments, such as
Windows, it is very difficult to determine how long a
single task takes (since there is so much going on
behind the scenes).
Analysis of Algorithms 4
How to Estimate Efficiency?
❑ Efficiency, to a great extent, depends on how the
method is defined.
❑ An abstract analysis that can be performed by direct
investigation of the method definition is hence
preferred.
❑ Ignore various restrictions; e.g.:
◼ CPU speed
◼ Memory limits; for instance, allow an int variable to take any
allowed integer value, and allow arrays to be arbitrarily
large
◼ etc.
❑ Since the method is now unrelated to any specific
computer environment, we refer to it as an algorithm.
Analysis of Algorithms 5
Estimating Running Time
❑ How can we estimate the running/execution time from
an algorithm’s definition?
❑ Consider the number of executed statements, in a trace
of the algorithm, as a measurement of the running-time
requirement.
❑ This measurement can be represented as a function of
the “size” of the problem.
❑ The running time of an algorithm typically grows with
the input size.
Analysis of Algorithms 6
Estimating Running Time
❑ We focus on the worst case of running time since this
is crucial to many applications such as games,
finance, robotics, etc.
❑ Given a method of a problem of size n, find
worstTime(n), which is the maximum number of
executed statements in a trace, considering all
possible parameters/input values.
[Chart: Running Time versus Input Size for the best case,
average case, and worst case]
Analysis of Algorithms 7
Estimating Running Time
Example:
Assume an array a[0 … n-1] of int, and assume the
following trace:

for (int i = 0; i < n - 1; i++)
    if (a[i] > a[i + 1])
        System.out.println(i);

What is worstTime(n)?
Analysis of Algorithms 8
Estimating Running Time
Statement              Worst Case Number of Executions
i = 0                  1
i < n − 1              n
i++                    n − 1
a[i] > a[i+1]          n − 1
System.out.println()   n − 1

❑ That is, worstTime(n) is: 4n − 2.
Analysis of Algorithms 9
Estimating Running Time
➔ See the aboveMeanCount() method (a sketch follows after this slide)

❑ What is worstTime(n)?

Analysis of Algorithms 10
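The aboveMeanCount() method itself is not reproduced in these
notes; a minimal Java sketch consistent with the statement counts
on the next slide (the method name is from the slides, the body is
our reconstruction):

// Counts how many elements of a[] are strictly greater than mean.
public static int aboveMeanCount(double[] a, double mean) {
    int n = a.length;
    int count = 0;
    for (int i = 0; i < n; i++)
        if (a[i] > mean)
            count++;
    return count;
}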
Estimating Running Time
worstTime(n) of the aboveMeanCount() method:

Statement                                Worst # of Executions
double[] a, double mean                  1 + 1
(assignment of 1st & 2nd arguments)
n = a.length, count = 0                  1 + 1
i = 0, return count                      1 + 1
i < n                                    n + 1
i++                                      n
a[i] > mean                              n
count++                                  n − 1
                              Total:     4n + 6
Analysis of Algorithms 11
Pseudocode
❑ High-level description of an algorithm.
❑ More structured than English prose.
❑ Less detailed than a program.
❑ Preferred notation for describing algorithms.
❑ Hides program design issues.

Example: find the max element of an array

Algorithm arrayMax(A, n)
  Input array A of n integers
  Output maximum element of A
  currentMax ← A[0]
  for i ← 1 to n − 1 do
    if A[i] > currentMax then
      currentMax ← A[i]
  return currentMax
Analysis of Algorithms 12
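For comparison with actual code, a direct Java translation of the
arrayMax pseudocode might look like this (a sketch):

// Returns the maximum element of A; mirrors the pseudocode above.
public static int arrayMax(int[] A) {
    int currentMax = A[0];
    for (int i = 1; i < A.length; i++)
        if (A[i] > currentMax)
            currentMax = A[i];
    return currentMax;
}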
Pseudocode Details
❑ Control flow
  ◼ if … then … [else …]
  ◼ while … do …
  ◼ repeat … until …
  ◼ for … do …
  ◼ Indentation replaces braces
❑ Method declaration
  Algorithm method (arg [, arg…])
    Input …
    Output …
❑ Method call
  var.method (arg [, arg…])
❑ Return value
  return expression
❑ Expressions
  ← Assignment (like = in Java)
  = Equality testing (like == in Java)
  n² Superscripts and other mathematical formatting allowed
Analysis of Algorithms 13
Seven Important Functions
❑ Seven functions often appear in algorithm analysis:
◼ Constant ≈ 1
◼ Logarithmic ≈ log n
◼ Linear ≈ n
◼ N-Log-N ≈ n log n
◼ Quadratic ≈ n²
◼ Cubic ≈ n³
◼ Exponential ≈ 2ⁿ
[Log-log chart: T(n) versus n for the cubic, quadratic,
and linear functions]
Analysis of Algorithms 14
Functions Graphed Using “Normal” Scale
(slide by Matt Stallmann, included with permission)
[Charts on linear axes: g(n) = 1, g(n) = lg n, g(n) = n,
g(n) = n lg n, g(n) = n², g(n) = n³, g(n) = 2ⁿ]
Analysis of Algorithms 15
The Random Access Machine (RAM) Model
❑ A CPU.
❑ A potentially unbounded bank of memory cells, each
of which can hold an arbitrary number or character.
❑ Memory cells are numbered, and accessing any cell in
memory takes unit time.
Analysis of Algorithms 16
Primitive Operations
❑ Basic computations performed by an algorithm.
❑ Identifiable in pseudocode.
❑ Largely independent from the programming language.
❑ Exact definition not important (we will see why later).
❑ Assumed to take a constant amount of time in the
RAM model.

Examples:
◼ Evaluating an expression
◼ Assigning a value to a variable
◼ Indexing into an array
◼ Calling a method
◼ Returning from a method
Analysis of Algorithms 17
Counting Primitive Operations
❑ By inspecting the pseudocode, we can determine the
maximum number of primitive operations executed by
an algorithm, as a function of the input size.
Algorithm compareValues(A, n)
  Input array A of n integers
  Output display all elements larger than following ones
                                        # of operations
  for i ← 0 to n − 2 do                 n − 1
    if A[i] > A[i+1] then               n − 1
      display i                         n − 1
    increment counter i                 n − 1
                              Total     4n − 4
Analysis of Algorithms 18
Estimating Running Time
❑ Algorithm compareValues executes 4n − 4
primitive operations in the worst case.
❑ Define:
a = Time taken by the fastest primitive operation
b = Time taken by the slowest primitive operation
❑ Let T(n) be the running time of compareValues.
Then
a(4n − 4) ≤ T(n) ≤ b(4n − 4)
❑ Hence, the running time T(n) is bounded by two
linear functions.
Analysis of Algorithms 19
Growth Rate of Running Time
❑ Changing the hardware/software
environment
◼ Affects T(n) by a constant factor, but
◼ Does not alter the growth rate of T(n)
❑ The linear growth rate of the running
time T(n) is an intrinsic property of
algorithm compareValues

Analysis of Algorithms 20
Why Growth Rate Matters
(slide by Matt Stallmann, included with permission)

if runtime is…   time for n + 1       time for 2n         time for 4n
c log n          c log (n + 1)        c (log n + 1)       c (log n + 2)
c n              c (n + 1)            2c n                4c n
c n log n        ~ c n log n + c n    2c n log n + 2c n   4c n log n + 4c n
c n²             ~ c n² + 2c n        4c n²               16c n²
c n³             ~ c n³ + 3c n²       8c n³               64c n³
c 2ⁿ             c 2ⁿ⁺¹               c 2²ⁿ               c 2⁴ⁿ

(e.g., for c n², the runtime quadruples when the problem size doubles)
Analysis of Algorithms 21
Comparison of Two Algorithms
(slide by Matt Stallmann, included with permission)

insertion sort is n² / 4
merge sort is 2n log n

sort a million items?
insertion sort takes roughly 70 hours,
while merge sort takes roughly 40 seconds.

This is a slow machine, but even if it were 100x as fast,
it’s still 40 minutes versus less than 0.5 seconds.
Analysis of Algorithms 22
Constant Factors
❑ The growth rate is not affected by
◼ constant factors or
◼ lower-order terms
❑ Examples
◼ 10²n + 10⁵ is a linear function
◼ 10⁵n² + 10⁸n is a quadratic function
[Log-log chart: T(n) versus n for quadratic and linear functions]
Analysis of Algorithms 23
Big-O Notation
❑ We do NOT need to calculate the exact worst
time, since it is only an approximation of the time
requirements.
❑ Instead, we can just approximate that time
by means of “Big-O” notation.
❑ That is quite satisfactory, since it gives us an
approximation of an approximation!
[Log-log chart: 3n, 2n + 10, and n versus n]
Analysis of Algorithms 24
Big-O Notation
❑ The basic idea is to determine an upper bound for the
behavior of the algorithm/function.

❑ In other words, to determine how bad the


performance of the algorithm can get!

❑ If some function g(n) is an upper bound of function


f(n), then we say that f(n) is Big-O of g(n).

Analysis of Algorithms 25
Big-O Notation
❑ Specifically, Big-O is defined as follows: Given
functions f(n) and g(n), we say that f(n) is O(g(n)) if
there are positive constants c and n₀ such that
f(n) ≤ c·g(n) for n ≥ n₀

❑ The idea is that if f(n) is O(g(n)), then it is bounded
above by (cannot get bigger than) some constant times
g(n).

Analysis of Algorithms 26
Big-O Notation
❑ Further, by a standard abuse of notation, we often
associate a function with the value it calculates.

❑ For instance, if g(n) = n³, for n = 0, 1, 2, …,
then instead of saying that f(n) is O(g(n)), we say
f(n) is O(n³).

Analysis of Algorithms 27
Big-O Notation
❑ Example 1: What is O() if f(n) = 2n + 10?
◼ 2n ≤ 2n for n ≥ 0
◼ 10 ≤ 10n for n ≥ 1
So, for any n ≥ 1
◼ 2n + 10 ≤ 12n ➔ consider c = 12, n₀ = 1 ➔ g(n) = n

Consequently, the above f(n) is O(n).

Analysis of Algorithms 28
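The inequality behind Example 1 can also be checked numerically; a
small, purely illustrative Java snippet (the constants c = 12 and
n₀ = 1 come from the example above):

// Verifies f(n) = 2n + 10 <= 12n for sampled n >= 1.
public static void main(String[] args) {
    int c = 12, n0 = 1;
    for (long n = n0; n <= 1_000_000L; n *= 10) {
        long f = 2 * n + 10;
        System.out.println("n=" + n + "  f(n)=" + f
                + "  c*g(n)=" + c * n + "  holds=" + (f <= c * n));
    }
}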
Big-O Notation
❑ In general, if f(n) is a polynomial function of the form:
aᵢnⁱ + aᵢ₋₁nⁱ⁻¹ + aᵢ₋₂nⁱ⁻² + … + a₁n + a₀

then we can directly establish that f(n) is O(nⁱ).

Proof:
Choose
◼ n₀ = 1, and
◼ c = |aᵢ| + |aᵢ₋₁| + |aᵢ₋₂| + … + |a₁| + |a₀|
This works because, for n ≥ 1, nʲ ≤ nⁱ for every j ≤ i, so
f(n) ≤ |aᵢ|nⁱ + |aᵢ₋₁|nⁱ + … + |a₀|nⁱ = c·nⁱ.

Analysis of Algorithms 29
Big-O Notation
❑ Example 2: What is O() if
f(n) = 3n⁴ + 6n³ + 10n² + 5n + 4?
◼ 3n⁴ ≤ 3n⁴ for n ≥ 0
◼ 6n³ ≤ 6n⁴ for n ≥ 0
◼ 10n² ≤ 10n⁴ for n ≥ 0
◼ 5n ≤ 5n⁴ for n ≥ 0
◼ 4 ≤ 4n⁴ for n ≥ 1
So, for any n ≥ 1
◼ 3n⁴ + 6n³ + 10n² + 5n + 4 ≤ 28n⁴ ➔ consider c = 28,
n₀ = 1 ➔ g(n) = n⁴
Consequently, the above f(n) is O(n⁴).
Analysis of Algorithms 30
Big-O Notation
❑ When determining O(), we can (and actually do)
ignore the logarithmic base.
Proof:
Assume that f(n) is O(log_a n), for some positive
constant a.
◼ Then f(n) ≤ C · log_a n
for some positive constant C and all n ≥ n₀
◼ By logarithmic fundamentals,
log_a n = log_a b · log_b n, for any n > 0
◼ Let C₁ = C · log_a b. Then for all n ≥ n₀
f(n) ≤ C · log_a n = C · log_a b · log_b n = C₁ · log_b n
➔ f(n) is O(log_b n).
Analysis of Algorithms 31
Big-O Notation
❑ Example 3: What is O() if
f(n) = 3 log n + 5?
◼ 3 log n ≤ 3 log n for n ≥ 1
◼ 5 ≤ 5 log n for n ≥ 2
So, for any n ≥ 2
◼ 3 log n + 5 ≤ 8 log n ➔ consider c = 8, n₀ = 2
➔ g(n) = log n
Consequently, the above f(n) is O(log n).
Analysis of Algorithms 32
Big-O Notation
❑ Nested loops are significant when estimating O().
❑ Example 4:
Consider the following loop segment; what is O()?
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        ……………
◼ The outer loop has 1 + (n + 1) + n executions.
◼ The inner loop has n(1 + (n + 1) + n) executions.
◼ Total is: 2n² + 4n + 2 ➔ O(n²).

Hint: As seen in Example 2 for polynomial functions.
Analysis of Algorithms 33
Big-O Notation
❑ Important Note: Big-O only gives an upper bound
on the running time of the algorithm.
❑ However, if f(n) is O(n), then it is also
O(n + 10), O(n³), O(n² + 5n + 7), O(n¹⁰), etc.
❑ We generally choose the smallest element from
this hierarchy of orders.

❑ For example, if f(n) = n + 5, then we choose O(n),
even though f(n) is actually also O(n log n), O(n⁴),
etc.
❑ Similarly, we write O(n) instead of O(2n + 8), O(n −
log n), etc.
Analysis of Algorithms 34
Big-O Notation
❑ Elements of the Big-O hierarchy can be ordered as:
O(1) ⊆ O(log n) ⊆ O(n^½) ⊆ O(n) ⊆ O(n log n) ⊆
O(n²) ⊆ O(n³) ⊆ …… ⊆ O(2ⁿ) ⊆ ……
where the symbol “⊆” indicates “is contained in”.

Analysis of Algorithms 35
Big-O Notation
❑ The following table provides some examples:
Sample Function                          Order of O()
f(n) = 3000                              O(1)
f(n) = (n * log₂(n+1) + 2) / (n+1)       O(log n)
f(n) = (500 log₂ n) + n / 100000         O(n)
f(n) = (n * log₁₀ n) + 4n + 8            O(n log n)
f(n) = n * (n + 1) / 2                   O(n²)
f(n) = 3500 n¹⁰⁰ + 2ⁿ                    O(2ⁿ)
Analysis of Algorithms 36
Big-O Notation
❑ Warning: One danger of Big-O is that it can be
misleading when the values of n are small.

❑ For instance, consider the following two functions
f₁(n) and f₂(n) for n ≥ 0:

f₁(n) = 1000n ➔ f₁(n) is hence O(n)
f₂(n) = n² / 10 ➔ f₂(n) is hence O(n²)

However, despite the fact that f₂(n) has a
higher/worse order than that of f₁(n), f₁(n) is actually
greater than f₂(n) for all values of n less than 10,000!
Analysis of Algorithms 37
Finding Big-O Estimates Quickly
❑ Case 1: Number of executions is independent of n
➔ O(1)
❑ Example:
// Constructor of a Car class
public Car(int nd, double pr)
{
    numberOfDoors = nd;
    price = pr;
}

❑ Example:
for (int j = 0; j < 10000; j++)
    System.out.println(j);
Analysis of Algorithms 38
Finding Big-O Estimates Quickly
❑ Case 2: The splitting rule ➔ O(log n)
❑ Example:
while (n > 1)
{
    n = n / 2;
    …;
}

❑ Example:
See the binary search method in Recursion6.java
& Recursion7.java (a sketch follows below).

Analysis of Algorithms 39
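Those files are not included in these notes; a standard iterative
binary search exhibiting the splitting rule might look like this
(a sketch, not necessarily the code in Recursion6.java):

// Each iteration halves the remaining range, so the loop runs
// O(log n) times on a sorted array.
public static int binarySearch(int[] a, int target) {
    int low = 0, high = a.length - 1;
    while (low <= high) {
        int mid = (low + high) / 2;   // split the range in half
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            low = mid + 1;            // discard the lower half
        else
            high = mid - 1;           // discard the upper half
    }
    return -1;                        // not found
}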
Finding Big-O Estimates Quickly
❑ Case 3: Single loop, dependent on n ➔ O(n)
❑ Example:
for (int j = 0; j < n; j++)
    System.out.println(j);

Note: It does NOT matter how many simple statements (i.e.,
no inner loops) are executed in the loop. For instance, if
the loop body has k statements, then there are k·n executions
of them, which will still lead to O(n).

Analysis of Algorithms 40
Finding Big-O Estimates Quickly
❑ Case 4: Double looping dependent on n & splitting
➔ O(n log n)
❑ Example:
for (int j = 0; j < n; j++)
{
    m = n;
    while (m > 1)
    {
        m = m / 2;
        …;
        // Does not matter how many statements are here
    }
}
Analysis of Algorithms 41
Finding Big-O Estimates Quickly
❑ Case 4: Double looping dependent on n
➔ O(n²)
❑ Example:
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
    {
        …;
        // Does not matter how many statements are here
    }
Analysis of Algorithms 42
Finding Big-O Estimates Quickly
❑ Case 4 (continued): Double looping dependent on n
➔ O(n²)
❑ Example:
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j++)
    {
        …;
        // Does not matter how many statements are here
    }
The number of executions of the code segment is as follows:
n + (n−1) + (n−2) + … + 3 + 2 + 1
which is: n(n + 1) / 2 = ½n² + ½n ➔ O(n²)
Analysis of Algorithms 43
Finding Big-O Estimates Quickly
❑ Case 5: Sequence of statements with different O()
O(g₁(n)) + O(g₂(n)) + … = O(g₁(n) + g₂(n) + …)
❑ Example:
for (int i = 0; i < n; i++)
{ ...
}
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j++)
    { ...
    }
The first loop is O(n) and the second is O(n²). The entire segment
is hence O(n) + O(n²), which is equal to O(n + n²), which is in
this case O(n²).
Analysis of Algorithms 44
Asymptotic Algorithm Analysis
❑ In computer science and applied mathematics,
asymptotic analysis is a way of describing limiting
behaviours (a function may approach ever nearer but
never cross!).

❑ Asymptotic analysis of an algorithm determines the


running time in big-O notation.
❑ To perform the asymptotic analysis
◼ We find the worst-case number of primitive operations
executed as a function of the input size
◼ We then express this function with big-O notation

Analysis of Algorithms 45
Asymptotic Algorithm Analysis
❑ Example:
◼ We determine that algorithm compareValues
executes at most 4n − 4 primitive operations

◼ We say that algorithm compareValues “runs in


O(n) time”, or has a “complexity” of O(n)

Note: Since constant factors and lower-order terms are


eventually dropped anyhow, we can disregard them
when counting primitive operations.

Analysis of Algorithms 46
Asymptotic Algorithm Analysis
❑ If two algorithms A & B exist for solving the same
problem, and, for instance, A is O(n) and B is O(n²),
then we say that A is asymptotically better than B
(although for small inputs B may have a lower running
time than A).

❑ To illustrate the importance of the asymptotic point of
view, let us consider three algorithms that perform
the same operation, where the running time (in µs) is
as follows, where n is the size of the problem:
◼ Algorithm 1: 400n
◼ Algorithm 2: 2n²
◼ Algorithm 3: 2ⁿ
Analysis of Algorithms 47
Asymptotic Algorithm Analysis
◼ Which of the three algorithms is faster?
• Notice that Algorithm 1 has a very large constant factor
compared to the other two algorithms!

Running Time (µs)    Maximum Problem Size (n) that can be solved in:
                     1 Second    1 Minute    1 Hour
Algorithm 1: 400n    2,500       150,000     9 Million
                     (400 × 2,500 = 1,000,000)
Algorithm 2: 2n²     707         5,477       42,426
                     (2 × 707² ≈ 1,000,000)
Algorithm 3: 2ⁿ      19          25          31
                     (only 19, since 2²⁰ would exceed 1,000,000)
Analysis of Algorithms 48
Asymptotic Algorithm Analysis
❑ Let us further illustrate asymptotic analysis with
two algorithms that would compute prefix averages.
❑ Given an array X storing n numbers, we need to
construct an array A such that:
◼ A[i] is the average of X[0] + X[1] + … + X[i]
[Bar chart of X and A for the sample data:]
X: 10 16 4 18 7 23 27 39
A: 10 13 10 12 11 13 15 18
Analysis of Algorithms 49
Asymptotic Algorithm Analysis
❑ That is, the i-th prefix average of an array X is the
average of the first (i + 1) elements of X:
A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)

❑ Computing prefix averages has applications to
financial analysis; for instance, the average annual
return of a mutual fund for the last year, three years,
ten years, etc.
[Bar chart of X and A]
Analysis of Algorithms 50
Prefix Averages (Quadratic)
The following algorithm computes prefix averages in
quadratic time O(n²) by applying the definition:

Algorithm prefixAverages1(X, n)
  Input array X of n integers
  Output array A of prefix averages of X    # of operations
  A ← new array of n integers               n
  for i ← 0 to n − 1 do                     n
    s ← X[0]                                n
    for j ← 1 to i do                       1 + 2 + … + (n − 1)
      s ← s + X[j]                          1 + 2 + … + (n − 1)
    A[i] ← s / (i + 1)                      n
  return A                                  1
Analysis of Algorithms 51
Prefix Averages (Quadratic)
❑ Hence, to calculate the sum of n integers, the
algorithm needs (from the segment that has the
two loops) n(n + 1) / 2 operations.

❑ In other words, prefixAverages1 is
O(1 + 2 + … + n) = O(n(n + 1) / 2) ➔ O(n²)
Analysis of Algorithms 52
Prefix Averages (Linear)
The following algorithm computes prefix averages in
linear time O(n) by keeping a running sum:

Algorithm prefixAverages2(X, n)
  Input array X of n integers
  Output array A of prefix averages of X    # of operations
  A ← new array of n integers               n
  s ← 0                                     1
  for i ← 0 to n − 1 do                     n
    s ← s + X[i]                            n
    A[i] ← s / (i + 1)                      n
  return A                                  1

Algorithm prefixAverages2 runs in O(n) time, which is
clearly better than prefixAverages1.
Analysis of Algorithms 53
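For concreteness, both pseudocode algorithms translate directly to
Java; a sketch (method names follow the slides):

// Quadratic version: recomputes each prefix sum from scratch.
public static double[] prefixAverages1(double[] X) {
    int n = X.length;
    double[] A = new double[n];
    for (int i = 0; i < n; i++) {
        double s = X[0];
        for (int j = 1; j <= i; j++)
            s = s + X[j];
        A[i] = s / (i + 1);
    }
    return A;
}

// Linear version: maintains a running sum across iterations.
public static double[] prefixAverages2(double[] X) {
    int n = X.length;
    double[] A = new double[n];
    double s = 0;
    for (int i = 0; i < n; i++) {
        s = s + X[i];
        A[i] = s / (i + 1);
    }
    return A;
}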
Big-Omega, Big-Theta &
Plain English!
❑ In addition to Big-O, there are two other notations
that are used for algorithm analysis: Big-Omega and
Big-Theta.

❑ While Big-O provides an upper bound of a function,
Big-Omega provides a lower bound.

❑ In other words, while Big-O indicates that an
algorithm’s behavior “cannot be any worse than”, Big-
Omega indicates that it “cannot be any better than”.

Analysis of Algorithms 54
Big-Omega, Big-Theta &
Plain English!
❑ Logically, we are often interested in worst-case
estimations; however, knowing the lower bound can
be significant when trying to achieve an optimal
solution.

Big-Omega
❑ Big-Omega is defined as follows: Given functions f(n)
and g(n), we say that f(n) is Ω(g(n)) if there are positive
constants c and n₀ such that
f(n) ≥ c·g(n) for n ≥ n₀
Analysis of Algorithms 55
Big-Omega, Big-Theta &
Plain English!
❑ Example: 3n log n + 2n is Ω(n log n)
Proof:
❑ 3n log n + 2n ≥ 3n log n ≥ n log n
for every n ≥ 1

❑ Example: 3n log n + 2n is Ω(n)
Proof:
❑ 3n log n + 2n ≥ 2n ≥ n
for every n ≥ 1
Analysis of Algorithms 56
Big-Omega, Big-Theta &
Plain English!
❑ Notice that: … Ω(2ⁿ) ⊆ Ω(n³) ⊆ Ω(n²) ⊆ Ω(n log n) ⊆
Ω(n) ⊆ Ω(n^½) ⊆ … ⊆ Ω(log n) ⊆ … ⊆ Ω(1)

where the symbol “⊆” indicates “is contained in”.

❑ It should be noted that in “many” cases, a method
that is O(g(n)) is also Ω(g(n)).

Analysis of Algorithms 57
Big-Ω
❑ Example 1 (Revised): What are O() and Ω()
if f(n) = 2n + 10?
→ As seen before, the method is O(n).
❑ Now,
◼ 2n + 10 ≥ 2n ≥ n for n ≥ 0
So, for any n ≥ 0
◼ 2n + 10 ≥ n ➔ consider c = 1, n₀ = 0 ➔ g(n) = n

Consequently, the above f(n) is O(n),
and is also Ω(n).
Analysis of Algorithms 58
Big-Ω
❑ Example 2 (Revised): What are O() and Ω() if
f(n) = 3n⁴ + 6n³ + 10n² + 5n + 4?
As seen before, the method is O(n⁴).
❑ Now,
◼ 3n⁴ + 6n³ + 10n² + 5n + 4 ≥ 3n⁴ for n ≥ 0
◼ 3n⁴ ≥ n⁴ for n ≥ 0
So, for any n ≥ 0
◼ 3n⁴ + 6n³ + 10n² + 5n + 4 ≥ n⁴
➔ consider c = 1, n₀ = 0 ➔ g(n) = n⁴
Consequently, the above f(n) is
O(n⁴) and is also Ω(n⁴).
Analysis of Algorithms 59
Big-Ω
❑ However, Big-O and Big-Omega are distinct.
❑ That is, there are cases when a function may have
different O() and Ω().

❑ A simple, and somewhat artificial, proof of that can
be provided as follows:
◼ Assume f(n) = n for n = 0, 1, 2, …
◼ Clearly f(n) is O(n), and hence is O(n²)
◼ Yet, f(n) is NOT Ω(n²)
◼ Also, since f(n) is Ω(n), it is also Ω(1)
◼ Yet, f(n) is NOT O(1)
Analysis of Algorithms 60
Big-Ω
❑ In fact, as seen, the hierarchy of Ω() is just the
reverse of the one of O().

❑ For example, the following code segment:
for (int j = 0; j < n; j++)
    System.out.println(j);

is: O(n), O(n log n), O(n²), …
and is also:
Ω(n), Ω(log n), Ω(1)
Analysis of Algorithms 61
Big-O, Big-Omega, Big-Theta
& Plain English!
❑ Big-O provides an upper bound of a function, while
Big-Ω provides a lower bound of it.

❑ In many, if not most, cases, there is often a need for
one function that would serve as both lower and
upper bound; that is Big-Theta (Big-Θ).
Analysis of Algorithms 62
Big-O, Big-Omega, Big-Theta
& Plain English!
Big-Theta
❑ Big-Theta is defined as follows: Given functions f(n)
and g(n), we say that f(n) is Θ(g(n)) if there are
positive constants c₁, c₂ and n₀ such that
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for n ≥ n₀

❑ Simply put, if a function f(n) is Θ(g(n)), then it is
bounded above and below by some constant times
g(n); in other words, it is, roughly, bounded above
and below by g(n).
❑ → Notice that if f(n) is Θ(g(n)), then it is hence both O(g(n))
and Ω(g(n)).
Analysis of Algorithms 63
Big-Θ
❑ Example 2 (Revised Again): What is Θ() of the
following function:
f(n) = 3n⁴ + 6n³ + 10n² + 5n + 4?

As seen before, the function is O(n⁴) and also is Ω(n⁴)
➔ Hence, it is Θ(n⁴).
Analysis of Algorithms 64
Big-O, Big- & Big-
Quick Examples

◼ 5n2 is (n2)
f(n) is (g(n)) if there is a constant c > 0 and an integer constant n0  1
such that f(n)  c•g(n) for n  n0
let c = 5 and n0 = 1
◼ 5n2 is (n)
f(n) is (g(n)) if there is a constant c > 0 and an integer constant n0  1
such that f(n)  c•g(n) for n  n0
let c = 1 and n0 = 1
◼ 5n2 is (n2)
f(n) is (g(n)) if it is (n2) and O(n2). We have already seen the former,
for the latter recall that f(n) is O(g(n)) if there is a constant c > 0 and an
integer constant n0  1 such that f(n) < c•g(n) for n  n0
Let c = 5 and n0 = 1
• Notice that 5n2 is NOT (n) since it is not O(n)
Analysis of Algorithms 65
Plain English
❑ Sometimes, it might be easier to just indicate the
behavior of a method through the natural-language
equivalent of Big-Θ.

❑ For instance:
◼ if f(n) is Θ(n), we indicate that f is “linear in n”.
◼ if f(n) is Θ(n²), we indicate that f is “quadratic in n”.
Analysis of Algorithms 66
Plain English
❑ The following table shows the English-language
equivalent of Big-Θ:

Big-Θ                            English
Θ(c), for some constant c ≥ 0    Constant
Θ(log n)                         Logarithmic in n
Θ(n)                             Linear in n
Θ(n log n)                       Linear-logarithmic in n
Θ(n²)                            Quadratic in n
Analysis of Algorithms 67
Big-O
❑ We may just prefer to use plain English, such as
“linear”, “quadratic”, etc.

❑ However, in practice, there are MANY occasions
when all we specify is an upper bound on the
method, which is namely:
Big-O

Analysis of Algorithms 68
Intuition for Asymptotic
Notation
Big-O
◼ f(n) is O(g(n)) if f(n) is asymptotically less
than or equal to g(n)

Big-Omega
◼ f(n) is Ω(g(n)) if f(n) is asymptotically
greater than or equal to g(n)

Big-Theta
◼ f(n) is Θ(g(n)) if f(n) is asymptotically
equal to g(n)
Analysis of Algorithms 69
Big-O and Growth Rate
❑ The big-O notation gives an upper bound on the
growth rate of a function
❑ The statement “f(n) is O(g(n))” means that the growth
rate of f(n) is no more than the growth rate of g(n)
❑ We can use the big-O notation to rank functions
according to their growth rate
                   f(n) is O(g(n))   g(n) is O(f(n))
g(n) grows more    Yes               No
f(n) grows more    No                Yes
Same growth        Yes               Yes
Analysis of Algorithms 70
Big-O and Growth Rate
❑ Again, we are specifically interested in how rapidly a
function increases based on its classification.

❑ For instance, suppose that we have a method whose
worstTime() estimate is linear in n; what will be the
effect of doubling the problem size?
◼ worstTime(n) ≈ c · n, for some constant c and a
sufficiently large value of n
◼ If the problem size doubles, then worstTime(2n) ≈ c ·
2n ≈ 2 · worstTime(n)
❑ In other words, if n is doubled, then the worst time is
doubled.
Analysis of Algorithms 71
Big-O and Growth Rate
❑ Now, suppose that we have a method whose
worstTime() estimate is quadratic in n; what will be
the effect of doubling the problem size?
◼ worstTime(n) ≈ c · n²
◼ If the problem size doubles, then worstTime(2n) ≈
c · (2n)² = c · 4 · n² ≈ 4 · worstTime(n)

❑ In other words, if n is doubled, then the worst time is
quadrupled.
Analysis of Algorithms 72
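This doubling behavior can be observed empirically; a rough,
machine-dependent sketch (the sizes are arbitrary, and JIT effects
make such micro-benchmarks only indicative):

// Times a quadratic double loop at successively doubled sizes;
// the reported times should roughly quadruple at each step.
public static void main(String[] args) {
    for (int n = 2000; n <= 16000; n *= 2) {
        long start = System.nanoTime();
        long work = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                work += i + j;         // O(n^2) amount of work
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("n=" + n + "  time=" + elapsedMs
                + " ms  (checksum " + work + ")");
    }
}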
Big-O and Growth Rate
[Chart: worstTime(n) versus n for O(1), O(log n), O(n),
O(n log n), and O(n²)]
Analysis of Algorithms 73
Big-O and Growth Rate
❑ Again, remember that the Big-O differences
eventually dominate constant factors.

❑ For instance, if n is sufficiently large, 100n log n will
still be smaller than n² / 100.

❑ So, the relevance of Big-O, Big-Ω or Big-Θ may
actually depend on how large the problem size may
get (i.e. 100,000 or more in the above example).

Analysis of Algorithms 74
Big-O and Growth Rate
❑ The following table provides estimates of the needed
execution time for various functions of n, if n =
1,000,000,000, running on a machine executing
1,000,000 statements per second.

Function of n    Time Estimate
log₂ n           .0024 seconds
n                17 minutes
n log₂ n         7 hours
n²               300 years
Analysis of Algorithms 75
Math you need to Review
❑ Summations
❑ Logarithms and exponents
❑ Proof techniques
❑ Basic probability

❑ Properties of logarithms:
log_b(xy) = log_b x + log_b y
log_b(x/y) = log_b x − log_b y
log_b(xᵃ) = a·log_b x
log_b a = log_x a / log_x b
log₂(2n) = log₂ n + 1
log₂(4n) = log₂ n + 2

❑ Properties of exponentials:
a^(b+c) = a^b · a^c
a^(bc) = (a^b)^c
a^b / a^c = a^(b−c)
b = a^(log_a b)
b^c = a^(c·log_a b)
Analysis of Algorithms 76
