1 Analysis of Algorithms
These slides have been extracted, modified, and updated from the original slides of:
• Data Structures and Algorithms in Java, 5th edition. John Wiley & Sons, 2010. ISBN 978-0-470-38326-1.
• Dr. Hanna’s slides (http://aimanhanna.com/concordia/comp352/index.htm)
Analysis of Algorithms 1
How to Estimate Efficiency?
❑ Correctness of a method depends only on whether the algorithm does what it is supposed to do.
❑ Clearly, efficiency is hence different from correctness.
❑ Efficiency can be measured in many ways, e.g.:
◼ Less memory utilization
◼ Faster execution time
◼ Quicker release of allocated resources
◼ etc.
❑ How can efficiency then be measured?
◼ Measurement should be independent of the software used (i.e. compiler, language, etc.) and the hardware (CPU speed, memory size, etc.)
◼ In particular, run-time analysis can have serious weaknesses
Analysis of Algorithms 2
Experimental Studies
❑ Write a program implementing the algorithm.
❑ Run the program with inputs of varying size and composition.
❑ Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time.
❑ Plot the results.
[Plot: running time (ms) versus input size]
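A minimal Java sketch of such an experiment (not part of the original slides; runAlgorithm is a hypothetical stand-in for the algorithm being measured):

public class TimingExperiment {
    // Hypothetical stand-in for the algorithm under study; replace with the real method.
    static void runAlgorithm(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;   // dummy work proportional to n
        }
    }

    public static void main(String[] args) {
        int[] sizes = {1_000, 10_000, 100_000, 1_000_000};
        for (int n : sizes) {
            long start = System.currentTimeMillis();
            runAlgorithm(n);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("n = " + n + ", time = " + elapsed + " ms");
        }
    }
}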
Analysis of Algorithms 3
Limitations of Experiments
❑ It is necessary to implement the algorithm, which may be difficult/costly.
❑ Given a method for a problem of size n, find worstTime(n), which is the worst-case (maximum) running time of the method over all inputs of size n.
❑ What is worstTime(n)?
Analysis of Algorithms 8
Estimating Running Time
Statement                 Worst-Case Number of Executions
i = 0                     1
i < n − 1                 n
i++                       n − 1
a[i] > a[i+1]             n − 1
System.out.println()      n − 1
❑ What is worstTime(n)?
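One possible reading of the table (an added note, not from the original slides): summing the worst-case counts gives worstTime(n) = 1 + n + 3(n − 1) = 4n − 2, i.e., a linear function of n.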
Analysis of Algorithms 10
Estimating Running Time
worstTime(n) of the aboveMeanCount() method
Statements                                                      Worst # of Executions
double[] a, double mean (assignment of 1st & 2nd arguments)     1 + 1
Analysis of Algorithms 12
Pseudocode Details
❑ Control flow
◼ if … then … [else …]
◼ while … do …
◼ repeat … until …
◼ for … do …
◼ Indentation replaces braces
❑ Method declaration
Algorithm method (arg [, arg…])
    Input …
    Output …
❑ Method call
var.method (arg [, arg…])
❑ Return value
return expression
❑ Expressions
← Assignment (like = in Java)
= Equality testing (like == in Java)
n² Superscripts and other mathematical formatting allowed
Analysis of Algorithms 13
Seven Important Functions
❑ Seven functions often appear in algorithm analysis:
◼ Constant ≈ 1
◼ Logarithmic ≈ log n
◼ Linear ≈ n
◼ N-Log-N ≈ n log n
◼ Quadratic ≈ n²
◼ Cubic ≈ n³
◼ Exponential ≈ 2ⁿ
[Plot: T(n) versus n on log-log axes for the seven growth rates]
Analysis of Algorithms 14
Functions Graphed
(Slide by Matt Stallmann, included with permission.)
[Plot: g(n) = lg n, g(n) = n, g(n) = n², and g(n) = n³]
Analysis of Algorithms 15
The Random Access Machine
(RAM) Model
❑ A CPU.
❑ A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character.
❑ Memory cells are numbered, and accessing any cell in memory takes unit time.
Analysis of Algorithms 16
Primitive Operations
❑ Basic computations performed by an algorithm.
❑ Identifiable in pseudocode.
❑ Largely independent from the programming language.
❑ Exact definition not important (we will see why later).
❑ Assumed to take a constant amount of time in the RAM model.
❑ Examples:
◼ Evaluating an expression
◼ Assigning a value to a variable
◼ Indexing into an array
◼ Calling a method
◼ Returning from a method
Analysis of Algorithms 17
Counting Primitive Operations
❑ By inspecting the pseudocode, we can determine the
maximum number of primitive operations executed by
an algorithm, as a function of the input size.
Algorithm compareValues(A, n)
    Input array A of n integers
    Output display all elements larger than following ones
                                            # of operations
    for i ← 0 to n − 2 do                    n − 1
        if A[i] > A[i+1] then                n − 1
            display i                        n − 1
        increment counter i                  n − 1
    Total                                    4n − 4
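As an illustration (not taken from the textbook code), the pseudocode above could be written in Java roughly as follows:

// Displays every element of a that is larger than the element following it.
public static void compareValues(int[] a, int n) {
    for (int i = 0; i <= n - 2; i++) {        // n − 1 iterations of the loop body
        if (a[i] > a[i + 1]) {                // compare each element with its successor
            System.out.println(a[i]);         // display the larger element
        }
    }
}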
Analysis of Algorithms 18
Estimating Running Time
❑ Algorithm compareValues executes 4n − 4
primitive operations in the worst case.
❑ Define:
a = Time taken by the fastest primitive operation
b = Time taken by the slowest primitive operation
❑ Let T(n) be the running time of compareValues.
Then
a·(4n − 4) ≤ T(n) ≤ b·(4n − 4)
❑ Hence, the running time T(n) is bounded by two linear functions.
Analysis of Algorithms 19
Growth Rate of Running Time
❑ Changing the hardware/software
environment
◼ Affects T(n) by a constant factor, but
◼ Does not alter the growth rate of T(n)
❑ The linear growth rate of the running
time T(n) is an intrinsic property of
algorithm compareValues
Analysis of Algorithms 20
Slide by Matt Stallmann, included with permission.
Effect of growing the input size: a running time of c·n becomes c·(n + 1), 2c·n, and 4c·n for inputs of size n + 1, 2n, and 4n, respectively; a running time of c·2ⁿ becomes c·2ⁿ⁺¹, c·2²ⁿ, and c·2⁴ⁿ.
Analysis of Algorithms 21
Slide by Matt Stallmann
included with permission.
❑ Examples
◼ 10²·n + 10⁵ is a linear function
◼ 10⁵·n² + 10⁸·n is a quadratic function
[Plot: T(n) versus n on log-log axes for the two functions]
Analysis of Algorithms 23
Big-O Notation
❑ We do NOT need to determine exact requirements; it is enough to estimate them by means of “Big-O” notation.
❑ That is quite satisfactory since it gives us an approximation of an approximation!
Analysis of Algorithms 24
Big-O Notation
❑ The basic idea is to determine an upper bound for the
behavior of the algorithm/function.
Analysis of Algorithms 25
Big-O Notation
❑ Specifically, Big-O is defined as follows: Given
functions f(n) and g(n), we say that f(n) is O(g(n)) if
there are positive constants c and n0 such that
f(n) ≤ c·g(n) for n ≥ n0
Analysis of Algorithms 26
Big-O Notation
❑ Further, by a standard abuse of notation, we often associate a function with the value it calculates.
Analysis of Algorithms 27
Big-O Notation
❑ Example 1: What is O() if f(n) = 2n + 10?
◼ 2n ≤ 2n for n ≥ 0
◼ 10 ≤ 10n for n ≥ 1
So, for any n ≥ 1:
◼ 2n + 10 ≤ 12n ➔ consider c = 12, n0 = 1 ➔ g(n) = n
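As a quick sanity check (an added illustration, using the constants chosen above), the bound can also be verified numerically:

// Check that f(n) = 2n + 10 <= c * g(n) with c = 12 and g(n) = n, for several n >= n0 = 1.
public class BigOCheck {
    public static void main(String[] args) {
        final long c = 12;
        for (long n = 1; n <= 1_000_000; n *= 10) {
            long f = 2 * n + 10;
            System.out.println("n = " + n + ": " + f + " <= " + (c * n) + " is " + (f <= c * n));
        }
    }
}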
Analysis of Algorithms 28
Big-O Notation
❑ In general, if f(n) is a polynomial function of the form
f(n) = aᵢnⁱ + aᵢ₋₁nⁱ⁻¹ + aᵢ₋₂nⁱ⁻² + … + a₁n + a₀
then f(n) is O(nⁱ).
Proof:
Choose
◼ n0 = 1, and
◼ c = |aᵢ| + |aᵢ₋₁| + |aᵢ₋₂| + … + |a₁| + |a₀|
(For n ≥ 1, each term satisfies |aₖ|nᵏ ≤ |aₖ|nⁱ, so f(n) ≤ c·nⁱ.)
Analysis of Algorithms 29
Big-O Notation
❑ Example 2: What is O() if
f(n) = 3n⁴ + 6n³ + 10n² + 5n + 4?
◼ 3n⁴ ≤ 3n⁴ for n ≥ 0
◼ 6n³ ≤ 6n⁴ for n ≥ 1
◼ 10n² ≤ 10n⁴ for n ≥ 1
◼ 5n ≤ 5n⁴ for n ≥ 1
◼ 4 ≤ 4n⁴ for n ≥ 1
So, for any n ≥ 1:
◼ f(n) ≤ 28n⁴ ➔ consider c = 28, n0 = 1 ➔ g(n) = n⁴
Analysis of Algorithms 32
Big-O Notation
❑ Nested loops are significant when estimating O().
❑ Example 4:
Consider the following loop segment, what is O()?
for (int i = 0; i < n; i++ )
for (int j = 0; j < n; j++ )
……………
◼ The outer loop has 1 + (n + 1) + n executions (one initialization, n + 1 tests, n increments).
◼ The inner loop has n(1 + (n + 1) + n) executions.
◼ Total is: 2n² + 4n + 2 ➔ O(n²).
Analysis of Algorithms 35
Big-O Notation
❑ The following table provides some examples:
Sample Functions          Order of O()
Example:
for (int j = 0; j < 10000; j++ )
    System.out.println(j);
This loop always executes 10,000 iterations, regardless of n, so the segment is O(1).
Analysis of Algorithms 38
Finding Big-O Estimates Quickly
❑ Case 2: The splitting rule ➔ O(log n)
❑ Example:
while(n > 1)
{
n = n / 2;
…;
}
Example:
See the binary search method in Recursion6.java
& Recursion7.java
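The referenced files are not reproduced here; the following generic iterative binary search (an illustrative sketch only, not necessarily identical to Recursion6.java/Recursion7.java) shows the splitting behaviour: the search range is halved on every pass, so the loop runs O(log n) times.

// Returns the index of key in the sorted array a, or -1 if key is not present.
public static int binarySearch(int[] a, int key) {
    int low = 0;
    int high = a.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // split the remaining range in half
        if (a[mid] == key) {
            return mid;                     // found
        } else if (a[mid] < key) {
            low = mid + 1;                  // discard the lower half
        } else {
            high = mid - 1;                 // discard the upper half
        }
    }
    return -1;                              // not found
}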
Analysis of Algorithms 39
Finding Big-O Estimates Quickly
❑ Case 3: Single loop, dependent on n ➔ O(n)
❑ Example:
for (int j = 0; j < n; j++ )
System.out.println(j);
Analysis of Algorithms 40
Finding Big-O Estimates Quickly
❑ Case 4: Double looping dependent on n & splitting ➔ O(n log n)
(the inner loop halves its counter, so it runs about log₂ n times for each of the n outer-loop iterations)
❑ Example:
for (int j = 0; j < n; j++ )
{
m = n;
while (m > 1)
{
m = m / 2;
…;
// Does not matter how many statements are here
}
}
Analysis of Algorithms 41
Finding Big-O Estimates Quickly
❑ Case 4: Double looping dependent on n ➔ O(n²)
❑ Example:
for (int i = 0; i < n; i++ )
for (int j = 0; j < n; j++ )
{
…;
// Does not matter how many statements are here
}
Analysis of Algorithms 42
Finding Big-O Estimates Quickly
❑ Case 4 (Continued): Double looping dependent on n ➔ O(n²)
❑ Example:
for (int i = 0; i < n; i++ )
for (int j = i; j < n; j++ )
{
…;
// Does not matter how many statements are here
}
The number of executions of the code segment is as follows:
n + (n − 1) + (n − 2) + … + 3 + 2 + 1
which is n(n + 1)/2 = ½n² + ½n ➔ O(n²)
Analysis of Algorithms 43
Finding Big-O Estimates Quickly
❑ Case 5: Sequence of statements with different O( )
O(g1(n)) + O(g2(n)) + … = O(g1(n) + g2(n) + …)
❑ Example:
for (int i = 0; i < n; i++ )
{ ...
}
for (int i = 0; i < n; i++ )
for (int j = i; j < n; j++ )
{ ...
}
The first loop is O(n) and the second is O(n²). The entire segment is hence O(n) + O(n²), which equals O(n + n²), which in this case is O(n²).
Analysis of Algorithms 44
Asymptotic Algorithm Analysis
❑ In computer science and applied mathematics, asymptotic analysis is a way of describing limiting behaviour (the limit may be approached ever more closely but is never crossed!).
Analysis of Algorithms 45
Asymptotic Algorithm Analysis
❑ Example:
◼ We determine that algorithm compareValues executes at most 4n − 4 primitive operations; we then say that compareValues “runs in O(n) time”.
Analysis of Algorithms 46
Asymptotic Algorithm Analysis
❑ If two algorithms A & B exist for solving the same problem, and, for instance, A is O(n) and B is O(n²), then we say that A is asymptotically better than B (although for small n, B may have a lower running time than A).
❑ For example, consider the following three algorithms and their running times (in µs):
◼ Algorithm1: 400n
◼ Algorithm2: 2n²
◼ Algorithm3: 2ⁿ
Analysis of Algorithms 47
Asymptotic Algorithm Analysis
◼ Which of the three algorithms is faster?
Notice that Algorithm1 has a very large constant factor compared to the other two algorithms!

Running Time (µs)        Maximum Problem Size (n) that can be solved in:
                         1 Second        1 Minute        1 Hour
Algorithm 1: 400n        2,500           150,000         9 Million
                         (400 × 2,500 = 1,000,000)
Algorithm 3: 2ⁿ          19              25              31
                         (only 19, since 2²⁰ would exceed 1,000,000)
Analysis of Algorithms 48
Asymptotic Algorithm Analysis
❑ Let us further illustrate asymptotic analysis with two algorithms that would compute prefix averages, A[i] = (X[0] + X[1] + … + X[i]) / (i + 1).
[Chart: X = 10, 16, 4, 18, 7, 23, 27, 39 and its prefix averages A = 10, 13, 10, 12, 11, 13, 15, 18]
Analysis of Algorithms 49
Asymptotic Algorithm Analysis
❑ That is, the i-th prefix average of an array X is the average of the first (i + 1) elements of X:
A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)
❑ Computing prefix averages has applications to financial analysis; for instance, the average annual return of a mutual fund for the last year, three years, ten years, etc.
Analysis of Algorithms 50
Prefix Averages (Quadratic)
The following algorithm computes prefix averages in quadratic time (n²) by applying the definition
Algorithm prefixAverages1(X, n)
    Input array X of n integers
    Output array A of prefix averages of X      # of operations
    A ← new array of n integers                  n
    for i ← 0 to n − 1 do                        n
        s ← X[0]                                 n
        for j ← 1 to i do                        1 + 2 + … + (n − 1)
            s ← s + X[j]                         1 + 2 + … + (n − 1)
        A[i] ← s / (i + 1)                       n
    return A                                     1
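A possible Java rendering of prefixAverages1 (an added illustration, not the textbook's code):

// Quadratic-time prefix averages: the sum is recomputed from scratch for every i.
public static double[] prefixAverages1(double[] x, int n) {
    double[] a = new double[n];
    for (int i = 0; i < n; i++) {
        double s = x[0];
        for (int j = 1; j <= i; j++) {
            s = s + x[j];          // sum of the first i + 1 elements
        }
        a[i] = s / (i + 1);        // i-th prefix average
    }
    return a;
}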
Analysis of Algorithms 51
Prefix Averages (Quadratic)
❑ Hence, the segment with the two nested loops performs about 1 + 2 + … + n = n(n + 1)/2 operations (the sum of the first n integers), so prefixAverages1 runs in O(n²) time.
Analysis of Algorithms 52
Prefix Averages (Linear)
The following algorithm computes prefix averages in linear time O(n) by keeping a running sum
Algorithm prefixAverages2(X, n)
Input array X of n integers
Output array A of prefix averages of X # of operations
    A ← new array of n integers                  n
    s ← 0                                        1
    for i ← 0 to n − 1 do                        n
        s ← s + X[i]                             n
        A[i] ← s / (i + 1)                       n
    return A                                     1
Algorithm prefixAverages2 runs in O(n) time, which is
clearly better than prefixAverages1.
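A possible Java rendering of prefixAverages2 (again an added illustration, not the textbook's code):

// Linear-time prefix averages: a running sum s is maintained across iterations.
public static double[] prefixAverages2(double[] x, int n) {
    double[] a = new double[n];
    double s = 0;
    for (int i = 0; i < n; i++) {
        s = s + x[i];              // running sum of the first i + 1 elements
        a[i] = s / (i + 1);        // i-th prefix average
    }
    return a;
}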
Analysis of Algorithms 53
Big-Omega, Big-Theta &
Plain English!
❑ In addition to Big-O, there are two other notations
that are used for algorithm analysis: Big-Omega and
Big-Theta.
Analysis of Algorithms 54
Big-Omega, Big-Theta &
Plain English!
❑ Logically, we are often interested in worst-case
estimations, however knowing the lower bound can
be significant when trying to achieve an optimal
solution.
big-Omega
❑ Big-Omega is defined as follows: Given functions f(n) and g(n), we say that f(n) is Ω(g(n)) if there are positive constants c and n0 such that
f(n) ≥ c·g(n) for n ≥ n0
Analysis of Algorithms 55
Big-Omega, Big-Theta &
Plain English!
❑ Example: 3n log n + 2n is Ω(n log n)
Proof:
❑ 3n log n + 2n ≥ 3n log n ≥ n log n
for every n ≥ 1
Analysis of Algorithms 56
Big-Omega, Big-Theta &
Plain English!
❑ Notice that: … ⊆ Ω(2ⁿ) ⊆ Ω(n³) ⊆ Ω(n²) ⊆ Ω(n log n) ⊆ Ω(n) ⊆ Ω(n^½) ⊆ … ⊆ Ω(log n) ⊆ … ⊆ Ω(1)
Analysis of Algorithms 57
Big-Ω
❑ Example 1 (Revised): What are O() and Ω()
if f(n) = 2n + 10?
→ As seen before, the method is O(n).
❑ Now,
◼ 2n + 10 ≥ 2n ≥ n for n ≥ 0
So, for any n ≥ 0:
◼ 2n + 10 ≥ n ➔ consider c = 1, n0 = 0 ➔ g(n) = n
Analysis of Algorithms 58
Big-Ω
❑ Example 2 (Revised): What are O() and Ω() if
f(n) = 3n⁴ + 6n³ + 10n² + 5n + 4?
As seen before, the method is O(n⁴).
❑ Now,
◼ 3n⁴ + 6n³ + 10n² + 5n + 4 ≥ 3n⁴ ≥ n⁴ for n ≥ 0
➔ consider c = 1, n0 = 0 ➔ g(n) = n⁴
Consequently, the above f(n) is O(n⁴) and is also Ω(n⁴).
Analysis of Algorithms 59
Big-Ω
❑ However, Big-O and Big-Omega are distinct.
❑ That is, there are cases when a function may have different O( ) and Ω( ).
Analysis of Algorithms 60
Big-Ω
❑ In fact, as seen, the hierarchy of Ω() is just the reverse of the one of O().
Analysis of Algorithms 62
Big-O, Big-Omega, Big-Theta
&Plain English!
big-Theta
❑ Big-Theta is defined as follows: Given functions f(n) and g(n), we say that f(n) is Θ(g(n)) if there are positive constants c1, c2 and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for n ≥ n0
Analysis of Algorithms 64
Big-O, Big-Ω & Big-Θ
Quick Examples
◼ 5n² is Ω(n²)
f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1
such that f(n) ≥ c·g(n) for n ≥ n0
let c = 5 and n0 = 1
◼ 5n² is Ω(n)
f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1
such that f(n) ≥ c·g(n) for n ≥ n0
let c = 1 and n0 = 1
◼ 5n² is Θ(n²)
f(n) is Θ(g(n)) if it is Ω(n²) and O(n²). We have already seen the former;
for the latter, recall that f(n) is O(g(n)) if there is a constant c > 0 and an
integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n0
Let c = 5 and n0 = 1
• Notice that 5n² is NOT Θ(n) since it is not O(n)
Analysis of Algorithms 65
Plain English
❑ Sometimes, it might be easier to just indicate the behavior of a method through the natural-language equivalent of Big-Θ.
❑ For instance,
◼ if f(n) is Θ(n), we indicate that f is “linear in n”.
◼ if f(n) is Θ(n²), we indicate that f is “quadratic in n”.
Analysis of Algorithms 66
Plain English
❑ The following table shows the English-language equivalents of Big-Θ:
Big-Θ                               English
Θ(c), for some constant c > 0       Constant
Θ(log n)                            Logarithmic in n
Θ(n)                                Linear in n
Θ(n²)                               Quadratic in n
Analysis of Algorithms 67
Big-O
❑ We may just prefer to use plain English, such as “linear”, “quadratic”, etc.
Analysis of Algorithms 68
Intuition for Asymptotic
Notation
Big-O
◼ f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
Big-Omega
◼ f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
Big-Theta
◼ f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
Analysis of Algorithms 69
Big-O and Growth Rate
❑ The big-O notation gives an upper bound on the
growth rate of a function
❑ The statement “f(n) is O(g(n))” means that the growth
rate of f(n) is no more than the growth rate of g(n)
❑ We can use the big-O notation to rank functions
according to their growth rate
                     f(n) is O(g(n))     g(n) is O(f(n))
g(n) grows more      Yes                 No
f(n) grows more      No                  Yes
Same growth          Yes                 Yes
Analysis of Algorithms 70
Big-O and Growth Rate
❑ Again, we are specifically interested in how rapidly a
function increases based on its classification.
Analysis of Algorithms 72
Big-O and Growth Rate
[Plot: worstTime(n) versus n for O(1), O(log n), O(n), O(n log n), and O(n²)]
Analysis of Algorithms 73
Big-O and Growth Rate
❑ Again, remember that the Big-O differences
eventually dominate constant factors.
Analysis of Algorithms 74
Big-O and Growth Rate
❑ The following table provides estimates of the execution time needed for various functions of n, if n = 1,000,000,000, running on a machine executing 1,000,000 statements per second.