Lecture 3 - Complexity Analysis
Fall 2023
Comparing Algorithms
• Time complexity
– The amount of time that an algorithm needs to run to completion
– The better algorithm is the one that runs faster, i.e., has the
smaller time complexity
• Space complexity
– The amount of memory an algorithm needs to run
[Diagram: a sorting algorithm transforms the input 5 3 1 2 into 1 2 3 5,
and the input 5 3 1 2 4 6 into 1 2 3 4 5 6]
• Even on inputs of the same size, running time can be very different
– 1 2 3 4 5 6 (best case)
– 6 5 4 3 2 1 (worst case)
– 3 2 1 4 5 6 (average case)
• Best case running time is usually not very useful
• Worst case running time is easier to analyze
– Crucial for real-time applications such as games, finance and robotics
[Figure: Running Time vs Input Size, for input sizes 1000–4000]
• Run the algorithm on inputs of varying size
• Use clock methods to get an accurate measure of the actual running time
• Plot the results
[Figure: Time (ms) vs Input Size]
• Idea: Use an abstract machine that uses steps of time instead of seconds
– Each elementary operation takes 1 step
• Examples of operations
– Retrieving/storing variables from memory
– Variable assignment: =
– Integer operations: + - * / % ++ --
– Logical operations: && || !
– Bitwise operations: & | ^ ~
– Relational operations: == != < <= >= >
– Memory allocation and deallocation: new delete
• Example: an algorithm that takes T(n) = 5n + 4 steps
– n = 10 => 54 steps
– n = 100 => 504 steps
– n = 1,000 => 5,004 steps
– n = 1,000,000 => 5,000,004 steps
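The counts above fit T(n) = 5n + 4 steps. The original loop for this example is not shown here, but as a hypothetical sketch (the count_steps name is invented), a loop charged 5 steps per iteration plus 4 steps of setup and teardown reproduces exactly these counts:

```cpp
#include <cstddef>

// Hypothetical step-counting sketch: charge each elementary operation
// 1 step, giving T(n) = 5n + 4 total steps for a simple loop.
std::size_t count_steps(std::size_t n) {
    std::size_t steps = 0;
    steps += 2;                 // e.g., two initial assignments
    for (std::size_t i = 0; i < n; ++i)
        steps += 5;             // e.g., compare, body operations, increment
    steps += 2;                 // e.g., final comparison and return
    return steps;               // total: 5n + 4
}
```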
• Example:
– f(n) = n²
– g(n) = n² – 3n + 2
– (f(1000) – g(1000)) / f(1000) = 0.002998 ≈ 0.3%
– The difference goes to zero as n → ∞
• Example:
– f(n) = n²
– g(n) = n² – 3n + 2
– For n = 1:
% of running time due to n² = 1/(1+3+2) * 100 = 16.66%
% of running time due to 3n = 3/(1+3+2) * 100 = 50%
% of running time due to 2 = 2/(1+3+2) * 100 = 33.33%
• Indicates the upper or highest growth rate that the algorithm can
have
– Ignore constant factors and lower order terms
– Focus on main components of a function which affect its growth
• Examples
– 55 = O(1)
– 25c + 32k = O(1) // if c, k are constants
– 5n + 6 = O(n)
– n² – 3n + 2 = O(n²)
– 7n + 2n log(5n) = O(n log n)
• Simple Assignment
– a = b
– O(1) // Constant time complexity
• Simple loops
– for (i=0; i<n; i++) { s; }
– O(n) // Linear time complexity
• Nested loops
– for (i=0; i<n; i++)
for (j=0; j<n; j++) { s; }
– O(n²) // Quadratic time complexity
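These loop costs can be verified by counting how many times the statement s actually executes (a minimal C++ sketch; the simple_loop and nested_loop helper names are invented here):

```cpp
#include <cstddef>

// Count executions of the loop body in a simple loop: s runs n times -> O(n).
std::size_t simple_loop(std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        ++count;                    // stands in for the statement s
    return count;
}

// Count executions in a nested loop: s runs n * n times -> O(n²).
std::size_t nested_loop(std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            ++count;                // stands in for the statement s
    return count;
}
```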
– ∑ᵢ₌₁ⁿ i = n(n + 1)/2
– O(n²)
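This sum is the cost of a nested loop whose inner bound depends on the outer index: on iteration i the inner loop runs i times, for 1 + 2 + … + n = n(n+1)/2 body executions in total, which is O(n²). A small sketch (the triangular_loop name is invented here) confirms the count:

```cpp
#include <cstddef>

// Dependent nested loop: the inner loop runs i times on outer iteration i,
// so the body executes 1 + 2 + ... + n = n(n+1)/2 times -> O(n²).
std::size_t triangular_loop(std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 0; j < i; ++j)
            ++count;
    return count;
}
```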
– O(n + m)
– O(n²)
– O(n log₂ n)
– O(log₂ n)
– O(logₖ n)
– O(n)
Definition
• Given two positive-valued functions f and g:
f(n) = O(g(n))
if there exist positive numbers c and N such that
f(n) ≤ c · g(n) for all n ≥ N
• 𝒇 is big-O of 𝒈 if there is a positive number 𝒄 such that 𝒇 is not
larger than 𝒄 ∙ 𝒈 for sufficiently large 𝒏; that is, for all 𝒏 larger than
some number 𝑵
[Figure: for all n ≥ N, the curve f(n) lies below c·g(n)]
• Usually infinitely many pairs of 𝒄 and 𝑵 that can be given for the
same pair of functions 𝒇 and 𝒈
• In f(n) = 2n² + 3n + 1, the only candidates for the largest term are
2n² and 3n; these terms can be compared using the inequality 2n² > 3n,
which holds for n > 1.5
• Thus, the chosen values are N = 2 and c ≥ 3¾ (since f(2) = 15 ≤ c · 2²)
For the smallest workable factor c, N is the point where the functions
c·g(n) and f(n) intersect
Proof:
By the Big-O definition, T(n) = n³ + 20n + 1 is O(n³) if
T(n) ≤ c · n³ for all n ≥ N
Check the condition: n³ + 20n + 1 ≤ c · n³,
or equivalently 1 + 20/n² + 1/n³ ≤ c
Therefore, the Big-O condition holds for n ≥ N = 1 and
c ≥ 22 (= 1 + 20 + 1)
Larger values of N result in smaller factors c (e.g., for N = 10,
c ≥ 1.201, and so on), but in any case the above statement is valid
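The bound just proved can be spot-checked numerically for sample values of n (a small sketch; the bound_holds helper name is invented here):

```cpp
// Spot-check the proof: with c = 22 and N = 1,
// T(n) = n³ + 20n + 1 satisfies T(n) <= c * n³ for all n >= N.
bool bound_holds(unsigned long long n) {
    unsigned long long T = n * n * n + 20 * n + 1;
    return T <= 22 * n * n * n;
}
```

At n = 1 the bound is tight (T(1) = 22 = 22 · 1³), which matches the choice c ≥ 22 for N = 1.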
Proof:
By the Big-O definition, T(n) = n³ + 20n + 1 is O(n²) if
T(n) ≤ c · n² for all n ≥ N
Check the condition: n³ + 20n + 1 ≤ c · n²,
or equivalently n + 20/n + 1/n² ≤ c
Therefore, the Big-O condition cannot hold since the left side of the
latter inequality grows without bound as n → ∞, i.e., there is no such
constant factor c
Conclusion: T(n) = n³ + 20n + 1 is O(n³) but not O(n²)
• Big-Omega
– f(n) is Ω(g(n))
– if there is a constant c > 0 and an integer constant n₀ ≥ 1
– such that f(n) ≥ c · g(n) for n ≥ n₀
• Big-Theta
– f(n) is Θ(g(n))
– if there are constants c₁ > 0 and c₂ > 0 and an integer constant n₀ ≥ 1
– such that c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for n ≥ n₀
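As a worked instance of the Big-Theta definition, take f(n) = n(n+1)/2 (the summation from the nested-loop analysis) and g(n) = n². Then c₁ = 1/2, c₂ = 1, and n₀ = 1 satisfy c₁·g(n) ≤ f(n) ≤ c₂·g(n), so f(n) is Θ(n²). A spot-check sketch (the theta_holds name is invented here):

```cpp
// Check the Big-Theta sandwich for f(n) = n(n+1)/2 and g(n) = n²
// with c1 = 1/2, c2 = 1, n0 = 1: c1*g(n) <= f(n) <= c2*g(n).
bool theta_holds(unsigned long long n) {
    double f = n * (n + 1) / 2.0;
    double g = static_cast<double>(n) * n;
    return 0.5 * g <= f && f <= 1.0 * g;
}
```

The lower bound holds because n(n+1)/2 ≥ n²/2 for all n, and the upper bound because n + 1 ≤ 2n whenever n ≥ 1.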