Chapter 2
Algorithm Complexity
Chapter Outline
• Algorithm efficiency
• Algorithm analysis:
– Machine-dependent vs Machine-independent
• Function ordering
– Weak Order; Big-Oh (asymptotically ≤); Big-Omega
(asymptotically ≥); Big-Theta (asymptotically =); Little-oh
(asymptotically <).
• Algorithm complexity analysis
– Rules for complexity analysis
– Analysis of 4 algorithms for the max subsequence sum
• Master Theorem
Design challenges
• Designing a correct algorithmic solution is not always enough:
– The algorithm should not take “ages” to run
– It should not consume “too much” memory.
• In this chapter, we will discuss the following:
– How to estimate the time required for a program.
– How to reduce the running time of a program from
days or years to fractions of a second.
– The results of careless use of recursion.
• Example: the number of paths grows as n! ≈ (n/e)^n, so it will be
impossible to run the algorithm even for n = 30
(roughly 11^30 paths)
What is the efficiency of an algorithm?
• Run time on a computer is machine-dependent
• Example: Multiply two positive integers a and b
• Subroutine 1: Multiply a and b
• Subroutine 2:
v = a, w = b
While w > 1
{ v = v + a;
w = w - 1}
Output v
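As a runnable illustration of Subroutine 2 (a sketch; the function name and the assumption b ≥ 1 are ours, not the slides'):

#include <stdio.h>

/* Multiply a and b by repeated addition (Subroutine 2).
   Assumes a and b are positive integers (b >= 1). */
long multiply_by_addition(long a, long b) {
    long v = a, w = b;
    while (w > 1) {     /* loop body runs b - 1 times */
        v = v + a;      /* one addition per iteration */
        w = w - 1;      /* one subtraction per iteration */
    }
    return v;           /* v now equals a * b */
}

int main(void) {
    printf("%ld\n", multiply_by_addition(6, 7));  /* prints 42 */
    return 0;
}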
Machine-Dependent Analysis
• First subroutine has 1 multiplication.
• Second subroutine has about b additions and b
subtractions.
• For some architectures, 1 multiplication is more
expensive than b additions and b subtractions.
• Ideally:
– program all alternative algorithms
– run them on the target machine
– find which is more/most efficient!
Machine-Independent Analysis
• We will assume that every basic operation takes
constant time
• Example of Basic Operations:
– Addition, Subtraction, Multiplication, Memory
Access
• Non-basic Operations:
– Sorting, Searching
• Efficiency of an algorithm is thus measured in terms
of the number of basic operations it performs
– We do not distinguish between the basic operations.
• Subroutine 1 uses 1 basic operation
• Subroutine 2 uses about 2b basic operations
Subroutine 1 is therefore more efficient.
• This measure is good for all large input sizes
• In fact, we will not worry about the exact number of
operations, but will look at “broad classes” of values.
– Let there be n inputs.
– If an algorithm needs n basic operations and another
needs 2n basic operations, we will consider them to
be in the same efficiency category.
– However, we distinguish between exp(n), n, log(n)
• We worry about an algorithm's speed for large input sizes (i.e.
the rate of growth of its running time as a function of input size)
[Chart of common complexity classes, from https://devopedia.org/algorithmic-complexity]
Weak ordering
• In this chapter, f and g are functions from the
set of natural numbers to itself.
• Consider the following definitions:
– We will consider two functions to be equivalent,
f ~ g, if lim_{n→∞} f(n)/g(n) = c, where 0 < c < ∞
– We will state that f < g if lim_{n→∞} f(n)/g(n) = 0
Examples of Functions
sqrt(n), n, 2n, ln(n), exp(n), n + sqrt(n), n + n^2
lim_{n→∞} 2n/n = 2, so 2n is O(n)
Other examples:
lim_{n→∞} ln(n)/n = 0, so ln(n) is O(n)
lim_{n→∞} n/ln(n) = ∞, so n is not O(ln(n))
Other Complexity: Little-oh Notation
Definition 2.4:
f(n) = o(g(n)) if, for all positive constants c, there exists
an n0 such that f(n) < c · g(n) when n > n0 (“asymptotic
strict inequality”).
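For example, n = o(n^2): given any positive constant c, we have n < c · n^2 whenever n > 1/c, so n0 = 1/c works.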
Recap
f(n) = o(g(n))  ⇔  lim_{n→∞} f(n)/g(n) = 0
f(n) = O(g(n))  ⇔  lim_{n→∞} f(n)/g(n) < ∞
f(n) = Θ(g(n))  ⇔  0 < lim_{n→∞} f(n)/g(n) < ∞
f(n) = Ω(g(n))  ⇔  lim_{n→∞} f(n)/g(n) > 0
Example Functions
sqrt(n), n, 2n, ln(n), exp(n), n + sqrt(n), n + n^2
lim_{n→∞} 2n/n = 2, so 2n is Θ(n)
lim_{n→∞} n/2n = 1/2, so n is Θ(2n)
Terminology
The most common classes are given names:
Θ(1) constant
Θ(ln(n)) logarithmic
Θ(n) linear
Θ(n ln(n)) “n log n”
Θ(n^2) quadratic
Θ(n^3) cubic
Θ(2^n), Θ(e^n), Θ(4^n), ... exponential
Little-o as a Weak Ordering
• We can show that, for example,
ln(n) = o(n^p) for any p > 0
• Proof: Using l’Hôpital’s rule,
lim_{n→∞} ln(n)/n^p = lim_{n→∞} (1/n)/(p·n^(p−1)) = lim_{n→∞} 1/(p·n^p) = 0
Algorithm Analysis
Rules for arithmetic with big-O symbols
• If T1(n) = O(f (n)) and T2(n) = O(g(n)), then
– (a) T1(n) + T2(n) = O(f (n) + g(n)) (intuitively and
less formally it is O(max(f (n), g(n)))),
– (b) T1(n) ∗ T2(n) = O(f (n) ∗ g(n)).
• If T(n) is a polynomial of degree k, then T(n) =
Θ(n^k).
• log^k n = O(n) for any constant k.
This tells us that logarithms grow very slowly.
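For example, if T1(n) = O(n^2) and T2(n) = O(n log n), then rule (a) gives T1(n) + T2(n) = O(n^2), and rule (b) gives T1(n) ∗ T2(n) = O(n^3 log n).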
Rules for arithmetic with big-O symbols
• If f(n) = O(g(n)), then c * f(n) = O(g(n)) for
any constant c.
• If f1(n) = O(g(n)) but f2(n) = o(g(n)), then
f1(n) + f2(n) = O(g(n)).
• If f(n) = O(g(n)) and g(n) = o(h(n)), then
f(n) = o(h(n)). (transitivity)
• These are not all of the rules, but they’re
enough for most purposes.
Complexity of a Problem vs
Complexity of an Algorithm
Algorithm Complexity Analysis
• We define Tavg(N) and Tworst(N), as the average and
worst-case running time, resp., used by an algorithm on
input of size N. Clearly, Tavg(N) ≤ Tworst(N).
• Occasionally, the best-case performance of an algorithm
is analyzed.
– but of little interest: does not represent the typical behavior.
• Average-case performance often reflects typical behavior
• Worst-case performance represents a guarantee for
performance on any possible input
• We are interested in algorithm analysis, not program
analysis: implementation issues/details/inefficiencies, etc.
Algorithm Complexity Analysis
Consider the following algorithm
diff = sum = 0;
For (k=0; k < N; k++)
{   sum = sum + 1;
    diff = diff - 1; }
For (k=0; k < 3N; k++)
    sum = sum - 1;
• First line takes 2 basic steps
• Every iteration of first loop takes 2 basic steps.
• First loop runs N times
• Every iteration of second loop takes 1 basic
step
• Second loop runs for 3N times
• Overall, 2 + 2N + 3N steps
• This is O(N)
Rules for Complexity Analysis
Complexity of a loop:
O(Number of iterations in a loop * maximum
complexity of each iteration)
Nested Loops:
Analyze the innermost loop first,
Complexity of next outer loop =
number of iterations in this loop * complexity of inner loop, etc.
Example:
sum = 0;
For (i=0; i < N; i++)
    For (j=0; j < N; j++)
        sum = sum + 1;
Inner loop: O(N)
Outer loop: N iterations
Overall: O(N^2)
A single loop followed by a nested loop:
First loop: O(N)
Inner loop: O(N)
Outer loop: N iterations
Overall: O(N^2) + O(N) = O(N^2)
If (Condition)
    S1
Else
    S2
→ maximum of the two complexities, e.g.
If (yes) Algo 1 else Algo 2 → O(Algo 1), when Algo 1 is the costlier branch
Analysis of recursion
• Suppose we have the code (not a good one):
Long fib (int n) {
    if (n == 0 || n == 1)                /* line 1 */
        return 1;                        /* line 2 */
    else
        return fib(n - 1) + fib(n - 2);  /* line 3 */
}
T(0) = T(1) = 1;
For n ≥ 2: T(n) = cost of the constant test on line 1 + cost of the line 3 work
T(n) = 1 op + (1 addition + 2 function calls) = O(1) + (addition +
cost of fib(n−1) + cost of fib(n−2)) = 2 + T(n−1) + T(n−2)
Since T(n) ≥ fib(n), the running time grows exponentially in n.
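As an aside (not from the slides), a sketch of how memoization repairs this: cache each fib(k) the first time it is computed, so every value is computed once and the running time drops to O(n). The helper name and caller-supplied cache are illustrative assumptions.

/* Memoized variant (sketch). memo[] must have n + 1 entries,
   all initialized to 0, and is supplied by the caller. */
long fib_memo(int n, long memo[]) {
    if (memo[n] != 0)             /* value already computed */
        return memo[n];
    if (n == 0 || n == 1)
        memo[n] = 1;
    else
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}

/* Example use: long memo[31] = {0}; fib_memo(30, memo); */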
Running time of the 4 algorithms for the maximum
subsequence sum (in seconds) [table in Weiss, Fig. 2.2]
Algorithm 1
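The code on this slide is an image in the original; as an illustrative C sketch (following Weiss's well-known cubic version; identifiers are ours), Algorithm 1 tries every pair (i, j) and recomputes the sum of a[i..j] from scratch:

/* Cubic algorithm: try all subsequences a[i..j], recomputing each sum. */
int maxSubSum1(const int a[], int n) {
    int maxSum = 0;                       /* empty subsequence has sum 0 */
    for (int i = 0; i < n; i++)           /* start of subsequence */
        for (int j = i; j < n; j++) {     /* end of subsequence */
            int thisSum = 0;
            for (int k = i; k <= j; k++)  /* innermost loop: O(N) */
                thisSum += a[k];
            if (thisSum > maxSum)
                maxSum = thisSum;
        }
    return maxSum;
}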
Analysis of Algorithm 1
Precise analysis is obtained from the sum
Σ_{i=0}^{N−1} Σ_{j=i}^{N−1} Σ_{k=i}^{j} 1
which counts the number of times thisSum += a[k]; is executed.
1. Innermost sum: Σ_{k=i}^{j} 1 = j − i + 1
2. Inner loop: Σ_{j=i}^{N−1} (j − i + 1) = (N − i)(N − i + 1)/2,
the sum of the first N − i integers
Overall: O(N^3)
Note: The innermost loop can be made more efficient,
leading to O(N^2)
Algorithm 2
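This slide's code is also an image; a sketch of the quadratic version (again following Weiss; identifiers are ours), which keeps a running sum instead of recomputing it, removing the innermost loop:

/* Quadratic algorithm: extend each subsequence by one element at a time. */
int maxSubSum2(const int a[], int n) {
    int maxSum = 0;
    for (int i = 0; i < n; i++) {
        int thisSum = 0;
        for (int j = i; j < n; j++) {
            thisSum += a[j];          /* O(1) update instead of an O(N) loop */
            if (thisSum > maxSum)
                maxSum = thisSum;
        }
    }
    return maxSum;
}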
Divide and Conquer
Maximum subsequence sum by
divide and conquer
• Divide the array into two parts: a left part and a right
part, each to be solved recursively
• The max. subsequence lies completely in the left part,
completely in the right part, or spans the middle.
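The code on this slide appears as an image in the original, and the line numbers cited in the analysis below refer to that figure. As a condensed, illustrative C sketch of the same strategy (identifiers are ours):

static int max3(int a, int b, int c) {
    int m = (a > b) ? a : b;
    return (m > c) ? m : c;
}

/* Divide-and-conquer sketch; call with maxSubSumRec(a, 0, n - 1). */
int maxSubSumRec(const int a[], int left, int right) {
    if (left == right)                      /* base case: one element */
        return (a[left] > 0) ? a[left] : 0;

    int center = (left + right) / 2;
    int maxLeft  = maxSubSumRec(a, left, center);       /* T(N/2) */
    int maxRight = maxSubSumRec(a, center + 1, right);  /* T(N/2) */

    int leftBorder = 0, sum = 0;
    for (int i = center; i >= left; i--) {  /* best sum ending at center: O(N) */
        sum += a[i];
        if (sum > leftBorder) leftBorder = sum;
    }
    int rightBorder = 0;
    sum = 0;
    for (int i = center + 1; i <= right; i++) {  /* best sum starting at center+1: O(N) */
        sum += a[i];
        if (sum > rightBorder) rightBorder = sum;
    }
    return max3(maxLeft, maxRight, leftBorder + rightBorder);
}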
Algorithm 3 Analysis
• If N=1; lines 8 to 12 executed; taken to be one unit
T(1) = 1
• N>1: 2 recursive calls + 2 for loops + some
bookkeeping ops (e.g. lines 14, 34)
– The 2 for loops (lines 19 to 32): clearly O(N)
– Lines 8, 14, 18, 26, 34: constant time; ignored compared
to O(N)
– Recursive calls made on half the array size each.
2 * T(N/2)
So: the running time is T(N) = 2 · T(N/2) + O(N), with T(1) = 1
Complexity Analysis
T(1) = 1
T(n) = 2T(n/2) + c·n
     = 4T(n/4) + 2c·n
     = 8T(n/8) + 3c·n
     = …
     = 2^i · T(n/2^i) + i·c·n
(we reach the base case when n = 2^i, i.e. i = log n)
     = n·T(1) + c·n·log n = O(n log n)
Algorithm 4 Analysis
• T(N) = O(N) Obvious!
• What is not obvious at first sight is the logic
of the algorithm.
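The algorithm's code is not reproduced on this slide; a sketch of the linear-scan idea (Weiss's Algorithm 4; identifiers are ours): keep a running sum and reset it to zero whenever it goes negative, since a negative prefix can never start a maximum subsequence.

/* Linear algorithm: one pass, O(N). */
int maxSubSum4(const int a[], int n) {
    int maxSum = 0, thisSum = 0;
    for (int j = 0; j < n; j++) {
        thisSum += a[j];
        if (thisSum > maxSum)
            maxSum = thisSum;     /* best subsequence seen so far */
        else if (thisSum < 0)
            thisSum = 0;          /* negative prefix: discard it */
    }
    return maxSum;
}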
Binary Search
• You have a sorted list of numbers
• You need to search the list for a specific number
• If the number exists
– Then find its position
– Else detect that
• In binary search, subdivide the list into two halves around the
center element
• If the number is at the center, done;
else recursively search the left half or the right half,
depending on whether the number is smaller or larger.
Search(num, A[], left, right)
{
    if (left == right)
    {
        if (A[left] == num) return(left);            /* found */
        else return NOT_PRESENT;                     /* not in the list */
    }
    center = (left + right)/2;
    if (A[center] == num) return(center);            /* found at midpoint */
    else if (A[center] < num)
        return Search(num, A[], center + 1, right);  /* search right half */
    else
        return Search(num, A[], left, center);       /* search left half */
}
Complexity Analysis
T(n) = T(n/2) + c
Expanding as in the analysis of Algorithm 3 above:
T(n) = T(n/2^i) + i·c = T(1) + c·log n when n = 2^i
⇒ O(log n) complexity
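As a side note (not on the slides), the same O(log n) search can be written iteratively, which avoids the recursion and uses constant extra space:

/* Iterative binary search over a sorted array; returns the index
   of num, or -1 if it is not present. */
int binarySearch(const int a[], int n, int num) {
    int left = 0, right = n - 1;
    while (left <= right) {
        int center = (left + right) / 2;
        if (a[center] == num)
            return center;          /* found */
        else if (a[center] < num)
            left = center + 1;      /* search right half */
        else
            right = center - 1;     /* search left half */
    }
    return -1;                      /* not present */
}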
Master Theorem
• Master Theorem: used to calculate time complexity of
divide-and-conquer algorithms.
• Simple and quick way to calculate the time complexity of
a recursive relation.
• It applies to recurrence relations of the form:
T(n) = aT(n/b) + f(n)
where
– n is the size of the input;
– a is the number of subproblems in the recursion;
– n/b is the size of each subproblem (all assumed to have
the same size);
– f(n): cost of the work done outside the recursive calls;
includes the cost of dividing the problem and of merging the solutions.
This slide and the following two are taken from:
Su, J. CS 161 Lecture 3, Stanford University,
https://web.stanford.edu/class/archive/cs/cs161/cs161.1168/lecture3.pdf (retrieved 02/10/2023)
Examples of Master Theorem
• Example 1: T(n) = 9T(n/3) + n.
Here a = 9, b = 3, f(n) = n, and n^{log_b a} = n^{log_3 9} = Θ(n^2).
Since f(n) = O(n^{log_3 9 − ε}) for ε = 1, case 1 of the
Master Theorem applies, so T(n) = Θ(n^2).
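• Example 2 (for comparison; not on the original slide): T(n) = 2T(n/2) + n,
the recurrence of Algorithm 3 above.
Here a = 2, b = 2, f(n) = n, and n^{log_b a} = n^{log_2 2} = n.
Since f(n) = Θ(n^{log_2 2}), case 2 of the Master Theorem applies,
so T(n) = Θ(n log n), matching the expansion obtained earlier.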