
Introduction to Algorithm Complexity

ADC/ADS

Rom Langerak
E-mail: [email protected]

© GfH&JPK&RL

The word “Algorithm”

Origin
Abu Ja’far Muhammad ibn Musa Al-Khwarizmi
(lived in Baghdad from about 780 to about 850)

Meaning
An algorithm is a scheme of steps to go
from given data (input)
to a desired result (output).

A step can be an arithmetic operation, a yes-no decision, or a choice between alternatives ... (example: looking up a word in a dictionary)


Quality of algorithms

The quality of an algorithm is judged in a number of ways:

1. The steps always lead to a correct result.

2. The scheme wastes no steps: it is an efficient way to reach the result.

3. The scheme wastes no resources, e.g. it uses a reasonable amount of memory.


The word “Complexity”

By the complexity of an algorithm we do not mean that it is difficult to understand how the scheme of steps works.

If we speak of the complexity of an algorithm, we discuss the relation between

• the size of the input for the algorithm, and
• the effort needed to reach a result
  (maybe a lot of steps, maybe a lot of memory ...)


The complexity of algorithms

The effort needed to reach a result always has two aspects:

• the number of steps taken: time complexity

• the amount of space used: space complexity

time complexity ≠ space complexity


The complexity of algorithms

Assess complexity independent of the type of computer used, programming language, programming skills, etc.

• Technology improves things by a constant factor only

• Even a supercomputer cannot rescue a "bad" algorithm

  – a faster algorithm on a slower computer will always win
    for sufficiently large inputs


The time complexity of algorithms

Analysis is based on a choice of basic operations, such as:

• "comparing two numbers" for sorting an array of numbers

• "multiplying two real numbers" for matrix multiplication

This approach works, but be careful:

• the # basic operations should be a good estimate of the total # steps

• the # basic operations constitutes the basis for determining the rate of growth of the number of steps as the input gets larger


Time complexity in practice – Actual run times

Solution time for each step-count function:

n      33n          46n log n    13n²        3.4n³       2ⁿ
10     .00033 sec   .0015 sec    .0013 sec   .0034 sec   .001 sec
10²    .0033 sec    .03 sec      .13 sec     3.4 sec     4·10¹⁶ yr
10³    .033 sec     .45 sec      13 sec      .94 hour
10⁴    .33 sec      6.1 sec      1300 sec    39 days
10⁵    3.3 sec      1.3 min      1.5 days    108 yr

Note: the impact of high constant factors diminishes as n grows


In practice — Maximum solvable input size

Maximum solvable input size within a given time budget, for the same step-count functions:

time allowed   33n           46n log n    13n²     3.4n³    2ⁿ
1 sec          30,000        2,000        280      67       20
1 min          1,800,000     82,000       2,170    260      26
1 hour         108,000,000   1,180,800    16,818   1,009    32

We cannot handle input 60 times larger if we increase time (or speed) by a factor 60
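These cut-offs can be recomputed with a short script (a sketch; max_input_size is a hypothetical helper, and the assumed speed of 10⁶ basic operations per second is inferred from the run-time table above). It finds the largest n whose step count fits in the time budget by doubling and then binary search:

```python
def max_input_size(steps_fn, seconds, ops_per_sec=1e6):
    """Largest n with steps_fn(n) <= seconds * ops_per_sec,
    found by doubling followed by binary search."""
    budget = seconds * ops_per_sec
    if steps_fn(1) > budget:
        return 0
    lo, hi = 1, 2
    while steps_fn(hi) <= budget:        # grow until we overshoot the budget
        lo, hi = hi, hi * 2
    while lo + 1 < hi:                   # invariant: fn(lo) <= budget < fn(hi)
        mid = (lo + hi) // 2
        if steps_fn(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

print(max_input_size(lambda n: 33 * n, 1))       # 30303, rounded to 30,000 in the table
print(max_input_size(lambda n: 13 * n ** 2, 1))  # 277, rounded to 280 in the table
```

The table's entries are evidently rounded; the exact cut-offs differ slightly.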


In practice – The effect of faster computers


Let N be the maximum input size that can be handled in a fixed time.
What happens to N if we take a computer that is K times faster?

# steps performed        maximum feasible
on input of size n       input size N_fast

log n                    N^K
n                        K·N
n²                       √K·N
2ⁿ                       N + log K
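The rows of this table can be checked empirically (a sketch; feasible_growth is a hypothetical helper that finds the largest N_fast whose step count fits in the K-times-larger step budget):

```python
def feasible_growth(steps_fn, N, K):
    """Largest N_fast with steps_fn(N_fast) <= K * steps_fn(N):
    the feasible input size on a machine that is K times faster."""
    budget = K * steps_fn(N)
    n = N
    while steps_fn(n + 1) <= budget:
        n += 1
    return n

print(feasible_growth(lambda n: n, 1000, 8))       # 8000: K*N for a linear algorithm
print(feasible_growth(lambda n: n ** 2, 1000, 4))  # 2000: sqrt(K)*N for a quadratic one
print(feasible_growth(lambda n: 2.0 ** n, 30, 8))  # 33: only N + log2(K) for 2^n
```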


Average, best and worst case (time) complexity – Intuition

Consider a given algorithm A.

• The worst case complexity of A is the maximum # basic operations performed by A on any input of a certain size

• The best case complexity of A is the minimum # basic operations performed by A on any input of a certain size (often not so interesting - why?)

• The average case complexity of A is the average # basic operations performed by A over all inputs of a certain size

Each of these complexities defines a function: number of operations (steps) versus input size


Average, best and worst case complexity – in a picture

[Figure: run time versus input size n; three curves, with W(n) on top, A(n) in the middle, and B(n) at the bottom]


Average, best and worst case (time) complexity


Dn    = the set of inputs of size n
t(I)  = the # basic operations needed for input I
        (known by analysis of the algorithm)
Pr(I) = the probability that input I occurs
        (known by experience or assumption,
        e.g., "all inputs occur equally frequently")

Now formally:

• The worst case complexity:   W(n) = max{ t(I) | I ∈ Dn }

• The best case complexity:    B(n) = min{ t(I) | I ∈ Dn }

• The average case complexity: A(n) = Σ_{I∈Dn} Pr(I) · t(I)


Linear search

Input:  array E with n entries and item K to be looked up
Output: does K occur in the array E?

def linsearch(E, n, K):
    i = 0
    found = False
    while i < n and not found:
        found = (E[i] == K)
        i = i + 1
    return found


Analyzing linear search

Basic operation = comparison of item K with an array element E[...]

• Dn = all permutations of n elements out of a set of N > n elements

• W(n) = n, as in the worst case K is the last element in the array, or is not found

• B(n) = 1, as in the best case K is the first element in the array

• A(n) ≈ n/2, as on average half of the array needs to be checked? No!
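The claims for W(n) and B(n), and the refutation of A(n) ≈ n/2, can be checked by brute force on a small instance (a sketch; comparisons and worst_best_avg are hypothetical helpers that re-count the basic operation of linsearch for the successful case, with K equally likely at each position):

```python
from fractions import Fraction

def comparisons(E, K):
    """Number of E[i] == K comparisons linsearch performs."""
    count = 0
    for x in E:
        count += 1
        if x == K:
            break
    return count

def worst_best_avg(n):
    """W(n), B(n) and Asucc(n) for a successful search in an array
    of n distinct elements, with K equally likely at every position."""
    E = list(range(n))
    costs = [comparisons(E, K) for K in E]
    return max(costs), min(costs), Fraction(sum(costs), n)

print(worst_best_avg(10))  # (10, 1, Fraction(11, 2)): W = n, B = 1, Asucc = (n+1)/2
```

The average comes out as (n+1)/2, not n/2, matching the derivation on the next slides.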


Average case complexity for linear search – 1

There are two scenarios:

• K occurs in array E; this yields average complexity Asucc(n)

• K does not occur in array E; this yields average complexity Afail(n)

A(n) = Pr{K in E} · Asucc(n) + Pr{K not in E} · Afail(n)

Pr{K not in E} = 1 − Pr{K in E}

Afail(n) = n


Average case complexity for linear search – 2


How about Asucc(n)?

• assume all elements in the array E are distinct

• note that if E[i] == K the search takes i comparisons

Then  Asucc(n) = Σ_{i=1}^{n} Pr{E[i] == K | K in E} · i
    ≡ (* assume K can equally well be at any index if it occurs in E *)
      Asucc(n) = Σ_{i=1}^{n} (1/n) · i = (1/n) · Σ_{i=1}^{n} i
    ≡ (* standard series *)
      Asucc(n) = (1/n) · ((n+1)/2) · n = (n+1)/2


Average case complexity for linear search – 3

Putting the results together yields:

A(n) = Pr{K in E} · (n+1)/2 + (1 − Pr{K in E}) · n

Note that if Pr{K in E} equals

• 1, then A(n) = (n+1)/2 = Asucc(n)    ≈ 50% of E checked

• 0, then A(n) = n = W(n)              E is entirely checked

• 1/2, then A(n) = 3n/4 + 1/4          ≈ 75% of E checked


Asymptotic analysis

Exactly determining A(n), B(n) and W(n) is very hard, and not so useful.

Typically no exact analysis, but asymptotic analysis:

• look at the growth of the execution time for n → ∞

• thus ignoring small inputs and constant factors

• intuition: drop lower order terms, e.g.,
  W(n) = 5n⁴ + 3n³ + 10 is like n⁴

• thus, we obtain lower/upper bounds on A(n), B(n) and W(n)

• mathematical ingredient: asymptotic order of functions (classes O, Ω and Θ)


Classes O, Ω and Θ – 1
[Figure: three plots of run time versus input size.
 O: g(n) stays below c·f(n) for n > n₀.
 Ω: g(n) stays above c·f(n) for n > n₀.
 Θ: g(n) stays between c·f(n) and c′·f(n) for n > n₀.]

Classes O, Ω and Θ – 2

O(f): functions that grow no faster than f
Θ(f): functions that grow at the same rate as f
Ω(f): functions that grow at least as fast as f


Classes O, Ω and Θ – 3
Let f and g be functions from ℕ (input size) to ℝ>0 (step count), let c > 0, and n₀ > 0.

• O(f) is the set of functions that grow no faster than f

  – g ∈ O(f) means c·f(n) is an upper bound on g(n) for n > n₀

• Ω(f) is the set of functions that grow at least as fast as f

  – g ∈ Ω(f) means c·f(n) is a lower bound on g(n) for n > n₀

• Θ(f) is the set of functions that grow at the same rate as f

  – g ∈ Θ(f) means c′·f(n) is an upper bound on g(n), and c·f(n) is a lower bound on g(n), for n > n₀


The class big-oh

Formally, g ∈ O(f) iff ∃c > 0, n₀ such that ∀n > n₀ : 0 ≤ g(n) ≤ c·f(n)

Handy alternative:
g ∈ O(f) if lim_{n→∞} g(n)/f(n) ≠ ∞

Example: consider g(n) = 3n² + 10n + 6. We have:

• g ∉ O(n)  since lim_{n→∞} g(n)/n = ∞

• g ∈ O(n²) since lim_{n→∞} g(n)/n² = 3

• g ∈ O(n³) since lim_{n→∞} g(n)/n³ = 0

Tightest upper bounds are of most use! g ∈ O(n²) says more than g ∈ O(n³)
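The limit criterion can be explored numerically before doing it by hand: if the ratio g(n)/f(n) settles on a constant, blows up, or vanishes as n grows, that suggests (but does not prove) the classification. A small sketch:

```python
g = lambda n: 3 * n ** 2 + 10 * n + 6

for name, f in [("n", lambda n: n), ("n^2", lambda n: n ** 2), ("n^3", lambda n: n ** 3)]:
    ratios = [g(n) / f(n) for n in (10 ** 3, 10 ** 6, 10 ** 9)]
    print(name, ratios)
# n  : the ratio keeps growing  -> g not in O(n)
# n^2: the ratio tends to 3     -> g in O(n^2) (and in fact Theta(n^2))
# n^3: the ratio tends to 0     -> g in O(n^3), but not in Omega(n^3)
```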


The class big-omega

Formally, g ∈ Ω(f) iff ∃c > 0, n₀ such that ∀n > n₀ : c·f(n) ≤ g(n)

Handy alternative:
g ∈ Ω(f) if lim_{n→∞} g(n)/f(n) ≠ 0

Example: consider g(n) = 3n² + 10n + 6. We have:

• g ∈ Ω(n)  since lim_{n→∞} g(n)/n = ∞

• g ∈ Ω(n²) since lim_{n→∞} g(n)/n² = 3

• g ∉ Ω(n³) since lim_{n→∞} g(n)/n³ = 0

Tightest lower bounds are of most use! g ∈ Ω(n²) says more than g ∈ Ω(n)


The class big-theta

g ∈ Θ(f) iff ∃c, c′ > 0, n₀ such that ∀n > n₀ : c·f(n) ≤ g(n) ≤ c′·f(n)

Handy alternative: g ∈ Θ(f) if lim_{n→∞} g(n)/f(n) = c for some 0 < c < ∞

• recall g ∈ Θ(f) if and only if g ∈ O(f) and g ∈ Ω(f)

Example: consider g(n) = 3n² + 10n + 6. We have:

• g ∉ Θ(n)  since lim_{n→∞} g(n)/n = ∞

• g ∈ Θ(n²) since lim_{n→∞} g(n)/n² = 3

• g ∉ Θ(n³) since lim_{n→∞} g(n)/n³ = 0


Help in finding limits

Note that
if f and g are differentiable, and
both lim_{n→∞} f(n) = ∞ and lim_{n→∞} g(n) = ∞,
then lim_{n→∞} g(n)/f(n) = lim_{n→∞} g′(n)/f′(n)

This is l'Hôpital's rule


Some elementary properties


• Reflexivity:
– f ∈ O(f )
– f ∈ Ω(f )
– f ∈ Θ(f )
• Transitivity:
– f ∈ O(g) and g ∈ O(h) imply f ∈ O(h)
– f ∈ Ω(g) and g ∈ Ω(h) imply f ∈ Ω(h)
– f ∈ Θ(g) and g ∈ Θ(h) imply f ∈ Θ(h)
• Symmetry:
– f ∈ Θ(g) if and only if g ∈ Θ(f )
• Relation between O and Ω:
– f ∈ O(g) if and only if g ∈ Ω(f )


Fibonacci numbers
Consider the growth of a rabbit population, e.g.:
• suppose we have two rabbits, one of each sex
• rabbits have bunnies once a month after they are 2 months old
• they always give birth to twins, one of each sex
• they never die and never stop propagating

The # (pairs of) rabbits after n months is computed by:

Fib(0)   = 0
Fib(1)   = 1
Fib(n+2) = Fib(n+1) + Fib(n)   for n ≥ 0

We thus obtain the sequence:


n 0 1 2 3 4 5 6 7 8 9 ...
Fib (n) 0 1 1 2 3 5 8 13 21 34 ...


Fibonacci – A naive algorithm


def fibRec(n):
    if (n == 0) or (n == 1):
        return n
    else:
        return fibRec(n-2) + fibRec(n-1)

The # arithmetic steps T_fibRec(n) needed to compute fibRec(n) is:

T_fibRec(0)   = 0
T_fibRec(1)   = 0
T_fibRec(n+2) = T_fibRec(n+1) + T_fibRec(n) + 3   for n ≥ 0

This is a recurrence equation!

To determine the complexity class of fibRec we solve this equation


Fibonacci – Counting steps for the naive algorithm

T_fibRec(0)   = 0
T_fibRec(1)   = 0
T_fibRec(n+2) = T_fibRec(n+1) + T_fibRec(n) + 3   for n ≥ 0

Using induction we can prove (check!) that T_fibRec(n) = 3·Fib(n+1) − 3

How large is Fib(n)? Can we give upper/lower bounds?

It follows (by induction) that 2^((n−2)/2) ≤ Fib(n) for n ≥ 2

But this means that the complexity of fibRec is exponential:

T_fibRec(n) ∈ Ω((√2)ⁿ)
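Both the closed form and the exponential lower bound can be checked mechanically (a sketch; fib and t_fib_rec are hypothetical helpers that evaluate Fib and the recurrence for T_fibRec bottom-up):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def t_fib_rec(n):
    """Step count of fibRec, evaluated from its recurrence."""
    if n <= 1:
        return 0
    t = [0, 0]
    for i in range(2, n + 1):
        t.append(t[i - 1] + t[i - 2] + 3)
    return t[n]

# closed form T(n) = 3*Fib(n+1) - 3, and the bound 2^((n-2)/2) <= Fib(n)
assert all(t_fib_rec(n) == 3 * fib(n + 1) - 3 for n in range(30))
assert all(2 ** ((n - 2) / 2) <= fib(n) for n in range(2, 30))
print(t_fib_rec(30))  # 4038804: millions of steps already at n = 30
```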


Fibonacci revisited – An iterative algorithm


def fibIter(n):
    if (n == 0) or (n == 1):
        return n
    else:
        a = 0
        b = 1
        for i in range(2, n+1):
            a, b = b, a+b
        return b

The # arithmetic steps: T_fibIter(n+2) = n+1 for n ≥ 0, and 0 otherwise.
So the complexity of fibIter is linear:

T_fibIter(n) ∈ Θ(n)


General problem – Solving recurrence equations


For simple cases closed-form solutions exist, e.g.:

• T(n) = c·T(n−1) with T(0) = k has the unique solution T(n) = k·cⁿ

• T(n) = T(n−1) + f(n) has the unique solution T(n) = T(0) + Σ_{i=1}^{n} f(i)

Sometimes solutions can be guessed (and then proven). For the general case no closed-form solution exists.

Typical case:

T(n) = a·T(n/b) + f(n)   with a ≥ 1 and b > 1

• the problem is divided into a similar problems of size n/b each

• non-recursive cost f(n) to split the problem or combine the solutions of sub-problems
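Both closed forms are easy to sanity-check on a concrete instance (a sketch with arbitrarily chosen constants c, k and cost function f):

```python
c, k = 3, 2
T1 = [k]                                # T(n) = c*T(n-1), T(0) = k
for n in range(1, 11):
    T1.append(c * T1[n - 1])
assert all(T1[n] == k * c ** n for n in range(11))

f = lambda n: n * n                     # some non-recursive cost
T2 = [5]                                # T(n) = T(n-1) + f(n), T(0) = 5
for n in range(1, 11):
    T2.append(T2[n - 1] + f(n))
assert all(T2[n] == T2[0] + sum(f(i) for i in range(1, n + 1)) for n in range(11))
print("both closed forms hold")
```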


The recursion tree method – 1


Visualize the back-substitution process as a tree, keeping track of

• the size of the remaining arguments in the recurrence

• the nonrecursive costs

⇒ very useful to obtain a good guess for the substitution method

The recursion tree of T(n) = 3·T(n/4) + n looks like (each node shows the remaining argument of the recurrence and its nonrecursive cost):

level 0:  T(n), cost n
level 1:  3 nodes T(n/4), cost n/4 each
level 2:  9 nodes T(n/16), cost n/16 each
...

Intermezzo: Recursion tree properties

T(n) = a·T(n/b) + f(n)   with a ≥ 1 and b > 1

The level of the recursion tree for T(n) is the least k such that b^k ≥ n.
So the level of this recursion tree is ⌈log_b n⌉ = ⌈log n / log b⌉.

The number of nodes at level m in the recursion tree for T(n) is a^m.

The number of leaves in the recursion tree for T(n) is
a^(log n / log b) = n^(log a / log b).

(Calculus: log_x y = log_z y / log_z x and x^(log_z y) = y^(log_z x))


The recursion tree method – 2


For T(n) = 3·T(n/4) + n the costs per level of the recursion tree are:

level 0:  n
level 1:  3 · n/4  = 3n/4
level 2:  9 · n/16 = 9n/16
...
depth:    log₄ n, ending in n^(log₄ 3) leaves T(1)

T(n) = Σ_{i=0}^{log₄ n − 1} (3/4)^i · n  +  c·n^(log₄ 3)
       (sum of the level costs)            (total cost of the leaves)

The recursion tree method – 3


An upper bound on the complexity can now be obtained by:

  T(n) = Σ_{i=0}^{log₄ n − 1} (3/4)^i · n + c·n^(log₄ 3)
⇒ (* bound the finite sum by the infinite series *)
  T(n) < Σ_{i=0}^{∞} (3/4)^i · n + c·n^(log₄ 3)
≡ (* geometric series *)
  T(n) < 1/(1 − 3/4) · n + c·n^(log₄ 3)
≡ (* calculus *)
  T(n) < 4·n + c·n^(log₄ 3)
⇒ (* obtain the order, using log₄ 3 < 1 *)
  T(n) ∈ O(n)

The algorithm thus has a linear worst-case complexity
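The O(n) bound can be confirmed numerically (a sketch assuming the base case T(1) = 1 and n a power of 4):

```python
def T(n):
    """T(n) = 3*T(n/4) + n, T(1) = 1, for n a power of 4."""
    if n <= 1:
        return 1
    return 3 * T(n // 4) + n

# Unrolling gives the exact solution 4^(k+1) - 3^(k+1) at n = 4^k,
# so T(n)/n climbs towards, but never exceeds, the constant 4.
for k in range(1, 11):
    print(4 ** k, T(4 ** k) / 4 ** k)
```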


The Master theorem


Consider: T(n) = a·T(n/b) + f(n) with a ≥ 1 and b > 1

• the problem is divided into a similar problems of size n/b each

• non-recursive cost f(n) to split the problem or combine the solutions of sub-problems

• we saw that the # leaves in the recursion tree is n^E with E = log a / log b

The Master theorem says:

   if                                          then
1. f(n) ∈ O(n^(E−ε)) for some ε > 0            T(n) ∈ Θ(n^E)
2. f(n) ∈ Θ(n^E)                               T(n) ∈ Θ(f(n)·log n)
3. f(n) ∈ Ω(n^(E+ε)) for some ε > 0
   and a·f(n/b) ≤ c·f(n) for some c < 1        T(n) ∈ Θ(f(n))

If none of the cases apply, the Master theorem gives no clue!


Applying the Master theorem


T(n) = 4·T(n/2) + n

• so a = 4, b = 2 and f(n) = n; E = log 4 / log 2 = 2

• since f(n) = n ∈ O(n^(2−ε)), case 1 applies: T(n) ∈ Θ(n²)

T(n) = 4·T(n/2) + n²

• so a = 4, b = 2 and f(n) = n²; E = log 4 / log 2 = 2

• since n² ∉ O(n^(2−ε)), case 1 does not apply

• but since f(n) = n² ∈ Θ(n²), case 2 applies: T(n) ∈ Θ(n²·log n)

T(n) = 4·T(n/2) + n³

• so a = 4, b = 2 and f(n) = n³; E = log 4 / log 2 = 2

• clearly, cases 1 and 2 do not apply since E = 2

• since f(n) = n³ ∈ Ω(n^(2+ε)) for ε = 1, case 3 might apply

• ... and as 4·(n/2)³ = n³/2 ≤ c·f(n) for c = 1/2, case 3 applies:
  T(n) ∈ Θ(n³)

The Master theorem does not always apply


T(n) = 4·T(n/2) + n²/log n

• we have a = 4, b = 2 and f(n) = n²/log n; E = 2

• n²/log n ∉ O(n^(2−ε)) since f(n)/n² = (log n)⁻¹ ∉ O(n^(−ε))
  ⇒ case 1 of the Master theorem does not apply

• n²/log n ∉ Θ(n²)
  ⇒ case 2 of the Master theorem does not apply

• f(n) ∉ Ω(n^(2+ε)) since f(n)/n² = (log n)⁻¹ ∉ Ω(n^ε)
  ⇒ case 3 of the Master theorem does not apply

⇒ The Master theorem does not apply to this case at all!

By substitution one obtains: T(n) ∈ Θ(n²·log(log n))
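The Θ(n²·log(log n)) solution can be made plausible numerically (a sketch; the base case T(2) = 1 and the restriction to powers of 2 are assumptions for the experiment):

```python
import math

def T(n):
    """T(n) = 4*T(n/2) + n^2/log2(n), T(2) = 1, for n a power of 2."""
    if n <= 2:
        return 1.0
    return 4 * T(n // 2) + n * n / math.log2(n)

# T(n) / (n^2 * log2(log2 n)) should level off at a constant
for k in (8, 16, 32, 64):
    n = 2 ** k
    print(k, T(n) / (n * n * math.log2(k)))   # ratios hover around 0.66
```

The ratio creeps up only very slowly (its limit is ln 2 ≈ 0.69), which is typical for log log factors.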


Understanding the Master theorem – 1


Consider: T(n) = a·T(n/b) + f(n) with a ≥ 1 and b > 1

Suppose f(n) is small enough, i.e., f(n) << n^E

• to get a feeling, let f(n) = 0; then

• only the costs at the leaves will count!

• there will be log_b n levels in the recursion tree

• the # nodes at level k equals a^k

⇒ T(n) = a^(log_b n) = n^E

This yields case 1 of the Master theorem

f(n) < n^E is insufficient; f(n) must be polynomially smaller than n^E (by a factor n^ε)


Understanding the Master theorem – 2

Consider: T(n) = a·T(n/b) + f(n) with a ≥ 1 and b > 1

Suppose f(n) is large enough, i.e., f(n) >> n^E

• if f(n) is large enough, it will exceed a·f(n/b)

• for instance, with b = 3 and f(n) = n³: a·f(n/3) = a·(n/3)³ = (a/27)·n³ < n³ unless a ≥ 27

This yields case 3 of the Master theorem


Understanding the Master theorem – 3

If f(n) and n^E grow equally fast, the depth of the tree is important

• this perfect balance is what one wants to achieve with divide-and-conquer algorithms!

This yields case 2 of the Master theorem
