ASYMPTOTIC NOTATION

The lecture covers the correctness of algorithms, focusing on partial and total correctness, assertions, and loop invariants. It introduces asymptotic analysis and notation, including big-Oh, big-Omega, and big-Theta, to evaluate algorithm efficiency. The divide and conquer method is exemplified through the MergeSort algorithm, detailing its steps of dividing, conquering, and combining.


Algorithms and Data Structures
Lecture II

Simonas Šaltenis
Nykredit Center for Database Research
Aalborg University
[email protected]
This Lecture
- Correctness of algorithms
- Growth of functions and asymptotic notation
- Some basic math revisited
- Divide and conquer example – the merge sort
Correctness of Algorithms
- An algorithm is correct if, for any legal input, it terminates and produces the desired output.
- Automatic proof of correctness is not possible in general.
- But there are practical techniques and rigorous formalisms that help to reason about the correctness of algorithms.
Partial and Total Correctness
- Partial correctness:
  IF this point is reached, THEN this is the desired output.
  Any legal input → Algorithm → Output
- Total correctness:
  INDEED this point is reached, AND this is the desired output.
  Any legal input → Algorithm → Output
Assertions
- To prove partial correctness we associate a number of assertions (statements about the state of the execution) with specific checkpoints in the algorithm.
  - E.g., A[1], ..., A[k] form an increasing sequence
- Preconditions – assertions that must be valid before the execution of an algorithm or a subroutine
- Postconditions – assertions that must be valid after the execution of an algorithm or a subroutine
Loop Invariants
- Invariants – assertions that are valid every time they are reached (many times during the execution of an algorithm, e.g., in loops)
- We must show three things about a loop invariant:
  - Initialization – it is true prior to the first iteration
  - Maintenance – if it is true before an iteration, it remains true before the next iteration
  - Termination – when the loop terminates, the invariant gives a useful property to show the correctness of the algorithm
Example of Loop Invariants (1)
- Invariant: at the start of each iteration of the for loop, A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.

  for j = 2 to length(A)
      key = A[j]
      i = j - 1
      while i > 0 and A[i] > key
          A[i+1] = A[i]
          i = i - 1
      A[i+1] = key
Example of Loop Invariants (2)
- Invariant: at the start of each iteration of the for loop, A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.
- Initialization: j = 2, so the invariant trivially holds, because A[1..1] is a sorted one-element array.
Example of Loop Invariants (3)
- Invariant: at the start of each iteration of the for loop, A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.
- Maintenance: the inner while loop moves elements A[j-1], A[j-2], ..., A[j-k] one position to the right without changing their order. Then the former A[j] element is inserted into the k-th position, so that A[k-1] ≤ A[k] ≤ A[k+1].
  A[1..j-1] sorted + A[j]  →  A[1..j] sorted
Example of Loop Invariants (4)
- Invariant: at the start of each iteration of the for loop, A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.
- Termination: the loop terminates when j = n+1, where n = length(A). Then the invariant states: "A[1..n] consists of the elements originally in A[1..n], but in sorted order."
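The loop-invariant argument above can be exercised directly. Below is a minimal Python sketch of the insertion sort from the slides (0-based indices, so A[0..j-1] plays the role of the slides' A[1..j-1]); the invariant is checked with an assertion at the start of every outer iteration, against a snapshot of the original input.

```python
def insertion_sort(A):
    orig = list(A)  # snapshot, so the invariant can be checked against it
    for j in range(1, len(A)):
        # Invariant: A[0..j-1] is exactly orig[0..j-1], in sorted order.
        assert A[:j] == sorted(orig[:j])
        key = A[j]
        i = j - 1
        # Shift larger elements one position to the right...
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        # ...and insert the former A[j] into the gap.
        A[i + 1] = key
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

If any iteration violated the invariant, the assertion would fail; since it never does and the loop terminates at j = len(A), the whole array ends up sorted, mirroring the termination argument above.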

Asymptotic Analysis
- Goal: to simplify the analysis of running time by getting rid of "details", which may be affected by the specific implementation and hardware
  - like "rounding": 1,000,001 ≈ 1,000,000
  - 3n^2 ≈ n^2
- Capturing the essence: how the running time of an algorithm increases with the size of the input, in the limit
  - Asymptotically more efficient algorithms are best for all but small inputs
Asymptotic Notation
- The "big-Oh" O-notation
  - asymptotic upper bound
  - f(n) = O(g(n)) if there exist constants c > 0 and n0, s.t. f(n) ≤ c·g(n) for n ≥ n0
  - f(n) and g(n) are functions over non-negative integers
  - Used for worst-case analysis
  [Figure: running time vs. input size; f(n) stays below c·g(n) for all n ≥ n0]
Asymptotic Notation (2)
- The "big-Omega" Ω-notation
  - asymptotic lower bound
  - f(n) = Ω(g(n)) if there exist constants c > 0 and n0, s.t. c·g(n) ≤ f(n) for n ≥ n0
  - Used to describe best-case running times or lower bounds of algorithmic problems
    - E.g., the lower bound of searching in an unsorted array is Ω(n).
  [Figure: running time vs. input size; f(n) stays above c·g(n) for all n ≥ n0]
September 17, 2001
Asymptotic Notation (3)
- Simple rule: drop lower-order terms and constant factors.
  - 50 n log n is O(n log n)
  - 7n - 3 is O(n)
  - 8n^2 log n + 5n^2 + n is O(n^2 log n)
- Note: even though 50 n log n is also O(n^5), it is expected that such an approximation be of as small an order as possible
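Why lower-order terms may be dropped can be seen numerically: the ratio of the third example above to its bound approaches the constant factor 8 as n grows (the sample values of n are chosen arbitrarily for illustration).

```python
import math

def ratio(n):
    """Ratio of f(n) = 8 n^2 log n + 5 n^2 + n to g(n) = n^2 log n."""
    f = 8 * n**2 * math.log2(n) + 5 * n**2 + n
    g = n**2 * math.log2(n)
    return f / g

# The ratio shrinks toward the leading constant 8 as n grows:
for n in (10, 1_000, 1_000_000):
    print(n, round(ratio(n), 3))
```

For small n the lower-order terms still contribute noticeably, which is exactly why asymptotic bounds say nothing about small inputs.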
Asymptotic Notation (4)
- The "big-Theta" Θ-notation
  - asymptotically tight bound
  - f(n) = Θ(g(n)) if there exist constants c1, c2 > 0 and n0, s.t. c1·g(n) ≤ f(n) ≤ c2·g(n) for n ≥ n0
  - f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n))
  - O(f(n)) is often misused instead of Θ(f(n))
  [Figure: running time vs. input size; f(n) stays between c1·g(n) and c2·g(n) for all n ≥ n0]
Asymptotic Notation (5)
- Two more asymptotic notations
  - "Little-oh" notation f(n) = o(g(n)): a non-tight analogue of big-Oh
    - For every c > 0, there should exist n0, s.t. f(n) ≤ c·g(n) for n ≥ n0
    - Used for comparisons of running times. If f(n) = o(g(n)), it is said that g(n) dominates f(n).
  - "Little-omega" notation f(n) = ω(g(n)): a non-tight analogue of big-Omega
Asymptotic Notation (6)
- Analogy with real numbers
  - f(n) = O(g(n))  ≈  f ≤ g
  - f(n) = Ω(g(n))  ≈  f ≥ g
  - f(n) = Θ(g(n))  ≈  f = g
  - f(n) = o(g(n))  ≈  f < g
  - f(n) = ω(g(n))  ≈  f > g
- Abuse of notation: f(n) = O(g(n)) actually means f(n) ∈ O(g(n))
A Quick Math Review
- Geometric progression
  - given an integer n ≥ 0 and a real number 0 < a, a ≠ 1:
    Σ_{i=0..n} a^i = 1 + a + a^2 + ... + a^n = (1 - a^(n+1)) / (1 - a)
  - geometric progressions exhibit exponential growth
- Arithmetic progression
  - Σ_{i=0..n} i = 1 + 2 + 3 + ... + n = (n^2 + n) / 2
A Quick Math Review (2)
- Summations
- The running time of insertion sort is determined by a nested loop:

  for j = 2 to length(A)
      key = A[j]
      i = j - 1
      while i > 0 and A[i] > key
          A[i+1] = A[i]
          i = i - 1
      A[i+1] = key

- Nested loops correspond to summations:
  Σ_{j=2..n} (j - 1) = O(n^2)
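The summation can be observed directly by instrumenting the inner loop. This sketch (0-based indices; the counter is an addition of mine for illustration) counts how often the inner loop body runs; on a reverse-sorted input of size n it runs exactly Σ_{j=2..n} (j-1) = n(n-1)/2 times, the worst case behind the O(n^2) bound.

```python
def shift_count(A):
    """Insertion-sort A (a copy) and return how many element shifts occur."""
    A = list(A)
    shifts = 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            shifts += 1  # one inner-loop iteration = one shift
        A[i + 1] = key
    return shifts

n = 100
# Reverse-sorted input forces the maximum: 1 + 2 + ... + (n-1) shifts.
assert shift_count(range(n, 0, -1)) == n * (n - 1) // 2
print("worst case performs n(n-1)/2 shifts")
```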
Proof by Induction
- We want to show that property P is true for all integers n ≥ n0
- Basis: prove that P is true for n0
- Inductive step: prove that if P is true for all k such that n0 ≤ k ≤ n - 1, then P is also true for n
- Example:
  S(n) = Σ_{i=0..n} i = n(n+1)/2  for n ≥ 1
- Basis:
  S(1) = Σ_{i=0..1} i = 1 = 1·(1+1)/2
Proof by Induction (2)
- Inductive step: assume
  S(k) = Σ_{i=0..k} i = k(k+1)/2  for 1 ≤ k ≤ n - 1
  Then
  S(n) = Σ_{i=0..n} i = Σ_{i=0..n-1} i + n = S(n-1) + n
       = (n-1)n/2 + n = (n^2 - n + 2n)/2 = n(n+1)/2
Divide and Conquer
- Divide and conquer method for algorithm design:
  - Divide: if the input size is too large to deal with in a straightforward manner, divide the problem into two or more disjoint subproblems
  - Conquer: use divide and conquer recursively to solve the subproblems
  - Combine: take the solutions to the subproblems and "merge" these solutions into a solution for the original problem
MergeSort: Algorithm
- Divide: if S has at least two elements (nothing needs to be done if S has zero or one element), remove all the elements from S and put them into two sequences, S1 and S2, each containing about half of the elements of S (i.e., S1 contains the first ⌈n/2⌉ elements and S2 contains the remaining ⌊n/2⌋ elements).
- Conquer: sort sequences S1 and S2 using MergeSort.
- Combine: put the elements back into S by merging the sorted sequences S1 and S2 into one sorted sequence.
Merge Sort: Algorithm

  Merge-Sort(A, p, r)
      if p < r then
          q = ⌊(p+r)/2⌋
          Merge-Sort(A, p, q)
          Merge-Sort(A, q+1, r)
          Merge(A, p, q, r)

  Merge(A, p, q, r)
      Take the smallest of the two topmost elements of sequences
      A[p..q] and A[q+1..r] and put it into the resulting sequence.
      Repeat this until both sequences are empty. Copy the resulting
      sequence into A[p..r].
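The pseudocode above translates directly into a runnable sketch. This version uses 0-based Python indices (p..r inclusive) and implements Merge with two auxiliary lists; the tie-breaking `<=` keeps the sort stable.

```python
def merge(A, p, q, r):
    """Merge sorted runs A[p..q] and A[q+1..r] back into A[p..r]."""
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take the smaller topmost element of the two sequences;
        # fall back to the other run once one is exhausted.
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2          # floor of (p+r)/2
        merge_sort(A, p, q)       # conquer left half
        merge_sort(A, q + 1, r)   # conquer right half
        merge(A, p, q, r)         # combine

data = [5, 2, 4, 6, 1, 3]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 6]
```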
MergeSort (Example)
[Slides 1-22: a step-by-step animation of MergeSort running on a sample array; figures not reproduced here.]
Recurrences
- Recursive calls in algorithms can be described using recurrences
- A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs
- Example: Merge Sort
  T(n) = Θ(1)               if n ≤ 1
  T(n) = 2T(⌊n/2⌋) + Θ(n)   if n > 1
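Before solving the recurrence analytically, it can be unrolled numerically. The sketch below treats the two Θ terms as exactly 1 and n (constants chosen for illustration, not from the slides); for powers of two, the values it produces match n·log2(n) + n.

```python
import math

def T(n):
    """Merge Sort recurrence with Theta(1) taken as 1 and Theta(n) as n."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For n a power of two, T(n) = n*log2(n) + n under these constants:
for k in (4, 8, 16):
    n = 2**k
    print(n, T(n), int(n * math.log2(n) + n))
```

This suggests the Θ(n log n) solution derived formally in the next lecture.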
Next Week
 Solving recurrences
