Algorithms and C coding for informatics olympiads.

The document provides an overview of mathematical concepts relevant to object-oriented programming and algorithm analysis, including exponents, logarithms, recursive definitions, function growth, and proofs. It discusses various proof techniques such as direct proof, induction, counterexample, contradiction, and contraposition, along with Big-O notation and its applications in analyzing algorithm complexity. The document emphasizes classifying functions by their asymptotic growth and provides examples to illustrate these concepts.

CMPE 160: Introduction to

Object Oriented
Programming

Algorithm
Analysis

These are the slides of the textbook by Mark Allen Weiss,
modified/corrected as necessary.
MATHEMATICAL REVIEW
Mathematical Review
• Exponents
• Logarithms
• Recursive Definitions
• Function Growth
• Proofs
Exponents
• X^0 = 1 by definition
• X^a · X^b = X^(a+b)
• X^a / X^b = X^(a-b)
• (X^a)^b = X^(ab)

Exercise: Show that X^(-n) = 1 / X^n
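The "show that" exercise above follows in one line from the product rule together with X^0 = 1:

```latex
X^{-n} \cdot X^{n} = X^{-n+n} = X^{0} = 1
\quad\Longrightarrow\quad
X^{-n} = \frac{1}{X^{n}} \qquad (X \neq 0)
```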
Logarithms
• log_a X = Y  ⇔  a^Y = X,  for a > 0, a ≠ 1, X > 0
  e.g.: log_2 8 = 3, since 2^3 = 8
• log_a 1 = 0 because a^0 = 1

Conventions used here:
  log X means log_2 X
  lg X means log_10 X
  ln X means log_e X, where e ≈ 2.718 is the base of the natural logarithm
Logarithms
• log_a(XY) = log_a X + log_a Y
• log_a(X/Y) = log_a X – log_a Y
• log_a(X^n) = n · log_a X
• log_a b = (log_2 b) / (log_2 a)   (change of base)
• a^(log_a x) = x
Recursive Definitions
• Basic idea: To define objects, processes, and properties in terms of
  simpler objects, simpler processes, or properties of simpler
  objects/processes.
Recursive Definitions
• Terminating rule: defining the object explicitly.
• Recursive rules: defining the object in terms of a simpler object.
Example
• Factorial
f(n) = n!

f(0) = 1 i.e. 0! = 1
f(n) = n * f(n-1)
i.e. n! = n * (n-1)!
Example
• Fibonacci numbers
F(0) = 1
F(1) = 1
F(k+1) = F(k) + F(k-1)

1, 1, 2, 3, 5, 8, …
Function Growth
• lim n = ∞, n → ∞
• lim n^a = ∞, n → ∞, for a > 0
• lim 1/n = 0, n → ∞
• lim 1/n^a = 0, n → ∞, for a > 0
• lim log n = ∞, n → ∞
• lim a^n = ∞, n → ∞, for a > 1
Function Growth
• lim (f(x) + g(x)) = lim f(x) + lim g(x)
• lim (f(x) * g(x)) = lim f(x) * lim g(x)
• lim (f(x) / g(x)) = lim f(x) / lim g(x)
  (provided the limits on the right exist and the denominator is nonzero)
• lim (f(x) / g(x)) = lim (f '(x) / g '(x))
  (L'Hôpital's rule, when f and g both tend to 0 or both tend to ∞)

Example
• lim (n / n^2) = 0, n → ∞
• lim (n^2 / n) = ∞, n → ∞
• lim (n^2 / n^3) = 0, n → ∞
• lim (n^3 / n^2) = ∞, n → ∞
• lim (n / ((n+1)/2)) = 2, n → ∞
Some Derivatives
• (log_a n)' = (1/n) · log_a e
• (a^n)' = a^n · ln a
Proofs
• Direct proof
• Proof by induction
• Proof by counterexample
• Proof by contradiction
• Proof by contraposition
Direct Proof
• Based on the definition of the object/property.

Example:
Prove that if a number is divisible by 6, then it is divisible by 2.

Proof: Let m be divisible by 6.
Then there exists q such that m = 6q.
Since 6 = 2·3,
m = 6q = 2·3·q = 2r, where r = 3q.
Therefore m is divisible by 2.
Proof by Induction
• We use proof by induction when our claim concerns a sequence of cases
  that can be numbered.

• Inductive base:
  – Show that the claim is true for the smallest case, usually k = 0 or k = 1.

• Inductive step:
  – Assume that the claim is true for some k (the inductive hypothesis).
  – Prove that the claim is then true for k+1.
Example of Proof by
Induction
Prove by induction that
S(N) = Σ_{i=0..N} 2^i = 2^(N+1) – 1, for any integer N ≥ 0

1. Inductive base
Let N = 0. S(0) = 2^0 = 1.
On the other hand, by the formula, S(0) = 2^(0+1) – 1 = 1.
Therefore the formula is true for N = 0.

2. Inductive step
Assume that S(k) = 2^(k+1) – 1.
We have to show that S(k+1) = 2^(k+2) – 1.
By the definition of S(n):
S(k+1) = S(k) + 2^(k+1) = 2^(k+1) – 1 + 2^(k+1)
       = 2 · 2^(k+1) – 1 = 2^(k+2) – 1
Proof by Counterexample
• Used when we want to prove that a statement is false.
• Type of statement: a claim that refers to all members of a class.

• Example: The statement "all odd numbers are prime" is false.
• A counterexample is the number 9: it is odd and it is not prime.
Proof by Contradiction
• Assume that the statement is false, i.e. its negation is true.
• Show that the assumption implies that some known property is false;
  this is the contradiction.
• Example: Prove that there is no largest prime number.
  (Sketch: assume p1, ..., pk are all the primes; then p1·p2·...·pk + 1
  is divisible by none of them, so some prime is missing from the list,
  a contradiction.)
Proof by Contraposition
Used when we have to prove a statement of the form P ⇒ Q.
Instead of proving P ⇒ Q, we prove its equivalent ~Q ⇒ ~P.

Example: Prove that if the square of an integer is odd, then the
integer is odd.
Using direct proof, we can prove the contrapositive statement:
If an integer is even, then its square is even.
BIG-OH AND OTHER
NOTATIONS IN
ALGORITHM ANALYSIS
Good News / Bad News
• Good news: You now know OOP
• Bad news: You need to learn good
programming
Example
• Question: Given a group of N numbers, how do you find the kth largest?
  – Sol #1: Read the numbers into an array, sort it, return the kth entry.
  – Sol #2: Use an array of size k and, while reading, always keep the
    largest k values seen so far; return the kth.
• Problem: What if N = 1,000,000 and k = 500,000?
Big-Oh and Other Notations in
Algorithm Analysis
• Classifying Functions by Their
Asymptotic Growth
• Theta, Little oh, Little omega
• Big Oh, Big Omega
• Rules to manipulate Big-Oh
expressions
• Typical Growth Rates
Classifying Functions by Their
Asymptotic Growth
• Asymptotic growth: The rate of growth
of a function

• Given a particular differentiable


function f(n), all other differentiable
functions fall into three classes:

– growing with the same rate


– growing faster
– growing slower
Theta
f(n) and g(n) have the same rate of growth if

lim( f(n) / g(n) ) = c,  0 < c < ∞, n → ∞

Notation: f(n) = Θ( g(n) )
pronounced "theta"
Little oh
f(n) grows slower than g(n)
(or g(n) grows faster than f(n))
if

lim( f(n) / g(n) ) = 0, n → ∞

Notation: f(n) = o( g(n) )


pronounced "little oh"
Little omega
f(n) grows faster than g(n)
(or g(n) grows slower than f(n))
if

lim( f(n) / g(n) ) = ∞, n → ∞

Notation: f(n) = ω (g(n))


pronounced "little omega"
Little omega and Little oh
if g(n) = o( f(n) )
then f(n) = ω( g(n) )

Examples: Compare n and n^2:

lim( n / n^2 ) = 0, n → ∞, so n = o(n^2)
lim( n^2 / n ) = ∞, n → ∞, so n^2 = ω(n)
Theta: Relation of
Equivalence
The relation R: "having the same rate of growth" is an equivalence
relation: it gives a partition of the set of all differentiable
functions into equivalence classes.

Functions in one and the same class are equivalent with respect to
their growth.
Algorithms with Same
Complexity
• Two algorithms have the same complexity if the functions representing
  the number of operations have the same rate of growth.
• Among all functions with the same rate of growth, we choose the
  simplest one to represent the complexity.
Example
Compare n and (n+1)/2:

lim( n / ((n+1)/2) ) = 2, the same rate of growth:

(n+1)/2 = Θ(n), the rate of growth of a linear function
Example
Compare n^2 and n^2 + 6n:

lim( n^2 / (n^2 + 6n) ) = 1, the same rate of growth:

n^2 + 6n = Θ(n^2), the rate of growth of a quadratic function
Example
Compare log n and log n^2:

lim( log n / log n^2 ) = 1/2 (since log n^2 = 2 log n),
the same rate of growth:

log n^2 = Θ(log n), a logarithmic rate of growth
Example
Θ(n^3):   n^3
          5n^3 + 4n
          105n^3 + 4n^2 + 6n

Θ(n^2):   n^2
          5n^2 + 4n + 6
          n^2 + 5

Θ(log n): log n
          log n^2
          log (n + n^3)
Comparing Functions
• same rate of growth: g(n) = Θ(f(n))
• different rates of growth:
  either g(n) = o(f(n)),
    g(n) grows slower than f(n), and hence f(n) = ω(g(n));
  or g(n) = ω(f(n)),
    g(n) grows faster than f(n), and hence f(n) = o(g(n)).
The Big-Oh Notation
f(n) = O(g(n))
if f(n) grows at the same rate as or slower than g(n), i.e.

f(n) = Θ(g(n)) or f(n) = o(g(n))
Example
n + 5 = Θ(n) = O(n) = O(n^2) = O(n^3) = O(n^5)

The closest estimate: n + 5 = Θ(n).

The general practice is to use the Big-Oh notation:
n + 5 = O(n)
The Big-Omega Notation
The inverse of Big-Oh is Ω:

If g(n) = O(f(n)), then f(n) = Ω(g(n)).

f(n) grows faster than or at the same rate as g(n): f(n) = Ω(g(n)).
Rules to manipulate
Big-Oh expressions
Rule 1:
a. If T1(N) = O(f(N)) and T2(N) = O(g(N)), then
   T1(N) + T2(N) = max( O(f(N)), O(g(N)) )
Rules to manipulate
Big-Oh expressions
b. If T1(N) = O(f(N)) and T2(N) = O(g(N)), then
   T1(N) * T2(N) = O( f(N) * g(N) )

Rules to manipulate
Big-Oh expressions
Rule 2:
If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).

Rule 3:
log^k N = O(N) for any constant k,
i.e., logarithms grow very slowly.
Examples
n^2 + n = O(n^2)
we disregard any lower-order term

n·log(n) = O(n·log(n))

n^2 + n·log(n) = O(n^2)
Typical Growth Rates
c         constant, we write O(1)
log N     logarithmic
log^2 N   log-squared
N         linear
N log N
N^2       quadratic
N^3       cubic
2^N       exponential
N!        factorial
Problems
N^2 = O(N^2)  true
2N  = O(N^2)  true
N   = O(N^2)  true

N^2 = O(N)    false
2N  = O(N)    true
N   = O(N)    true
Problems
N^2 = Θ(N^2)  true
2N  = Θ(N^2)  false
N   = Θ(N^2)  false

N^2 = Θ(N)    false
2N  = Θ(N)    true
N   = Θ(N)    true
Running Time Calculations
APPLICATION OF BIG-OH
TO PROGRAM ANALYSIS
Background
• The work done by an algorithm, i.e.
its complexity, is determined by the
number of the basic operations
necessary to solve the problem.
The Task
Determine how the number of operations depends on the size of the input:
  N: size of the input
  F(N): number of operations
Basic operations in an
algorithm
• Problem: Find x in an array
• Operation: Comparison of x with an entry in the array
• Size of input: The number of elements in the array
Basic operations in an
algorithm
Problem: Multiplying two matrices
with real entries

Operation:
Multiplication of two real numbers

Size of input:
The dimensions of the matrices
Basic operations in an
algorithm
Problem: Sort an array of numbers

Operation: Comparison of two array entries, plus moving elements
in the array

Size of input: The number of elements in the array
Counting the number of
operations
A. for loops O(n)
The running time of a for loop is at most
the running time of the statements inside
the loop times the number of iterations.
for loops
sum = 0;
for (i=0; i<n; i++)
    sum = sum + i;

The running time is O(n).


Counting the number of
operations
B. Nested loops
The total running time is the running time
of the inside statements times the product
of the sizes of all the loops.
Nested loops
sum = 0;
for (i=0; i<n; i++)
    for (j=0; j<n; j++)
        sum++;

The running time is O(n^2).


Counting the number of
operations
C. Consecutive program fragments
Total running time:
the maximum of the running time of the
individual fragments
Consecutive program
fragments
sum = 0;                      /* O(n) */
for (i=0; i < n; i++)
    sum = sum + i;

sum = 0;                      /* O(n^2) */
for (i=0; i < n; i++)
    for (j=0; j < 2*n; j++)
        sum++;

The maximum is O(n^2).


Counting the number of
operations
D. If statement
if (C)
    S1;
else
    S2;

The running time is the running time of the test C plus the maximum
of the running times of S1 and S2.
Example

O(n^3)

sum = 0;
for (i=0; i<n; i++)
    for (j=0; j<n*n; j++)
        sum++;
Example

O(n^2)

sum = 0;
for (i=0; i<n; i++)
    for (j=0; j<i; j++)
        sum++;
Example

O(n^3 · log n)

for (j = 0; j < n*n; j++)
    compute_val(j);

The complexity of compute_val(x) is given to be O(n·log n).
Search in an unordered array of
elements

O(n)

for (i=0; i<n; i++)
    if (a[i]==x)
        return 1;
return -1;
Search in a table nxm

O(n*m)

for (i=0; i<n; i++)
    for (j=0; j<m; j++)
        if (a[i][j]==x)
            return 1;
return -1;
LOGARITHMS IN RUNNING
TIME
Logarithms in Running Time
• Binary search
• Euclid’s algorithm
• Exponentials
• Rules to count operations
Divide-and-Conquer
Algorithms
• Successively reducing the problem by a factor of two requires
  O(log N) operations.
Why log N?
• A complete binary tree with N leaves has log N levels.
• Each level in the divide-and-conquer algorithm corresponds to
  an operation.
• Hence, the number of operations is O(log N).

Example: 8 leaves, 3 levels
Binary Search
Solution 1:
Scan all elements from left to right,
each time comparing with X
O(N) operations.
Binary Search
Solution 2: O(log N)

Find the middle element A[mid] of the list and compare it with X:
  If they are equal, stop.
  If X < A[mid], consider the left part.
  If X > A[mid], consider the right part.

Repeat until the list is reduced to one element.


Euclid's Algorithm
Finding the greatest common divisor (GCD):

GCD of M and N (with M > N) = GCD of N and M % N
GCD and Recursion
Recursion:
If M%N = 0 return N
Else return GCD(N, M%N)

The answer is the last nonzero


remainder.
GCD and Recursion
M N rem
24 15 9

15 9 6

9 6 3

6 3 0
Euclid’s Algorithm
(non-recursive implementation)
long gcd(long m, long n) {
    long rem;
    while (n != 0) {
        rem = m % n;
        m = n;
        n = rem;
    }
    return m;
}
Why O(logN)
• M % N <= M / 2
• After the 1st iteration, N appears as the first argument and the
  remainder is less than N/2.
• After the 2nd iteration, the remainder appears as the first argument
  and will again be reduced by a factor of two.
• Hence, O(log N).
Computing XN
X^N = X · (X^2)^(N/2), if N is odd   (integer division N/2)

X^N = (X^2)^(N/2), if N is even
Computing XN
long pow(long x, int n) {
    if (n == 0)
        return 1;
    if (n % 2 == 0)                  /* n is even */
        return pow(x*x, n/2);
    else
        return x * pow(x*x, n/2);
}
Why O(LogN)
If N is odd: two multiplications per step.

The number of multiplications is therefore at most 2·log N: O(log N).
Another recursion for XN
• Another recursive definition reduces the power just by 1:

• X^N = X · X^(N-1)

• Here the number of multiplications is N–1, i.e. O(N), and the
  algorithm is less efficient than the divide-and-conquer algorithm.
How to count operations
• Single statements (not function calls):
constant O(1) = 1

• Sequential fragments: the maximum


of the operations of each fragment
How to count operations
• Single loop running up to N, with
single statements in its body: O(N)

• Single loop running up to N,


with the number of operations in the
body O(f(N)):
O( N * f(N) )
How to count operations
• Two nested loops, each running up to N, with single statements: O(N^2)

• Divide-and-conquer algorithms with input size N: O(log N),
  or O(N·log N) if each step requires additional processing of
  N elements.
Example: What is the probability that two
numbers are relatively prime?

tot = 0; rel = 0;
for (i=0; i<=n; i++)
    for (j=i+1; j<=n; j++) {
        tot++;
        if (gcd(i,j)==1)
            rel++;
    }
return (double) rel / tot;  /* cast needed: integer division would truncate */

Running time = ?