DAA_Unit-1

Uploaded by dharmthumar17

Elementary Algorithmics

Disclaimer: Content in the PPT is prepared from Books, the internet, videos, and blogs
Introduction

What is Analysis of an Algorithm?


✓ Analyzing an algorithm means calculating/predicting the resources that the
algorithm requires.
✓ Analysis provides theoretical estimation for the required resources of an
algorithm to solve a specific computational problem.
✓ Two most important resources are computing time (time complexity) and
storage space (space complexity).
Why is Analysis Required?
✓ By analyzing some of the candidate algorithms for a problem, the most
efficient one can be easily identified.

2
Efficiency of Algorithm
 The efficiency of an algorithm is a measure of the amount of resources consumed in solving a
problem of size 𝑛.
 An algorithm must be analyzed to determine its resource usage.
 Two major computational resources are execution time and memory space.
 Memory space requirements are harder to compare directly, so the primary resource considered is the
computational time required by an algorithm.
 To measure the efficiency of an algorithm requires to measure its execution time using any of
the following approaches:
1. Empirical Approach: To run it and measure how much processor time is needed.
2. Theoretical Approach: Mathematically computing how much time is needed as a function of input size.

3
How Analysis is Done?

Empirical (posteriori) approach
▪ Programming different competing techniques & running them on various inputs using a computer.
▪ Implementation of different techniques may be difficult.
▪ The same hardware and software environments must be used for comparing two algorithms.
▪ Results may not be indicative of the running time on other inputs not included in the experiment.

Theoretical (priori) approach
▪ Determining mathematically the resources needed by each algorithm.
▪ Uses the algorithm instead of an implementation.
▪ The speed of an algorithm can be determined independent of the hardware/software environment.
▪ Characterizes running time as a function of the input size 𝒏, and considers all possible values.

4
Time Complexity
 Time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as
a function of the length of the input.
 Running time of an algorithm depends upon,
1. Input Size
2. Nature of Input
 Generally time grows with the size of input, for example, sorting 100 numbers will take less
time than sorting of 10,000 numbers.
 So, running time of an algorithm is usually measured as a function of input size.
 Instead of measuring actual time required in executing each statement in the code, we
consider how many times each statement is executed.
 So, in theoretical computation of time complexity, running time is measured in terms of
number of steps/primitive operations performed.

5
Linear Search
 Suppose, you are given a jar containing some business cards.
 You are asked to determine whether the name “Bill Gates" is in the jar.
 To do this, you decide to simply go through all the cards one by one.
 How long does this take?
 It can be determined by how many cards are in the jar, i.e., the size of the input.
 Linear search is a method for finding a particular value from the given list.
 The algorithm checks each element, one at a time and in sequence, until the desired element
is found.
 Linear search is the simplest search algorithm.
 It is a special case of brute-force search.

7
Linear Search – Example

Search for 𝟏 in the given array: [2, 9, 3, 1, 8]

Compare the value at the i-th index with the given element, one by one, until the required
element is found or the end of the array is reached.

Step 1: i=1 → A[1] = 2 ≠ 1
Step 2: i=2 → A[2] = 9 ≠ 1
Step 3: i=3 → A[3] = 3 ≠ 1
Step 4: i=4 → A[4] = 1 → element found at index i = 4
8
Linear Search - Algorithm
# Input : Array A, element x
# Output : First index of element x in A or -1 if not found

Algorithm: Linear_Search
for i = 1 to last index of A
if A[i] equals element x
return i
return -1

9
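The pseudocode above can be sketched as a short runnable Python function. Note that the slide's algorithm is 1-indexed, while Python lists are 0-indexed, so the returned index is 0-based.

```python
# Python version of the Linear_Search pseudocode above.
# Returns the first 0-based index of x in A, or -1 if x is absent
# (the slide's 1-based "element found at i=4" is index 3 here).
def linear_search(A, x):
    for i in range(len(A)):
        if A[i] == x:
            return i          # first index where x occurs
    return -1                 # x is not in A

print(linear_search([2, 9, 3, 1, 8], 1))   # → 3 (the 4th position)
```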
Linear Search - Analysis
 The required element in the given array [2, 9, 3, 1, 8, 7] can be found at,
1. E.g. 2: at the first position
Best Case: the minimum number of comparisons is required
2. E.g. 3 or 1: anywhere after the first position
Average Case: an average number of comparisons is required
3. E.g. 7: at the last position, or the element is not found at all
Worst Case: the maximum number of comparisons is required
10
Linear Search - Analysis
 The required element in the given array [2, 9, 3, 1, 8, 7] can be found at,

Case 1: element 2, which is at the first position, so the minimum number of comparisons is required → Best Case
Case 2: element 3, anywhere after the first position, so an average number of comparisons is required → Average Case
Case 3: element 7, at the last position, or an element not found at all, so the maximum number of comparisons is required → Worst Case
11
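The three cases can be made concrete by counting comparisons; a small illustrative sketch (the array and helper name are for demonstration only):

```python
# Count how many element comparisons linear search performs before it stops.
def comparisons(A, x):
    count = 0
    for v in A:
        count += 1            # one comparison per element inspected
        if v == x:
            break             # found: stop early
    return count

A = [2, 9, 3, 1, 8, 7]
print(comparisons(A, 2))      # best case: 1 comparison (first element)
print(comparisons(A, 7))      # worst case: 6 comparisons (last element)
print(comparisons(A, 100))    # worst case: 6 comparisons (not present)
```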
Analysis of Algorithm

Best Case: resource usage is minimum; the algorithm’s behavior under optimal conditions; minimum number of steps or operations; lower bound on running time; generally does not occur in practice.
Average Case: resource usage is average; the algorithm’s behavior under random conditions; average number of steps or operations; average bound on running time.
Worst Case: resource usage is maximum; the algorithm’s behavior under the worst conditions; maximum number of steps or operations; upper bound on running time.
Average- and worst-case performances are the ones most used in algorithm analysis.

12
Book Finder Example
▪ Suppose, you are writing a program to find a book on a shelf.
▪ For any required book, it starts checking books one by one from the bottom.
▪ If you wanted Harry Potter 3, it would only take 3 actions (or tries) because it’s the third
book in the sequence.
▪ If you wanted Harry Potter 7, it’s the last book, so it would have to check all 7 books.
▪ What if there are 10 books in total? How about 1,000,000 books? It would take 1 million
tries.

13
Number Sorting - Example
 Suppose you are sorting numbers in Ascending / Increasing order.
 The initial arrangement of given numbers can be in any of the following three orders.
1. Numbers are already in required order, i.e., Ascending order
No change is required – Best Case
2. Numbers are randomly arranged initially.
Some numbers will change their position – Average Case
3. Numbers are initially arranged in Descending or Decreasing order.
All numbers will change their position – Worst Case

Already sorted (Best Case): 5 9 12 23 32 41
Randomly arranged (Average Case): 9 5 12 32 23 41 → 5 9 12 23 32 41
Reverse order (Worst Case): 41 32 23 12 9 5

14
Number Sorting - Example
 Suppose you are sorting numbers in Ascending / Increasing order.
 The initial arrangement of given numbers can be in any of the following three orders.

Case 1: numbers are already in the required (ascending) order, so no change is required → Best Case
Case 2: numbers are randomly arranged initially, so some numbers will change their position → Average Case
Case 3: numbers are initially arranged in descending order, so all numbers will change their position → Worst Case

15
Best, Average, & Worst Case

Problem | Best Case | Average Case | Worst Case
Linear Search | Element at the first position | Element in any of the middle positions | Element at the last position or not present
Book Finder | The first book | Any book in-between | The last book
Sorting | Already sorted | Randomly arranged | Sorted in reverse order

16
What is a Good Algorithm?
 Efficient
 Running time
 Space used
 Efficiency as a function of input size
 The number of bits in an input number
 Number of data elements (numbers and points)

17
Measuring the Running Time
 How should we measure the running time of an algorithm?
 Experimental Study
 Write a program that implements the algorithm
 Run the program with data sets of varying size and composition.
 Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time

18
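In Python (used for the sketches in these notes), the analogue of System.currentTimeMillis() is time.perf_counter(); a minimal experimental-study harness might look like the following. The workload function is a made-up linear-time example, not from the slides.

```python
import time

# Measure the wall-clock running time of one call to func(n).
def measure(func, n):
    start = time.perf_counter()
    func(n)
    return time.perf_counter() - start

# A toy linear-time workload to time at several input sizes.
def touch_all(n):
    total = 0
    for i in range(n):
        total += i
    return total

# Run the experiment on data sets of varying size.
for n in (1_000, 10_000, 100_000):
    print(n, measure(touch_all, n))
```

Timings vary between runs and machines, which is exactly the limitation of the empirical approach discussed next.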
Limitations of Experimental Studies
 It is necessary to implement and test the algorithm in order to determine its running time.
 Experiments can be done only on a limited set of inputs, and may not be indicative of the
running time on other inputs not included in the experiment.
 In order to compare two algorithms, the same hardware and software environments should be
used.

19
Best/Worst/Average Case
 For a specific size of input n, investigate running times for different input instances:

20
Best/Worst/Average Case
 For inputs of all sizes:

21
Asymptotic Notations
Given two algorithms A1 and A2 for a problem, how do we decide which one runs faster?
 What we need is a platform independent way of comparing algorithms.
 Solution: Count the worst-case number of basic operations b(n) for inputs of size n and then
analyse how this function b(n) behaves as n grows. This is known as worst-case analysis.
 Observations regarding worst-case analysis:
 Usually, the running time grows with the input size n.
 Consider two algorithms A1 and A2 for the same problem. A1 has a worst-case running time (100n + 1) and
A2 has a worst-case running time (2n² + 3n + 1). Which one is better?
▪ A2 runs faster for small inputs (e.g., n = 1, 2)
▪ A1 runs faster for all large inputs (for all n ≥ 49)
 We would like to make a statement independent of the input size.
 Solution: Asymptotic analysis
▪ We consider the running time for large inputs.
▪ A1 is considered better than A2 since A1 will beat A2 eventually

23
continue..
Solution: Do an asymptotic worst-case analysis.
 Observations regarding asymptotic worst-case analysis:
 It is difficult to count the number of operations at an extremely fine level and keep track of
these constants.
 Asymptotic analysis means that we are interested only in the rate of growth of the running
time function w.r.t. the input size. For example, the rates of growth of the functions (n²
+ 5n + 1) and (n² + 2n + 5) are determined by the n² (quadratic) term. The lower-order terms are
insignificant, so we may as well drop them.
 The growth rates of the functions 2n² and 5n² are the same: both are quadratic
functions. It makes sense to drop these constant factors too when one is interested in the nature of
the growth of functions.
 These constants typically depend upon the system you are using, such as hardware, compiler,
etc.
 We need a notation to capture the above ideas.
24
Introduction
 The theoretical (priori) approach of analyzing an algorithm to measure the efficiency does not
depend on the implementation of the algorithm.
 In this approach, the running time of an algorithm is described using Asymptotic Notations.
 Computing the running time of algorithm’s operations in mathematical units of computation
and defining the mathematical formula of its run-time performance is referred to as
Asymptotic Analysis.
 An algorithm may not have the same performance for different types of inputs. With the
increase in the input size, the performance will change.
 Asymptotic analysis accomplishes the study of change in performance of the algorithm with
the change in the order of the input size.
 Using Asymptotic analysis, we can very well define the best case, average case, and worst
case scenario of an algorithm.

25
Asymptotic Notations
 Asymptotic notations are mathematical notations used to represent the time complexity of
algorithms for Asymptotic analysis.
 Following are the commonly used asymptotic notations to calculate the running time
complexity of an algorithm.
1. Ο Notation
2. Ω Notation
3. θ Notation
 This is also known as an algorithm’s growth rate.
 Asymptotic Notations are used,
1. To characterize the complexity of an algorithm.
2. To compare the performance of two or more algorithms solving the same problem.

26
1. 𝐎-Notation (Big 𝐎 notation) (Upper Bound)
 The notation Ο(𝑛) is the formal way to express the upper bound of an algorithm's running
time.
 It measures the worst case time complexity or the longest amount of time an algorithm can
possibly take to complete.
 For a given function 𝑔(𝑛), we denote by Ο(𝑔(𝑛)) the set of functions,

Ο(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}

27
Big(𝐎) Notation
▪ 𝑔(𝑛) is an asymptotic upper bound for 𝑓(𝑛).
▪ 𝑓(𝑛) = 𝑂(𝑔(𝑛)) implies: 𝒇(𝒏) “ ≤ ” 𝒄·𝒈(𝒏)
▪ For any value of n, the running time of the algorithm does not cross the time provided by O(g(n)).
▪ The time taken by a known algorithm to solve a problem with worst-case input gives the upper bound.
(Figure: 𝒄·𝒈(𝒏) lies above 𝒇(𝒏) for all 𝒏 ≥ 𝒏₀, where 𝒇(𝒏) = 𝑶(𝒈(𝒏)).)

28
Example
For functions f(n) and g(n), there are positive constants c and n₀ such that:
f(n) ≤ c·g(n) for n ≥ n₀
For example, 2n + 6 ≤ 4n for all n ≥ 3, so c = 4 and n₀ = 3 work.

Conclusion: 2n + 6 is O(n)

29
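Witness constants like these can be checked mechanically; the sketch below verifies one valid choice, c = 4 and n₀ = 3, for f(n) = 2n + 6 and g(n) = n.

```python
# Verify the Big-O condition 0 <= f(n) <= c*g(n) for all n >= n0
# with f(n) = 2n + 6, g(n) = n, and the witnesses c = 4, n0 = 3.
f = lambda n: 2 * n + 6
g = lambda n: n
c, n0 = 4, 3

assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))
# Below n0 the bound fails: f(2) = 10 > 4 * 2 = 8.
assert f(2) > c * g(2)
print("2n + 6 is O(n) with c = 4, n0 = 3")
```

Checking finitely many n is not a proof, but it is a quick sanity test of a hand-derived (c, n₀) pair.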
Example
On the other hand, n² is not O(n), because there are no c and n₀ such that:
n² ≤ c·n for n ≥ n₀

The graph shows that no matter how large a c is chosen, there is an n big enough
that n² > c·n.

30
Simple rule
 Drop lower-order terms and constant factors

 50 n log n is O(n log n)
 7n − 3 is O(n)
 8n² log n + 5n² + n is O(n² log n)

 Use O-notation to express the number of primitive operations executed as a function of input
size.
 Comparing asymptotic running times:
▪ an algorithm that runs in O(n) time is better than one that runs in O(n²) time
▪ similarly, O(log n) is better than O(n)
▪ Hierarchy of functions: log n < n < n² < n³ < 2ⁿ

31
2. 𝛀-Notation (Omega notation) (Lower Bound)
 Big Omega notation (Ω ) is used to define the lower bound of any algorithm or we can say the
best case of any algorithm.
 This always indicates the minimum time required for any algorithm for all input values,
therefore the best case of any algorithm.
 When the time complexity of an algorithm is represented in the form of big-Ω, it means that
the algorithm will take at least this much time to complete its execution. It can definitely take
more time than this too.
 For a given function 𝑔(𝑛), we denote by Ω(𝑔(𝑛)) the set of functions,

Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}

32
Big(𝛀) Notation
▪ 𝑔(𝑛) is an asymptotic lower bound for 𝑓(𝑛).
▪ 𝑓(𝑛) = Ω(𝑔(𝑛)) implies: 𝒇(𝒏) “ ≥ ” 𝒄·𝒈(𝒏)
▪ There exists a positive constant c such that 𝑓(𝑛) lies above c·g(n) for sufficiently large n.
▪ For any value of n, the minimum time required by the algorithm is given by Ω(g(n)).
(Figure: 𝒇(𝒏) lies above 𝒄·𝒈(𝒏) for all 𝒏 ≥ 𝒏₀, where 𝒇(𝒏) = 𝜴(𝒈(𝒏)).)

33
3. 𝛉-Notation (Theta notation) (Same order)
 The notation θ(n) is the formal way to enclose both the lower bound and the upper bound of
an algorithm's running time.
 Since it represents the upper and the lower bound of the running time of an algorithm, it is
used for analyzing the average case complexity of an algorithm.
 The time complexity represented by the Big-θ notation is the range within which the actual
running time of the algorithm will be.
 So, it defines the exact Asymptotic behavior of an algorithm.
 For a given function 𝑔(𝑛), we denote by θ(𝑔(𝑛)) the set of functions,

θ(𝑔(𝑛)) = {𝑓(𝑛) : there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}

34
𝛉-Notation
▪ 𝜃(𝑔(𝑛)) is a set; we can write 𝑓(𝑛) 𝜖 𝜃(𝑔(𝑛)) to indicate that 𝑓(𝑛) is a member of 𝜃(𝑔(𝑛)).
▪ 𝑔(𝑛) is an asymptotically tight bound for 𝑓(𝑛).
▪ 𝑓(𝑛) = 𝜃(𝑔(𝑛)) implies: 𝒇(𝒏) “ = ” 𝒄·𝒈(𝒏)
▪ If a function f(n) lies anywhere in between c₁·g(n) and c₂·g(n) for all n ≥ n₀, then f(n) is said
to be asymptotically tightly bounded.
(Figure: 𝒇(𝒏) lies between 𝒄₁·𝒈(𝒏) and 𝒄₂·𝒈(𝒏) for all 𝒏 ≥ 𝒏₀, where 𝒇(𝒏) = 𝜽(𝒈(𝒏)).)
f(n) is Θ(g(n)) if and only if f(n) is Ο(g(n)) and
f(n) is Ω(g(n))
35
Asymptotic Notations
1. O-Notation (Big O notation) (Upper Bound)

Ο(𝑔(𝑛)) = {𝑓(𝑛) : there exist positive constants 𝑐 and 𝑛₀ such that 𝟎 ≤ 𝒇(𝒏) ≤ �·𝒈(𝒏) for all 𝑛 ≥ 𝑛₀}  →  𝐟(𝐧) = 𝐎(𝐠(𝐧))

2. Ω-Notation (Omega notation) (Lower Bound)

Ω(𝑔(𝑛)) = {𝑓(𝑛) : there exist positive constants 𝑐 and 𝑛₀ such that 𝟎 ≤ 𝒄·𝒈(𝒏) ≤ 𝒇(𝒏) for all 𝑛 ≥ 𝑛₀}  →  𝐟(𝐧) = Ω(𝐠(𝐧))

3. θ-Notation (Theta notation) (Same order)

θ(𝑔(𝑛)) = {𝑓(𝑛) : there exist positive constants 𝑐₁, 𝑐₂ and 𝑛₀ such that 𝟎 ≤ 𝐜₁·𝐠(𝐧) ≤ 𝐟(𝐧) ≤ 𝐜₂·𝐠(𝐧) for all 𝑛 ≥ 𝑛₀}  →  𝐟(𝐧) = 𝛉(𝐠(𝐧))

36
Asymptotic Notations – Examples
 Example 1: 𝑓(𝑛) = 𝑛² and 𝑔(𝑛) = 𝑛 (running times of Algo. 1 and Algo. 2)
𝑓(𝑛) ≥ 𝑔(𝑛) ⟹ 𝑓(𝑛) = Ω(𝑔(𝑛))

n | f(n) = n² | g(n) = n
1 | 1 | 1
2 | 4 | 2
3 | 9 | 3
4 | 16 | 4
5 | 25 | 5

 Example 2: 𝑓(𝑛) = 𝑛 and 𝑔(𝑛) = 𝑛² (running times of Algo. 1 and Algo. 2)
𝑓(𝑛) ≤ 𝑔(𝑛) ⟹ 𝑓(𝑛) = O(𝑔(𝑛))

n | f(n) = n | g(n) = n²
1 | 1 | 1
2 | 2 | 4
3 | 3 | 9
4 | 4 | 16
5 | 5 | 25

37
Asymptotic Notations – Examples
 Example 3: 𝑓(𝑛) = 𝑛² and 𝑔(𝑛) = 2ⁿ
𝑓(𝑛) ≤ 𝑔(𝑛) ⟹ 𝑓(𝑛) = O(𝑔(𝑛))

n | f(n) = n² | g(n) = 2ⁿ |
1 | 1 | 2 | f(n) < g(n)
2 | 4 | 4 | f(n) = g(n)
3 | 9 | 8 | f(n) > g(n)
4 | 16 | 16 | f(n) = g(n)
5 | 25 | 32 | f(n) < g(n)
6 | 36 | 64 | f(n) < g(n)
7 | 49 | 128 | f(n) < g(n)

Here for n ≥ 4, f(n) ≤ g(n), so n₀ = 4

38
▪ Example 4:
Asymptotic Notations – Examples
𝐟(𝐧) = 𝟑𝟎𝐧 + 𝟖 is in the order of n, or O(n)
𝐠(𝐧) = 𝒏² + 𝟏 is of order n², or O(n²)
𝒇(𝒏) = 𝑶(𝒈(𝒏))
(Figure: g(n) = n² + 1 eventually rises above f(n) = 30n + 8 past the base value n₀.)

▪ In general, a function of order n² grows faster than any function of order n.

39
Common Orders of Magnitude

n | log n | n log n | n² | n³ | 2ⁿ | n!
4 | 2 | 8 | 16 | 64 | 16 | 24
16 | 4 | 64 | 256 | 4096 | 65536 | 2.09 × 10¹³
64 | 6 | 384 | 4096 | 262144 | 1.84 × 10¹⁹ | 1.27 × 10⁸⁹
256 | 8 | 2048 | 65536 | 16777216 | 1.15 × 10⁷⁷ | ∞
1024 | 10 | 10240 | 1048576 | 1.07 × 10⁹ | 1.79 × 10³⁰⁸ | ∞
4096 | 12 | 49152 | 16777216 | 6.87 × 10¹⁰ | ≈10¹²³³ | ∞

40
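Rows of the table above can be regenerated with the standard library; a sketch (log here is base 2, matching the table):

```python
import math

# Recompute one row of the growth table for a given n:
# (log n, n log n, n^2, n^3, 2^n, n!)
def growth_row(n):
    return (math.log2(n), n * math.log2(n), n ** 2, n ** 3, 2 ** n,
            math.factorial(n))

log_n, nlog_n, n2, n3, two_n, fact = growth_row(4)
print(log_n, nlog_n, n2, n3, two_n, fact)   # 2.0 8.0 16 64 16 24
```

Python integers are arbitrary precision, so even 2⁴⁰⁹⁶ and large factorials are computed exactly; the "∞" entries in the table simply mark values too large to be worth printing.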
Comparison
 O(c) < O(log log n) < O(log n) < O(n^(1/2)) < O(n) < O(n log n) < O(n²) < O(n³) < O(nᵏ) < O(2ⁿ) < O(nⁿ) <
O(2^(2ⁿ))

41
Asymptotic Notations in Equations
 Consider an example of buying elephants and goldfish:
Cost = cost_of_elephants + cost_of_goldfish
The goldfish cost is negligible, so:
Cost ≈ cost_of_elephants (approximation)

 Maximum Rule: Let 𝑓, 𝑔: 𝑁 → 𝑅⁺; the max rule says that:

𝑂(𝑓(𝑛) + 𝑔(𝑛)) = 𝑂(max(𝑓(𝑛), 𝑔(𝑛)))

1. n⁴ + 100n² + 10n + 50 is 𝐎(𝐧⁴)
2. 10n³ + 2n² is 𝐎(𝐧³)
3. n³ − n² is 𝐎(𝐧³)

 The low-order terms in a function are relatively insignificant for large 𝒏:
n⁴ + 100n² + 10n + 50 ≈ n⁴
43
Methods of proving Asymptotic Notations
1) Proof by definition : In this method, we apply the formal definition of the
asymptotic notation, and find out the values of constants c > 0 and n0 > 0, such that the required
notation is proved.

2) Proof by Limit Rules : In this method, we apply certain rules of limit, and then
prove the required notation.

44
Proof by definition
 Prove the following statements:
1. n² + n = O(n²), and also O(n³)
According to the formal definition, let f(n) = n² + n and g(n) = n².
Find the values of constants c > 0 and n₀ > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ (condition for Big-O
notation). Here n² + n ≤ 2n² for all n ≥ 1, so c = 2 and n₀ = 1 work.
2. n³ + 4n² = Ω(n²), and also Ω(n)
According to the formal definition, let f(n) = n³ + 4n² and g(n) = n².
Find the values of constants c > 0 and n₀ > 0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ (condition for Big-Ω
notation). Here n² ≤ n³ + 4n² for all n ≥ 1, so c = 1 and n₀ = 1 work.
3. n² + n = Θ(n²)
According to the formal definition, let f(n) = n² + n and g(n) = n².
Find the values of constants c₁ > 0, c₂ > 0, and n₀ > 0 such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀
(condition for Θ notation). Here n² ≤ n² + n ≤ 2n² for all n ≥ 1, so c₁ = 1, c₂ = 2, n₀ = 1 work.

45
Proof by Limit Rules
If f(n) and g(n) are asymptotically increasing functions, then the limit rules hold: if
lim (n→∞) f(n)/g(n) = 0, then f(n) = o(g(n)); if it is a nonzero constant, then f(n) = Θ(g(n));
and if it is ∞, then f(n) = ω(g(n)).

Prove that √n grows asymptotically faster than log n.

Proof: Let us consider f(n) = √n and g(n) = log n.

We compute lim (n→∞) f(n)/g(n) = lim (n→∞) √n / log n = ∞, and then, based on this result, the
corresponding “Limit Rule” proves the desired result: log n = o(√n), i.e., √n grows
asymptotically faster than log n.

Another example: for f(n) = n³ and g(n) = 2n³ + n, lim (n→∞) f(n)/g(n) = 1/2, a nonzero
constant, so f(n) = Θ(g(n)).
46
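The limit lim √n / log n = ∞ can be illustrated numerically; the sketch below is a plausibility check, not a proof: the ratio keeps increasing as n grows.

```python
import math

# The ratio sqrt(n) / log(n) grows without bound as n increases,
# suggesting log n = o(sqrt(n)) by the limit rule.
def ratio(n):
    return math.sqrt(n) / math.log(n)

ratios = [ratio(10 ** k) for k in range(1, 7)]
print([round(r, 2) for r in ratios])
# The sequence is strictly increasing:
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```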
Little-Oh and Little-Omega
f(n) = o(g(n)) ⟹ for every constant c > 0, there exists an n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀
f(n) = ω(g(n)) ⟹ for every constant c > 0, there exists an n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀

Analogy with real numbers


f(n) = O(g(n)) ≅ f ≤ g
f(n) = Ω(g(n)) ≅ f ≥ g
f(n) = Θ(g(n)) ≅ f = g
f(n) = o(g(n)) ≅ f < g
f(n) = ω(g(n)) ≅ f > g

47
Examples
 Example: Find upper bound of running time of constant function f(n) = 6993.
 To find the upper bound of f(n), we have to find c and n0 such that 0 ≤ f (n) ≤ c.g(n) for all n ≥ n0
0 ≤ f (n) ≤ c × g (n)
0 ≤ 6993 ≤ c × g (n)
0 ≤ 6993 ≤ 6993 x 1
So, c = 6993 and g(n) = 1
Any value of c which is greater than 6993, satisfies the above inequalities, so all such values of c are possible.
0 ≤ 6993 ≤ 8000 x 1 → true
0 ≤ 6993 ≤ 10500 x 1 → true
Function f(n) is constant, so it does not depend on problem size n. So n0= 1
f(n) = O(g(n)) = O(1) for c = 6993, n0 = 1
f(n) = O(g(n)) = O(1) for c = 8000, n0 = 1 and so on.

48
Find upper bound of running time of a linear function f(n) = 6n + 3.
To find upper bound of f(n), we have to find c and n0 such that 0 ≤ f (n) ≤ c × g (n) for all n ≥ n0
0 ≤ f (n) ≤ c × g (n)
0 ≤ 6n + 3 ≤ c × g (n)
0 ≤ 6n + 3 ≤ 6n + 3n, for all n ≥ 1 (There can be such infinite possibilities)
0 ≤ 6n + 3 ≤ 9n
So, c = 9 and g (n) = n, n0 = 1

Tabular Approach
0 ≤ 6n + 3 ≤ c × g (n)
Try c = 7 and g(n) = n: 0 ≤ 6n + 3 ≤ 7n
Now, manually find the proper n₀ such that f(n) ≤ c·g(n):

n | f(n) = 6n + 3 | 7n | f(n) ≤ 7n?
1 | 9 | 7 | no
2 | 15 | 14 | no
3 | 21 | 21 | yes
4 | 27 | 28 | yes

From the table, for n ≥ 3, f(n) ≤ c × g(n) holds true. So c = 7, g(n) = n and n₀ = 3. There can be multiple such pairs (c, n₀).

f(n) = O(g(n)) = O(n) for c = 9, n0 = 1


f(n) = O(g(n)) = O(n) for c = 7, n0 = 3
49
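The tabular search for n₀ can be automated; a sketch for c = 7 showing the first n at which 6n + 3 ≤ 7n holds:

```python
f = lambda n: 6 * n + 3          # the function being bounded
c = 7                            # candidate constant, with g(n) = n

# Scan for the first n at which f(n) <= c*n holds.
n0 = next(n for n in range(1, 100) if f(n) <= c * n)
print(n0)                        # → 3

# Once the bound holds, it keeps holding for larger n.
assert all(f(n) <= c * n for n in range(n0, 10_000))
```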
 Example: Find the lower bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
 To find the lower bound of f(n), we have to find c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀
 0 ≤ c × g(n) ≤ f(n)
 0 ≤ c × g(n) ≤ 3n² + 2n + 4
 0 ≤ 3n² ≤ 3n² + 2n + 4 → true, for all n ≥ 1
 0 ≤ n² ≤ 3n² + 2n + 4 → true, for all n ≥ 1
 Both inequalities above are true, and there exist infinitely many such inequalities.
 So, f(n) = Ω(g(n)) = Ω(n²) for c = 3, n₀ = 1
 f(n) = Ω(g(n)) = Ω(n²) for c = 1, n₀ = 1

50
 Find the tight bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.
 To find the tight bound of f(n), we have to find c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all
n ≥ n₀
 0 ≤ c₁·g(n) ≤ 2n³ + 4n + 5 ≤ c₂·g(n)
 0 ≤ 2n³ ≤ 2n³ + 4n + 5 ≤ 11n³, for all n ≥ 1
 The inequality above is true, and there exist infinitely many such inequalities. So,
 f(n) = Θ(g(n)) = Θ(n³) for c₁ = 2, c₂ = 11, n₀ = 1

51
General Problems
 Example: Show that: (i) 3n + 2 = Θ(n) (ii) 6·2ⁿ + n² = Θ(2ⁿ)
 (i) 3n + 2 = Θ(n)
 To prove the statement above, we have to find c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for
all n ≥ n₀
 0 ≤ c₁·g(n) ≤ 3n + 2 ≤ c₂·g(n)
 0 ≤ 3n ≤ 3n + 2 ≤ 5n, for all n ≥ 1
 So, f(n) = Θ(g(n)) = Θ(n) for c₁ = 3, c₂ = 5, n₀ = 1
 (ii) 6·2ⁿ + n² = Θ(2ⁿ)
 To prove the statement above, we have to find c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for
all n ≥ n₀
 0 ≤ c₁·g(n) ≤ 6·2ⁿ + n² ≤ c₂·g(n)
 0 ≤ 6·2ⁿ ≤ 6·2ⁿ + n² ≤ 7·2ⁿ, for all n ≥ 4 (the upper bound needs n² ≤ 2ⁿ, which fails at n = 3: 57 > 56)
 So, f(n) = Θ(g(n)) = Θ(2ⁿ) for c₁ = 6, c₂ = 7, n₀ = 4

52
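Constants like those in part (ii) can be sanity-checked by brute force; the sketch below shows that the upper bound 7·2ⁿ fails at n = 3 (57 > 56), so n₀ = 4 is the first choice from which both bounds hold.

```python
f = lambda n: 6 * 2 ** n + n ** 2
g = lambda n: 2 ** n
c1, c2 = 6, 7

# n = 3 violates the upper bound: f(3) = 57 > 7 * 8 = 56.
assert f(3) > c2 * g(3)

# From n0 = 4 onward, c1*g(n) <= f(n) <= c2*g(n) holds
# (the upper bound reduces to n^2 <= 2^n, true for n >= 4).
n0 = 4
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 64))
print("6*2^n + n^2 is Theta(2^n) with c1=6, c2=7, n0=4")
```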
Exercises
1. Express the function n³/1000 − 100n² − 100n + 3 in terms of θ notation.
2. Express 20n³ + 10n log n + 5 in terms of O notation.
3. Express 5n log n + 2n in terms of O notation.
4. Prove or disprove: (i) Is 2^(n+1) = O(2ⁿ)? (ii) Is 2^(2n) = O(2ⁿ)?
5. Check the correctness of the following equality: 5n³ + 2n = O(n³)
6. Find the θ notation for the following function
a. F(n) = 3·2ⁿ + 4n² + 5n + 2
7. Find the O notation for the following functions
a. F(n) = 2ⁿ + 6n² + 3n
b. F(n) = 4n³ + 2n + 3
8. Find the Ω notation for the following functions
a. F(n) = 4·2ⁿ + 3n
b. F(n) = 5n³ + n² + 3n + 2

53
Math You Need to Review
Properties of logarithms:
log(ab) = log a + log b;  log(a/b) = log a − log b;  log aᵇ = b·log a;  log_b a = log_c a / log_c b

Properties of exponentials:
a^(m+n) = aᵐ·aⁿ;  a^(m−n) = aᵐ/aⁿ;  (aᵐ)ⁿ = a^(mn)

Geometric progression:
1 + r + r² + … + rⁿ = (r^(n+1) − 1)/(r − 1), for r ≠ 1

Arithmetic progression:
1 + 2 + 3 + … + n = n(n + 1)/2

54
Googol-to-One Gear Ratio
 Daniel Bruin built a machine with 100 gears, each pair with a 10-to-1 gear ratio, meaning that the
overall gear ratio is a googol-to-one. (A googol is 1 with 100 zeros.)

1 rotation of the 1st gear takes 10 seconds.

10 rotations of the i-th gear turn the (i+1)-th gear once.

55
Puzzles
 1. You are provided with 10 identical balls and a measuring instrument. 9 of the 10 balls
are equal in weight, and one is defective and weighs less. The task is to
find the defective ball in the minimum number of measurements.
 2. Same case, but the ball is defective and you don’t know whether it weighs less or more.

 3. You are blindfolded, and 10 coins are placed in front of you on the table. You are allowed to
touch the coins but can’t tell which way up they are by feel. You are told that there are 5 coins
heads up and 5 coins tails up, but not which ones are which.
 Can you make two piles of coins, each with the same number of heads up? You can flip the
coins any number of times.

56
 4. There is a room with a door (closed) and three light bulbs inside the room. Outside the
room, there are three switches, connected to the bulbs. You may manipulate the switches as
you wish, but once you open the door, you can’t change them. All bulbs are in working
condition, and you can open the door only once. Identify each switch with respect to its bulb.

 5. There are 25 horses, among which you need to find the fastest 3. You can
race at most 5 horses at a time to find their relative speeds; at no point can you measure
the actual speed of a horse in a race. Find the minimum number of races required to
get the top 3 horses.

57
Thank You!
