Notes On Asymptotic Notation

Asymptotic notation represents the complexity of an algorithm using mathematical terms that ignore the less significant parts of the complexity. The three main types are Big-O for the worst case (upper bound), Big-Omega for the best case (lower bound), and Big-Theta for a tight bound, often used for the average case. Amortized analysis considers the total cost of a sequence of operations, and probabilistic data structures use randomness to process large datasets.


What is Asymptotic Notation?

Asymptotic notation of an algorithm is a mathematical representation of its complexity.

Note - In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant terms in the complexity of that algorithm and ignore the least significant terms (here, complexity can be Space Complexity or Time Complexity).
For example, consider the following time complexities of two algorithms...

 Algorithm 1 : 5n² + 2n + 1
 Algorithm 2 : 10n² + 8n + 3

Generally, when we analyze an algorithm, we consider the time complexity for larger values of input data (i.e. the 'n' value). In the above two time complexities, for larger values of 'n' the term '2n + 1' in algorithm 1 is less significant than the term '5n²', and the term '8n + 3' in algorithm 2 is less significant than the term '10n²'.
Here, for larger values of 'n' the values of the most significant terms (5n² and 10n²) are much larger than the values of the least significant terms (2n + 1 and 8n + 3). So for larger values of 'n' we ignore the least significant terms when representing the overall time required by an algorithm. In asymptotic notation, we use only the most significant terms to represent the time complexity of an algorithm.
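To see this concretely, compare the two kinds of terms at n = 1000:

5n² = 5,000,000 while 2n + 1 = 2,001
10n² = 10,000,000 while 8n + 3 = 8,003

In both algorithms the least significant terms contribute less than 0.1% of the total, which is why they can safely be ignored.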

Mainly, we use THREE types of Asymptotic Notations, which are as follows...

1. Big - Oh (O)
2. Big - Omega (Ω)
3. Big - Theta (Θ)

Big - Oh Notation (O)


Big - Oh notation is used to define the upper bound of an algorithm in terms of Time
Complexity.
That means Big - Oh notation always indicates the maximum time required by an algorithm for all input values. In other words, Big - Oh notation describes the worst case of an algorithm's time complexity.
Big - Oh Notation can be defined as follows...

Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If f(n) <= C g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).

f(n) = O(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n) with the input (n) value on the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C g(n) is always greater than f(n), which indicates the algorithm's upper bound.

Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)) then it must satisfy f(n) <= C g(n) for some C > 0 and n0 >= 1:
f(n) <= C g(n)
⇒ 3n + 2 <= C n
The above condition is TRUE for C = 4 and all n >= 2.
By using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)
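As an illustrative sketch (the function and the exact operation counts here are our own, not part of the original notes), a simple loop whose cost is roughly 3n + 2 basic operations looks like this in C:

/* Sum the n elements of an array.
   Counting one unit per executed statement gives roughly:
   2 operations outside the loop + about 3n inside it = 3n + 2 = O(n). */
int sum(const int arr[], int n) {
    int total = 0;               /* 1 operation */
    for (int i = 0; i < n; i++)  /* about 2n operations (tests and increments) */
        total += arr[i];         /* n operations */
    return total;                /* 1 operation */
}

However the statements are counted, only the coefficient of n changes; the complexity stays O(n).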

Big - Omega Notation (Ω)


Big - Omega notation is used to define the lower bound of an algorithm in terms of Time
Complexity.
That means Big-Omega notation always indicates the minimum time required by an algorithm for all input values. In other words, Big-Omega notation describes the best case of an algorithm's time complexity.
Big - Omega Notation can be defined as follows...

Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If f(n) >= C g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).

f(n) = Ω(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n) with the input (n) value on the X-axis and the time required on the Y-axis.

In the above graph, after a particular input value n0, C g(n) is always less than f(n), which indicates the algorithm's lower bound.

Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)) then it must satisfy f(n) >= C g(n) for some C > 0 and n0 >= 1:
f(n) >= C g(n)
⇒ 3n + 2 >= C n
The above condition is TRUE for C = 1 and all n >= 1.
By using Big - Omega notation we can represent the time complexity as follows...
3n + 2 = Ω(n)

Big - Theta Notation (Θ)


Big - Theta notation is used to define the tight bound of an algorithm in terms of Time Complexity.
That means Big - Theta notation always bounds the time required by an algorithm from both below and above for all input values. It is commonly used to describe the average case of an algorithm's time complexity.
Big - Theta Notation can be defined as follows...

Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, where C1 > 0, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).

f(n) = Θ(g(n))
Consider the following graph, drawn for the values of f(n), C1 g(n) and C2 g(n) with the input (n) value on the X-axis and the time required on the Y-axis. After a particular input value n0, f(n) always lies between C1 g(n) and C2 g(n).
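Example
Consider the same f(n) and g(n) used for the previous two notations...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)) then it must satisfy C1 g(n) <= f(n) <= C2 g(n) for some C1 > 0, C2 > 0 and n0 >= 1.
Taking C1 = 3, C2 = 4 and n0 = 2:
3n <= 3n + 2 <= 4n for all n >= 2
By using Big - Theta notation we can represent the time complexity as follows...
3n + 2 = Θ(n)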

The following two additional asymptotic notations are also used to represent the time complexity of algorithms.
Little ο asymptotic notation
Big-Ο is used as a tight upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. “Little-ο” (ο()) notation is used to describe an upper bound that cannot be tight.
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ο(g(n)) (or f(n) ∈ ο(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c*g(n) for every integer n ≥ n0.


Thus, little ο() means a loose upper bound of f(n). Little ο is a rough estimate of the maximum order of growth, whereas Big-Ο may be the actual order of growth.
In mathematical relation, f(n) = ο(g(n)) means lim (n→∞) f(n)/g(n) = 0.
Examples:
Is 7n + 8 ∈ ο(n²)?
In order for that to be true, for any c we have to be able to find an n0 that makes f(n) < c*g(n) asymptotically true.
Let us take some examples. If c = 100, the inequality is clearly true. If c = 1/100, we will have to use a little more imagination, but we will be able to find an n0 (try n0 = 1000). From these examples, the conjecture appears to be correct.
Then check the limit:
lim (n→∞) f(n)/g(n) = lim (n→∞) (7n + 8)/n² = lim (n→∞) 7/(2n) = 0 (by l'Hôpital's rule)
Hence 7n + 8 ∈ ο(n²).
Little ω asymptotic notation
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c*g(n) ≥ 0 for every integer n ≥ n0.
Here f(n) has a higher growth rate than g(n), so the main difference between Big Omega (Ω) and little omega (ω) lies in their definitions. In the case of Big Omega, f(n) = Ω(g(n)) and the bound is 0 <= c*g(n) <= f(n), but in the case of little omega, it is 0 <= c*g(n) < f(n).
The relationship between Big Omega (Ω) and Little Omega (ω) is similar to that of Big-Ο and Little ο, except that now we are looking at the lower bounds. Little omega (ω) is a rough estimate of the order of growth, whereas Big Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight. And, f(n) ∈ ω(g(n)) if and only if g(n) ∈ ο(f(n)).
In mathematical relation, if f(n) ∈ ω(g(n)) then
lim (n→∞) f(n)/g(n) = ∞.
Example:
Prove that 4n + 6 ∈ ω(1).
The little omega (ω) running time can be proven by applying the limit formula given below:
if lim (n→∞) f(n)/g(n) = ∞ then f(n) is ω(g(n))
Here, we have the functions f(n) = 4n + 6 and g(n) = 1:
lim (n→∞) (4n + 6)/1 = ∞
Also, for any c we can find an n0 satisfying the inequality 0 <= c*g(n) < f(n), i.e. 0 <= c*1 < 4n + 6.
Hence proved.

Amortized Analysis
This analysis is used when an occasional operation is very slow, but most of the operations, which execute very frequently, are faster. Data structures that need amortized analysis include Hash Tables, Disjoint Sets, etc.
In a hash table, most of the time the searching time complexity is O(1), but sometimes it requires O(n) operations. When we want to search for or insert an element in a hash table, in most cases it is a constant-time task, but when a collision occurs, it needs O(n) operations for collision resolution.

Aggregate Method
The aggregate method is used to find the total cost. If we want to add a bunch of data, then we find the amortized cost with this formula.
For a sequence of n operations with total cost T(n), the amortized cost per operation is T(n) / n.

Example on Amortized Analysis


For a dynamic array, an item can be inserted at the next index in O(1) time. But if that index is not present in the array (the array is full), it cannot perform the task in constant time: in that case, it first doubles the size of the array and then inserts the element.
For the dynamic array, let ci = the cost of the i-th insertion:
ci = i, when i − 1 is an exact power of 2 (the whole array is copied over)
ci = 1, otherwise
Over n insertions the total cost is at most n + (1 + 2 + 4 + ... + n) < 3n, so the amortized cost per insertion is O(1).
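A minimal sketch of such a dynamic array in C (the names DynArray and push are our own illustration, and error handling for realloc is omitted for brevity):

#include <stdlib.h>

typedef struct {
    int *items;       /* backing storage */
    size_t size;      /* number of elements stored */
    size_t capacity;  /* allocated slots */
} DynArray;

/* Append a value at the next free index.
   Usually O(1); when the array is full, capacity is doubled and all
   elements are copied, which costs O(n) for that one call. By the
   aggregate method, n appends cost under 3n units, i.e. O(1) amortized. */
void push(DynArray *a, int value) {
    if (a->size == a->capacity) {
        size_t newcap = a->capacity ? a->capacity * 2 : 1;
        a->items = realloc(a->items, newcap * sizeof(int)); /* copies elements */
        a->capacity = newcap;
    }
    a->items[a->size++] = value; /* the common, constant-time case */
}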

Probabilistic data structures work with large data sets, where we want to perform operations such as finding the unique items in a given data set, finding the most frequent item, or checking whether some item exists or not. To do such operations a probabilistic data structure uses several hash functions to randomize and represent a set of data. The more hash functions there are, the more accurate the result.
Things to remember
A deterministic data structure can also perform all the operations that a probabilistic data structure does, but only on small data sets. As stated earlier, if the data set is too big and cannot fit into memory, then the deterministic data structure fails and is simply not feasible. Also, in the case of a streaming application where data must be processed in one go and incremental updates performed, it is very difficult to manage with a deterministic data structure.
Use Cases
1. Analyze big data set
2. Statistical analysis
3. Mining terabytes of data sets, etc.
Popular probabilistic data structures
1. Bloom filter
2. Count-Min Sketch
3. HyperLogLog
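As an illustrative sketch of the first of these, here is a tiny Bloom filter in C with two hash functions (the filter size and the particular hash functions are our own assumptions, not prescribed by these notes):

#include <stdbool.h>
#include <stdint.h>

#define FILTER_BITS 1024 /* bit-array size: an arbitrary choice for this sketch */

static uint8_t filter[FILTER_BITS / 8];

/* Two simple string hashes (djb2 and FNV-1a), reduced into the bit array. */
static uint32_t hash1(const char *s) {
    uint32_t h = 5381;
    while (*s) h = h * 33 + (uint8_t)*s++;
    return h % FILTER_BITS;
}

static uint32_t hash2(const char *s) {
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h % FILTER_BITS;
}

/* Adding an item sets one bit per hash function. */
void bloom_add(const char *s) {
    uint32_t a = hash1(s), b = hash2(s);
    filter[a / 8] |= (uint8_t)(1u << (a % 8));
    filter[b / 8] |= (uint8_t)(1u << (b % 8));
}

/* May answer true for an item that was never added (a false positive),
   but never answers false for an item that was added. */
bool bloom_maybe_contains(const char *s) {
    uint32_t a = hash1(s), b = hash2(s);
    return (filter[a / 8] & (1u << (a % 8))) &&
           (filter[b / 8] & (1u << (b % 8)));
}

Using more independent hash functions (with a suitably larger bit array) lowers the false-positive rate, which matches the point above that more hash functions give a more accurate result.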
A Binary Tree is a special data structure used for data storage purposes. A binary tree has a special condition that each node can have a maximum of two children. A binary tree has the benefits of both an ordered array and a linked list, as search is as quick as in a sorted array and insertion or deletion operations are as fast as in a linked list.

Important Terms
Following are the important terms with respect to tree.
Path − Path refers to the sequence of nodes along the edges of a tree.
Root − The node at the top of the tree is called root. There is only one root per tree
and one path from the root node to any node.
Parent − Any node except the root node has one edge upward to a node called parent.
Child − The node below a given node connected by its edge downward is called its
child node.
Leaf − The node which does not have any child node is called the leaf node.
Subtree − Subtree represents the descendants of a node.
Visiting − Visiting refers to checking the value of a node when control is on the node.
Traversing − Traversing means passing through nodes in a specific order.
Levels − Level of a node represents the generation of a node. If the root node is at
level 0, then its next child node is at level 1, its grandchild is at level 2, and so on.
Key − Key represents a value of a node based on which a search operation is to be carried out for a node.

Tree Node
The code to write a tree node would be similar to what is given below. It has a data
part and references to its left and right child nodes.

struct node {
   int data;
   struct node *leftChild;
   struct node *rightChild;
};

In a tree, all nodes share this common construct.


Binary Search Tree(BST)

A binary search tree is a data structure that allows us to quickly maintain a sorted list of numbers. It is called a binary tree because each tree node has a maximum of two children. It is called a search tree because it can be used to search for the presence of a number in O(log(n)) time.

The properties that separate a binary search tree from a regular binary tree are:
 All nodes of the left subtree are less than the root node
 All nodes of the right subtree are greater than the root node
 Both subtrees of each node are also BSTs, i.e. they have the above two properties.
Binary Search Tree Application
 In multilevel indexing in the database
 For dynamic sorting
 For managing virtual memory areas in Unix kernel
Operations:
The operations of a Binary Search Tree are
1. Insert
2. Delete
3. Search

Search Operation
The algorithm depends on the property of a BST that each left subtree has values below the root and each right subtree has values above the root.
If the value is below the root, we can say for sure that the value is not in the right subtree, so we need to search only in the left subtree; if the value is above the root, we can say for sure that the value is not in the left subtree, so we need to search only in the right subtree.
Algorithm:
if root == null
    return null
if number == root->data
    return root
if number < root->data
    return search(root->left, number)
else
    return search(root->right, number)
Insert Operation
Inserting a value in the correct position is similar to searching, because we try to maintain the rule that the left subtree is less than the root and the right subtree is greater than the root.
We keep going to either the right subtree or the left subtree depending on the value, and when we reach a point where the left or right subtree is null, we put the new node there.
Algorithm:
if node == null
    return createNode(data)
if data < node->data
    node->left = insert(node->left, data)
else
    node->right = insert(node->right, data)
return node
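Putting the two operations together, a minimal runnable C version (the helper name createNode and the left/right field names follow the pseudocode above; this is one possible sketch, not the only way to write it):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Allocate a new leaf node holding the given value. */
struct node *createNode(int data) {
    struct node *n = malloc(sizeof(struct node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

/* Insert while keeping the BST rule: smaller values go left. */
struct node *insert(struct node *node, int data) {
    if (node == NULL)
        return createNode(data);
    if (data < node->data)
        node->left = insert(node->left, data);
    else
        node->right = insert(node->right, data);
    return node;
}

/* Return the node containing number, or NULL if it is absent. */
struct node *search(struct node *root, int number) {
    if (root == NULL || number == root->data)
        return root;
    if (number < root->data)
        return search(root->left, number);
    return search(root->right, number);
}

int main(void) {
    struct node *root = NULL;
    int values[] = {8, 3, 10, 1, 6};
    for (int i = 0; i < 5; i++)
        root = insert(root, values[i]);
    printf("6 %s\n", search(root, 6) ? "found" : "not found");
    printf("7 %s\n", search(root, 7) ? "found" : "not found");
    return 0;
}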

Binary Search Tree Complexities

Time Complexity

Operation    Best Case    Average Case    Worst Case
Search       O(log n)     O(log n)        O(n)
Insertion    O(log n)     O(log n)        O(n)
Deletion     O(log n)     O(log n)        O(n)

Here, n is the number of nodes in the tree.

Space Complexity

The space complexity for all the operations is O(n).
