1. Introduction & Algorithm

Introduction
One way to do this is to implement both the algorithms and run the two programs on your computer for
different inputs and see which one takes less time. There are many problems with this approach for the
analysis of algorithms:
1) It might be possible that for some inputs, the first algorithm performs better than the second, and for
some inputs the second performs better.
2) It might also be possible that for some inputs, the first algorithm performs better on one machine while
the second works better on another machine for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms. In Asymptotic
Analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the
actual running time). We calculate how the time (or space) taken by an algorithm increases with the
input size.
Asymptotic notations are the mathematical notations used to describe the running time of an
algorithm when the input tends towards a particular value or a limiting value.
Asymptotic Analysis contd.
Let us consider the search problem (searching a given item) in a sorted array.
One way to search is Linear Search (order of growth is linear) and the other way is Binary Search
(order of growth is logarithmic).
To understand how Asymptotic Analysis solves the above-mentioned problems in analyzing algorithms,
let us say we run Linear Search on a fast computer A and Binary Search on a slow computer B,
and we pick the constant values for the two computers so that they tell us exactly how long each
machine takes to perform the search in seconds.
For small values of the input array size n, the fast computer may take less time. But after a certain value
of the input array size, Binary Search will definitely start taking less time compared to Linear
Search, even though Binary Search is being run on a slow machine, because the order of growth
of Binary Search with respect to input size is logarithmic while the order of growth of Linear Search is
linear.
So the machine dependent constants can always be ignored after a certain value of input size.
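To make the comparison concrete, here is a minimal C sketch of the two search routines (the array contents and function names below are illustrative assumptions, not taken from the notes). Linear Search examines up to n elements, while Binary Search halves the remaining range on every step, so it performs at most about log2(n) comparisons on a sorted array.

    #include <stdio.h>

    /* Linear Search: checks elements one by one -- O(n) comparisons. */
    int linear_search(const int a[], int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return i;          /* found at index i */
        return -1;                 /* not found */
    }

    /* Binary Search on a sorted array: halves the range each step -- O(log n). */
    int binary_search(const int a[], int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return mid;        /* found at index mid */
            else if (a[mid] < key)
                lo = mid + 1;      /* search the right half */
            else
                hi = mid - 1;      /* search the left half */
        }
        return -1;                 /* not found */
    }

    int main(void)
    {
        int a[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};   /* sorted input */
        int n = sizeof a / sizeof a[0];
        printf("linear: %d\n", linear_search(a, n, 23));
        printf("binary: %d\n", binary_search(a, n, 23));
        return 0;
    }

For n = 1,000,000, Linear Search may need up to 1,000,000 comparisons while Binary Search needs about 20, regardless of which machine is faster by a constant factor.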
Analysis of Algorithm
The analysis is a process of estimating the efficiency of an algorithm. There are two fundamental
parameters based on which we can analyze the algorithm:
➢ Space Complexity: The space complexity can be understood as the amount of space required by
an algorithm to run to completion.
➢ Time Complexity: Time complexity is a function of input size n that refers to the amount of time
needed by an algorithm to run to completion.
It would be best to analyze every algorithm in terms of time, which relates to which one executes
faster, and memory, which relates to which one takes less memory.
So, the Design and Analysis of Algorithms talks about how to design various algorithms and how to
analyze them. After designing and analyzing, choose the best algorithm that takes the least time and
the least memory and then implement it.
Here, the main focus is on time rather than space, because time is the more limiting parameter
in terms of hardware.
Memory, however, is relatively more flexible: we can increase the memory as and when required by
simply adding a memory card.
Time Complexity
The term time complexity of an algorithm measures how many steps are required by
the algorithm to solve the given problem.
Generally, we make three types of analysis, which are as follows:
❑ Worst-case time complexity: For 'n' input size, the worst-case time complexity
can be defined as the maximum amount of time needed by an algorithm to
complete its execution. Thus, it is nothing but a function defined by the maximum
number of steps performed on an instance having an input size of n.
❑ Average case time complexity: For 'n' input size, the average-case time
complexity can be defined as the average amount of time needed by an algorithm
to complete its execution. Thus, it is nothing but a function defined by the average
number of steps performed on an instance having an input size of n.
❑ Best case time complexity: For 'n' input size, the best-case time complexity
can be defined as the minimum amount of time needed by an algorithm to complete
its execution. Thus, it is nothing but a function defined by the minimum number
of steps performed on an instance having an input size of n.
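To see how the best and worst cases differ for a concrete algorithm, here is a small C sketch (illustrative only; the array and key values are assumptions) that counts the comparisons Linear Search makes. The best case occurs when the key is the very first element (1 step); the worst case occurs when the key is absent (n steps).

    #include <stdio.h>

    /* Returns the number of comparisons linear search makes for a given key. */
    int count_steps(const int a[], int n, int key)
    {
        int steps = 0;
        for (int i = 0; i < n; i++) {
            steps++;               /* one comparison per element examined */
            if (a[i] == key)
                break;             /* stop as soon as the key is found */
        }
        return steps;
    }

    int main(void)
    {
        int a[] = {7, 3, 9, 1, 5};
        int n = sizeof a / sizeof a[0];
        printf("best case  (key = 7): %d steps\n", count_steps(a, n, 7));  /* 1 */
        printf("worst case (key = 4): %d steps\n", count_steps(a, n, 4));  /* n = 5 */
        return 0;
    }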
Big-O Notation (O-notation)
Big-O notation represents the upper bound of the running time of an algorithm.
Thus, it gives the worst-case complexity of an algorithm.
Formally, f(n) = O(g(n)) if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
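As a quick worked example (added here for illustration, not from the original notes), consider f(n) = 3n² + 2n + 1. Since 2n ≤ 2n² and 1 ≤ n² for all n ≥ 1, we have
f(n) = 3n² + 2n + 1 ≤ 3n² + 2n² + n² = 6n² for all n ≥ 1,
so f(n) = O(n²) with the constants c = 6 and n0 = 1.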
Typical Complexities of an Algorithm
1. Constant Complexity: Complexity of O(1). It undergoes an execution of a constant number of steps like
1, 5, 10, etc. for solving a given problem.
2. Logarithmic Complexity: Complexity of O(log N). It undergoes the execution of the order of log(N) steps.
For operations on N elements, the logarithm is usually taken to base 2.
3. Linear Complexity: Complexity of O(N). For example, if there exist 500 elements, then it will take about
500 steps. Basically, in linear complexity, the number of steps depends linearly on the number of
elements.
4. Quadratic Complexity: It imposes a complexity of O(N²). For N input data size, it undergoes the order
of N² operations on N elements to solve a given problem.
5. Cubic Complexity: It imposes a complexity of O(N³). For N input data size, it executes the order
of N³ steps on N elements to solve a given problem. For example, if there exist 100 elements, it is
going to execute 1,000,000 steps.
6. Exponential Complexity: It imposes a complexity of O(2^N), O(N!), and so on. For N elements, it will
execute an order of count of operations that grows exponentially with the input data size.
For example, if N = 10, then the exponential function 2^N results in 1024.
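To make these growth rates concrete, here is a small C sketch (entirely illustrative; each loop iteration stands in for one "step" of work) that counts the steps performed by constant, logarithmic, linear, and quadratic routines for the same N:

    #include <stdio.h>

    /* Each function returns how many "steps" it performs for input n. */
    long constant_steps(long n)  { (void)n; return 1; }     /* O(1) */

    long log_steps(long n)                                  /* O(log n) */
    {
        long s = 0;
        for (long i = 1; i < n; i *= 2) s++;                /* doubles i each step */
        return s;
    }

    long linear_steps(long n)                               /* O(n) */
    {
        long s = 0;
        for (long i = 0; i < n; i++) s++;
        return s;
    }

    long quadratic_steps(long n)                            /* O(n^2) */
    {
        long s = 0;
        for (long i = 0; i < n; i++)
            for (long j = 0; j < n; j++) s++;
        return s;
    }

    int main(void)
    {
        long n = 1024;
        printf("O(1):     %ld\n", constant_steps(n));       /* 1       */
        printf("O(log n): %ld\n", log_steps(n));            /* 10      */
        printf("O(n):     %ld\n", linear_steps(n));         /* 1024    */
        printf("O(n^2):   %ld\n", quadratic_steps(n));      /* 1048576 */
        return 0;
    }

For N = 1024 it prints 1, 10, 1024, and 1,048,576 steps respectively, showing how quickly the higher-order classes dominate.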
How to approximate the time taken by the Algorithm?
There are two types of algorithms:
➢ Iterative Algorithm
➢ Recursive Algorithm
For iterative programs, consider the following example:

    #include <stdio.h>

    void A(int n)
    {
        int i, j;
        for (i = 1; i <= n; i++)        /* outer loop: n iterations */
            for (j = 1; j <= n; j++)    /* inner loop: n iterations per outer pass */
                printf("Edward");
    }

In this case, firstly, the outer loop will run n times, such that for each time, the inner loop will also
run n times. Thus, the time complexity will be O(n²).
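As a quick sanity check on that claim (a worked count added for illustration): the printf on the innermost line executes once for every pair (i, j), so the total number of steps is
Σ (i = 1 to n) Σ (j = 1 to n) 1 = n · n = n²,
which is exactly the O(n²) bound stated above.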
Contd..
For recursive programs, consider the following recursive function:

    int A(int n)
    {
        if (n > 1)
            return A(n - 1);   /* constant work, then recurse on n-1 */
        return n;              /* base case: the recursion stops at n = 1 */
    }

Each call performs a constant amount of work and then recurses on n-1, so the running time satisfies
T(n) = 1 + T(n-1). Here we will see the simple Back Substitution method to solve this recurrence.
T(n) = 1 + T(n-1) …Eqn. (1)
Step 1: Substitute n-1 in place of n in Eqn. (1):
T(n-1) = 1 + T(n-2) …Eqn. (2)
Step 2: Substitute n-2 in place of n in Eqn. (1):
T(n-2) = 1 + T(n-3) …Eqn. (3)
Step 3: Substitute Eqn. (2) into Eqn. (1):
T(n) = 1 + 1 + T(n-2) = 2 + T(n-2) …Eqn. (4)
Step 4: Substitute Eqn. (3) into Eqn. (4):
T(n) = 2 + 1 + T(n-3) = 3 + T(n-3) = … = k + T(n-k) …Eqn. (5)
Now, according to Eqn. (1), i.e. T(n) = 1 + T(n-1), the algorithm will run as long as n > 1. Basically, n will
start from a very large number and decrease gradually. So, when n reaches 1, the algorithm eventually stops,
and such a terminating condition is called the anchor condition, base condition, or stopping condition.
Thus, for k = n-1, T(n) becomes:
Step 5: Substitute k = n-1 in Eqn. (5):
T(n) = (n-1) + T(n-(n-1)) = (n-1) + T(1) = n - 1 + 1 = n
Hence, T(n) = n, which is O(n).
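As an informal check of this result (an illustrative sketch, not part of the original notes), we can instrument the recursive function with a global call counter; for an input n, the function is entered exactly n times, matching T(n) = n:

    #include <stdio.h>

    static long calls = 0;    /* counts how many times A() is entered */

    int A(int n)
    {
        calls++;              /* one unit of work per call */
        if (n > 1)
            return A(n - 1);
        return n;             /* base case: n = 1 */
    }

    int main(void)
    {
        A(1000);
        printf("calls for n = 1000: %ld\n", calls);   /* prints 1000 */
        return 0;
    }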