
Design & Analysis of Algorithm

Introduction

Presented by Nabanita Das


What is an Algorithm?
A finite set of instructions that specifies a sequence of operations to be carried out in order to solve a
specific problem or class of problems is called an Algorithm. An algorithm must satisfy the following criteria:

✓ Input: Zero or more quantities are externally supplied.
✓ Output: At least one quantity is produced.
✓ Definiteness: Each instruction should be clear and unambiguous.
✓ Finiteness: An algorithm should terminate after executing a finite number of steps.
✓ Effectiveness: Every instruction should be basic enough to be carried out, in principle, by a person using
only pen and paper.
✓ Feasible: Every instruction must be feasible, i.e., practically executable.
✓ Flexibility: It must be flexible enough to accommodate desired changes with minimal effort.
✓ Efficient: Efficiency is measured in terms of the time and space an algorithm requires. An algorithm
should take as little time and memory as possible while meeting the acceptable limits of development time.
✓ Independent: An algorithm must be language independent, meaning it should focus on the input and the
procedure required to derive the output rather than on any particular programming language.
Asymptotic Analysis
Given two algorithms for a task, how do we find out which one is better?

One way is to implement both algorithms, run the two programs on your computer for different inputs,
and see which one takes less time. There are several problems with this approach to analyzing
algorithms:
1) For some inputs, the first algorithm may perform better than the second, while for other inputs
the second performs better.
2) For some inputs, the first algorithm may perform better on one machine, while for other inputs
the second works better on another machine.
Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms. In Asymptotic
Analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the
actual running time). We calculate how the time (or space) taken by an algorithm increases with the
input size.
Asymptotic notations are the mathematical notations used to describe the running time of an
algorithm when the input tends towards a particular or limiting value.
Asymptotic Analysis contd.
Let us consider the search problem (searching for a given item) in a sorted array.
One way to search is Linear Search (order of growth is linear) and the other is Binary Search
(order of growth is logarithmic).
To understand how Asymptotic Analysis solves the above-mentioned problems in analyzing algorithms,
let us say we run Linear Search on a fast computer A and Binary Search on a slow computer B,
and we pick the constant values for the two computers so that they tell us exactly how long each
machine takes to perform the search, in seconds.
For small values of the input array size n, the fast computer may take less time. But after a certain
input size, Binary Search will definitely start taking less time than Linear Search, even though it is
being run on the slower machine, because the order of growth of Binary Search with respect to input
size is logarithmic while that of Linear Search is linear.
So the machine-dependent constants can always be ignored beyond a certain input size.
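
As a minimal sketch of the logarithmic alternative (in C, to match the linear search code later in this
deck; the function name binary_search and the test values are illustrative assumptions, not taken from
the slides):

#include <stdio.h>

/* Returns the index of x in the sorted array arr[0..n-1], or -1 if x is absent.
   Each iteration halves the remaining range, so the loop runs O(log n) times. */
int binary_search(int arr[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of (lo + hi) / 2 */
        if (arr[mid] == x)
            return mid;
        else if (arr[mid] < x)
            lo = mid + 1;              /* discard the left half */
        else
            hi = mid - 1;              /* discard the right half */
    }
    return -1;
}

int main()
{
    int arr[] = { 2, 5, 8, 12, 16, 23 };
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("index of 16: %d", binary_search(arr, n, 16)); /* prints 4 */
    return 0;
}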
Analysis of Algorithm
Analysis is the process of estimating the efficiency of an algorithm. There are two fundamental
parameters based on which we can analyze an algorithm:
➢ Space Complexity: The space complexity can be understood as the amount of space required by
an algorithm to run to completion.
➢ Time Complexity: Time complexity is a function of input size n that refers to the amount of time
needed by an algorithm to run to completion.
It is best to analyze every algorithm in terms of time, which relates to which one executes faster,
and memory, which relates to which one takes less memory.
So, the Design and Analysis of Algorithms is about how to design various algorithms and how to
analyze them. After designing and analyzing, choose the best algorithm, i.e., the one that takes the
least time and the least memory, and then implement it.

Here, the main focus is on time rather than space, because time is the more limiting parameter
in terms of hardware.
Memory, by contrast, is relatively flexible: we can increase it as and when required, for example
by simply adding a memory card.
Time Complexity
The time complexity of an algorithm measures how many steps the algorithm requires
to solve the given problem.
Generally, we make three types of analysis, which are as follows:
❑ Worst-case time complexity: For input size n, the worst-case time complexity
is defined as the maximum amount of time needed by an algorithm to
complete its execution. Thus, it is a function defined by the maximum
number of steps performed on an instance of input size n.
❑ Average-case time complexity: For input size n, the average-case time
complexity is defined as the average amount of time needed by an algorithm
to complete its execution. Thus, it is a function defined by the average
number of steps performed on an instance of input size n.
❑ Best-case time complexity: For input size n, the best-case time complexity
is defined as the minimum amount of time needed by an algorithm to complete
its execution. Thus, it is a function defined by the minimum number
of steps performed on an instance of input size n.
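
As a quick worked example (anticipating the linear search analysis later in this deck): for linear search
on an array of n elements, the worst case performs n comparisons (x absent or at the last position), the
best case performs 1 comparison (x at the first position), and, assuming every position is equally likely,
the average case performs about (n + 1)/2 comparisons.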
Big-O Notation (O-notation)
Big-O notation represents the upper bound of the running time of an algorithm.
Thus, it gives the worst-case complexity of an algorithm.

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

The above expression can be described as: a function f(n) belongs to the set O(g(n)) if there exists a
positive constant c such that f(n) lies between 0 and cg(n) for sufficiently large n.
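
A small worked example (not from the slides): f(n) = 3n + 2 belongs to O(n), since taking c = 4 and
n0 = 2 gives 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2.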
Omega Notation (Ω-notation)
Omega notation represents the lower bound of the running time of an algorithm.
Thus, it provides the best-case complexity of an algorithm.

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

The above expression can be described as: a function f(n) belongs to the set Ω(g(n)) if there exists a
positive constant c such that f(n) lies above cg(n) for sufficiently large n.
For any value of n, the minimum time required by the algorithm is given by Omega Ω(g(n)).
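
Continuing the worked example from the previous slide: f(n) = 3n + 2 also belongs to Ω(n), since taking
c = 3 and n0 = 1 gives 0 ≤ 3n ≤ 3n + 2 for all n ≥ 1.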
Theta Notation (Θ-notation)
Theta notation encloses the function from above and below. Since it represents both the upper and
the lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

The above expression can be described as: a function f(n) belongs to the set Θ(g(n)) if there exist
positive constants c1 and c2 such that it can be sandwiched between c1g(n) and c2g(n) for
sufficiently large n.
If a function f(n) lies anywhere between c1g(n) and c2g(n) for all n ≥ n0, then g(n) is said to be an
asymptotically tight bound for f(n).
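
Combining the two worked examples above: f(n) = 3n + 2 belongs to Θ(n), since 3n ≤ 3n + 2 ≤ 4n for
all n ≥ 2 (c1 = 3, c2 = 4, n0 = 2); here g(n) = n is an asymptotically tight bound for f(n).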
To analyze an algorithm
• For worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must
know the case that causes the maximum number of operations to be executed. For Linear Search, the
worst case happens when the element to be searched (x in the code below) is not present in the array.
When x is not present, the search() function compares it with all the elements of arr[] one by one.
Therefore, the worst-case time complexity of linear search would be Θ(n).
• For best-case analysis, we calculate a lower bound on the running time of an algorithm. We must
know the case that causes the minimum number of operations to be executed. In the linear search
problem, the best case occurs when x is present at the first location. So the time complexity in the
best case would be Θ(1).

#include <stdio.h>

int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver program to test the above function */
int main()
{
    int arr[] = { 1, 10, 30, 15 };
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    getchar();
    return 0;
}
To analyze an algorithm contd..
In average-case analysis, we take all possible inputs and calculate the computing time for each of
them, sum all the calculated values, and divide the sum by the total number of inputs. For the linear
search problem (using the same search() function shown on the previous slide), let us assume that
all cases are uniformly distributed, including the case of x not being present in the array. So we sum
all n+1 cases and divide the sum by (n+1). The following is the value of the average-case time
complexity:

Average Case Time = ( θ(1) + θ(2) + … + θ(n) + θ(n+1) ) / (n + 1)
                  = θ( (n + 1)(n + 2) / 2 ) / (n + 1)
                  = Θ(n)
Typical Complexities of an Algorithm
1. Constant Complexity: Complexity of O(1). The algorithm executes a constant number of steps, like
1, 5, 10, etc., for solving a given problem, regardless of the input size.

2. Logarithmic Complexity: Complexity of O(log N). The algorithm executes on the order of log(N) steps
to process N elements; the logarithm is usually taken to base 2.

3. Linear Complexity: Complexity of O(N). For example, if there are 500 elements, it will take about
500 steps. Basically, in linear complexity, the number of steps depends linearly on the number of
elements.

4. Quadratic Complexity: Complexity of O(N²). For input size N, the algorithm performs on the order
of N² operations on the N elements to solve a given problem.

5. Cubic Complexity: Complexity of O(N³). For input size N, it executes on the order of N³ steps on
the N elements to solve a given problem. For example, if there are 100 elements, it is going to
execute 1,000,000 steps.

6. Exponential Complexity: Complexity of O(2ᴺ), O(N!), etc. For N elements, the number of operations
executed grows exponentially with the input data size.
For example, if N = 10, then the exponential function 2ᴺ results in 1024.
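
To see how these growth rates compare, consider N = 16 (a small worked example, not from the slides):
O(1) gives 1 step, O(log N) gives 4 steps (log₂ 16 = 4), O(N) gives 16 steps, O(N²) gives 256,
O(N³) gives 4,096, and O(2ᴺ) gives 65,536.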
How to approximate the time taken by the Algorithm?
There are two types of algorithms:
➢ Iterative Algorithm
➢ Recursive Algorithm

For iterative programs, consider the following example:

#include <stdio.h>

void A(int n)
{
    int i, j;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            printf("Edward");
}

In this case, the outer loop runs n times, and for each of those iterations the inner loop also runs
n times. Thus, the time complexity is O(n²).
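
Counting the work directly: printf executes once for every (i, j) pair, i.e., n × n = n² times, so
T(n) = n² steps; for example, n = 3 gives 9 executions.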
Contd..
For recursive programs, consider the following recursive function:

int A(int n)
{
    if (n > 1)
        return A(n - 1);
    return n;   /* base case: n == 1 */
}

Counting one unit of work per call, its running time satisfies the recurrence
T(n) = 1 + T(n-1) …Eqn. (1)
Here we use the simple back-substitution method to solve this recurrence.
Step 1: Substitute n-1 in place of n in Eqn. (1):
T(n-1) = 1 + T(n-2) …Eqn. (2)
Step 2: Substitute n-2 in place of n in Eqn. (1):
T(n-2) = 1 + T(n-3) …Eqn. (3)
Step 3: Substitute Eqn. (2) in Eqn. (1):
T(n) = 1 + 1 + T(n-2) = 2 + T(n-2) …Eqn. (4)
Step 4: Substitute Eqn. (3) in Eqn. (4):
T(n) = 2 + 1 + T(n-3) = 3 + T(n-3) = …… = k + T(n-k) …Eqn. (5)
According to Eqn. (1), the algorithm keeps recursing as long as n > 1. Basically, n starts from a very
large number and decreases gradually. The recursion stops when n reaches 1, and such a terminating
condition is called the anchor condition, base condition, or stopping condition; there T(1) = 1.
Thus, for k = n-1, T(n) becomes:
Step 5: Substitute k = n-1 in Eqn. (5):
T(n) = (n-1) + T(n-(n-1)) = (n-1) + T(1) = n - 1 + 1
Hence, T(n) = n, i.e., O(n).
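
As a quick check of the result (a minimal sketch, not from the slides; the call counter calls is an
illustrative instrumentation added to the function above):

#include <stdio.h>

static int calls = 0;   /* counts one unit of work per invocation */

int A(int n)
{
    calls++;
    if (n > 1)
        return A(n - 1);
    return n;           /* base case: n == 1 */
}

int main()
{
    A(10);
    printf("calls for n = 10: %d", calls);  /* prints 10, matching T(n) = n */
    return 0;
}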
