
ALGORITHM ANALYSIS

What is an Algorithm?

An algorithm is a process or a set of rules required to perform calculations or some other problem-solving operations, especially by a computer. Formally, an algorithm is a finite set of instructions carried out in a specific order to perform a specific task. It is not the complete program or code; it is just the solution (logic) of a problem, which can be represented as an informal description, a flowchart, or pseudocode.

Characteristics of an Algorithm

o Input: An algorithm takes zero or more input values.
o Output: An algorithm produces one or more outputs when it finishes.
o Unambiguity: An algorithm should be unambiguous, meaning each of its instructions is clear and simple.
o Finiteness: An algorithm should be finite, i.e., it contains a limited, countable number of instructions and terminates.
o Effectiveness: Each instruction in an algorithm should be effective, i.e., basic enough to be carried out and to contribute to the overall process.
o Language independence: An algorithm must be language-independent, so that its instructions can be implemented in any programming language with the same output.

Dataflow of an Algorithm

o Problem: A problem can be a real-world problem, or any instance of one, for which we need to create a program or set of instructions. That set of instructions is known as an algorithm.
o Algorithm: An algorithm, a step-by-step procedure, is designed for the problem.
o Input: After designing the algorithm, the required inputs are provided to it.
o Processing unit: The input is given to the processing unit, which produces the desired output.
o Output: The output is the outcome or result of the program.
Why do we need Algorithms?

We need algorithms because of the following reasons:


o Scalability: Algorithms help us reason about scalability. When we have a big real-world problem, we break it down into small steps so that the problem becomes easy to analyze.
o Performance: Real-world problems are not always easy to break down. If a problem can be broken into smaller steps, it becomes feasible to solve with good performance.

Factors of an Algorithm

The following are the factors that we need to consider for designing an algorithm:

o Modularity: If we can break a given problem into small modules or steps, which is the essence of an algorithm, then the problem lends itself to a well-designed algorithm.
o Correctness: An algorithm is correct when the given inputs produce the desired output; this means the algorithm has been designed, and analyzed, correctly.
o Maintainability: The algorithm should be designed in a simple, structured way, so that when we later redefine it, no major changes are needed.
o Functionality: It covers the various logical steps needed to solve the real-world problem.
o Robustness: Robustness means how clearly an algorithm defines, and copes with, the problem at hand.
o User-friendliness: If the algorithm is not easy to follow, the designer will not be able to explain it to the programmer.
o Simplicity: If the algorithm is simple, it is easy to understand.
o Extensibility: If another algorithm designer or programmer wants to build on your algorithm, it should be extensible.

Importance of Algorithms

1. Theoretical importance: When any real-world problem is given to us, we break it into small modules; to perform this breakdown, we need to know the theoretical aspects of the problem.

2. Practical importance: Theory is incomplete without practical implementation. So the importance of an algorithm is both theoretical and practical.
Issues of Algorithms

The following are the issues that arise while designing an algorithm:

o How to design algorithms: An algorithm is a step-by-step procedure, so we must follow a systematic set of steps to design one.
o How to analyze algorithm efficiency.

Approaches of Algorithm

The following are the approaches used after considering both the theoretical and practical
importance of designing an algorithm:

o Brute force algorithm: The most general approach, in which a straightforward logic structure is applied to design the algorithm. It is also known as an exhaustive search algorithm, since it tries all the possibilities to find the required solution. Such algorithms are of two types:

1. Optimizing: Find all the solutions of a problem and then take the best one; if the value of the best solution is known in advance, the search can terminate as soon as that value is reached.

2. Satisficing: Stop as soon as an acceptable (good enough) solution is found.


o Divide and conquer: A widely used algorithm design strategy. It breaks the problem down into smaller subproblems, solves each of them (often recursively), and combines the valid outputs of the subproblems into a solution for the whole problem.
o Greedy algorithm: An algorithm paradigm that makes the locally optimal choice at each iteration in the hope of reaching the best overall solution (see the coin-change sketch after this list). It is easy to implement and has a fast execution time, but only in limited cases does it guarantee the optimal solution.
o Dynamic programming: It makes the algorithm more efficient by storing intermediate results. It follows five steps to find the optimal solution for the problem:

1. Break the problem down into subproblems.
2. Find the optimal solution of these subproblems.
3. Store the results of the subproblems; this is known as memoization.
4. Reuse the stored results so that the same subproblem is never recomputed.
5. Finally, combine them to compute the result of the complex problem.


o Branch and Bound Algorithm: Typically applied to combinatorial optimization problems such as integer programming. This approach divides the set of feasible solutions into smaller subsets, which are then evaluated (and pruned) to find the best solution.
o Randomized Algorithm: A regular algorithm has predefined input and required output; algorithms with a defined set of inputs, a required output, and fixed, described steps are known as deterministic algorithms. A randomized algorithm instead introduces some random bits and uses them together with the input to produce its output, so its behavior is random in nature. Randomized algorithms are often simpler and more efficient than their deterministic counterparts.
o Backtracking: An algorithmic technique that builds a solution recursively and abandons (backtracks from) a partial solution as soon as it fails to satisfy the constraints of the problem.
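To make the greedy idea above concrete, here is a minimal C sketch (our own illustrative example, not part of the original notes): greedy coin change for a canonical coin system, where taking the largest coin that fits at each step happens to give the optimal answer.

#include <stdio.h>

/* Greedy coin change: at each step take the largest coin that still fits.
   For canonical coin systems like {50, 20, 10, 5, 1} this is optimal;
   for arbitrary coin systems it may not be. */
void greedyChange(int amount)
{
    int coins[] = {50, 20, 10, 5, 1};
    int ncoins = sizeof coins / sizeof coins[0];

    for (int i = 0; i < ncoins; i++) {
        while (amount >= coins[i]) {   /* greedy choice: biggest coin first */
            printf("%d ", coins[i]);
            amount -= coins[i];
        }
    }
    printf("\n");
}

int main(void)
{
    greedyChange(87);   /* prints: 50 20 10 5 1 1 */
    return 0;
}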

The major categories of algorithms are given below:

o Sort: Algorithms developed for sorting items in a certain order.
o Search: Algorithms developed for searching for an item inside a data structure.
o Delete: Algorithms developed for deleting an existing element from a data structure.
o Insert: Algorithms developed for inserting an item into a data structure.
o Update: Algorithms developed for updating an existing element inside a data structure.

Algorithm Analysis

An algorithm can be analyzed at two levels: before creating it and after creating it. These are the two kinds of algorithm analysis:

o A priori analysis: The theoretical analysis of an algorithm, done before implementing it. Factors such as processor speed, which have no effect on the algorithm itself, are left out of this analysis.

o A posteriori analysis: The practical analysis of an algorithm, achieved by implementing it in some programming language. This analysis evaluates how much running time and space the algorithm actually takes.
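As a small illustration of a posteriori analysis, the sketch below (our own example, using the standard C clock() function; the summation loop being timed is arbitrary) measures the actual running time of a piece of code:

#include <stdio.h>
#include <time.h>

int main(void)
{
    long n = 100000000L;        /* input size for this experiment */
    volatile long sum = 0;      /* volatile keeps the loop from being optimized away */

    clock_t start = clock();    /* timestamp before the algorithm runs */
    for (long i = 1; i <= n; i++)
        sum += i;
    clock_t end = clock();      /* timestamp after it finishes */

    printf("n = %ld, time = %.3f s\n",
           n, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}

Running this for several values of n and comparing the timings is exactly the kind of machine-dependent measurement that a priori analysis avoids.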

Algorithm Complexity

The performance of an algorithm can be measured in terms of two factors:


o Time complexity: The time complexity of an algorithm is the amount of time required to complete its execution. It is denoted using big O notation, the asymptotic notation for representing time complexity, and is mainly calculated by counting the number of steps needed to finish the execution. Let's understand time complexity through an example.

sum = 0;
// Suppose we have to calculate the sum of n numbers.
for i = 1 to n
    sum = sum + i;
// When the loop ends, sum holds the sum of the n numbers.
return sum;

In the above code, the time complexity of the loop statement is proportional to n: as the value of n increases, the time taken increases with it. The return sum statement, by contrast, takes constant time, since it does not depend on the value of n and completes in one step. We generally consider the worst-case time complexity, as it is the maximum time taken for any given input size.
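The counting-of-steps idea can be made concrete by instrumenting the loop (a minimal sketch; the steps counter is ours, added only to show that the count grows linearly with n):

#include <stdio.h>

int main(void)
{
    int n = 1000;
    long sum = 0;
    long steps = 0;                 /* counts loop-body executions */

    for (int i = 1; i <= n; i++) {
        sum += i;
        steps++;                    /* one step per iteration */
    }
    /* steps == n, so the running time grows linearly: O(n) */
    printf("n = %d, sum = %ld, steps = %ld\n", n, sum, steps);
    return 0;
}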

o Space complexity: An algorithm's space complexity is the amount of space required to solve a problem and produce an output. Like time complexity, space complexity is expressed in big O notation. When an algorithm runs on a computer, it needs a certain amount of memory: a program requires memory to store its input data and temporary values while running. Space complexity therefore consists of auxiliary space plus input space.

For an algorithm, the space is required for the following purposes:


1. To store program instructions
2. To store constant values

3. To store variable values

4. To track the function calls, jumping statements, etc.


Auxiliary space: The extra space required by the algorithm, beyond the space taken by the input itself, is known as auxiliary space. Space complexity counts both: the auxiliary space and the space used by the input.
So,
Space complexity = Auxiliary space + Input size.
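To make the distinction concrete, compare the two sketches below (our own illustrative functions, not from the original notes): both compute the same sum, but the first needs O(n) auxiliary space while the second needs only O(1).

#include <stdio.h>
#include <stdlib.h>

/* O(n) auxiliary space: allocates a temporary copy of the input. */
long sumWithCopy(const int a[], int n)
{
    int *copy = malloc(n * sizeof *copy);   /* extra space grows with n */
    long sum = 0;
    for (int i = 0; i < n; i++)
        copy[i] = a[i];
    for (int i = 0; i < n; i++)
        sum += copy[i];
    free(copy);
    return sum;
}

/* O(1) auxiliary space: only a fixed number of scalar variables. */
long sumInPlace(const int a[], int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

int main(void)
{
    int a[] = {1, 2, 3, 4, 5};
    printf("%ld %ld\n", sumWithCopy(a, 5), sumInPlace(a, 5));
    return 0;
}

In both cases the input itself occupies O(n) space, so the total space complexity is O(n); the difference lies entirely in the auxiliary term.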
Recursion
A recursive algorithm is an algorithm which calls itself with "smaller (or simpler)" input values, and
which obtains the result for the current input by applying simple operations to the returned value for
the smaller (or simpler) input. More generally if a problem can be solved utilizing solutions to
smaller versions of the same problem, and the smaller versions reduce to easily solvable cases, then
one can use a recursive algorithm to solve that problem. For example, the elements of a recursively
defined set, or the value of a recursively defined function can be obtained by a recursive algorithm.

If a set or a function is defined recursively, then a recursive algorithm to compute its members or
values mirrors the definition. Initial steps of the recursive algorithm correspond to the basis clause of
the recursive definition and they identify the basis elements. They are then followed by steps
corresponding to the inductive clause, which reduce the computation for an element of one
generation to that of elements of the immediately preceding generation.

In general, recursive computer programs require more memory and computation compared with
iterative algorithms, but they are simpler and for many cases a natural way of thinking about the
problem.

Example1: Factorial Pseudo Code (Recursion Algorithm)

Fact(n)
Begin
    if n == 0 or n == 1 then
        Return 1;
    else
        Return n * Fact(n-1);
    endif
End

A C program to calculate the factorial of a number using Recursion:

#include <stdio.h>

int GEORGE(int n)
{
    if (n == 0)
        return 1;
    else
        return n * GEORGE(n - 1);
}

int main()
{
    int number, ans;
    printf("Enter a number: ");
    scanf("%d", &number);
    ans = GEORGE(number);
    printf("Factorial of %d is %d\n", number, ans);
    return 0;
}

By way of comparison, the same problem can be solved by an iterative algorithm.


Step 1: Start
Step 2: Declare variables n, factorial and i.
Step 3: Initialize variables
        factorial ← 1
        i ← 1
Step 4: Read the value of n
Step 5: Repeat the following steps while i ≤ n
        5.1: factorial ← factorial * i
        5.2: i ← i + 1
Step 6: Display factorial
Step 7: Stop
A C program to calculate the factorial of a number using the iterative method:

#include <stdio.h>

int main()
{
    int i, GEORGE = 1, number;
    printf("Enter a number: ");
    scanf("%d", &number);
    for (i = 1; i <= number; i++)
    {
        GEORGE = GEORGE * i;
    }
    printf("Factorial of %d is: %d", number, GEORGE);
    return 0;
}
Example2: Sum of Natural Numbers Using Recursion (recursive algorithm in the form of pseudocode)

findSum(n):
    IF n <= 1 THEN
        RETURN n
    ELSE
        RETURN n + findSum(n-1)
END FUNCTION

A C program to find the sum of the first n natural numbers using Recursion:
#include <stdio.h>

int GEORGE(int n);

int main()
{
    int num;
    printf("Enter a positive integer: ");
    scanf("%d", &num);
    printf("Sum = %d", GEORGE(num));
    return 0;
}

int GEORGE(int n)
{
    if (n != 0)
        return n + GEORGE(n - 1);
    else
        return n;
}
By way of comparison, the same problem can be solved by an iterative algorithm.

int i, sum = 0, num
input a positive number num
i = 0
do
    sum = sum + i
    i = i + 1
while i <= num
display the sum of the first num natural numbers

A C program to find the sum of the first n natural numbers using the iterative method:

#include <stdio.h>

int main()
{
    int num, i, sum = 0;                 // declare local variables
    printf("Enter a positive number: ");
    scanf("%d", &num);                   // read any positive number
    for (i = 0; i <= num; i++)           // executes while the condition remains true
    {
        sum = sum + i;                   // at each iteration the value of i is added to sum
    }
    // display the sum of the natural numbers
    printf("\nSum of the first %d numbers is: %d", num, sum);
    return 0;
}
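For comparison, this particular problem also has a closed form, sum = n(n+1)/2, which removes the loop altogether (a sketch we added; it is not part of the original notes):

#include <stdio.h>

int main(void)
{
    int num;
    printf("Enter a positive number: ");
    scanf("%d", &num);
    /* Gauss's formula: 1 + 2 + ... + num = num * (num + 1) / 2,
       computed in O(1) time instead of O(n). */
    long sum = (long)num * (num + 1) / 2;
    printf("Sum of the first %d numbers is: %ld\n", num, sum);
    return 0;
}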



Asymptotic analysis
These are the mathematical notations that are used for the asymptotic analysis of the algorithms. The
term 'asymptotic' describes an expression where a variable exists whose value tends to infinity. In
short, it is a method that describes the limiting behavior of an expression. Thus, using asymptotic
notations, we analyze the complexities of an algorithm and its performance. Using the asymptotic
notations, we determine and show the complexities after analyzing it. Therefore, there are three types
of asymptotic notations through which we can analyze the complexities of the algorithms:
Asymptotic comparison operator        Numeric comparison operator

Our algorithm is o( something )       A number is < something
Our algorithm is O( something )       A number is ≤ something
Our algorithm is Θ( something )       A number is = something
Our algorithm is Ω( something )       A number is ≥ something
Our algorithm is ω( something )       A number is > something

Usually, the time required by an algorithm falls under one of three cases:

Worst case: the input for which the algorithm takes the longest time (Big O notation).

Best case: the input for which the algorithm takes the least time (Omega notation).

Average case: the time taken on a typical input, averaged over all inputs (Theta notation).
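A linear search makes the three cases concrete (a minimal sketch of our own; the array and keys are arbitrary):

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if it is absent.
   Best case:  key at a[0]           -> 1 comparison   (Omega(1))
   Worst case: key absent or at end  -> n comparisons  (O(n))
   Average:    about n/2 comparisons for a random position (Theta(n)) */
int linearSearch(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    printf("%d\n", linearSearch(a, 5, 9));   /* prints 2  */
    printf("%d\n", linearSearch(a, 5, 4));   /* prints -1 */
    return 0;
}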



Asymptotic Notations

The asymptotic notations commonly used for expressing the running-time complexity of an algorithm are given below:

o Big O Notation (O): It represents the upper bound of the runtime of an algorithm. Big O notation's role is to bound the longest time an algorithm can take to execute, i.e., it is used for calculating the worst-case time complexity of an algorithm.
o Omega Notation (Ω): It represents the lower bound of the runtime of an algorithm. It is used for the shortest time an algorithm can take to complete its execution, i.e., it is used for measuring the best-case time complexity of an algorithm.
o Theta Notation (Θ): It carries the characteristics of both Big O and Omega notations, as it represents both a lower and an upper bound on the runtime of an algorithm.

These three are the most used asymptotic notations; they are applied to the common growth rates, such as linear, logarithmic, cubic, and many more.



Big O Notation in Data Structures

Big O Notation is a mathematical notation named after the term "order of the function", meaning
growth of functions. It is also called Landau's Symbol and belongs to the Asymptotic Notations group.

Asymptotic analysis is the study of how an algorithm's performance changes as the order of the input size changes. We employ asymptotic notation to bound the growth of a running time to within constant factors. The amount of time, storage, and other resources required to perform an algorithm determines its efficiency, and asymptotic notations are used to express that efficiency. An algorithm's performance may vary for different types of inputs, and it will change as the input size grows larger.

Asymptotic notations describe how long an algorithm takes to execute as the input tends towards a certain, limiting value. When the input array is already sorted, for example, the time taken by a sorting method may be linear: the best-case scenario. When the input array is in reverse order, the method may take the longest (quadratic) time to sort the items: the worst-case scenario. When the input array is neither sorted nor in reverse order, it takes average time. Asymptotic notations are used to represent all of these durations.

Big O notation classifies functions based on their growth rates: several functions with the same growth rate can be represented using the same O notation. The symbol O is used because a function's growth rate is also known as the order of the function. A big O description of a function generally offers only an upper bound on the function's growth rate.

It is convenient to have a form of asymptotic notation that means "the running time grows at most this much, but it could grow more slowly." We use big-O notation for just such occasions.

Advantages of Big O Notation

o Asymptotic analysis is very useful for examining an algorithm's efficiency independently of particular run-time inputs. If we instead measured manually, passing test cases with various inputs, the observed performance would vary as the algorithm's input changes.
o An algorithm's measured performance also varies when it is executed on different computers. A mathematical representation therefore gives a clear understanding of the upper and lower bounds of an algorithm's run-time, and lets us pick an algorithm whose performance does not degrade much as the number of inputs increases.



Examples

Now let us take a deeper look at Big O notation through various examples:

O(1):

void constantTimeComplexity(int arr[])
{
    printf("First element of array = %d", arr[0]);
}

This function runs in O(1) time (or "constant time") relative to its input. The input array could be 1
item or 1,000 items, but this function would still just require one step.

O(n):

void linearTimeComplexity(int arr[], int size)
{
    for (int i = 0; i < size; i++)
    {
        printf("%d\n", arr[i]);
    }
}

This function runs in O(n) time (or "linear time"), where n is the number of items in the array. If the
array has 10 items, we have to print 10 times. If it has 1000 items, we have to print 1000 times.

O(n^2):

void quadraticTimeComplexity(int arr[], int size)
{
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
        {
            printf("%d = %d\n", arr[i], arr[j]);
        }
    }
}

Here we're nesting two loops. If our array has n items, our outer loop runs n times, and our inner loop runs n times for each iteration of the outer loop, giving us n^2 total prints. If the array has 10 items, we have to print 100 times. If it has 1000 items, we have to print 1,000,000 times. Thus this function runs in O(n^2) time (or "quadratic time").

O(2^n):

int fibonacci(int num)
{
    if (num <= 1) return num;
    return fibonacci(num - 2) + fibonacci(num - 1);
}

An example of an O(2^n) function is the recursive calculation of Fibonacci numbers. O(2^n) denotes
an algorithm whose growth doubles with each addition to the input data set. The growth curve of an
O(2^n) function is exponential - starting off very shallow, then rising meteorically.
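For contrast, storing each computed value, the memoization idea from the dynamic programming section above, brings this down to O(n). A minimal sketch (our own addition, not part of the original notes):

#include <stdio.h>

long long memo[91];   /* global array is zero-initialized; 0 means "not computed" */

/* Same recursion as above, but each value is computed only once and
   cached, so the running time drops from O(2^n) to O(n). */
long long fibonacciMemo(int num)
{
    if (num <= 1) return num;
    if (memo[num] == 0)
        memo[num] = fibonacciMemo(num - 2) + fibonacciMemo(num - 1);
    return memo[num];
}

int main(void)
{
    printf("%lld\n", fibonacciMemo(50));   /* 12586269025, computed instantly */
    return 0;
}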

Running time calculation.


Time complexity is defined in terms of how many elementary steps it takes to run a given algorithm, as a function of the length of the input. It is not a measurement of wall-clock time, because that also depends on factors such as programming language, operating system, and processing power.

Time complexity is a type of computational complexity that describes the time required to execute an algorithm. It accounts for the time taken by each statement to complete; as a result, it is highly dependent on the size of the processed data. It also helps in defining an algorithm's effectiveness and evaluating its performance.

Q. Imagine a classroom of 100 students in which you gave your pen to one person. You have to find that pen without knowing to whom you gave it.

Here are some ways to find the pen, and the corresponding O order:

• O(n^2): You go and ask the first person in the class if he has the pen, and you also ask this person about each of the other 99 people in the classroom, and so on for everyone. This is what we call O(n^2).
• O(n): Going and asking each student individually is O(n).
• O(log n): Now you divide the class into two groups, then ask: "Is it on the left side, or the right side of the classroom?" Then you take that group, divide it into two, and ask again, and so on, repeating the process until you are left with the one student who has your pen. This is what is meant by O(log n).

I might need:

• The O(n^2) search if only one student knows on which student the pen is hidden.
• The O(n) search if one student had the pen and only they knew it.



• The O(log n) search if all the students knew, but would only tell me if I guessed the right side.

The O above is called Big-Oh, which is an asymptotic notation.
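The halving strategy in the classroom is exactly binary search over a sorted collection. A minimal C sketch (our own example, assuming a sorted array):

#include <stdio.h>

/* Binary search: halve the search range at each step, so at most
   about log2(n) comparisons are needed -> O(log n). */
int binarySearch(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* key can only be in the right half */
        else
            hi = mid - 1;               /* key can only be in the left half */
    }
    return -1;                          /* not found */
}

int main(void)
{
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binarySearch(a, 6, 7));   /* prints 3 */
    return 0;
}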

Complexity Analysis of Algorithm.

The term algorithm complexity measures how many steps are required by the algorithm to solve the given problem. It evaluates the order of the count of operations executed by an algorithm as a function of the input data size.

To assess the complexity, the order (approximation) of the count of operations is always considered instead of counting the exact steps.

O(f) notation represents the complexity of an algorithm; it is also termed asymptotic notation or "Big O" notation. Here f corresponds to a function of the size of the input data. The asymptotic complexity O(f) describes the order in which resources such as CPU time, memory, etc. are consumed by the algorithm, articulated as a function of the size of the input data.

The complexity can take any form, such as constant, logarithmic, linear, n*log(n), quadratic, cubic, exponential, etc. It is nothing but the order (constant, logarithmic, linear, and so on) of the number of steps needed to complete a particular algorithm. To be more concise, we often refer to an algorithm's complexity as its "running time".

Typical Complexities of an Algorithm

o Constant Complexity:
It imposes a complexity of O(1). The algorithm executes a constant number of steps, like 1, 5, or 10, to solve a given problem; the count of operations is independent of the input data size.
o Logarithmic Complexity:
It imposes a complexity of O(log(N)): the algorithm executes on the order of log(N) steps, where the logarithm base is usually taken as 2. For N = 1,000,000, an algorithm with O(log(N)) complexity would take about 20 steps (up to a constant factor). Because the base of the logarithm changes the operation count only by a constant factor, it is usually omitted. (A halving loop that behaves this way is sketched at the end of this section.)
o Linear Complexity:
It imposes a complexity of O(N): the algorithm takes a number of steps proportional to the total number of elements N. For example, if there exist 500 elements, then it will take about 500 steps. More precisely, the number of steps for N elements may be N/2 or 3*N; both are still linear in N.
o N*log(N) Complexity:
It imposes a run time of O(N*log(N)): the algorithm executes on the order of N*log(N) steps on N elements to solve the given problem. For 1000 elements, such an algorithm will execute about 10,000 steps.
o Quadratic Complexity: It imposes a complexity of O(N^2). For input data size N, it undergoes on the order of N^2 operations to solve a given problem.
If N = 100, it will endure about 10,000 steps. In other words, whenever the order of operations has a quadratic relation with the input data size, the result is quadratic complexity. For example, for N elements the steps may be on the order of 3*N^2/2.
o Cubic Complexity: It imposes a complexity of O(N^3). For input data size N, it executes on the order of N^3 steps on N elements to solve a given problem.
For example, if there exist 100 elements, it is going to execute about 1,000,000 steps.
o Exponential Complexity: It imposes complexities such as O(2^N) or O(N!). For N elements, it executes a number of operations that grows exponentially with the input data size.
For example, if N = 10, the exponential function 2^N results in 1024. Similarly, if N = 20, it results in 1,048,576, and if N = 100, it results in a number of about 30 digits. The factorial function N! grows even faster: N = 5 gives 120, and N = 10 gives 3,628,800, and so on.

Since constants do not significantly affect the order of the operation count, it is better to ignore them. Thus, algorithms that perform N, N/2, or 3*N operations on the same number of elements are all considered linear and roughly equally efficient.
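As promised in the logarithmic bullet above, here is a loop whose step count is logarithmic (a sketch we added): halving the range each iteration gives about log2(N) steps, roughly 20 for N = 1,000,000.

#include <stdio.h>

int main(void)
{
    long n = 1000000;   /* N = 1,000,000 */
    int steps = 0;

    /* The loop variable is halved each time, so the body runs
       about log2(N) times. */
    for (long i = n; i >= 1; i /= 2)
        steps++;

    printf("N = %ld, steps = %d\n", n, steps);   /* prints steps = 20 */
    return 0;
}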

How to approximate the time taken by the Algorithm?

So, to find this out, we must first understand the types of algorithms we have. There are two types of algorithms:



1. Iterative Algorithm: In the iterative approach, the function repeatedly runs until the
condition is met or it fails. It involves the looping construct.

2. Recursive Algorithm: In the recursive approach, the function calls itself until the condition
is met. It integrates the branching structure.

However, it is worth noting that any program that is written in iteration could be written as recursion.
Likewise, a recursive program can be converted to iteration, making both of these algorithms
equivalent to each other.

But to analyze an iterative program, we count the number of times the loop executes, whereas for a recursive program we use recurrence equations, i.e., we write F(n) in terms of F on smaller inputs, such as F(n/2) or F(n-1).

Suppose the program is neither iterative nor recursive. In that case, it can be concluded that there is
no dependency of the running time on the input data size, i.e., whatever is the input size, the running
time is going to be a constant value. Thus, for such programs, the complexity will be O(1).

For Iterative Programs

Consider the following programs, which are written in simple English and do not correspond to any particular language syntax.

Example1:

In the first example, we have an integer i and a for loop running from i equals 1 to n. Now the question arises: how many times does the name "Edward" get printed?

A()
{
    int i;
    for (i = 1 to n)
        printf("Edward");
}

Since i runs from 1 to n, the above program prints "Edward" n times. Thus, the complexity is O(n).

Example2:



A()
{
    int i, j;
    for (i = 1 to n)
        for (j = 1 to n)
            printf("Edward");
}

In this case, the outer loop runs n times, and for each of those n iterations the inner loop also runs n times. Thus, the time complexity is O(n^2).

Example3:

A()
{
    i = 1; S = 1;
    while (S <= n)
    {
        i++;
        S = S + i;
        printf("Edward");
    }
}

As we can see from the above example, we have two variables, i and S, and the condition while (S <= n): S starts at 1, and the entire loop stops as soon as the value of S becomes greater than n.

Here i increments in steps of one, so the growth of i is linear, while S increments by the current value of i, so the growth of S depends on i.

Initially:            i = 1, S = 1
After 1st iteration:  i = 2, S = 3
After 2nd iteration:  i = 3, S = 6
After 3rd iteration:  i = 4, S = 10

… and so on.

Since we don't know the value of n, suppose the loop runs k times. Notice that the value of S keeps increasing: for i=1, S=1; i=2, S=3; i=3, S=6; i=4, S=10; …

This is nothing but the series of sums of the first i natural numbers, i.e., by the time i reaches k, the value of S is k(k+1)/2.

For the loop to stop, k(k+1)/2 has to be greater than n. Solving k(k+1)/2 > n gives k on the order of √(2n), i.e., k = O(√n). Hence, it can be concluded that the complexity in this case is O(√n).
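This O(√n) result is easy to check empirically (a sketch we added, counting the loop iterations for a few values of n):

#include <stdio.h>

int main(void)
{
    long ns[] = {100, 10000, 1000000};

    for (int k = 0; k < 3; k++) {
        long n = ns[k];
        long i = 1, S = 1, steps = 0;
        while (S <= n) {        /* same loop as Example 3 above */
            i++;
            S = S + i;
            steps++;
        }
        /* steps grows like sqrt(2n): 13, 140, and 1413 here */
        printf("n = %7ld -> steps = %ld\n", n, steps);
    }
    return 0;
}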

For Recursive Program

Consider the following recursive programs.

Example1:

A(n)
{
    if (n > 1)
        return A(n-1);
}

Solution:

Here we will see the simple Back Substitution method to solve the above problem.

T(n) = 1 + T(n-1) …Eqn. (1)

Step1: Substitute n-1 at the place of n in Eqn. (1)

T(n-1) = 1 + T(n-2) ...Eqn. (2)

Step2: Substitute n-2 at the place of n in Eqn. (1)

T(n-2) = 1 + T(n-3) …Eqn. (3)

Step3: Substitute Eqn. (2) in Eqn. (1)



T(n) = 1 + 1 + T(n-2) = 2 + T(n-2) …Eqn. (4)

Step4: Substitute eqn. (3) in Eqn. (4)

T(n) = 2 + 1 + T(n-3) = 3 + T(n-3) = … = k + T(n-k) …Eqn. (5)

Now, according to Eqn. (1), i.e. T(n) = 1 + T(n-1), the recursion runs as long as n > 1. Basically, n starts from a large value and decreases gradually; when n reaches 1, the algorithm stops. Such a terminating condition is called the anchor condition, base condition, or stopping condition.

Thus, for k = n-1, T(n) becomes:

Step5: Substitute k = n-1 in Eqn. (5)

T(n) = (n-1) + T(n-(n-1)) = (n-1) + T(1) = n - 1 + 1 = n

Hence, T(n) = n, i.e., O(n).
