
What is recursion?

In simple words, recursion is a problem-solving technique, and in some cases a programming
technique, with a special and distinctive property: a function or method can call itself to solve
a problem. Recursion solves a problem by breaking it down into smaller instances of itself.

A function can call itself either directly or indirectly. This difference gives rise to the
different types of recursion, which we will discuss a little later. Problems that can be solved
using recursion include depth-first search (DFS) of a graph, the Towers of Hanoi, and various
types of tree traversals, among others.

Recursion can make code more compact by having a function call itself instead of spelling out
repetition. This is somewhat similar to looping, although there are important differences;
recursion often makes programs simpler and faster to write.

Recursion also reduces code redundancy, making code easier to read and maintain.
Understanding recursion well is a step toward mastery of functional programming languages
and lets programmers write certain programs rapidly. Some problems are inherently recursive
in nature, so knowing how to write recursive functions helps solve them faster.

Recursion is also essential in many problems involving advanced algorithms, such as tree and
graph traversal. Learning the applications of recursion in data structures, and how to use it
effectively, is key to a good coding experience.

Five Main Recursion Methods

There are five main recursion methods programmers can use in functional programming:

 Tail Recursion: Although linear recursion is the most commonly used method, tail recursion
can be more efficient. In tail recursion, the recursive call is the last operation the function
performs, so nothing remains to be done after it returns.
 Binary Recursion: In binary recursion, a function calls itself twice in each invocation
instead of once. This kind of recursion is used in operations such as merging and tree traversal.
 Linear Recursion: This is the most common recursion method. A function makes a single call
to itself during execution and terminates via a termination (base) condition.
 Mutual Recursion: Here, two or more functions call each other recursively. This method is
especially useful in programming languages that restrict a function from calling itself directly,
where mutual recursion can act as a substitute. In mutual recursion, base conditions may be
applied to one, some, or all of the functions.
 Nested Recursion: This is an exceptional case that, in general, cannot be converted into an
iterative form. A recursive function passes a recursive call as its own parameter, which
fundamentally means recursion inside recursion.

Types of Recursion

The five methods of recursion fall under these two main types of recursion – direct and indirect
recursion. Let us learn what they are and understand how to implement them.

Direct Recursion

In direct recursion, a function calls itself: the recursive call is made by the function from
inside its own body. Let us see how to implement direct recursion to find the square of a
number.

#include <iostream>
using namespace std;

// computes x squared using the identity x^2 = (x-1)^2 + 2x - 1
int square(int x)
{
    // base case
    if (x == 0)
        return x;
    // recursive case
    else
        return square(x - 1) + (2 * x) - 1;
}

int main() {
    // implementation of the square function
    int input = 3;
    cout << input << "^2 = " << square(input);
    return 0;
}

The output would display this:

3^2 = 9

Indirect Recursion

Indirect recursion is where a function calls another function, which in turn calls the original
function. The recursive call is thus made in two steps: functions call other functions to
produce the recursion. Mutual recursion is a form of indirect recursion.

Let us see how to implement indirect recursion to print all the numbers from 15 to 27.

#include <iostream>
using namespace std;

int n = 15;

// declaring functions
void foo1(void);
void foo2(void);

// defining recursive functions
void foo1()
{
    if (n <= 27)
    {
        cout << n << " "; // prints n
        n++;              // increments n by 1
        foo2();           // calls foo2()
    }
    else
        return;
}

void foo2()
{
    if (n <= 27)
    {
        cout << n << " "; // prints n
        n++;              // increments n by 1
        foo1();           // calls foo1()
    }
    else
        return;
}

int main() {
    foo1();
    return 0;
}

The output would display this:

15 16 17 18 19 20 21 22 23 24 25 26 27
1.2 Algorithm

An algorithm is a finite set of instructions or logic, written in order, to accomplish a certain
predefined task. An algorithm is not the complete code or program; it is just the core logic
(solution) of a problem, which can be expressed either as an informal high-level description,
as pseudocode, or as a flowchart.

An algorithm is said to be efficient and fast if it takes less time to execute and consumes less
memory space. The performance of an algorithm is measured on the basis of the following
properties:

1. Time Complexity

2. Space Complexity

1.2.1 Space Complexity

It's the amount of memory space required by the algorithm during the course of its execution.
Space complexity must be taken seriously for multi-user systems and in situations where
limited memory is available.

An algorithm generally requires space for the following components:

· Instruction Space: the space required to store the executable version of the program. It is
fixed for a given program, but varies with the number of lines of code.

· Data Space: the space required to store the values of all constants and variables.

· Environment Space: the space required to store the environment information needed to
resume a suspended function.

1.2.2 Time Complexity

Time complexity is a way to represent the amount of time needed by the program to run to
completion. We will study this in detail in the next section.

Time Complexity of Algorithms

The time complexity of an algorithm signifies the total time required by the program to run to
completion. It is most commonly expressed using big O notation.

Time complexity is most commonly estimated by counting the number of elementary operations
performed by the algorithm. Since an algorithm's performance may vary with different types of
input data, we usually use the worst-case time complexity, because that is the maximum time
taken over all inputs of a given size.

Calculating Time Complexity

Now let us move to the next big topic: how to calculate time complexity. It can be confusing
at times, but we will try to explain it in the simplest way.

The most common metric for calculating time complexity is big O notation. It removes all
constant factors so that the running time can be estimated in relation to N as N approaches
infinity. In general, you can think of it like this:

Statement;

Above we have a single statement. Its time complexity is constant: the running time of the
statement does not change in relation to N.

for(i=0; i < N; i++)

statement;

The time complexity of the above loop is linear: the running time is directly proportional to N.
When N doubles, so does the running time.

for(i=0; i < N; i++)

    for(j=0; j < N; j++)

        statement;

This time, the time complexity of the above code is quadratic: the running time of the two
nested loops is proportional to the square of N. When N doubles, the running time increases
fourfold.

while (low <= high)
{
    mid = (low + high) / 2;

    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break; // target found at index mid
}

This is binary search: it repeatedly halves a sorted set of numbers to locate a particular value
(we will study it in detail later). This algorithm has logarithmic time complexity. The running
time is proportional to the number of times N can be divided by 2 (N is high - low here),
because the algorithm halves the working area with each iteration.

void quicksort(int list[], int left, int right)
{
    // base case: sublists of size 0 or 1 are already sorted
    if (left < right)
    {
        int pivot = partition(list, left, right);

        quicksort(list, left, pivot - 1);
        quicksort(list, pivot + 1, right);
    }
}

Taking the previous algorithm forward, above we have the core logic of quicksort (we will
study it in detail later). Quicksort divides the list into halves each time, but across all
recursion levels it still processes all N elements (where N is the size of the list). Hence the
time complexity is N * log(N): linear work repeated over logarithmically many levels, a
combination of linear and logarithmic.

NOTE : In general, doing something with every item in one dimension is linear, doing
something with every item in two dimensions is quadratic, and dividing the working area in half
is logarithmic.

Asymptotic analysis of an algorithm refers to defining the mathematical bounds of its run-time
performance. Using asymptotic analysis, we can conclude the best-case, average-case, and
worst-case scenarios of an algorithm.

Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded
to work in constant time. Other than the input, all other factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical
units of computation. For example, the running time of one operation may be computed as f(n)
and that of another as g(n²). This means the running time of the first operation increases
linearly with n, while the running time of the second increases quadratically as n grows.
Similarly, the running times of both operations will be nearly the same if n is significantly
small.

Usually, the time required by an algorithm falls under three types −

· Best Case − Minimum time required for program execution.

· Average Case − Average time required for program execution.

· Worst Case − Maximum time required for program execution.

Asymptotic Notations

Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.

 Ο Notation
 Ω Notation
 θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running
time. It measures the worst-case time complexity, or the longest amount of time an algorithm
can possibly take to complete.

For a function f(n):

f(n) = Ο(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n > n0.

Omega Notation, Ω

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running
time. It measures the best-case time complexity, or the minimum amount of time an algorithm
can possibly take to complete.

For a function f(n):

f(n) = Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n > n0.

Theta Notation, θ

The notation θ(n) is the formal way to express both the lower bound and the upper bound of
an algorithm's running time. It is defined as follows:

f(n) = θ(g(n)) if and only if f(n) = Ο(g(n)) and f(n) = Ω(g(n)).

Common Asymptotic Notations

Following is a list of some common asymptotic notations:

constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
n log n − Ο(n log n)
quadratic − Ο(n²)
cubic − Ο(n³)
polynomial − n^Ο(1)
exponential − 2^Ο(n)