
Unit - 1

Introduction to Data Structures


1.1 Introduction: Concept and Need of Data Structure, Definition, Abstract Data Type
1.2 Types of Data Structures: (i) Linear Data Structures (ii) Non-Linear Data Structures
1.3 Operations on Data Structures: (i) Traversing (ii) Insertion (iii) Deletion

Previous Year Questions

1. Define the terms: linear data structure and non-linear data structure.
2. Write any four operations that can be performed on a data structure.
3. Write any four operations performed on the data structure
4. Explain linear and non-linear data structures.
5. Define Abstract data type.
6. Define complexity and classify it.
7. Write any four operations that can be performed on the data structure.
8. Give classification of data structure.
9. List any four operations on the data structure
10. Differentiate between linear and non-linear data structures on any two parameters.
11. Define the term algorithm.
12. Write C program for performing following operations on array : insertion, display.
13. Write ‘C’ program for deletion of an element from an array.
14. Implement a C program to insert an element in an array.

What is Data Structure?


The name "data structure" indicates how data is organized in memory. There are many ways of
organizing data in memory; we have already seen one such data structure, the array in C.
The array is a collection of memory elements in which data is stored sequentially, i.e., one after
another, in contiguous locations. This organization of data is achieved by the array data
structure. There are also other ways to organize data in memory.
Let's see the different types of data structures.
A data structure is not tied to any programming language like C, C++, Java, etc. It is a set of
techniques that we can use in any programming language to structure data in memory.
Many schemes have been proposed for structuring data in memory, and these schemes are
described abstractly as Abstract Data Types. An Abstract Data Type is a set of rules for
organizing and operating on data.
Abstract Data Type
As per the National Institute of Standards and Technology (NIST), a data structure is an
arrangement of information, generally in the memory, for better algorithm efficiency. Data
Structures include linked lists, stacks, queues, trees, and dictionaries. They could also be a
theoretical entity, like the name and address of a person.

From the definition mentioned above, we can conclude that the operations in data structure
include:

1. High-level abstractions such as adding or deleting an item from a list.
2. Searching for and sorting items in a list.
3. Accessing the highest-priority item in a list.

A data structure specified in terms of such operations is known as an Abstract Data Type (ADT).

We can define an ADT as a set of data elements together with the operations on that data. The
term "abstract" refers to the fact that the data and the fundamental operations defined on it are
studied independently of their implementation. It describes what we can do with the data, not
how we do it.

An ADT implementation consists of a storage structure to hold the data elements and algorithms
for the fundamental operations. All the data structures, like an array, linked list, queue,
stack, etc., are examples of ADTs.

Understanding the Advantages of using ADTs


In the real world, programs evolve as new constraints or requirements appear, so modifying a
program generally requires changing one or more of its data structures. For example, suppose we
want to insert a new field into an employee's record to keep track of more details about each
employee, or to improve the program's efficiency by replacing an Array with a linked structure.
In such situations, rewriting every procedure that uses the modified structure is undesirable.
Hence, a better alternative is to separate the use of a data structure from its implementation
details. This is the principle behind the usage of Abstract Data Types (ADT).

Some Applications of Data Structures


The following are some applications of Data Structures:

1. Data Structures help in the organization of data in a computer's memory.


2. Data Structures also help in representing the information in databases.
3. Data Structures allow the implementation of algorithms to search through data (For
example, search engines).
4. We can use the Data Structures to implement the algorithms to manipulate data (For
example, word processors).
5. We can also implement the algorithms to analyse data using Data Structures (For
example, data miners).
6. Data Structures support algorithms to generate the data (For example, a random number
generator).
7. Data Structures also support algorithms to compress and decompress the data (For
example, a zip utility).
8. We can also use Data Structures to implement algorithms to encrypt and decrypt the data
(For example, a security system).
9. With the help of Data Structures, we can build software that can manage files and
directories (For example, a file manager).
10. We can also develop software that can render graphics using Data Structures. (For
example, a web browser or 3D rendering software).

Types of Data Structures


There are two types of data structures:

❖ Primitive data structure


❖ Non-primitive data structure
Primitive Data structure

The primitive data structures are the primitive data types: int, char, float, double, and
pointer. Each of these can hold a single value.

Non-Primitive Data structure

The non-primitive data structure is divided into two types:

○ Linear data structure


○ Non-linear data structure

Linear Data Structure

The arrangement of data in a sequential manner is known as a linear data structure. The data
structures used for this purpose are Arrays, Linked Lists, Stacks, and Queues. In these data
structures, each element is connected to at most one predecessor and one successor, forming a
linear sequence.

When one element can be connected to any number of other elements, the structure is known as a
non-linear data structure. The best examples are trees and graphs. In this case, the elements
are arranged hierarchically or as a network rather than sequentially.

We will briefly discuss the above data structures in the coming topics. Now, we will see the
common operations that we can perform on these data structures.

Data structures can also be classified as:

● Static data structure: It is a type of data structure where the size is allocated at the
compile time. Therefore, the maximum size is fixed.
● Dynamic data structure: It is a type of data structure where the size is allocated at the
run time. Therefore, the maximum size is flexible.

Basic Operations of Data Structures


In the following section, we will discuss the different types of operations that we can perform to
manipulate data in every data structure:

1. Traversal: Traversing a data structure means accessing each data element exactly once
so that it can be processed. For example, traversing is required while printing the names of
all the employees in a department.
2. Search: Search is another data structure operation which means to find the location of
one or more data elements that meet certain constraints. Such a data element may or may
not be present in the given set of data elements. For example, we can use the search
operation to find the names of all the employees who have the experience of more than 5
years.
3. Insertion: Insertion means inserting or adding new data elements to the collection. For
example, we can use the insertion operation to add the details of a new employee the
company has recently hired.
4. Deletion: Deletion means to remove or delete a specific data element from the given list
of data elements. For example, we can use the deleting operation to delete the name of an
employee who has left the job.
5. Sorting: Sorting means to arrange the data elements in either Ascending or Descending
order depending on the type of application. For example, we can use the sorting operation
to arrange the names of employees in a department in alphabetical order or estimate the
top three performers of the month by arranging the performance of the employees in
descending order and extracting the details of the top three.
6. Merge: Merge means to combine data elements of two sorted lists in order to form a
single list of sorted data elements.
7. Create: Create is an operation used to reserve memory for the data elements of the
program. We can perform this operation using a declaration statement. The creation of
data structure can take place either during the following:

1. Compile-time
2. Run-time
For example, the malloc() function is used in C Language to create data structure.

8. Selection: Selection means selecting a particular data from the available data. We can
select any particular data by specifying conditions inside the loop.
9. Update: The Update operation allows us to update or modify the data in the data
structure. We can also update any particular data by specifying some conditions inside the
loop, like the Selection operation.
10. Splitting: The Splitting operation allows us to divide data into various subparts
decreasing the overall process completion time.

Types of Linear Data Structures


The following is the list of Linear Data Structures that we generally use:

1. Arrays

An Array is a data structure used to collect multiple data elements of the same data type under
one variable. Instead of storing multiple values of the same data type in separately named
variables, we can store all of them together in one variable. This doesn't mean we must combine
all values of a given data type in a program into one array; but there will often be times when
some specific variables of the same data type are related to one another in a way that makes an
array appropriate.

An Array is a list of elements where each element has a unique place in the list. The data
elements of the array share the same variable name; however, each carries a different index
number called a subscript. We can access any data element from the list with the help of its
location in the list. Thus, the key feature of the arrays to understand is that the data is stored in
contiguous memory locations, making it possible for the users to traverse through the data
elements of the array using their respective indexes.

Figure : An Array

Arrays can be classified into different types:

1. One-Dimensional Array: An Array with only one row of data elements is known as a
One-Dimensional Array. Its elements are stored in consecutive memory locations.
2. Two-Dimensional Array: An Array consisting of multiple rows and columns of data
elements is called a Two-Dimensional Array. It is also known as a Matrix.
3. Multidimensional Array: We can define a Multidimensional Array as an Array of Arrays.
Multidimensional Arrays are not limited to two indices or two dimensions; they can
include as many indices as needed.

Some Applications of Array:

1. We can store a list of data elements belonging to the same data type.
2. Array acts as an auxiliary storage for other data structures.
3. The array also helps store the data elements of a binary tree of fixed size.
4. Array also acts as a storage of matrices.

2. Linked Lists
A Linked List is another example of a linear data structure used to store a collection of data
elements dynamically. Data elements in this data structure are represented by the Nodes,
connected using links or pointers. Each node contains two fields, the information field consists of
the actual data, and the pointer field consists of the address of the subsequent nodes in the list.
The pointer of the last node of the linked list consists of a null pointer, as it points to nothing.
Unlike the Arrays, the user can dynamically adjust the size of a Linked List as per the
requirements.

Figure : A Linked List

Linked Lists can be classified into different types:

1. Singly Linked List: A Singly Linked List is the most common type of Linked List. Each
node has data and a pointer field containing an address to the next node.
2. Doubly Linked List: A Doubly Linked List consists of an information field and two
pointer fields. The information field contains the data. The first pointer field contains an
address of the previous node, whereas another pointer field contains a reference to the
next node. Thus, we can go in both directions (backward as well as forward).
3. Circular Linked List: The Circular Linked List is similar to the Singly Linked List. The
only key difference is that the last node contains the address of the first node, forming a
circular loop in the Circular Linked List.

Some Applications of Linked Lists:

1. Linked Lists help us implement stacks, queues, binary trees, and graphs without a
predefined size.
2. We can also implement Operating System's function for dynamic memory management.
3. Linked Lists also allow polynomial implementation for mathematical operations.
4. We can use a Circular Linked List to implement Operating System or application
functions that require Round Robin execution of tasks.
5. A Circular Linked List is also helpful in a Slide Show where the user needs to return to
the first slide after the last slide is presented.
6. Doubly Linked List is utilized to implement forward and backward buttons in a browser
to move forward and backward in the opened pages of a website.

3. Stacks

A Stack is a Linear Data Structure that follows the LIFO (Last In, First Out) principle, which
allows operations like insertion and deletion only at one end of the Stack, i.e., the Top. Stacks
can be implemented with contiguous memory (an Array) or non-contiguous memory (a Linked List).
Real-life examples of Stacks are piles of books, a deck of cards, piles of money, and many more.

Figure : A Real-life Example of Stack

The above figure represents a real-life example of a Stack where the operations are performed
from one end only, like the insertion and removal of books at the top of the pile. It implies
that insertion and deletion in the Stack can be done only at its top. We can access only the
Stack's top at any given time.

The primary operations in the Stack are as follows:

1. Push: Operation to insert a new element in the Stack is termed as Push Operation.
2. Pop: Operation to remove or delete elements from the Stack is termed as Pop Operation.
Figure : A Stack

Some Applications of Stacks:

1. The Stack is used as a Temporary Storage Structure for recursive operations.


2. Stack is also utilized as Auxiliary Storage Structure for function calls, nested operations,
and deferred/postponed functions.
3. We can manage function calls using Stacks.
4. Stacks are also utilized to evaluate the arithmetic expressions in different programming
languages.
5. Stacks are also helpful in converting infix expressions to postfix expressions.
6. Stacks allow us to check the expression's syntax in the programming environment.
7. We can match parenthesis using Stacks.
8. Stacks can be used to reverse a String.
9. Stacks are helpful in solving problems based on backtracking.
10. We can use Stacks in depth-first search in graph and tree traversal.
11. Stacks are also used in Operating System functions.
12. Stacks are also used for UNDO and REDO functions in an editor.

4. Queues

A Queue is a linear data structure similar to a Stack with some limitations on the insertion and
deletion of the elements. The insertion of an element in a Queue is done at one end, and the
removal is done at another or opposite end. Thus, we can conclude that the Queue data structure
follows FIFO (First In, First Out) principle to manipulate the data elements. Implementation of
Queues can be done using Arrays, Linked Lists, or Stacks. Some real-life examples of Queues
are a line at the ticket counter, an escalator, a car wash, and many more.

Figure : A Real-life Example of Queue

The above image is a real-life illustration of a movie ticket counter that can help us understand
the Queue where the customer who comes first is always served first. The customer arriving last
will undoubtedly be served last. Both ends of the Queue are open and can execute different
operations. Another example is a food court line, where a customer joins at the rear end and is
removed from the front end after receiving the service they asked for.

The following are the primary operations of the Queue:

1. Enqueue: The insertion or Addition of some data elements to the Queue is called
Enqueue. The element insertion is always done with the help of the rear pointer.
2. Dequeue: Deleting or removing data elements from the Queue is termed Dequeue. The
deletion of the element is always done with the help of the front pointer.
Figure: A Queue

Some Applications of Queues:

1. Queues are generally used in the breadth-first search operation in Graphs.


2. Queues are also used in Job Scheduler operations of Operating Systems, like a keyboard
buffer queue to store the keys pressed by users and a print buffer queue to store the
documents waiting to be printed.
3. Queues are responsible for CPU scheduling, Job scheduling, and Disk Scheduling.
4. Priority Queues are utilized in file-downloading operations in a browser.
5. Queues are also used to transfer data between peripheral devices and the CPU.
6. Queues are also responsible for handling interrupts generated by the User Applications
for the CPU.

Non-Linear Data Structures


Non-Linear Data Structures are data structures where the data elements are not arranged in
sequential order. Here, the insertion and removal of data are not feasible in a linear manner.
There exists a hierarchical relationship between the individual data items.

Types of Non-Linear Data Structures


The following is the list of Non-Linear Data Structures that we generally use:

1. Trees

A Tree is a Non-Linear Data Structure and a hierarchy containing a collection of nodes such that
each node of the tree stores a value and a list of references to other nodes (the "children").

The Tree data structure is a specialized method to arrange and collect data in the computer to be
utilized more effectively. It contains a central node, structural nodes, and sub-nodes connected
via edges. We can also say that the tree data structure consists of roots, branches, and leaves
connected.

Figure : A Tree

Trees can be classified into different types:

1. Binary Tree: A Tree data structure where each parent node can have at most two children
is termed a Binary Tree.
2. Binary Search Tree: A Binary Search Tree is a Tree data structure where we can easily
maintain a sorted list of numbers.
3. AVL Tree: An AVL Tree is a self-balancing Binary Search Tree where each node
maintains extra information known as a Balance Factor whose value is either -1, 0, or +1.
4. B-Tree: A B-Tree is a self-balancing search tree in which each node can hold multiple
keys and have more than two children.

Some Applications of Trees:

1. Trees implement hierarchical structures in computer systems like directories and file
systems.
2. Trees are also used to implement the navigation structure of a website.
3. We can generate code like Huffman's code using Trees.
4. Trees are also helpful in decision-making in Gaming applications.
5. Trees are responsible for implementing priority queues for priority-based OS scheduling
functions.
6. Trees are also responsible for parsing expressions and statements in the compilers of
different programming languages.
7. We can use Trees to store data keys for indexing for Database Management System
(DBMS).
8. Spanning Trees allow us to make routing decisions in Computer and Communication Networks.
9. Trees are also used in the path-finding algorithm implemented in Artificial Intelligence
(AI), Robotics, and Video Games Applications.

2. Graphs

A Graph is another example of a Non-Linear Data Structure comprising a finite number of nodes
or vertices and the edges connecting them. The Graphs are utilized to address problems of the
real world in which it denotes the problem area as a network such as social networks, circuit
networks, and telephone networks. For instance, the nodes or vertices of a Graph can represent a
single user in a telephone network, while the edges represent the link between them via
telephone.

The Graph data structure G is considered a mathematical structure comprising a set of vertices V
and a set of edges E, as shown below:

G = (V, E)

Figure : A Graph
The above figure represents a Graph having seven vertices A, B, C, D, E, F, G, and ten edges [A,
B], [A, C], [B, C], [B, D], [B, E], [C, D], [D, E], [D, F], [E, F], and [E, G].

Depending upon the position of the vertices and edges, the Graphs can be classified into different
types:

1. Null Graph: A Graph with an empty set of edges is termed a Null Graph.
2. Trivial Graph: A Graph having only one vertex is termed a Trivial Graph.
3. Simple Graph: A Graph with neither self-loops nor multiple edges is known as a Simple
Graph.
4. Multi Graph: A Graph is said to be Multi if it consists of multiple edges but no self-loops.
5. Pseudo Graph: A Graph with self-loops and multiple edges is termed a Pseudo Graph.
6. Non-Directed Graph: A Graph consisting of non-directed edges is known as a
Non-Directed Graph.
7. Directed Graph: A Graph consisting of the directed edges between the vertices is known
as a Directed Graph.
8. Connected Graph: A Graph with at least a single path between every pair of vertices is
termed a Connected Graph.
9. Disconnected Graph: A Graph where there does not exist any path between at least one
pair of vertices is termed a Disconnected Graph.
10. Regular Graph: A Graph where all vertices have the same degree is termed a Regular
Graph.
11. Complete Graph: A Graph in which there is an edge between every pair of vertices is
known as a Complete Graph.
12. Cycle Graph: A Graph is said to be a Cycle if it has at least three vertices and edges that
form a cycle.
13. Cyclic Graph: A Graph is said to be Cyclic if and only if at least one cycle exists.
14. Acyclic Graph: A Graph having zero cycles is termed an Acyclic Graph.
15. Finite Graph: A Graph with a finite number of vertices and edges is known as a Finite
Graph.
16. Infinite Graph: A Graph with an infinite number of vertices and edges is known as an
Infinite Graph.
17. Bipartite Graph: A Graph where the vertices can be divided into independent sets A and
B, and all the vertices of set A should only be connected to the vertices present in set B
with some edges is termed a Bipartite Graph.
18. Planar Graph: A Graph is said to be Planar if we can draw it in a single plane with no
two edges crossing each other.
19. Euler Graph: A Graph is said to be an Euler Graph if and only if every vertex has an
even degree.
20. Hamiltonian Graph: A Connected Graph consisting of a Hamiltonian circuit is known as
a Hamiltonian Graph.
Some Applications of Graphs:

1. Graphs help us represent routes and networks in transportation, travel, and
communication applications.
2. Graphs are used to display routes in GPS.
3. Graphs also help us represent the interconnections in social networks and other
network-based applications.
4. Graphs are utilized in mapping applications.
5. Graphs are responsible for the representation of user preference in e-commerce
applications.
6. Graphs are also used in Utility networks in order to identify the problems posed to local
or municipal corporations.
7. Graphs also help to manage the utilization and availability of resources in an
organization.
8. Graphs are also used to make document link maps of the websites in order to display the
connectivity between the pages through hyperlinks.
9. Graphs are also used in robotic motions and neural networks.

How to Search, Insert, and Delete in an Unsorted Array


Search Operation:
With an unsorted array, the search operation can be accomplished by doing a linear traversal
from the first to the final element.
Insert Operation:
1. Insert at the end:
In an unsorted array, the insert operation is quicker than in a sorted array as we do not need to
worry about the position at which the element is to be inserted.

2. Insert at any point


Insert operations can be performed at any point in an array by shifting one place to the right
all elements that lie to the right of the desired position.
Delete Operation:

The element to be deleted is found using a linear search; the delete operation is then executed
by shifting the subsequent elements one position to the left.
Unit - 2

Searching and Sorting


2.1 Searching: Searching for an item in a data set using the following methods: (i)
Linear Search (ii) Binary Search
2.2 Sorting: Sorting of data set in an order using the following methods: (i) Bubble
Sort (ii) Selection Sort (iii) Insertion Sort (iv) Quick Sort (v) Merge Sort

Previous Year Questions

1. Sort the following numbers in ascending order using quick sort. Given numbers 50, 2,
6, 22, 3, 39, 49, 25, 18, 5.
2. Sort the following numbers in ascending order using Bubble sort. Given numbers : 29,
35, 3, 8, 11, 15, 56, 12, 1, 4, 85, 5 & write the output after each iteration.
3. Find the position of element 30 using Binary search method in array A{10, 5, 20, 25,
8, 30, 40}.
4. Describe the working of Selection Sort Method. Also sort given input list in ascending
order using selection sort. Input list: 50, 24, 5, 12, 30
5. Sort the following numbers in ascending order using Insertion sort :
{25, 15, 4, 103, 62, 9} and write the output after each iteration.
6. Differentiate between Binary search and Linear search with respect to any four
parameters.
7. Find the position of element 29 using Binary search method in an array given
as : {11, 5, 21, 3, 29, 17, 2, 43}.
8. Find the position of element 29 using binary search method in an array ‘A’
given below. Show each step. A = {11, 5, 21, 3, 29, 17, 2, 43}
9. Describe working of bubble sort with example.
10. Describe working of selection sort method. Also sort given input list in
ascending order using selection sort input list – 55, 25, 5, 15, 35.
11. Implement a ‘C’ program to search a particular data from the given array using Linear
Search.
12. Find the position of element 21 using Binary Search method in Array ‘A’ given below :
A = {11, 5, 21, 3, 29, 17, 2, 45}
13. Elaborate the steps for performing selection sort for given elements of array. A = {37,
12, 4, 90, 49, 23, –19}.
14. State the following terms : (i) searching (ii) sorting
15. Write a program to implement bubble sort.
16. Find location of element 20 by using binary search algorithm in the list given
below :10, 20, 30, 40, 50, 60, 70, 80
17. Describe working of linear search with example.
18. Write a ‘C’ program for insertion sort. Sort the following array using insertion
sort : 30 10 40 50 20 45
19. State any two differences between linear search and binary search.
20. Describe working of selection sort method with suitable example.
21. Explain complexity of following algorithms in terms of time and space : (i) Binary
search (ii) Bubble sort
22. Describe working of bubble sort with example.
23. Sort the following number in ascending order using bubble sort. Given numbers as
follows: 475, 15, 513, 6753, 45, 118.
24. Define Searching. State two methods of Searching.
25. Write a program to implement a linear search for 10 elements in an array.
26. Write a program to print a string in reverse order.
27. Write a program to implement selection sort.
28. Define Searching. What are its types.
29. Differentiate between Binary search and Linear search with respect to any four
parameters.

Searching is the process of finding a particular element in a list. If the element is present
in the list, the search is called successful and returns the location of that element;
otherwise, the search is called unsuccessful.

Two popular search methods are Linear Search and Binary Search.

Linear search:
Linear search is a sequential searching algorithm. In this method, every element of the input
array is traversed and compared with the key element to be found. If a match is found in the
array, the search is said to be successful; if no match is found, the search is said to be
unsuccessful, and this case gives the worst-case time complexity.

For instance, suppose we are searching for the element 33. The linear search method looks for
it sequentially, starting from the very first element, until it finds a match; this is a
successful search. If we instead search for an element 46 that is not present in the input, the
result is an unsuccessful search.

Linear Search Algorithm:

The algorithm for linear search is relatively simple. The procedure starts at the very first index of
the input array to be searched.

Step 1 − Start from the 0th index of the input array, and compare the key value with the value
present in the 0th index.

Step 2 − If the value matches with the key, return the position at which the value was found.

Step 3 − If the value does not match with the key, compare the next element in the array.

Step 4 − Repeat Step 3 until there is a match found. Return the position at which the match was
found.

Step 5 − If it is an unsuccessful search, print that the element is not present in the array and exit
the program.

Pseudocode:

procedure linear_search (list, value)
   for each item in the list
      if item == value then
         return the item's location
      end if
   end for
   return not found
end procedure

Analysis: Linear search traverses every element sequentially; therefore, the best case is when
the element is found in the very first comparison, giving a best-case time complexity of O(1).
The worst case of the linear search method is an unsuccessful search that does not find the key
value in the array; it performs n comparisons, so the worst-case time complexity of the linear
search algorithm is O(n).

Example

Let us look at the step-by-step searching of the key element (say 47) in an array using the linear
search method.

Step 1

The linear search starts from the 0th index. Compare the key element with the value in the 0th
index, 34.

However, 47 ≠ 34. So it moves to the next element.

Step 2

Now, the key is compared with the value in the 1st index of the array, 10.
Still, 47 ≠ 10, so the algorithm moves on to another iteration.

Step 3

The next element 66 is compared with 47. They are both not a match so the algorithm compares
the further elements.

Step 4

Now the element in 3rd index, 27, is compared with the key value, 47. They are not equal so the
algorithm is pushed forward to check the next element.

Step 5
Comparing the element in the 4th index of the array, 47, with the key 47, we find that both
elements match. The position at which 47 is present, i.e., index 4, is returned.

The output achieved is “Element found at 4th index”.

Application of Linear Search Algorithm

● The linear search is applicable to both single and multi-dimensional arrays.


● It is less complex, effective, and easy to implement when the array contains a few
elements.
● It is efficient when we need to search for a single element in an unordered array.

Advantages of Linear Search Algorithm:


● Linear search can be used irrespective of whether the array is sorted or not. It can be
used on arrays of any data type.
● Does not require any additional memory.
● It is a well-suited algorithm for small datasets.

Disadvantages of Linear Search Algorithm:


● Linear search has a time complexity of O(N), which in turn makes it slow for large
datasets.
● Not suitable for large arrays.

When to use a Linear Search Technique?


● When we are dealing with a small dataset.
● When you are searching for a dataset stored in contiguous memory.

Binary search:
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search
algorithm works on the principle of divide and conquer, since it divides the array into half before
searching. For this algorithm to work properly, the data collection should be in the sorted form.
Binary search looks for a particular key value by comparing the middle most item of the
collection. If a match occurs, then the index of item is returned. But if the middle item has a
value greater than the key value, the right sub-array of the middle item is searched. Otherwise,
the left sub-array is searched. This process continues recursively until the size of a subarray
reduces to zero.

Binary Search Algorithm:

Binary Search algorithm is an interval searching method that performs the searching in intervals
only. The input taken by the binary search algorithm must always be in a sorted array since it
divides the array into subarrays based on the greater or lower values. The algorithm follows the
procedure below −

Step 1 − Select the middle item in the array and compare it with the key value to be searched. If
it is matched, return the position of the median.

Step 2 − If it does not match the key value, check if the key value is either greater than or less
than the median value.

Step 3 − If the key is greater, perform the search in the right sub-array; but if the key is lower
than the median value, perform the search in the left sub-array.

Step 4 − Repeat Steps 1, 2 and 3 iteratively, until the size of sub-array becomes 1.

Step 5 − If the key value does not exist in the array, then the algorithm returns an unsuccessful
search.

Pseudocode:

The pseudocode of binary search algorithms should look like this −


Analysis:

Since the binary search algorithm performs searching iteratively, calculating the time complexity
is not as easy as the linear search algorithm.

The input array is searched iteratively by dividing into multiple sub-arrays after every
unsuccessful iteration. Therefore, the recurrence relation formed would be of a dividing function.

To explain it in simpler terms,

● During the first iteration, the element is searched in the entire array. Therefore, length of
the array = n.
● In the second iteration, only half of the original array is searched. Hence, length of the
array = n/2.
● In the third iteration, half of the previous sub-array is searched. Here, length of the array
will be = n/4.
● Similarly, in the i-th iteration, the length of the array will become n/2^i.

To achieve a successful search, after the last iteration the length of the array must be 1. Hence,

n/2^i = 1

That gives us −

n = 2^i

Applying log on both sides,

log n = log 2^i
log n = i · log 2
i = log n (taking logarithms to base 2)

The time complexity of the binary search algorithm is therefore O(log n).

Example

For a binary search to work, the target array must be sorted. We shall learn the process of binary
search with a pictorial example. The following is our sorted array and let us assume that we need
to search the location of value 31 using binary search.

First, we shall determine half of the array by using this formula −

mid = low + (high - low) / 2

Here it is, 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.

Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find
that the value at location 4 is 27, which is not a match. Since the target value 31 is greater
than 27 and we have a sorted array, we also know that the target value must be in the upper
portion of the array.
We change our low to mid + 1 and find the new mid value again.

low = mid + 1
mid = low + (high - low) / 2

Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.

The value stored at location 7 is not a match; rather, it is greater than what we are looking for.
So, the value must be in the lower part from this location.

Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.
We conclude that the target value 31 is stored at location 5.

Binary search halves the searchable items at every step and thus reduces the number of
comparisons to be made to very few.

Applications of Binary Search:


● Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the
optimal hyperparameters for a model.
● It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
● It can be used for searching a database.

Advantages of Binary Search


● Binary search is faster than linear search, especially for large arrays.
● More efficient than other searching algorithms with a similar time complexity, such as
interpolation search or exponential search.
● Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.

Disadvantages of Binary Search


● The array should be sorted.
● Binary search requires that the data structure being searched be stored in contiguous
memory locations.
● Binary search requires that the elements of the array be comparable, meaning that
they must be able to be ordered.

Bubble sort:
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each
pair of adjacent elements is compared, and the elements are swapped if they are not in order.
This algorithm is not suitable for large data sets as its average and worst-case complexity
are O(n^2), where n is the number of items.

Bubble Sort Algorithm


Bubble Sort is an elementary sorting algorithm, which works by repeatedly exchanging adjacent
elements, if necessary. When no exchanges are required, the file is sorted.

We assume list is an array of n elements. We further assume that swap function swaps the values
of the given array elements.

Step 1 − Check if the first element in the input array is greater than the next element in the array.

Step 2 − If it is greater, swap the two elements; otherwise move the pointer forward in the array.

Step 3 − Repeat Step 2 until we reach the end of the array.

Step 4 − Check if the elements are sorted; if not, repeat the same process (Step 1 to Step 3) from
the last element of the array to the first.

Step 5 − The final output achieved is the sorted array.

Algorithm: Sequential-Bubble-Sort (A)


for i ← 1 to length [A] do
    for j ← length [A] down-to i + 1 do
        if A[j] < A[j-1] then
            Exchange A[j] ⟷ A[j-1]

Pseudocode

We observe in the algorithm that Bubble Sort compares each pair of array elements until the
whole array is completely sorted in ascending order. This may cause a few efficiency issues,
such as: what if the array needs no more swapping because all the elements are already ascending?

To ease-out the issue, we use one flag variable swapped which will help us see if any swap has
happened or not. If no swap has occurred, i.e. the array requires no more processing to be sorted,
it will come out of the loop.

The pseudocode of bubble sort algorithm can be written as follows:

void bubbleSort(int numbers[], int array_size) {

    int i, j, temp, swapped;
    for (i = array_size - 1; i >= 0; i--) {
        swapped = 0;                      /* no exchange seen in this pass yet */
        for (j = 1; j <= i; j++) {
            if (numbers[j - 1] > numbers[j]) {
                temp = numbers[j - 1];    /* swap the adjacent pair */
                numbers[j - 1] = numbers[j];
                numbers[j] = temp;
                swapped = 1;
            }
        }
        if (!swapped)                     /* no swap occurred: array is sorted */
            break;
    }
}

Analysis:

Here, the number of comparisons is

1 + 2 + 3 + ... + (n - 1) = n(n - 1)/2 = O(n^2)

This clearly shows the n^2 nature of bubble sort.

In the basic algorithm, the number of comparisons is the same irrespective of the data set,
i.e. whether the provided input elements are in sorted order, in reverse order, or at random.

Memory Requirement: From the algorithm stated above, it is clear that bubble sort does not
require extra memory.

Example: We take an unsorted array for our example. Bubble sort takes O(n^2) time, so we are
keeping it short and precise.

Bubble sort starts with very first two elements, comparing them to check which one is greater.
In this case, value 33 is greater than 14, so it is already in sorted locations. Next, we compare 33
with 27.

We find that 27 is smaller than 33 and these two values must be swapped.

Next we compare 33 and 35. We find that both are in already sorted positions.

Then we move to the next two values, 35 and 10.

We know then that 10 is smaller than 35. Hence they are not sorted, and we swap these values.
We find that we have reached the end of the array. After one iteration, the array should look like this −
To be precise, we are now showing how an array should look like after each iteration. After the
second iteration, it should look like this −

Notice that after each iteration, at least one value moves to the end.
And when no swap is required, bubble sort learns that the array is completely sorted.

Now we should look into some practical aspects of bubble sort.

Complexity Analysis of Bubble Sort:


Time Complexity: O(N^2)
Auxiliary Space: O(1)
Advantages of Bubble Sort:
● Bubble sort is easy to understand and implement.
● It does not require any additional memory space.
● It is a stable sorting algorithm, meaning that elements with the same key value
maintain their relative order in the sorted output.

Disadvantages of Bubble Sort:


● Bubble sort has a time complexity of O(N^2) which makes it very slow for large data
sets.
● Bubble sort is a comparison-based sorting algorithm, which means that it requires a
comparison operator to determine the relative order of elements in the input data set.
It can limit the efficiency of the algorithm in certain cases.
Selection Sort:

In selection sort, the smallest value among the unsorted elements of the array is selected in every
pass and inserted to its appropriate position into the array. It is also the simplest algorithm. It is
an in-place comparison sorting algorithm. In this algorithm, the array is divided into two parts,
first is sorted part, and another one is the unsorted part. Initially, the sorted part of the array is
empty, and unsorted part is the given array. Sorted part is placed at the left, while the unsorted
part is placed at the right.

In selection sort, the first smallest element is selected from the unsorted array and placed at the
first position. After that second smallest element is selected and placed in the second position.
The process continues until the array is entirely sorted.

The average and worst-case complexity of selection sort is O(n^2), where n is the number of
items. Due to this, it is not suitable for large data sets.

Selection sort is generally used when -

○ A small array is to be sorted


○ Swapping cost doesn't matter
○ It is compulsory to check all elements

Working of Selection sort Algorithm


Now, let's see the working of the Selection sort Algorithm.

To understand the working of the Selection sort algorithm, let's take an unsorted array. It will be
easier to understand the Selection sort via an example.

Let the elements of array are -

Now, for the first position in the sorted array, the entire array is to be scanned sequentially.

At present, 12 is stored at the first position, after searching the entire array, it is found that 8 is
the smallest value.

So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted array.
For the second position, where 29 is stored presently, we again sequentially scan the rest of the
items of the unsorted array. After scanning, we find that 12 is the second lowest element in the
array and should appear at the second position.

Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the
sorted array. So, after two iterations, the two smallest values are placed at the beginning in a
sorted way.

The same process is applied to the rest of the array elements. Now, we are showing a pictorial
representation of the entire sorting process.

Now, the array is completely sorted.


Complexity Analysis of Selection Sort:
Time Complexity: The time complexity of Selection Sort is O(N^2) as there are two nested
loops:
● One loop to select an element of the array one by one = O(N)
● Another loop to compare that element with every other array element = O(N)
● Therefore overall complexity = O(N) × O(N) = O(N^2)

Auxiliary Space: O(1) as the only extra memory used is for temporary variables while swapping
two values in Array. The selection sort never makes more than O(N) swaps and can be useful
when memory writing is costly.
Advantages of Selection Sort Algorithm:
● Simple and easy to understand.
● Works well with small datasets.

Disadvantages of the Selection Sort Algorithm


● Selection sort has a time complexity of O(n^2) in the worst and average case.
● Does not work well on large datasets.
● Does not preserve the relative order of items with equal keys which means it is not
stable.

Applications of Selection Sort Algorithm


● Mainly works as a basis for some more efficient algorithms like Heap Sort. Heap Sort
mainly uses Heap Data Structure along with the Selection Sort idea.
● Used when memory writes (or swaps) are costly, for example with EEPROM or Flash
memory. Compared to other popular sorting algorithms, it takes relatively few
memory writes (swaps) for sorting. But selection sort is not optimal in terms
of memory writes; cycle sort requires even fewer memory writes than selection sort.
● Simple technique and used to introduce sorting in teaching.
● Used as a benchmark for comparison with other algorithms.

Insertion sort:
Insertion sort is a very simple method to sort numbers in ascending or descending order. This
method follows the incremental approach. It can be compared with the way playing cards are
sorted in hand during a card game.

This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is


always sorted. For example, the lower part of an array is maintained to be sorted. An element
which is to be 'inserted' in this sorted sub-list, has to find its appropriate place and then it has to
be inserted there. Hence the name, insertion sort.

The array is searched sequentially and unsorted items are moved and inserted into the sorted
sub-list (in the same array). This algorithm is not suitable for large data sets as its average and
worst-case complexity are O(n^2), where n is the number of items.

Insertion Sort Algorithm:

Now we have a bigger picture of how this sorting technique works, so we can derive simple steps
by which we can achieve insertion sort.

Step 1 − If it is the first element, it is already sorted.

Step 2 − Pick next element

Step 3 − Compare with all elements in the sorted sub-list

Step 4 − Shift all the elements in the sorted sub-list that is greater than the value to be sorted

Step 5 − Insert the value

Step 6 − Repeat until list is sorted

Pseudocode:

Algorithm: Insertion-Sort(A)
for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
        A[i + 1] = A[i]
        i = i - 1
    A[i + 1] = key

Analysis:

Run time of this algorithm is very much dependent on the given input.

If the given numbers are sorted, this algorithm runs in O(n) time. If the given numbers are in
reverse order, the algorithm runs in O(n^2) time.

Working of Insertion sort Algorithm


Now, let's see the working of the insertion sort Algorithm.
To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be
easier to understand the insertion sort via an example.

Let the elements of array are -

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31, so 31 is not at the correct position. Now, swap 31 with 25. Along
with swapping, insertion sort will also check it against all elements in the sorted array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are
31 and 8.

Both 31 and 8 are not sorted. So, swap them.


After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31
and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.


Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Complexity Analysis of Insertion Sort :

Time Complexity of Insertion Sort

● Best case: O(n), if the list is already sorted, where n is the number of elements in the
list.
● Average case: O(n^2), if the list is randomly ordered.
● Worst case: O(n^2), if the list is in reverse order.

Space Complexity of Insertion Sort

● Auxiliary Space: O(1), Insertion sort requires O(1) additional space, making it a
space-efficient sorting algorithm.

Advantages of Insertion Sort:


● Simple and easy to implement.
● Stable sorting algorithm.
● Efficient for small lists and nearly sorted lists.
● Space-efficient.
● Adaptive: the number of swaps is directly proportional to the number of inversions. For
example, no swapping happens for an already sorted array, and it takes only O(n) time.

Disadvantages of Insertion Sort:


● Inefficient for large lists.
● Not as efficient as other sorting algorithms (e.g., merge sort, quick sort) for most
cases.

Applications of Insertion Sort:


Insertion sort is commonly used in situations where:
● The list is small or nearly sorted.
● Simplicity and stability are important.
● Used as a subroutine in Bucket Sort
● Can be useful when array is already almost sorted (very few inversions)
● Since insertion sort is suitable for small-sized arrays, it is used in hybrid sorting
algorithms along with other efficient algorithms like Quick Sort and Merge Sort:
when the subarray size becomes small, these recursive algorithms switch to insertion
sort. For example, IntroSort and TimSort use insertion sort.

Quick sort:
Sorting is a way of arranging items in a systematic manner. Quicksort is the widely used sorting
algorithm that makes n log n comparisons in average case for sorting an array of n elements. It is
a faster and highly efficient sorting algorithm. This algorithm follows the divide and conquer
approach. Divide and conquer is a technique of breaking down the algorithms into subproblems,
then solving the subproblems, and combining the results back together to solve the original
problem.

Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two
sub-arrays such that each element in the left sub-array is less than or equal to the pivot element
and each element in the right sub-array is larger than the pivot element.

Conquer: Recursively, sort two subarrays with Quicksort.

Combine: Combine the already sorted array.

Quicksort picks an element as pivot, and then it partitions the given array around the picked
pivot element. In quick sort, a large array is divided into two arrays in which one holds values
that are smaller than the specified value (Pivot), and another array holds the values that are
greater than the pivot.

After that, left and right sub-arrays are also partitioned using the same approach. It will continue
until the single element remains in the sub-array.

Choosing the pivot


Picking a good pivot is necessary for a fast implementation of quicksort. However, it is
difficult to determine a good pivot in advance. Some of the ways of choosing a pivot are as follows −
○ The pivot can be random, i.e. select a random element of the given array as the pivot.
○ The pivot can be either the rightmost element or the leftmost element of the given array.
○ Select the median as the pivot element.

Working of Quick Sort Algorithm

Now, it’s time to see the working of a quick sort algorithm, and for that, we need to take an
unsorted array.

Let the components of the array are –

Here,

L= Left

R = Right

P = Pivot

In the given series of arrays, let’s assume that the leftmost item is the pivot. So, in this condition,
a[L] = 23, a[R] = 26 and a[P] = 23.

Since, at this moment, the pivot item is at left, so the algorithm initiates from right and travels
towards left.
Now, a[P] < a[R], so the algorithm travels forward one position towards left, i.e. –

Now, a[L] = 23, a[R] = 18, and a[P] = 23.

Since a[P] > a[R], so the algorithm will exchange or swap a[P] with a[R], and the pivot travels to
right, as –

Now, a[L] = 18, a[R] = 23, and a[P] = 23. Since the pivot is at right, so the algorithm begins
from left and travels to right.
As a[P] > a[L], so algorithm travels one place to right as –

Now, a[L] = 8, a[R] = 23, and a[P] = 23. As a[P] > a[L], so algorithm travels one place to right as

Now, a[L] = 28, a[R] = 23, and a[P] = 23. As a[P] < a[L], so, swap a[P] and a[L], now pivot is at
left, i.e. –
Since the pivot is placed at the leftmost side, the algorithm begins from right and travels to left.
Now, a[L] = 23, a[R] = 28, and a[P] = 23. As a[P] < a[R], so algorithm travels one place to left,
as –

Now, a[P] = 23, a[L] = 23, and a[R] = 13. As a[P] > a[R], so, exchange a[P] and a[R], now pivot
is at right, i.e. –

Now, a[P] = 23, a[L] = 13, and a[R] = 23. Pivot is at right, so the algorithm begins from left and
travels to right.
Now, a[P] = 23, a[L] = 23, and a[R] = 23. So, pivot, left and right, are pointing to the same
element. It represents the termination of the procedure.

Item 23, which is the pivot element, stands at its accurate position.

Items that are on the right side of element 23 are greater than it, and the elements that are on the
left side of element 23 are smaller than it.

Now, in a similar manner, the quick sort algorithm is separately applied to the left and right
sub-arrays. After sorting gets done, the array will be –

Quick Sort Algorithm:

Algorithm:

QUICKSORT (array A, start, end)
{
    if (start < end)
    {
        p = partition(A, start, end)
        QUICKSORT (A, start, p - 1)
        QUICKSORT (A, p + 1, end)
    }
}
Partition Algorithm:

The partition algorithm rearranges the sub-array in place.

PARTITION (array A, start, end)
{
    pivot ← A[end]
    i ← start - 1
    for j ← start to end - 1 {
        if (A[j] < pivot) {
            i ← i + 1
            swap A[i] with A[j]
        }
    }
    swap A[i + 1] with A[end]
    return i + 1
}

Complexity Analysis of Quick Sort :


Time Complexity:
● Best Case: Ω(N log N)
The best-case scenario for quicksort occurs when the pivot chosen at each step
divides the array into roughly equal halves.
In this case, the algorithm makes balanced partitions, leading to efficient sorting.
● Average Case: θ(N log N)
Quicksort's average-case performance is usually very good in practice, making it one
of the fastest sorting algorithms.
● Worst Case: O(N^2)
The worst-case scenario for quicksort occurs when the pivot at each step consistently
results in highly unbalanced partitions, e.g. when the array is already sorted and the
pivot is always chosen as the smallest or largest element. To mitigate the worst case,
techniques such as choosing a good pivot (e.g., median of three) and randomizing the
input (Randomized Quicksort) are used to shuffle the elements before sorting.
● Auxiliary Space: O(1), if we don't consider the recursive stack space. If we do consider
the recursive stack space, then in the worst case quicksort could use O(N) stack space.

Advantages of Quick Sort:


● It is a divide-and-conquer algorithm that makes it easier to solve problems.
● It is efficient on large data sets.
● It has a low overhead, as it only requires a small amount of memory to function.
● It is Cache Friendly as we work on the same array to sort and do not copy data to any
auxiliary array.
● Fastest general purpose algorithm for large data when stability is not required.
● It is tail recursive and hence all the tail call optimization can be done.

Disadvantages of Quick Sort:


● It has a worst-case time complexity of O(N^2), which occurs when the pivot is
chosen poorly.
● It is not a good choice for small data sets.
● It is not a stable sort, meaning that if two elements have the same key, their relative
order will not be preserved in the sorted output in case of quick sort, because here we
are swapping elements according to the pivot’s position (without considering their
original positions).

Merge sort:
Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach
to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the
given list into two equal halves, calls itself for the two halves, and then merges the two sorted
halves. We have to define the merge() function to perform the merging.

The sub-lists are divided again and again into halves until each list cannot be divided further.
Then we combine pairs of one-element lists into two-element lists, sorting them in the process.
The sorted two-element pairs are merged into four-element lists, and so on, until we get the
sorted list.

Now, let's see the algorithm of merge sort.

Algorithm:
In the following algorithm, arr is the given array, beg is the starting element, and end is the last
element of the array.

​ MERGE_SORT(arr, beg, end)



​ if beg < end
​ set mid = (beg + end)/2
​ MERGE_SORT(arr, beg, mid)
​ MERGE_SORT(arr, mid + 1, end)
​ MERGE (arr, beg, mid, end)
​ end of if

​ END MERGE_SORT
The important part of the merge sort is the MERGE function. This function performs the
merging of two sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to build one sorted
array A[beg…end]. So, the inputs of the MERGE function are A[], beg, mid, and end.

Recurrence Relation of Merge Sort:

T(n) = 2T(n/2) + O(n), where

● T(n) represents the total time taken by the algorithm to sort an array of size n.
● 2T(n/2) represents the time taken by the algorithm to recursively sort the two halves of
the array. Since each half has n/2 elements, we have two recursive calls with input
size (n/2).
● O(n) represents the time taken to merge the two sorted halves.

Complexity Analysis of Merge Sort:


● Time Complexity:
○ Best Case: O(n log n), When the array is already sorted or nearly
sorted.
○ Average Case: O(n log n), When the array is randomly ordered.
○ Worst Case: O(n log n), When the array is sorted in reverse order.
● Auxiliary Space: O(n), Additional space is required for the temporary array used
during merging.

Applications of Merge Sort:


● Sorting large datasets
● External sorting (when the dataset is too large to fit in memory)
● Inversion counting
● Merge Sort and its variations are used in library methods of programming languages.
For example, its variation TimSort is used in Python, Java, Android, and Swift. The
main reason it is preferred for sorting non-primitive types is stability, which
QuickSort lacks. For example, Arrays.sort in Java uses QuickSort while
Collections.sort uses MergeSort.
● It is a preferred algorithm for sorting Linked lists.
● It can be easily parallelized as we can independently sort subarrays and then merge.
● The merge function of merge sort can be used to efficiently solve problems like the
union and intersection of two sorted arrays.

Advantages of Merge Sort:


● Stability : Merge sort is a stable sorting algorithm, which means it maintains the
relative order of equal elements in the input array.
● Guaranteed worst-case performance: Merge sort has a worst-case time complexity
of O(N log N), which means it performs well even on large datasets.
● Simple to implement: The divide-and-conquer approach is straightforward.
● Naturally Parallel : We independently merge subarrays that makes it suitable for
parallel processing.

Disadvantages of Merge Sort:


● Space complexity: Merge sort requires additional memory to store the merged
sub-arrays during the sorting process.
● Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires
additional memory to store the sorted data. This can be a disadvantage in applications
where memory usage is a concern.
● Slower than QuickSort in general. QuickSort is more cache friendly because it works
in place on the same array.
Unit - 3

Stack
4.1 Introduction to Stack: Definition, Stack as an ADT, Operations on Stack-(Push, Pop),
Stack Operations Conditions - Stack Full / Stack Overflow, Stack Empty /Stack Underflow.
4.2 Stack Implementation: using Array and Representation Using Linked List.
Applications of Stack: Reversing a List, Polish Notations, Conversion of Infix to Postfix
Expression, Evaluation of Postfix Expression.
4.4 Recursion: Definition and Applications.

Previous Year Questions


1. Show the memory representation of stack using array with the help of a diagram.
2. Convert the following infix expression to its prefix form using stack: A + B – C * D / E
+F
3. Evaluate the following postfix expression: 5, 6, 2, +, *, 12, 4, /, – Show each step of
evaluation diagrammatically using stack.
4. Evaluate the following prefix expression: – + 4 3 2 5 show diagrammatically each step
of evaluation using stack.
5. Show the effect of PUSH and POP operations on to the stack of size 10. The
stack contains 40, 30, 52, 86, 39, 45, 50 with 50 being at top of the stack. Show
diagrammatically the effect of : (i) PUSH 59 (ii) PUSH 85 (iii) POP (iv) POP (v)
PUSH 59 (vi) POP
Sketch the final structure of stack after performing the above said operations.
7. Evaluate the following postfix expression : 57 + 62 –
8. Enlist operations on stack.
9. Evaluate the following postfix expression :10, 2, *, 15, 3, /, +, 12, 3, +, + Show
diagrammatically each step of evaluation using stack.
10. Convert the following Infix expression to its prefix form using stack. Show the details
of stack at each step of conversion.
Expression: P*Q↑R-S/T+(U/V)
11. Write a program to implement a stack with push, pop and display operations.
12. Write any two operations performed on the stack.
13. Explain stack overflow and underflow conditions with example.
14. Show the effect of PUSH and POP operation on to the stack of size 10. The
stack contains 10, 20, 30, 40, 50 and 60, with 60 being at top of the stack.
Show diagrammatically the effect of – (i) PUSH 55 (ii) PUSH 70 (iii) POP (iv) POP
Sketch the final structure of stack after performing the above said operations.
15. Convert the infix expression to its postfix expression using stack
((A + B) *D) ^ (E – F). Show diagrammatically each step of conversion.
16. Evaluate the following postfix expression : 4 6 24 + * 6 3 / –
Show diagrammatically each step of evaluation using stack.
17. Differentiate between stack and queue. (any two points)
18. Convert infix expression into prefix expression : (A + B)*(C / G) + F
19. Convert following expression into postfix form. Give stepwise procedure.
A + B ↑ C * (D / E) – F / G
20. Write algorithm for performing push and pop operations on stack.
21. Define the terms ‘overflow’ and ‘underflow’ with respect to stack.
22. Evaluate the following arithmetic expression P written in postfix notation :
P : 4, 2, ^, 3, *, 3, -, 8, 4, /, +
23. Convert the following infix expression to postfix expression using stack and
show the details of stack in each step. ((A+B)*D)^(E-F)
24. Show the effect of PUSH and POP operations on the stack of size 10.
PUSH(10), PUSH(20), POP, PUSH(30).
25. List any four applications of stack.
26. Convert following expression into postfix form with illustration of all steps
using stack : (A + B – C + D*E/F^G)
27. Explain stack overflow and stack underflow with example.
28. Write a menu driven ‘C’ program to implement stack using array with the
following menu : (i) push (ii) pop (iii) display (iv) exit
29. Convert the following infix expression to its postfix form using stack :
A + B – C * D/E + F
30. Convert the given infix expression to postfix expression using stack and the
details of stack at each step of conversion. Expression : A * B ↑ C – D / E + [F / G]
31. Describe working of bubble sort with example.
32. Show the effect of PUSH and POP operation on the stack of size 10. The
stack contains 10, 20, 25, 15, 30 & 40 with 40 being at top of stack. Show
diagrammatically the effect of (i) PUSH (45) (ii) PUSH (50) (iii) POP (iv) PUSH (55)
33. Find out prefix equivalent of the expression : (i) [(A + B) + C] * D, (ii) A[(B * C) + D]
34. Explain the concept of recursion using stack.
35. Define the term recursion. Write a program in C to display the factorial of a entered
number using recursion.
36. Write a 'C' program to calculate the factorial of number using recursion.

What is a Stack?
A Stack is a linear data structure that holds a linear, ordered sequence of elements. It is an
abstract data type. A Stack works on the LIFO process (Last In First Out), i.e., the element that
was inserted last will be removed first. To implement the Stack, it is required to maintain a
pointer to the top of the Stack, which is the last element to be inserted because we can access the
elements only on the top of the Stack.

Types of Stack Data Structure:


● Fixed Size Stack: As the name suggests, a fixed size stack has a fixed size and
cannot grow or shrink dynamically. If the stack is full and an attempt is made to add
an element to it, an overflow error occurs. If the stack is empty and an attempt is
made to remove an element from it, an underflow error occurs.
● Dynamic Size Stack: A dynamic size stack can grow or shrink dynamically. When
the stack is full, it automatically increases its size to accommodate the new element,
and when the stack is empty, it decreases its size. This type of stack is implemented
using a linked list, as it allows for easy resizing of the stack.
Stack ADT in Data Structures:

An abstract data type (ADT) is a special kind of data type whose behavior is defined by a set of values and a set of operations. The keyword "Abstract" is used because the user can invoke these operations, but how each operation works internally is completely hidden from the user. An ADT is built from primitive data types, with the operation logic encapsulated.

Here we will see the stack ADT. These are few operations or functions of the Stack ADT.

​ isFull(): used to check whether the stack is full or not
​ isEmpty(): used to check whether the stack is empty or not
​ push(x): used to push x onto the stack
​ pop(): used to delete one element from the top of the stack
​ peek(): used to get the topmost element of the stack
​ size(): used to get the number of elements present in the stack

Example:
Working of Stack:
Stack works on the LIFO pattern. Consider a stack with five memory blocks, so the size of the stack is 5.

Suppose the stack is initially empty and we push elements one by one until the stack becomes full. The stack gets filled up from the bottom to the top: each newly pushed element sits above the previous one, and once five elements have been pushed the stack is full.

When we perform a delete operation on the stack, there is only one way for entry and exit, as the other end is closed. The stack follows the LIFO pattern, which means that the value entered first will be removed last: the element pushed first can be removed only after the deletion of all the other elements.

Standard Stack Operations:


The following are some common operations implemented on the stack:

○ push(): When we insert an element in a stack then the operation is known as a push. If
the stack is full then the overflow condition occurs.
○ pop(): When we delete an element from the stack, the operation is known as a pop. If the
stack is empty means that no element exists in the stack, this state is known as an
underflow state.
○ isEmpty(): It determines whether the stack is empty or not.
○ isFull(): It determines whether the stack is full or not.
○ peek(): It returns the top element of the stack without removing it.
○ count(): It returns the total number of elements available in a stack.
○ change(): It changes the element at the given position.
○ display(): It prints all the elements available in the stack.

i) Push() Operation in Stack Data Structure: Adds an item to the stack. If the stack is full,
then it is said to be an Overflow condition.

Algorithm for Push Operation:

● Before pushing the element to the stack, we check if the stack is full.
● If the stack is full (top == capacity-1), then Stack Overflow occurs and we cannot insert the element into the stack.
● Otherwise, we increment the value of top by 1 (top = top + 1) and the new value is inserted at the top position.
● Elements can be pushed into the stack till we reach the capacity of the stack.

ii) Pop() Operation in Stack Data Structure:

Removes an item from the stack. The items are popped in the reversed order in which they are
pushed. If the stack is empty, then it is said to be an Underflow condition.

Algorithm for Pop Operation:

● Before popping the element from the stack, we check if the stack is empty.
● If the stack is empty (top == -1), then Stack Underflow occurs and we cannot remove any element from the stack.
● Otherwise, we store the value at top, decrement the value of top by 1 (top = top - 1) and return the stored top value.
iii) Top() or Peek() Operation in Stack Data Structure: Returns the top element of the stack.

Algorithm for Top Operation:


● Before returning the top element from the stack, we check if the stack is empty.
● If the stack is empty (top == -1), we simply print "Stack is empty".
● Otherwise, we return the element stored at index = top.
iv) isEmpty() Operation in Stack Data Structure: Returns true if the stack is empty, else false.

Algorithm for isEmpty Operation:


● Check for the value of top in the stack.
● If (top == -1), then the stack is empty, so return true.
● Otherwise, the stack is not empty, so return false.

v) isFull() Operation in Stack Data Structure: Returns true if the stack is full, else false.

Algorithm for isFull Operation:


● Check for the value of top in the stack.
● If (top == capacity-1), then the stack is full, so return true.
● Otherwise, the stack is not full, so return false.
Application of the Stack:

1. A Stack can be used for evaluating expressions consisting of operands and operators.
2. Stacks can be used for Backtracking, i.e., to check parenthesis matching in an expression.
3. It can also be used to convert one form of expression to another form.
4. It can be used for systematic Memory Management.

Advantages of Stack:

1. A Stack helps to manage the data in the ‘Last in First out’ method.
2. When the variable is not used outside the function in any program, the Stack can be used.
3. It allows you to control and handle memory allocation and deallocation.
4. It helps to automatically clean up the objects.

Disadvantages of Stack:

1. It is difficult in Stack to create many objects as it increases the risk of the Stack overflow.
2. It has very limited memory.
3. In Stack, random access is not possible.

Array implementation of Stack:

In the array implementation, the stack is formed by using an array, and all the operations on the stack are performed using that array. Let's see how each operation can be implemented on the stack using the array data structure.
Adding an element onto the stack (push operation):
Adding an element into the top of the stack is referred to as push operation. Push operation
involves following two steps.

1. Increment the variable Top so that it can now refer to the next memory location.
2. Add element at the position of incremented top. This is referred to as adding new element
at the top of the stack.

The stack overflows when we try to insert an element into a completely filled stack; therefore, the push routine must always check for the overflow condition first.

Algorithm:

​ begin
​ if top = n-1 then stack full
​ top = top + 1
​ stack[top] := item;
​ end
Time Complexity : O(1)

implementation of push algorithm in C language


​ void push (int val,int n) //n is size of the stack
​ {
​ if (top == n-1)
​ printf("\n Overflow");
​ else
​ {
​ top = top +1;
​ stack[top] = val;
​ }
​ }
Deletion of an element from a stack (Pop operation):
Deletion of an element from the top of the stack is called the pop operation. The topmost element of the stack is stored in another variable, then the value of top is decremented by 1, and the operation returns the deleted value that was stored in that variable as the result.

The underflow condition occurs when we try to delete an element from an already empty stack.

Algorithm :
​ begin
​ if top = -1 then stack empty;
​ item := stack[top];
​ top = top - 1;
​ end;
Time Complexity : O(1)

Implementation of POP algorithm using C language


​ int pop ()
​ {
​ if(top == -1)
​ {
​ printf("Underflow");
​ return 0;
​ }
​ else
​ {
​ return stack[top--];
​ }
​ }
Visiting each element of the stack (Peek operation):
Peek operation involves returning the element which is present at the top of the stack without
deleting it. Underflow condition can occur if we try to return the top element in an already empty
stack.

Algorithm :

PEEK (STACK, TOP)

​ Begin
​ if top = -1 then stack empty
​ item = stack[top]
​ return item
​ End
Time complexity: O(1)

Implementation of Peek algorithm in C language


​ int peek()
​ {
​ if (top == -1)
​ {
​ printf("Underflow");
​ return 0;
​ }
​ else
​ {
​ return stack [top];
​ }
​ }
C program:

​ #include <stdio.h>
​ int stack[100],i,j,choice=0,n,top=-1;
​ void push();
​ void pop();
​ void show();
​ void main ()
​ {

​ printf("Enter the number of elements in the stack ");
​ scanf("%d",&n);
​ printf("*********Stack operations using array*********");

​ printf("\n----------------------------------------------\n");
​ while(choice != 4)
​ {
​ printf("Chose one from the below options...\n");
​ printf("\n1.Push\n2.Pop\n3.Show\n4.Exit");
​ printf("\n Enter your choice \n");
​ scanf("%d",&choice);
​ switch(choice)
​ {
​ case 1:
​ {
​ push();
​ break;
​ }
​ case 2:
​ {
​ pop();
​ break;
​ }
​ case 3:
​ {
​ show();
​ break;
​ }
​ case 4:
​ {
​ printf("Exiting....");
​ break;
​ }
​ default:
​ {
​ printf("Please Enter valid choice ");
​ }
​ };
​ }
​ }

​ void push ()
​ {
​ int val;
​ if (top == n-1)
​ printf("\n Overflow");
​ else
​ {
​ printf("Enter the value?");
​ scanf("%d",&val);
​ top = top +1;
​ stack[top] = val;
​ }
​ }





​ void pop ()
​ {
​ if(top == -1)
​ printf("Underflow");
​ else
​ top = top -1;
​ }
​ void show()
​ {
​ for (i=top;i>=0;i--)
​ {
​ printf("%d\n",stack[i]);
​ }
​ if(top == -1)
​ {
​ printf("Stack is empty");
​ }
​ }

Linked list implementation of the stack:

Instead of using an array, we can also use a linked list to implement a stack. A linked list allocates memory dynamically. However, the time complexity is the same in both scenarios for all the operations, i.e., push, pop and peek.

In the linked list implementation of stack, the nodes are maintained non-contiguously in memory. Each node contains a pointer to the node immediately below it in the stack. The stack is said to overflow if the space left in the memory heap is not enough to create a node.
The bottom-most node of the stack (the last node of the list) always contains null in its address field, while the head of the list acts as the top of the stack. Let's discuss the way in which each operation is performed in the linked list implementation of stack.

Adding a node to the stack (Push operation):


Adding a node to the stack is referred to as push operation. Pushing an element to a stack in a
linked list implementation is different from that of an array implementation. In order to push an
element onto the stack, the following steps are involved.

1. Create a node first and allocate memory to it.


2. If the list is empty then the item is to be pushed as the start node of the list. This includes assigning a value to the data part of the node and assigning null to the address part of the node.
3. If there are some nodes in the list already, then we have to add the new element in the
beginning of the list (to not violate the property of the stack). For this purpose, assign the
address of the starting element to the address field of the new node and make the new
node, the starting node of the list.

Time Complexity : O(1)


C implementation :

​ void push ()
​ {
​ int val;
​ struct node *ptr =(struct node*)malloc(sizeof(struct node));
​ if(ptr == NULL)
​ {
​ printf("not able to push the element");
​ }
​ else
​ {
​ printf("Enter the value");
​ scanf("%d",&val);
​ if(head==NULL)
​ {
​ ptr->val = val;
​ ptr -> next = NULL;
​ head=ptr;
​ }
​ else
​ {
​ ptr->val = val;
​ ptr->next = head;
​ head=ptr;

​ }
​ printf("Item pushed");

​ }
​ }

Deleting a node from the stack (POP operation):


Deleting a node from the top of stack is referred to as pop operation. Deleting a node from the
linked list implementation of stack is different from that in the array implementation. In order to
pop an element from the stack, we need to follow the following steps :

​ Check for the underflow condition: The underflow condition occurs when we try
to pop from an already empty stack. The stack will be empty if the head pointer of
the list points to null.
​ Adjust the head pointer accordingly: In stack, the elements are popped only from
one end, therefore, the value stored in the head pointer must be deleted and the
node must be freed. The next node of the head node now becomes the head node.

Time Complexity : O(1)

C implementation:

​ void pop()
​ {
​ int item;
​ struct node *ptr;
​ if (head == NULL)
​ {
​ printf("Underflow");
​ }
​ else
​ {
​ item = head->val;
​ ptr = head;
​ head = head->next;
​ free(ptr);
​ printf("Item popped");

​ }
​ }

Display the nodes (Traversing):


Displaying all the nodes of a stack needs traversing all the nodes of the linked list organized in
the form of stack. For this purpose, we need to follow the following steps.

​ Copy the head pointer into a temporary pointer.


​ Move the temporary pointer through all the nodes of the list and print the value
field attached to every node.

Time Complexity : O(n)


C Implementation:

​ void display()
​ {
​ int i;
​ struct node *ptr;
​ ptr=head;
​ if(ptr == NULL)
​ {
​ printf("Stack is empty\n");
​ }
​ else
​ {
​ printf("Printing Stack elements \n");
​ while(ptr!=NULL)
​ {
​ printf("%d\n",ptr->val);
​ ptr = ptr->next;
​ }
​ }
​ }

Menu Driven program in C implementing all the stack operations using linked list :

​ #include <stdio.h>
​ #include <stdlib.h>
​ void push();
​ void pop();
​ void display();
​ struct node
​ {
​ int val;
​ struct node *next;
​ };
​ struct node *head;

​ void main ()
​ {
​ int choice=0;
​ printf("\n*********Stack operations using linked list*********\n");
​ printf("\n----------------------------------------------\n");
​ while(choice != 4)
​ {
​ printf("\n\nChose one from the below options...\n");
​ printf("\n1.Push\n2.Pop\n3.Show\n4.Exit");
​ printf("\n Enter your choice \n");
​ scanf("%d",&choice);
​ switch(choice)
​ {
​ case 1:
​ {
​ push();
​ break;
​ }
​ case 2:
​ {
​ pop();
​ break;
​ }
​ case 3:
​ {
​ display();
​ break;
​ }
​ case 4:
​ {
​ printf("Exiting....");
​ break;
​ }
​ default:
​ {
​ printf("Please Enter valid choice ");
​ }
​ };
​ }
​ }
​ void push ()
​ {
​ int val;
​ struct node *ptr = (struct node*)malloc(sizeof(struct node));
​ if(ptr == NULL)
​ {
​ printf("not able to push the element");
​ }
​ else
​ {
​ printf("Enter the value");
​ scanf("%d",&val);
​ if(head==NULL)
​ {
​ ptr->val = val;
​ ptr -> next = NULL;
​ head=ptr;
​ }
​ else
​ {
​ ptr->val = val;
​ ptr->next = head;
​ head=ptr;

​ }
​ printf("Item pushed");

​ }
​ }

​ void pop()
​ {
​ int item;
​ struct node *ptr;
​ if (head == NULL)
​ {
​ printf("Underflow");
​ }
​ else
​ {
​ item = head->val;
​ ptr = head;
​ head = head->next;
​ free(ptr);
​ printf("Item popped");

​ }
​ }
​ void display()
​ {
​ int i;
​ struct node *ptr;
​ ptr=head;
​ if(ptr == NULL)
​ {
​ printf("Stack is empty\n");
​ }
​ else
​ {
​ printf("Printing Stack elements \n");
​ while(ptr!=NULL)
​ {
​ printf("%d\n",ptr->val);
​ ptr = ptr->next;
​ }
​ }
​ }

Recursion
What is Recursion?

Recursion is defined as a process that calls itself directly or indirectly and the corresponding
function is called a recursive function.

Properties of Recursion:

Recursion has some important properties. Some of which are mentioned below:

● The primary property of recursion is the ability to solve a problem by breaking it down into smaller sub-problems, each of which can be solved in the same way.
● A recursive function must have a base case or stopping criteria to avoid infinite
recursion.
● Recursion involves calling the same function within itself, which leads to a call stack.
● Recursive functions may be less efficient than iterative solutions in terms of memory
and performance.

Types of Recursion:

1. Direct recursion: When a function is called within itself directly it is called direct
recursion. This can be further categorised into four types:
● Tail recursion,
● Head recursion,
● Tree recursion and
● Nested recursion.
2. Indirect recursion: Indirect recursion occurs when a function calls another function
that eventually calls the original function and it forms a cycle.

Applications of Recursion:

Recursion is used in many fields of computer science and mathematics, which includes:

● Searching and sorting algorithms: Recursive algorithms are used to search and sort
data structures like trees and graphs.
● Mathematical calculations: Recursive algorithms are used to solve problems such as
factorial, Fibonacci sequence, etc.
● Compiler design: Recursion is used in the design of compilers to parse and analyze
programming languages.
● Graphics: many computer graphics algorithms, such as fractals and the Mandelbrot
set, use recursion to generate complex patterns.
● Artificial intelligence: recursive neural networks are used in natural language
processing, computer vision, and other AI applications.

Advantages of Recursion:

● Recursion can simplify complex problems by breaking them down into smaller, more
manageable pieces.
● Recursive code can be more readable and easier to understand than iterative code.
● Recursion is essential for some algorithms and data structures.
● Recursion can also reduce the length of code, making it more readable and understandable to the user/programmer.
Disadvantages of Recursion:

● Recursion can be less efficient than iterative solutions in terms of memory and
performance.
● Recursive functions can be more challenging to debug and understand than iterative
solutions.
● Recursion can lead to stack overflow errors if the recursion depth is too high.

You might also like