Unit - 1
1. Define the terms: linear data structure and non-linear data structure.
2. Write any four operations that can be performed on a data structure.
3. Write any four operations performed on the data structure
4. Explain linear and non-linear data structures.
5. Define Abstract data type.
6. Define complexity and classify it.
7. Write any four operations that can be performed on the data structure.
8. Give classification of data structure.
9. List any four operations on the data structure
10. Differentiate between linear and non-linear data structures on any two parameters.
11. Define the term algorithm.
12. Write C program for performing following operations on array : insertion, display.
13. Write ‘C’ program for deletion of an element from an array.
14. Implement a C program to insert an element in an array.
From the definition mentioned above, we can conclude that a data structure is always considered together with the operations performed on its data.
Whenever a data structure is described purely in terms of its data and such operations, it is known as an Abstract Data Type (ADT).
We can define it as a set of data elements along with the operations on the data. The term
"abstract" refers to the fact that the data and the fundamental operations defined on it are being
studied independently of their implementation. It includes what we can do with the data, not how
we can do it.
An ADT implementation contains a storage structure to store the data elements and algorithms for the fundamental operations. All the data structures, like arrays, linked lists, queues, stacks, etc., are examples of ADTs.
The primitive data structures are the primitive data types: int, char, float, double, and pointer are primitive data structures that can hold a single value.
The arrangement of data in a sequential manner is known as a linear data structure. The data structures used for this purpose are arrays, linked lists, stacks, and queues. In these data structures, each element is connected to only one other element in a linear form.
When one element is connected to 'n' other elements, the structure is known as a non-linear data structure. The best examples are trees and graphs. In this case, the elements are not arranged in a sequential manner.
We will briefly discuss the above data structures in the coming topics. Now, we will see the
common operations that we can perform on these data structures.
● Static data structure: It is a type of data structure where the size is allocated at the
compile time. Therefore, the maximum size is fixed.
● Dynamic data structure: It is a type of data structure where the size is allocated at the
run time. Therefore, the maximum size is flexible.
1. Traversal: Traversing a data structure means accessing each data element exactly once so that it can be processed. For example, traversing is required while printing the names of all the employees in a department.
2. Search: Search is another data structure operation which means to find the location of
one or more data elements that meet certain constraints. Such a data element may or may
not be present in the given set of data elements. For example, we can use the search
operation to find the names of all the employees who have the experience of more than 5
years.
3. Insertion: Insertion means inserting or adding new data elements to the collection. For
example, we can use the insertion operation to add the details of a new employee the
company has recently hired.
4. Deletion: Deletion means to remove or delete a specific data element from the given list
of data elements. For example, we can use the deleting operation to delete the name of an
employee who has left the job.
5. Sorting: Sorting means to arrange the data elements in either Ascending or Descending
order depending on the type of application. For example, we can use the sorting operation
to arrange the names of employees in a department in alphabetical order or estimate the
top three performers of the month by arranging the performance of the employees in
descending order and extracting the details of the top three.
6. Merge: Merge means to combine data elements of two sorted lists in order to form a
single list of sorted data elements.
7. Create: Create is an operation used to reserve memory for the data elements of the
program. We can perform this operation using a declaration statement. The creation of
data structure can take place either during the following:
1. Compile-time
2. Run-time
For example, the malloc() function is used in the C language to create a data structure at run time (a short sketch follows this list).
8. Selection: Selection means selecting particular data from the available data. We can select any particular data by specifying conditions inside a loop.
9. Update: The Update operation allows us to update or modify the data in the data
structure. We can also update any particular data by specifying some conditions inside the
loop, like the Selection operation.
10. Splitting: The Splitting operation allows us to divide data into various subparts
decreasing the overall process completion time.
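As a small, hedged illustration of the Create, Insertion, and Traversal operations listed above (the size of 5 and the stored values are arbitrary examples), a dynamic integer array can be created at run time with malloc() in C as follows:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 5;                                 /* create: reserve memory at run time */
    int *a = (int *)malloc(n * sizeof(int));
    if (a == NULL)
        return 1;                              /* creation failed */

    for (int i = 0; i < n; i++)                /* insertion: store values one by one */
        a[i] = (i + 1) * 10;

    for (int i = 0; i < n; i++)                /* traversal: visit each element exactly once */
        printf("%d ", a[i]);
    printf("\n");

    free(a);                                   /* release the created structure */
    return 0;
}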
1. Arrays
An Array is a data structure used to collect multiple data elements of the same data type into one variable. Instead of storing multiple values of the same data type in separate variables, we can store all of them together in one variable. This does not mean that every value of a given data type in a program must be united into one array of that data type; but there will often be times when some specific variables of the same data type are related to one another in a way that is appropriate for an array.
An Array is a list of elements where each element has a unique place in the list. The data
elements of the array share the same variable name; however, each carries a different index
number called a subscript. We can access any data element from the list with the help of its
location in the list. Thus, the key feature of the arrays to understand is that the data is stored in
contiguous memory locations, making it possible for the users to traverse through the data
elements of the array using their respective indexes.
Figure : An Array
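For example, the following short C sketch (with illustrative values) shows how a subscript gives direct access to an element stored in contiguous memory, and how a two-dimensional array, described below, simply adds a second index for rows and columns:
#include <stdio.h>

int main(void)
{
    int marks[5] = {70, 82, 65, 90, 77};     /* one-dimensional array: a single row of elements */
    int matrix[2][3] = { {1, 2, 3},
                         {4, 5, 6} };        /* two-dimensional array: rows and columns (a matrix) */

    printf("Third element of marks = %d\n", marks[2]);        /* subscripts start at 0 */
    printf("Element at row 1, column 2 = %d\n", matrix[1][2]);
    return 0;
}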
1. One-Dimensional Array: An Array with only one row of data elements is known as a
One-Dimensional Array. It is stored in ascending storage location.
2. Two-Dimensional Array: An Array consisting of multiple rows and columns of data
elements is called a Two-Dimensional Array. It is also known as a Matrix.
3. Multidimensional Array: We can define a Multidimensional Array as an Array of Arrays. Multidimensional Arrays are not bounded to two indices or two dimensions, as they can include as many indices as needed.
1. We can store a list of data elements belonging to the same data type.
2. An array acts as auxiliary storage for other data structures.
3. An array can also store the data elements of a binary tree with a fixed number of nodes.
4. An array also acts as storage for matrices.
2. Linked Lists
A Linked List is another example of a linear data structure used to store a collection of data
elements dynamically. Data elements in this data structure are represented by the Nodes,
connected using links or pointers. Each node contains two fields, the information field consists of
the actual data, and the pointer field consists of the address of the subsequent nodes in the list.
The pointer of the last node of the linked list consists of a null pointer, as it points to nothing.
Unlike the Arrays, the user can dynamically adjust the size of a Linked List as per the
requirements.
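As a quick sketch of this structure (the field names val and next mirror those used in the stack programs later in these notes), a node of a singly linked list and a traversal that follows the pointers until the null pointer of the last node can be written in C as:
#include <stdio.h>
#include <stdlib.h>

struct node                     /* one node: information field + pointer field */
{
    int val;                    /* actual data */
    struct node *next;          /* address of the subsequent node (NULL for the last node) */
};

void traverse(struct node *head)
{
    for (struct node *ptr = head; ptr != NULL; ptr = ptr->next)
        printf("%d -> ", ptr->val);
    printf("NULL\n");
}

int main(void)
{
    struct node *second = malloc(sizeof *second);
    struct node *first  = malloc(sizeof *first);
    second->val = 20; second->next = NULL;
    first->val  = 10; first->next  = second;   /* first -> second -> NULL */
    traverse(first);
    free(second);
    free(first);
    return 0;
}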
1. Singly Linked List: A Singly Linked List is the most common type of Linked List. Each
node has data and a pointer field containing an address to the next node.
2. Doubly Linked List: A Doubly Linked List consists of an information field and two
pointer fields. The information field contains the data. The first pointer field contains an
address of the previous node, whereas another pointer field contains a reference to the
next node. Thus, we can go in both directions (backward as well as forward).
3. Circular Linked List: The Circular Linked List is similar to the Singly Linked List. The
only key difference is that the last node contains the address of the first node, forming a
circular loop in the Circular Linked List.
1. Linked Lists help us implement stacks, queues, binary trees, and graphs without a predefined size.
2. We can also implement Operating System's function for dynamic memory management.
3. Linked Lists also allow polynomial implementation for mathematical operations.
4. We can use a Circular Linked List to implement Operating System or application functions that use Round-Robin execution of tasks.
5. A Circular Linked List is also helpful in a slide show where the user needs to go back to the first slide after the last slide is presented.
6. A Doubly Linked List is utilized to implement the forward and backward buttons in a browser, to move backward and forward through the opened pages of a website.
3. Stacks
A Stack is a Linear Data Structure that follows the LIFO (Last In, First Out) principle that allows
operations like insertion and deletion from one end of the Stack, i.e., Top. Stacks can be
implemented with the help of contiguous memory, an Array, and non-contiguous memory, a
Linked List. Real-life examples of Stacks are piles of books, a deck of cards, piles of money, and
many more.
The above figure represents the real-life example of a Stack where the operations are performed
from one end only, like the insertion and removal of new books from the top of the Stack. It
implies that the insertion and deletion in the Stack can be done only from the top of the Stack.
We can access only the top of the Stack at any given time.
1. Push: Operation to insert a new element in the Stack is termed as Push Operation.
2. Pop: Operation to remove or delete elements from the Stack is termed as Pop Operation.
Figure : A Stack
4. Queues
A Queue is a linear data structure similar to a Stack with some limitations on the insertion and
deletion of the elements. The insertion of an element in a Queue is done at one end, and the
removal is done at another or opposite end. Thus, we can conclude that the Queue data structure
follows FIFO (First In, First Out) principle to manipulate the data elements. Implementation of
Queues can be done using Arrays, Linked Lists, or Stacks. Some real-life examples of Queues
are a line at the ticket counter, an escalator, a car wash, and many more.
The above image is a real-life illustration of a movie ticket counter that can help us understand
the Queue where the customer who comes first is always served first. The customer arriving last
will undoubtedly be served last. Both ends of the Queue are open and can execute different
operations. Another example is a food court line where the customer is inserted from the rear end
while the customer is removed at the front end after providing the service they asked for.
1. Enqueue: The insertion or Addition of some data elements to the Queue is called
Enqueue. The element insertion is always done with the help of the rear pointer.
2. Dequeue: Deleting or removing data elements from the Queue is termed Dequeue. The
deletion of the element is always done with the help of the front pointer.
Figure: A Queue
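A minimal array-based sketch of Enqueue and Dequeue (assuming a small fixed capacity and simple front/rear indices rather than a circular queue) could look like this in C:
#include <stdio.h>

#define MAX 5

int queue[MAX];
int front = 0, rear = -1;

/* Enqueue: insertion is always done with the help of the rear index */
int enqueue(int val)
{
    if (rear == MAX - 1)         /* queue full */
        return 0;
    queue[++rear] = val;
    return 1;
}

/* Dequeue: deletion is always done with the help of the front index */
int dequeue(int *out)
{
    if (front > rear)            /* queue empty */
        return 0;
    *out = queue[front++];
    return 1;
}

int main(void)
{
    int x;
    enqueue(10); enqueue(20); enqueue(30);
    while (dequeue(&x))
        printf("%d ", x);        /* prints 10 20 30: first in, first out */
    printf("\n");
    return 0;
}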
1. Trees
A Tree is a Non-Linear Data Structure and a hierarchy containing a collection of nodes such that
each node of the tree stores a value and a list of references to other nodes (the "children").
The Tree data structure is a specialized method to arrange and collect data in the computer to be
utilized more effectively. It contains a central node, structural nodes, and sub-nodes connected
via edges. We can also say that the tree data structure consists of roots, branches, and leaves
connected.
Figure : A Tree
1. Binary Tree: A Tree data structure where each parent node can have at most two children
is termed a Binary Tree.
2. Binary Search Tree: A Binary Search Tree is a Tree data structure where we can easily
maintain a sorted list of numbers.
3. AVL Tree: An AVL Tree is a self-balancing Binary Search Tree where each node
maintains extra information known as a Balance Factor whose value is either -1, 0, or +1.
4. B-Tree: A B-Tree is a special type of self-balancing Binary Search Tree where each node
consists of multiple keys and can have more than two children.
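As a brief, hedged illustration of the Binary Search Tree idea (smaller keys go to the left child, larger keys to the right), here is a minimal C sketch with an insert routine and an in-order traversal; the structure and function names are only examples:
#include <stdio.h>
#include <stdlib.h>

struct tnode
{
    int key;
    struct tnode *left, *right;    /* at most two children: binary tree */
};

/* Insert a key so that the in-order traversal stays sorted (BST property) */
struct tnode *insert(struct tnode *root, int key)
{
    if (root == NULL)
    {
        root = malloc(sizeof *root);
        root->key = key;
        root->left = root->right = NULL;
    }
    else if (key < root->key)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

void inorder(struct tnode *root)   /* prints keys in ascending order */
{
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->key);
    inorder(root->right);
}

int main(void)
{
    struct tnode *root = NULL;
    int keys[] = {50, 30, 70, 20, 40};
    for (int i = 0; i < 5; i++)
        root = insert(root, keys[i]);
    inorder(root);                 /* 20 30 40 50 70 */
    printf("\n");
    return 0;
}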
1. Trees implement hierarchical structures in computer systems like directories and file
systems.
2. Trees are also used to implement the navigation structure of a website.
3. We can generate code like Huffman's code using Trees.
4. Trees are also helpful in decision-making in Gaming applications.
5. Trees are responsible for implementing priority queues for priority-based OS scheduling
functions.
6. Trees are also responsible for parsing expressions and statements in the compilers of
different programming languages.
7. We can use Trees to store data keys for indexing for Database Management System
(DBMS).
8. Spanning Trees allow us to make routing decisions in Computer and Communications Networks.
9. Trees are also used in the path-finding algorithm implemented in Artificial Intelligence
(AI), Robotics, and Video Games Applications.
2. Graphs
A Graph is another example of a Non-Linear Data Structure comprising a finite number of nodes
or vertices and the edges connecting them. The Graphs are utilized to address problems of the
real world in which it denotes the problem area as a network such as social networks, circuit
networks, and telephone networks. For instance, the nodes or vertices of a Graph can represent a
single user in a telephone network, while the edges represent the link between them via
telephone.
Formally, a graph is defined as G = (V, E), where V is the set of vertices and E is the set of edges.
Figure : A Graph
The above figure represents a Graph having seven vertices A, B, C, D, E, F, G, and ten edges [A,
B], [A, C], [B, C], [B, D], [B, E], [C, D], [D, E], [D, F], [E, F], and [E, G].
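One common way to represent such a graph in a program is an adjacency matrix. The sketch below builds the matrix for the seven vertices A to G and the ten edges listed above, mapping A to index 0, B to 1, and so on; this is only one possible representation (an adjacency list is the usual alternative):
#include <stdio.h>

#define V 7    /* vertices A, B, C, D, E, F, G mapped to indices 0..6 */

int main(void)
{
    int adj[V][V] = {0};
    /* the ten edges from the figure, as index pairs */
    int edges[10][2] = { {0,1},{0,2},{1,2},{1,3},{1,4},
                         {2,3},{3,4},{3,5},{4,5},{4,6} };

    for (int e = 0; e < 10; e++)           /* non-directed graph: mark both directions */
    {
        adj[edges[e][0]][edges[e][1]] = 1;
        adj[edges[e][1]][edges[e][0]] = 1;
    }

    for (int i = 0; i < V; i++)            /* print the adjacency matrix */
    {
        for (int j = 0; j < V; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }
    return 0;
}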
Depending upon the position of the vertices and edges, the Graphs can be classified into different
types:
1. Null Graph: A Graph with an empty set of edges is termed a Null Graph.
2. Trivial Graph: A Graph having only one vertex is termed a Trivial Graph.
3. Simple Graph: A Graph with neither self-loops nor multiple edges is known as a Simple
Graph.
4. Multi Graph: A Graph is said to be Multi if it consists of multiple edges but no self-loops.
5. Pseudo Graph: A Graph with self-loops and multiple edges is termed a Pseudo Graph.
6. Non-Directed Graph: A Graph consisting of non-directed edges is known as a
Non-Directed Graph.
7. Directed Graph: A Graph consisting of the directed edges between the vertices is known
as a Directed Graph.
8. Connected Graph: A Graph with at least a single path between every pair of vertices is
termed a Connected Graph.
9. Disconnected Graph: A Graph where there does not exist any path between at least one
pair of vertices is termed a Disconnected Graph.
10. Regular Graph: A Graph where all vertices have the same degree is termed a Regular
Graph.
11. Complete Graph: A Graph in which all vertices have an edge between every pair of
vertices is known as a Complete Graph.
12. Cycle Graph: A Graph is said to be a Cycle if it has at least three vertices and edges that
form a cycle.
13. Cyclic Graph: A Graph is said to be Cyclic if and only if at least one cycle exists.
14. Acyclic Graph: A Graph having zero cycles is termed an Acyclic Graph.
15. Finite Graph: A Graph with a finite number of vertices and edges is known as a Finite
Graph.
16. Infinite Graph: A Graph with an infinite number of vertices and edges is known as an
Infinite Graph.
17. Bipartite Graph: A Graph where the vertices can be divided into independent sets A and
B, and all the vertices of set A should only be connected to the vertices present in set B
with some edges is termed a Bipartite Graph.
18. Planar Graph: A Graph is said to be Planar if we can draw it in a single plane without any two edges intersecting each other.
19. Euler Graph: A Graph is said to be an Euler Graph if and only if all the vertices are of even degree.
20. Hamiltonian Graph: A Connected Graph consisting of a Hamiltonian circuit is known as
a Hamiltonian Graph.
Some Applications of Graphs:
Deletion of an element from an array: the element to be deleted is found using a linear search, and then the delete operation is executed, followed by relocating (shifting) the remaining elements.
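A combined C sketch of these array operations (insertion at a chosen position, deletion of an element located by linear search, and display) might look as follows; the array contents, the insertion position, and the key to delete are only examples:
#include <stdio.h>

int main(void)
{
    int a[20] = {10, 20, 30, 40, 50};
    int n = 5, i, pos, val, key, found = -1;

    /* Insertion: shift elements to the right from the chosen position, then place the value */
    pos = 2; val = 25;
    for (i = n; i > pos; i--)
        a[i] = a[i - 1];
    a[pos] = val;
    n++;

    /* Deletion: locate the element with a linear search, then shift the rest to the left */
    key = 40;
    for (i = 0; i < n; i++)
        if (a[i] == key) { found = i; break; }
    if (found != -1)
    {
        for (i = found; i < n - 1; i++)
            a[i] = a[i + 1];
        n--;
    }

    /* Display the resulting array */
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}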
Unit - 2
1. Sort the following numbers in ascending order using quick sort. Given numbers 50, 2,
6, 22, 3, 39, 49, 25, 18, 5.
2. Sort the following numbers in ascending order using Bubble sort. Given numbers : 29, 35, 3, 8, 11, 15, 56, 12, 1, 4, 85, 5 & write the output after each iteration.
3. Find the position of element 30 using Binary search method in array A{10, 5, 20, 25,
8, 30, 40}.
4. Describe the working of Selection Sort Method. Also sort given input list in ascending
order using selection sort. Input list: 50, 24, 5, 12, 30
5. Sort the following numbers in ascending order using Insertion sort :
{25, 15, 4, 103, 62, 9} and write the output after each iteration.
6. Differentiate between Binary search and Linear search with respect to any four
parameters.
7. Find the position of element 29 using Binary search method in an array given
as : {11, 5, 21, 3, 29, 17, 2, 43}.
8. Find the position of element 29 using binary search method in an array ‘A’
given below. Show each step. A = {11, 5, 21, 3, 29, 17, 2, 43}
9. Describe working of bubble sort with example.
10. Describe working of selection sort method. Also sort given input list in
ascending order using selection sort input list – 55, 25, 5, 15, 35.
11. Implement a ‘C’ program to search a particular data from the given array using Linear
Search.
12. Find the position of element 21 using Binary Search method in Array ‘A’ given below :
A = {11, 5, 21, 3, 29, 17, 2, 45}
13. Elaborate the steps for performing selection sort for given elements of array. A = {37,
12, 4, 90, 49, 23, –19}.
14. State the following terms : (i) searching (ii) sorting
15. Write a program to implement bubble sort.
16. Find location of element 20 by using binary search algorithm in the list given
below :10, 20, 30, 40, 50, 60, 70, 80
17. Describe working of linear search with example.
18. Write a ‘C’ program for insertion sort. Sort the following array using insertion
sort : 30 10 40 50 20 45
19. State any two differences between linear search and binary search.
20. Describe working of selection sort method with suitable example.
21. Explain complexity of following algorithms in terms of time and space : (i) Binary
search (ii) Bubble sort
22. Describe working of bubble sort with example.
23. Sort the following number in ascending order using bubble sort. Given numbers as
follows: 475, 15, 513, 6753, 45, 118.
24. Define Searching. State two methods of Searching.
25. Write a program to implement a linear search for 10 elements in an array.
26. Write a program to print a string in reverse order.
27. Write a program to implement selection sort.
28. Define Searching. What are its types.
29. Differentiate between Binary search and Linear search with respect to any four
parameters.
Searching is the process of finding some particular element in the list. If the element is present
in the list, then the process is called successful, and the process returns the location of that
element; otherwise, the search is called unsuccessful.
Two popular search methods are Linear Search and Binary Search.
Linear search:
Linear search is a type of sequential searching algorithm. In this method, every element of the input array is traversed and compared with the key element to be found. If a match is found in the array, the search is said to be successful; if no match is found, the search is said to be unsuccessful, and this case gives the worst-case time complexity.
For instance, suppose we are searching for the element 33 in an array. The linear search method searches for it sequentially from the very first element until it finds a match; this is a successful search.
If, instead, we search for the element 46 in the same array, the search is unsuccessful, since 46 is not present in the input.
The algorithm for linear search is relatively simple. The procedure starts at the very first index of
the input array to be searched.
Step 1 − Start from the 0th index of the input array, and compare the key value with the value
present in the 0th index.
Step 2 − If the value matches with the key, return the position at which the value was found.
Step 3 − If the value does not match with the key, compare the next element in the array.
Step 4 − Repeat Step 3 until there is a match found. Return the position at which the match was
found.
Step 5 − If it is an unsuccessful search, print that the element is not present in the array and exit
the program.
Pseudocode:
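In place of formal pseudocode, here is a minimal C implementation of the same steps; the example array matches the walkthrough below and the key 47 is the element being searched for:
#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 for an unsuccessful search */
int linear_search(int a[], int n, int key)
{
    for (int i = 0; i < n; i++)       /* start from the 0th index */
        if (a[i] == key)              /* compare each element with the key */
            return i;                 /* match found: return its position */
    return -1;                        /* key not present in the array */
}

int main(void)
{
    int a[] = {34, 10, 66, 27, 47};
    int pos = linear_search(a, 5, 47);
    if (pos >= 0)
        printf("Element found at index %d\n", pos);
    else
        printf("Element is not present in the array\n");
    return 0;
}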
Analysis: Linear search traverses every element sequentially; therefore, the best case is when the element is found in the very first iteration. The best-case time complexity would be O(1).
However, the worst case of the linear search method is an unsuccessful search that does not find the key value in the array, in which case it performs n iterations. Therefore, the worst-case time complexity of the linear search algorithm would be O(n).
Example
Let us look at the step-by-step searching of the key element (say 47) in an array using the linear
search method.
Step 1
The linear search starts from the 0th index. Compare the key element with the value in the 0th
index, 34.
Step 2
Now, the key is compared with value in the 1st index of the array.
Still, 47 ≠ 10, making the algorithm move for another iteration.
Step 3
The next element 66 is compared with 47. They are both not a match so the algorithm compares
the further elements.
Step 4
Now the element in 3rd index, 27, is compared with the key value, 47. They are not equal so the
algorithm is pushed forward to check the next element.
Step 5
Comparing the element in the 4th index of the array, 47, to the key 47, we find that both elements match. Now, the position at which 47 is present, i.e. 4, is returned.
Binary search:
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search
algorithm works on the principle of divide and conquer, since it divides the array into half before
searching. For this algorithm to work properly, the data collection should be in the sorted form.
Binary search looks for a particular key value by comparing the middle-most item of the collection. If a match occurs, then the index of the item is returned. But if the middle item has a value greater than the key value, the left sub-array of the middle item is searched; otherwise, the right sub-array is searched. This process continues recursively until the size of a sub-array reduces to zero.
Binary Search algorithm is an interval searching method that performs the searching in intervals
only. The input taken by the binary search algorithm must always be in a sorted array since it
divides the array into subarrays based on the greater or lower values. The algorithm follows the
procedure below −
Step 1 − Select the middle item in the array and compare it with the key value to be searched. If
it is matched, return the position of the median.
Step 2 − If it does not match the key value, check if the key value is either greater than or less
than the median value.
Step 3 − If the key is greater, perform the search in the right sub-array; but if the key is lower
than the median value, perform the search in the left sub-array.
Step 4 − Repeat Steps 1, 2 and 3 iteratively, until the size of sub-array becomes 1.
Step 5 − If the key value does not exist in the array, then the algorithm returns an unsuccessful
search.
Pseudocode:
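In place of formal pseudocode, here is a minimal iterative C implementation of the steps above; it assumes the input array is already sorted in ascending order, and the example array is taken from one of the exercises in this unit:
#include <stdio.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1 if not found */
int binary_search(int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* middle item of the current sub-array */
        if (a[mid] == key)
            return mid;                     /* match: return the position */
        else if (a[mid] < key)
            low = mid + 1;                  /* key is larger: search the right sub-array */
        else
            high = mid - 1;                 /* key is smaller: search the left sub-array */
    }
    return -1;                              /* unsuccessful search */
}

int main(void)
{
    int a[] = {10, 20, 30, 40, 50, 60, 70, 80};
    printf("Position of 20: %d\n", binary_search(a, 8, 20));   /* prints 1 */
    return 0;
}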
Since the binary search algorithm performs searching iteratively, calculating the time complexity
is not as easy as the linear search algorithm.
The input array is searched iteratively by dividing into multiple sub-arrays after every
unsuccessful iteration. Therefore, the recurrence relation formed would be of a dividing function.
During the first iteration, the element is searched in the entire array. Therefore, length of
the array = n.
In the second iteration, only half of the original array is searched. Hence, length of the
array = n/2.
In the third iteration, half of the previous sub-array is searched. Here, length of the array
will be = n/4.
Similarly, in the i-th iteration, the length of the array becomes n/2^i.
To achieve a successful search, after the last iteration the length of the array must be 1. Hence,
n/2^i = 1
That gives us:
n = 2^i
log n = log 2^i
log n = i · log 2
i = log n (taking the logarithm to base 2)
Therefore, the worst-case time complexity of binary search is O(log n).
Example
For a binary search to work, the target array must be sorted. We shall learn the process of binary
search with a pictorial example. The following is our sorted array and let us assume that we need
to search the location of value 31 using binary search.
First, we find the middle of the array using the formula mid = low + (high - low) / 2. Here it is 0 + (9 - 0) / 2 = 4 (the integer part of 4.5). So, 4 is the mid of the array.
Now we compare the value stored at location 4, with the value being searched, i.e. 31. We find
that the value at location 4 is 27, which is not a match. As the value is greater than 27 and we
have a sorted array, so we also know that the target value must be in the upper portion of the
array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is not a match; rather, it is greater than what we are looking for. So, the value must be in the lower part from this location.
We therefore change high to mid - 1 and calculate the new mid again, which comes out to be 5.
We compare the value stored at location 5 with our target value. We find that it is a match.
We conclude that the target value 31 is stored at location 5.
Binary search halves the searchable items and thus reduces the count of comparisons to be made to a much smaller number.
Bubble sort:
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair of adjacent elements is compared and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets as its average and worst-case complexity are O(n²), where n is the number of items.
We assume list is an array of n elements. We further assume that swap function swaps the values
of the given array elements.
Step 1 − Check if the first element in the input array is greater than the next element in the array.
Step 2 − If it is greater, swap the two elements; otherwise, move the pointer forward in the array.
Step 3 − Repeat Step 1 and Step 2 with the next pair of elements until the end of the array is reached.
Step 4 − Check if the elements are sorted; if not, repeat the same process (Step 1 to Step 3) for another pass over the array.
Pseudocode
We observe in the algorithm that bubble sort compares each pair of adjacent array elements unless the whole array is completely sorted in ascending order. This may cause a few efficiency issues, for example when the array needs no more swapping because all the elements are already in ascending order.
To ease out this issue, we use one flag variable, swapped, which helps us see whether any swap has happened or not. If no swap has occurred, i.e. the array requires no more processing to be sorted, it will come out of the loop.
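A C sketch of bubble sort using exactly this swapped flag (so that an already sorted array is detected after a single pass) could be written as follows; the sample values are those used in the walkthrough below:
#include <stdio.h>

void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int swapped = 0;                    /* flag: has any swap happened in this pass? */
        for (int j = 0; j < n - 1 - i; j++)
        {
            if (a[j] > a[j + 1])            /* adjacent pair out of order */
            {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped)                       /* no swap: array is already sorted, stop early */
            break;
    }
}

int main(void)
{
    int a[] = {14, 33, 27, 35, 10};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                /* 10 14 27 33 35 */
    printf("\n");
    return 0;
}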
Analysis:
In this algorithm, the number of comparisons is the same irrespective of the data set, i.e. whether the provided input elements are in sorted order, in reverse order, or at random.
Memory requirement: from the algorithm stated above, it is clear that bubble sort does not require extra memory.
Example: We take an unsorted array for our example. Bubble sort takes Ο(n²) time, so we're keeping it short and precise.
Bubble sort starts with the very first two elements, comparing them to check which one is greater.
In this case, value 33 is greater than 14, so these two are already in sorted positions. Next, we compare 33 with 27.
We find that 27 is smaller than 33 and these two values must be swapped.
Next we compare 33 and 35. We find that both are in already sorted positions.
We then find that 10 is smaller than 35. Hence, they are not sorted; we swap these values and find that we have reached the end of the array. After one iteration, the array should look like this −
To be precise, we are now showing how the array should look after each iteration. After the second iteration, it should look like this −
Notice that after each iteration, at least one value moves to the end.
And when no swap is required, bubble sort learns that the array is completely sorted.
In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and inserted into its appropriate position in the array. It is one of the simplest sorting algorithms and is an in-place comparison sorting algorithm. In this algorithm, the array is divided into two parts: the first is the sorted part, and the other is the unsorted part. Initially, the sorted part of the array is empty, and the unsorted part is the given array. The sorted part is placed at the left, while the unsorted part is placed at the right.
In selection sort, the first smallest element is selected from the unsorted array and placed at the
first position. After that second smallest element is selected and placed in the second position.
The process continues until the array is entirely sorted.
The average and worst-case complexity of selection sort is O(n²), where n is the number of items. Due to this, it is not suitable for large data sets.
To understand the working of the Selection sort algorithm, let's take an unsorted array. It will be
easier to understand the Selection sort via an example.
Now, for the first position in the sorted array, the entire array is to be scanned sequentially.
At present, 12 is stored at the first position, after searching the entire array, it is found that 8 is
the smallest value.
So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted array.
For the second position, where 29 is stored presently, we again sequentially scan the rest of the items of the unsorted array. After scanning, we find that 12 is the second lowest element in the array, which should appear at the second position.
Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the
sorted array. So, after two iterations, the two smallest values are placed at the beginning in a
sorted way.
The same process is applied to the rest of the array elements. Now, we are showing a pictorial
representation of the entire sorting process.
Auxiliary Space: O(1) as the only extra memory used is for temporary variables while swapping
two values in Array. The selection sort never makes more than O(N) swaps and can be useful
when memory writing is costly.
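A C sketch of this procedure (repeatedly find the smallest element in the unsorted part and swap it to the front of that part) is given below, using the input list from one of the exercises above as an example:
#include <stdio.h>

void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int min = i;                        /* assume the first unsorted element is the smallest */
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;                    /* remember the position of the smallest element */
        if (min != i)                       /* swap it into the next slot of the sorted part */
        {
            int t = a[i];
            a[i] = a[min];
            a[min] = t;
        }
    }
}

int main(void)
{
    int a[] = {50, 24, 5, 12, 30};          /* input list from the exercise above */
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                /* 5 12 24 30 50 */
    printf("\n");
    return 0;
}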
Advantages of Selection Sort Algorithm:
● Simple and easy to understand.
● Works well with small datasets.
Insertion sort:
Insertion sort is a very simple method to sort numbers in ascending or descending order. It follows the incremental approach. It can be compared to the way cards are sorted in one's hand while playing a card game.
The array is searched sequentially and unsorted items are moved and inserted into the sorted sub-list (in the same array). This algorithm is not suitable for large data sets as its average and worst-case complexity are Ο(n²), where n is the number of items.
Now we have a bigger picture of how this sorting technique works, so we can derive simple steps
by which we can achieve insertion sort.
Step 4 − Shift all the elements in the sorted sub-list that are greater than the value to be sorted.
Pseudocode:
Algorithm: Insertion-Sort(A)
for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
        A[i + 1] = A[i]
        i = i - 1
    A[i + 1] = key
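The pseudocode above translates almost directly into C; the only change in this sketch is that C arrays are 0-indexed, so the outer loop starts at index 1 instead of 2. The sample array is taken from one of the exercises in this unit:
#include <stdio.h>

void insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++)             /* corresponds to "for j = 2 to A.length" */
    {
        int key = a[j];
        int i = j - 1;
        while (i >= 0 && a[i] > key)        /* shift larger sorted elements to the right */
        {
            a[i + 1] = a[i];
            i = i - 1;
        }
        a[i + 1] = key;                     /* insert the key at its correct position */
    }
}

int main(void)
{
    int a[] = {30, 10, 40, 50, 20, 45};     /* array from the exercise above */
    insertion_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);                /* 10 20 30 40 45 50 */
    printf("\n");
    return 0;
}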
Analysis:
Run time of this algorithm is very much dependent on the given input.
If the given numbers are sorted, this algorithm runs in O(n) time. If the given numbers are in reverse order, the algorithm runs in O(n²) time.
Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.
Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with swapping, insertion sort will also check 25 against all elements in the sorted sub-array.
For now, the sorted sub-array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted sub-array remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are
31 and 8.
Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31
and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
● Best case: O(n), if the list is already sorted, where n is the number of elements in the list.
● Average case: O(n²), if the list is randomly ordered.
● Worst case: O(n²), if the list is in reverse order.
● Auxiliary Space: O(1); insertion sort requires O(1) additional space, making it a space-efficient sorting algorithm.
Quick sort:
Sorting is a way of arranging items in a systematic manner. Quicksort is a widely used sorting algorithm that makes O(n log n) comparisons in the average case for sorting an array of n elements. It is a fast and highly efficient sorting algorithm. This algorithm follows the divide-and-conquer approach. Divide and conquer is a technique of breaking an algorithm down into subproblems, solving the subproblems, and combining the results back together to solve the original problem.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two
sub-arrays such that each element in the left sub-array is less than or equal to the pivot element
and each element in the right sub-array is larger than the pivot element.
Quicksort picks an element as pivot, and then it partitions the given array around the picked
pivot element. In quick sort, a large array is divided into two arrays in which one holds values
that are smaller than the specified value (Pivot), and another array holds the values that are
greater than the pivot.
After that, left and right sub-arrays are also partitioned using the same approach. It will continue
until the single element remains in the sub-array.
Now, it’s time to see the working of a quick sort algorithm, and for that, we need to take an
unsorted array.
Here,
L= Left
R = Right
P = Pivot
In the given series of arrays, let’s assume that the leftmost item is the pivot. So, in this condition,
a[L] = 23, a[R] = 26 and a[P] = 23.
Since, at this moment, the pivot item is at left, so the algorithm initiates from right and travels
towards left.
Now, a[P] < a[R], so the algorithm travels forward one position towards left, i.e. –
Since a[P] > a[R], so the algorithm will exchange or swap a[P] with a[R], and the pivot travels to
right, as –
Now, a[L] = 18, a[R] = 23, and a[P] = 23. Since the pivot is at right, so the algorithm begins
from left and travels to right.
As a[P] > a[L], so algorithm travels one place to right as –
Now, a[L] = 8, a[R] = 23, and a[P] = 23. As a[P] > a[L], so algorithm travels one place to right as
–
Now, a[L] = 28, a[R] = 23, and a[P] = 23. As a[P] < a[L], so, swap a[P] and a[L], now pivot is at
left, i.e. –
Since the pivot is placed at the leftmost side, the algorithm begins from right and travels to left.
Now, a[L] = 23, a[R] = 28, and a[P] = 23. As a[P] < a[R], so algorithm travels one place to left,
as –
Now, a[P] = 23, a[L] = 23, and a[R] = 13. As a[P] > a[R], so, exchange a[P] and a[R], now pivot
is at right, i.e. –
Now, a[P] = 23, a[L] = 13, and a[R] = 23. Pivot is at right, so the algorithm begins from left and
travels to right.
Now, a[P] = 23, a[L] = 23, and a[R] = 23. So, pivot, left and right, are pointing to the same
element. It represents the termination of the procedure.
Item 23, which is the pivot element, stands at its accurate position.
Items that are on the right side of element 23 are greater than it, and the elements that are on the
left side of element 23 are smaller than it.
Now, in a similar manner, the quick sort algorithm is separately applied to the left and right
sub-arrays. After sorting gets done, the array will be –
Algorithm:
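The algorithm listing itself is not reproduced above; as a substitute, here is a compact C sketch that picks the leftmost element as the pivot, as in the walkthrough. The exact sequence of swaps differs slightly from the left/right pointer trace shown earlier, but the overall divide-and-conquer behaviour is the same. The sample numbers are taken from the first exercise of this unit:
#include <stdio.h>

/* Partition a[low..high] around the pivot a[low]; return the pivot's final index */
int partition(int a[], int low, int high)
{
    int pivot = a[low];
    int i = low, j = high;
    while (i < j)
    {
        while (i < high && a[i] <= pivot) i++;   /* find an element larger than the pivot */
        while (a[j] > pivot) j--;                /* find an element not larger than the pivot */
        if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }
    { int t = a[low]; a[low] = a[j]; a[j] = t; } /* place the pivot at its accurate position */
    return j;
}

void quick_sort(int a[], int low, int high)
{
    if (low < high)
    {
        int p = partition(a, low, high);
        quick_sort(a, low, p - 1);               /* sort the left sub-array */
        quick_sort(a, p + 1, high);              /* sort the right sub-array */
    }
}

int main(void)
{
    int a[] = {50, 2, 6, 22, 3, 39, 49, 25, 18, 5};   /* numbers from the exercise above */
    quick_sort(a, 0, 9);
    for (int i = 0; i < 10; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}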
Merge sort:
Merge sort is similar to the quick sort algorithm in that it uses the divide-and-conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself for the two halves, and then merges the two sorted halves. We have to define the merge() function to perform the merging.
The sub-lists are divided again and again into halves until each list cannot be divided further. Then we combine pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the sorted list.
Algorithm:
In the following algorithm, arr is the given array, beg is the starting element, and end is the last
element of the array.
The running time of merge sort satisfies the recurrence T(n) = 2T(n/2) + O(n), where:
● T(n) represents the total time taken by the algorithm to sort an array of size n.
● 2T(n/2) represents the time taken by the algorithm to recursively sort the two halves of the array. Since each half has n/2 elements, we have two recursive calls with input size (n/2).
● O(n) represents the time taken to merge the two sorted halves.
Solving this recurrence gives an overall time complexity of O(n log n).
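A C sketch of merge sort following the arr/beg/end convention described above is given below; merge() combines two already sorted halves with the help of a temporary array, and the sample values are chosen arbitrarily:
#include <stdio.h>

void merge(int arr[], int beg, int mid, int end)
{
    int temp[100];                               /* temporary array for the merged result */
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)                 /* pick the smaller head of the two halves */
        temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) temp[k++] = arr[i++];       /* copy any leftovers of the left half */
    while (j <= end) temp[k++] = arr[j++];       /* copy any leftovers of the right half */
    for (i = beg, k = 0; i <= end; i++, k++)
        arr[i] = temp[k];                        /* copy the merged run back */
}

void merge_sort(int arr[], int beg, int end)
{
    if (beg < end)
    {
        int mid = (beg + end) / 2;
        merge_sort(arr, beg, mid);               /* sort the left half   : T(n/2) */
        merge_sort(arr, mid + 1, end);           /* sort the right half  : T(n/2) */
        merge(arr, beg, mid, end);               /* merge the two halves : O(n)   */
    }
}

int main(void)
{
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    merge_sort(arr, 0, 6);
    for (int i = 0; i < 7; i++)
        printf("%d ", arr[i]);                   /* 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}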
Stack
4.1 Introduction to Stack: Definition, Stack as an ADT, Operations on Stack-(Push, Pop),
Stack Operations Conditions - Stack Full / Stack Overflow, Stack Empty /Stack Underflow.
4.2 Stack Implementation: using Array and Representation Using Linked List.
4.3 Applications of Stack: Reversing a List, Polish Notations, Conversion of Infix to Postfix Expression, Evaluation of Postfix Expression.
4.4 Recursion: Definition and Applications.
What is a Stack?
A Stack is a linear data structure that holds a linear, ordered sequence of elements. It is an
abstract data type. A Stack works on the LIFO process (Last In First Out), i.e., the element that
was inserted last will be removed first. To implement the Stack, it is required to maintain a
pointer to the top of the Stack, which is the last element to be inserted because we can access the
elements only on the top of the Stack.
The abstract datatype is a special kind of datatype whose behaviour is defined by a set of values and a set of operations. The keyword "abstract" is used because we only know which operations can be performed on the datatype; how those operations work is totally hidden from the user. The ADT is built out of primitive datatypes, but its operation logic is hidden.
Here we will see the stack ADT. These are few operations or functions of the Stack ADT.
Example:
Working of Stack:
Stack works on the LIFO pattern. As we can observe in the below figure there are five memory
blocks in the stack; therefore, the size of the stack is 5.
Suppose we want to store the elements in a stack and let's assume that stack is empty. We have
taken the stack of size 5 as shown below in which we are pushing the elements one by one until
the stack becomes full.
Now our stack is full, since the size of the stack is 5. In the above case, we can observe that the top moves upward as each new element is entered into the stack; in other words, the stack gets filled up from the bottom to the top.
When we perform the delete operation on the stack, there is only one way for entry and exit as
the other end is closed. It follows the LIFO pattern, which means that the value entered first will
be removed last. In the above case, the value 5 is entered first, so it will be removed only after
the deletion of all the other elements.
○ push(): When we insert an element into a stack, the operation is known as a push. If the stack is full, then the overflow condition occurs.
○ pop(): When we delete an element from the stack, the operation is known as a pop. If the stack is empty, meaning that no element exists in the stack, this state is known as an underflow state.
○ isEmpty(): It determines whether the stack is empty or not.
○ isFull(): It determines whether the stack is full or not.
○ peek(): It returns the topmost element of the stack without removing it.
○ count(): It returns the total number of elements available in a stack.
○ change(): It changes the element at the given position.
○ display(): It prints all the elements available in the stack.
i) Push() Operation in Stack Data Structure: Adds an item to the stack. If the stack is full,
then it is said to be an Overflow condition.
● Before pushing the element to the stack, we check if the stack is full .
● If the stack is full (top == capacity-1) , then Stack Overflows and we cannot insert the
element to the stack.
● Otherwise, we increment the value of top by 1 (top = top + 1) and the new value is
inserted at top position .
● The elements can be pushed into the stack till we reach the capacity of the stack.
ii) Pop() Operation in Stack Data Structure: Removes an item from the stack. The items are popped in the reverse order in which they were pushed. If the stack is empty, then it is said to be an Underflow condition.
● Before popping the element from the stack, we check if the stack is empty .
● If the stack is empty (top == -1), then Stack Underflows and we cannot remove any
element from the stack.
● Otherwise, we store the value at top, decrement the value of top by 1 (top = top – 1)
and return the stored top value.
iii) Top() or Peek() Operation in Stack Data Structure: Returns the top element of the stack without removing it.
iv) isEmpty() Operation in Stack Data Structure: Returns true if the stack is empty, else false.
v) isFull() Operation in Stack Data Structure: Returns true if the stack is full, else false.
1. A Stack can be used for evaluating expressions consisting of operands and operators.
2. Stacks can be used for Backtracking, i.e., to check parenthesis matching in an expression.
3. It can also be used to convert one form of expression to another form.
4. It can be used for systematic Memory Management.
Advantages of Stack:
1. A Stack helps to manage the data in the ‘Last in First out’ method.
2. When the variable is not used outside the function in any program, the Stack can be used.
3. It allows you to control and handle memory allocation and deallocation.
4. It helps to automatically clean up the objects.
Disadvantages of Stack:
1. It is difficult in Stack to create many objects as it increases the risk of the Stack overflow.
2. It has very limited memory.
3. In Stack, random access is not possible.
In the array implementation, the stack is formed by using an array, and all the operations on the stack are performed using that array. Let's see how each operation can be implemented on the stack using the array data structure.
Adding an element onto the stack (push operation):
Adding an element onto the top of the stack is referred to as a push operation. A push operation involves the following two steps:
1. Increment the variable top so that it now refers to the next memory location.
2. Add the element at the position of the incremented top. This is referred to as adding a new element at the top of the stack.
The stack is said to overflow when we try to insert an element into a completely filled stack; therefore, our main function must always check for the stack overflow condition before pushing.
Algorithm:
begin
    if top = n then stack full
    top = top + 1
    stack[top] := item;
end
Time Complexity : O(1)
The underflow condition occurs when we try to delete an element from an already empty stack.
Algorithm:
begin
    if top = 0 then stack empty;
    item := stack[top];
    top = top - 1;
end;
Time Complexity : O(1)
Algorithm:
begin
    if top = -1 then stack empty
    item = stack[top]
    return item
end
Time Complexity : O(1)
#include <stdio.h>
int stack[100],i,j,choice=0,n,top=-1;
void push();
void pop();
void show();
void main ()
{
printf("Enter the number of elements in the stack ");
scanf("%d",&n);
printf("*********Stack operations using array*********");
printf("\n----------------------------------------------\n");
while(choice != 4)
{
printf("Chose one from the below options...\n");
printf("\n1.Push\n2.Pop\n3.Show\n4.Exit");
printf("\n Enter your choice \n");
scanf("%d",&choice);
switch(choice)
{
case 1:
{
push();
break;
}
case 2:
{
pop();
break;
}
case 3:
{
show();
break;
}
case 4:
{
printf("Exiting....");
break;
}
default:
{
printf("Please Enter valid choice ");
}
};
}
}
void push ()
{
int val;
if (top == n - 1) /* array indices run from 0 to n-1, so the stack is full at top == n-1 */
printf("\n Overflow");
else
{
printf("Enter the value?");
scanf("%d",&val);
top = top +1;
stack[top] = val;
}
}
void pop ()
{
if(top == -1)
printf("Underflow");
else
top = top -1;
}
void show()
{
for (i=top;i>=0;i--)
{
printf("%d\n",stack[i]);
}
if(top == -1)
{
printf("Stack is empty");
}
}
Instead of using an array, we can also use a linked list to implement a stack. A linked list allocates the memory dynamically. However, the time complexity in both scenarios is the same for all the operations, i.e. push, pop and peek.
In the linked list implementation of a stack, the nodes are maintained non-contiguously in memory. Each node contains a pointer to its immediate successor node in the stack. The stack is said to be overflown if the space left in the memory heap is not enough to create a node.
The bottom-most (last) node in the stack always contains null in its address field. Let's discuss the way in which each operation is performed in the linked list implementation of a stack.
void push ()
{
int val;
struct node *ptr =(struct node*)malloc(sizeof(struct node));
if(ptr == NULL)
{
printf("not able to push the element");
}
else
{
printf("Enter the value");
scanf("%d",&val);
if(head==NULL)
{
ptr->val = val;
ptr -> next = NULL;
head=ptr;
}
else
{
ptr->val = val;
ptr->next = head;
head=ptr;
}
printf("Item pushed");
}
}
Check for the underflow condition: The underflow condition occurs when we try
to pop from an already empty stack. The stack will be empty if the head pointer of
the list points to null.
Adjust the head pointer accordingly: In stack, the elements are popped only from
one end, therefore, the value stored in the head pointer must be deleted and the
node must be freed. The next node of the head node now becomes the head node.
C implementation:
void pop()
{
int item;
struct node *ptr;
if (head == NULL)
{
printf("Underflow");
}
else
{
item = head->val;
ptr = head;
head = head->next;
free(ptr);
printf("Item popped");
}
}
void display()
{
int i;
struct node *ptr;
ptr=head;
if(ptr == NULL)
{
printf("Stack is empty\n");
}
else
{
printf("Printing Stack elements \n");
while(ptr!=NULL)
{
printf("%d\n",ptr->val);
ptr = ptr->next;
}
}
}
Menu Driven program in C implementing all the stack operations using linked list :
#include <stdio.h>
#include <stdlib.h>
void push();
void pop();
void display();
struct node
{
int val;
struct node *next;
};
struct node *head;
void main ()
{
int choice=0;
printf("\n*********Stack operations using linked list*********\n");
printf("\n----------------------------------------------\n");
while(choice != 4)
{
printf("\n\nChose one from the below options...\n");
printf("\n1.Push\n2.Pop\n3.Show\n4.Exit");
printf("\n Enter your choice \n");
scanf("%d",&choice);
switch(choice)
{
case 1:
{
push();
break;
}
case 2:
{
pop();
break;
}
case 3:
{
display();
break;
}
case 4:
{
printf("Exiting....");
break;
}
default:
{
printf("Please Enter valid choice ");
}
};
}
}
void push ()
{
int val;
struct node *ptr = (struct node*)malloc(sizeof(struct node));
if(ptr == NULL)
{
printf("not able to push the element");
}
else
{
printf("Enter the value");
scanf("%d",&val);
if(head==NULL)
{
ptr->val = val;
ptr -> next = NULL;
head=ptr;
}
else
{
ptr->val = val;
ptr->next = head;
head=ptr;
}
printf("Item pushed");
}
}
void pop()
{
int item;
struct node *ptr;
if (head == NULL)
{
printf("Underflow");
}
else
{
item = head->val;
ptr = head;
head = head->next;
free(ptr);
printf("Item popped");
}
}
void display()
{
int i;
struct node *ptr;
ptr=head;
if(ptr == NULL)
{
printf("Stack is empty\n");
}
else
{
printf("Printing Stack elements \n");
while(ptr!=NULL)
{
printf("%d\n",ptr->val);
ptr = ptr->next;
}
}
}
Recursion
What is Recursion?
Recursion is defined as a process that calls itself directly or indirectly and the corresponding
function is called a recursive function.
Properties of Recursion:
Recursion has some important properties, the most important being that every recursive function must have a base condition that stops the recursion, and each recursive call must move closer to that base condition.
Types of Recursion:
1. Direct recursion: When a function is called within itself directly it is called direct
recursion. This can be further categorised into four types:
● Tail recursion,
● Head recursion,
● Tree recursion and
● Nested recursion.
2. Indirect recursion: Indirect recursion occurs when a function calls another function
that eventually calls the original function and it forms a cycle.
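A tiny C illustration of both kinds: factorial() calls itself directly (direct recursion), while is_even() and is_odd() call each other (indirect recursion). These function names are purely illustrative:
#include <stdio.h>

/* Direct recursion: the function calls itself */
unsigned long factorial(unsigned int n)
{
    if (n <= 1)                 /* base case stops the recursion */
        return 1;
    return n * factorial(n - 1);
}

int is_odd(unsigned int n);     /* forward declaration needed for the mutual calls */

/* Indirect recursion: is_even() calls is_odd(), which calls is_even() again */
int is_even(unsigned int n)
{
    if (n == 0) return 1;
    return is_odd(n - 1);
}

int is_odd(unsigned int n)
{
    if (n == 0) return 0;
    return is_even(n - 1);
}

int main(void)
{
    printf("5! = %lu\n", factorial(5));          /* 120 */
    printf("7 is even? %d\n", is_even(7));       /* 0   */
    return 0;
}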
Applications of Recursion:
Recursion is used in many fields of computer science and mathematics, which includes:
● Searching and sorting algorithms: Recursive algorithms are used to search and sort
data structures like trees and graphs.
● Mathematical calculations: Recursive algorithms are used to solve problems such as
factorial, Fibonacci sequence, etc.
● Compiler design: Recursion is used in the design of compilers to parse and analyze
programming languages.
● Graphics: many computer graphics algorithms, such as fractals and the Mandelbrot
set, use recursion to generate complex patterns.
● Artificial intelligence: recursive neural networks are used in natural language
processing, computer vision, and other AI applications.
Advantages of Recursion:
● Recursion can simplify complex problems by breaking them down into smaller, more
manageable pieces.
● Recursive code can be more readable and easier to understand than iterative code.
● Recursion is essential for some algorithms and data structures.
● Recursion can also reduce the length of the code and make it more readable and understandable to the user/programmer.
Disadvantages of Recursion:
● Recursion can be less efficient than iterative solutions in terms of memory and
performance.
● Recursive functions can be more challenging to debug and understand than iterative
solutions.
● Recursion can lead to stack overflow errors if the recursion depth is too high.