DSA_NOTES Unit 1,2
What is DSA?
The term DSA stands for Data Structures and Algorithms, in the context of Computer Science.
What is Algorithm?
Algorithm is defined as a process or set of well-defined instructions that are typically used to solve a particular
group of problems or perform a specific type of calculation. To explain in simpler terms, it is a set of operations
performed in a step-by-step manner to execute a task.
Types of Algorithm Analysis: The analysis of an algorithm is done based on three cases:
1. Best case (Omega Notation (Ω)): Defines the input for which the algorithm takes the least time. This
notation provides a lower bound on the growth rate of an algorithm’s running time or space usage. It
represents the best-case scenario, i.e., the minimum amount of time or space an algorithm may need to solve a
problem. For example, if an algorithm’s running time is Ω(n), then the running time of the algorithm
increases at least linearly with the input size n.
For Example: In linear search, the best case occurs when the search key is present at the first location of a
large array.
Omega (Ω) notation specifies the asymptotic lower bound for a function f(n). For a given function g(n), Ω(g(n))
is denoted by:
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0}.
This means that f(n) = Ω(g(n)) if there are positive constants n0 and c such that, to the right of n0, f(n)
always lies on or above c*g(n).
Follow the steps below to calculate Ω for a program:
1. Break the program into smaller segments.
2. Find the number of operations performed for each segment (in terms of the input size) assuming the given
input is such that the program takes the least amount of time.
3. Add up all the operations and simplify it, let’s say it is f(n).
4. Remove all the constants and choose the term having the least order or any other function which is always
less than f(n) when n tends to infinity, let say it is g(n) then, Omega (Ω) of f(n) is Ω(g(n)).
2. Average Case (Theta (Θ) Notation): This notation provides both an upper and lower bound on
the growth rate of an algorithm’s running time or space usage. It represents the average-case scenario,
i.e., the amount of time or space an algorithm typically needs to solve a problem. For example, if an
algorithm’s running time is Θ (n), then it means that the running time of the algorithm increases linearly with
the input size n.
Big-Theta (Θ) notation specifies a tight bound for a function f(n). For a given function g(n), Θ(g(n)) is denoted
by: Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for
all n ≥ n0}.
This means that f(n) = Θ(g(n)) if there are positive constants n0, c1 and c2 such that, to the right of n0, f(n)
always lies on or above c1*g(n) and on or below c2*g(n).
Follow the steps below to calculate Θ for a program:
1. Break the program into smaller segments.
2. Find all types of inputs and calculate the number of operations they take to be executed. Make sure that
the input cases are equally distributed.
3. Find the sum of all the calculated values and divide the sum by the total number of inputs let say the
function of n obtained is g(n) after removing all the constants, then in Θ notation, it’s represented as
Θ(g(n)).
Example: In a linear search problem, let’s assume that all the cases are uniformly distributed (including the
case when the key is absent in the array). So, sum all the cases when the key is present at positions 1, 2, 3,
……, n and not present, and divide the sum by n + 1.
Since all the types of inputs are considered while calculating the average time complexity, it is one of the best
analysis methods for an algorithm. In the average case take all random inputs and calculate the computation time
for all inputs. And then we divide it by the total number of inputs. Average case = all random case time / total
no of case
3. Worst Case (Big-O Notation): This notation provides an upper bound on the growth rate of an
algorithm’s running time or space usage. It represents the worst-case scenario, i.e., the maximum amount
of time or space an algorithm may need to solve a problem. For example, if an algorithm’s running time is
O(n), then the running time of the algorithm increases at most linearly with the input size n.
Big – O (O) notation specifies the asymptotic upper bound for a function f(n). For a given function g(n),
O(g(n)) is denoted by:
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0}.
This means that f(n) = O(g(n)) if there are positive constants n0 and c such that, to the right of n0, f(n)
always lies on or below c*g(n).
Follow the steps below to calculate O for a program:
1. Break the program into smaller segments.
2. Find the number of operations performed for each segment (in terms of the input size) assuming the given
input is such that the program takes the maximum time i.e the worst-case scenario.
3. Add up all the operations and simplify it, let’s say it is f(n).
4. Remove all the constants and choose the term having the highest order because for n tends to infinity the
constants and the lower order terms in f(n) will be insignificant, let say the function is g(n) then, big-O
notation is O(g(n)).
It is the most widely used notation as it is easier to calculate since there is no need to check for every type of
input as it was in the case of theta notation, also since the worst case of input is taken into account it pretty
much gives the upper bound of the time the program will take to execute.
Advantages:
1. Asymptotic analysis provides a high-level understanding of how an algorithm performs with respect to
input size.
2. It is a useful tool for comparing the efficiency of different algorithms and selecting the best one for a
specific problem.
3. It helps in predicting how an algorithm will perform on larger input sizes, which is essential for real-world
applications.
4. Asymptotic analysis is relatively easy to perform and requires only basic mathematical skills.
Disadvantages:
1. Asymptotic analysis does not provide an accurate running time or space usage of an algorithm.
2. It assumes that the input size is the only factor that affects an algorithm’s performance, which is not
always the case in practice.
3. Asymptotic analysis can sometimes be misleading, as two algorithms with the same asymptotic
complexity may have different actual running times or space usage.
4. It is not always straightforward to determine the tightest asymptotic bound for a complicated algorithm.
Complexities in Algorithms
The primary motive for using DSA is to solve a problem effectively and efficiently. To check whether a program is
efficient, its complexities are measured. While analyzing the complexity of an algorithm, consider CPU (time)
usage, memory usage, disk usage, and network usage. All are important, but the biggest concern is CPU
time. Complexity is considered when designing an effective algorithm; however, performance cannot be ignored.
Types of Complexity:
1. Time Complexity: Time complexity is used to measure the amount of time required to execute the code.
2. Space Complexity: Space complexity means the amount of space required to successfully execute the
functionalities of the code.
The term Auxiliary Space is used very commonly in DSA; it refers to the extra space used in the program
other than the input data structure.
Both of the above complexities are measured with respect to the input parameters. The time required for
executing a code depends on several factors, such as:
The number of operations performed in the program,
The speed of the device, and also
The speed of data transfer if being executed on an online platform.
Time Complexity
The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function
of the length of the input.
Note that the time to run is a function of the length of the input and not the actual execution time of the
machine on which the algorithm is running on.
A valid algorithm takes a finite amount of time for execution. The time required by an algorithm to solve a
given problem is called the time complexity of the algorithm. Time complexity is a very useful measure in
algorithm analysis. For example:
int count = 0;
for (int i = N; i > 0; i /= 2)
    for (int j = 0; j < i; j++)
        count++;
In the above example, it seems like the complexity is O(N * log N): N for the j loop and log(N) for the i loop.
But that is wrong. Let’s see why.
Think about how many times count++ will run.
When i = N, it will run N times.
When i = N / 2, it will run N / 2 times.
When i = N / 4, it will run N / 4 times.
And so on.
The total number of times count++ will run is N + N/2 + N/4 + … + 1 < 2 * N. So the time complexity
is O(N).
Another Example to understand time Complexity
For example, let us consider the search problem (searching a given item) in a sorted array.
The solution to above search problem includes:
Linear Search (order of growth is linear)
Binary Search (order of growth is logarithmic).
To understand how Asymptotic Analysis solves the problems mentioned above in analyzing algorithms,
let us say:
we run the Linear Search on a fast computer A and
Binary Search on a slow computer B and
pick the constant values for the two computers so that it tells us exactly how long it takes for
the given machine to perform the search in seconds.
Let’s say the constant for A is 0.2 and the constant for B is 1000 which means that A is 5000 times more
powerful than B.
For small values of input array size n, the fast computer may take less time.
But, after a certain value of input array size, the Binary Search will definitely start taking less time
compared to the Linear Search even though the Binary Search is being run on a slow machine.
Input Size | Running time on A | Running time on B
    10     |       2 sec       |       ~1 h
1. The reason is the order of growth of Binary Search with respect to input size is logarithmic while the
order of growth of Linear Search is linear.
2. So the machine-dependent constants can always be ignored after a certain value of input size.
Running times for this example:
Linear Search running time in seconds on A: 0.2 * n
Binary Search running time in seconds on B: 1000*log(n)
Programming style, also known as code style, is a set of rules or guidelines used when writing the source
code for a computer program.
Refinement of code is used to convert an abstract data model (for example, in terms of sets) into
implementable data structures (such as arrays). Operation refinement converts a specification of an operation on
a system into an implementable program (e.g., a procedure).
It is also a generic term in computer science that encompasses various approaches for producing correct
computer programs and for simplifying existing programs to enable their formal verification.
Time and Space Trade Off: Space-time tradeoff is a way of solving a problem or calculation in less time by
using more storage space, or by solving a problem in very little space by spending a long time.
An abstract data type is an abstraction of a data structure that provides only the interface to which the data
structure must adhere. The interface does not give any specific details about how something should be implemented
or in which programming language.
In other words, we can say that abstract data types are the entities that are definitions of data and operations but
do not have implementation details. In this case, we know the data that we are storing and the operations that can
be performed on the data, but we don't know about the implementation details. The reason for not having
implementation details is that every programming language has a different implementation strategy for example;
a C data structure is implemented using structures while a C++ data structure is implemented using objects and
classes.
For example, a List is an abstract data type that can be implemented using a dynamic array or a linked list. A queue
can be implemented using a linked-list-based queue, an array-based queue, or a stack-based queue. A Map can be
implemented using a tree map, hash map, or hash table.
------------------------------------------------------------------------------------------------------------------------------------------
Linear data structure: Data structure in which data elements are arranged sequentially or linearly, where
each element is attached to its previous and next adjacent elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.
Static data structure: Static data structure has a fixed memory size. It is easier to access the
elements in a static data structure.
An example of this data structure is an array.
Dynamic data structure: In dynamic data structure, the size is not fixed. It can be randomly
updated during the runtime which may be considered efficient concerning the memory (space)
complexity of the code.
Examples of this data structure are queue, stack, linked list etc.
Non-linear data structure: Data structures where data elements are not placed sequentially or linearly
are called non-linear data structures. In a non-linear data structure, we can’t traverse all the elements in a
single run only.
Examples of non-linear data structures are trees and graphs.
Arrays
The array is a data structure used to store homogeneous elements at contiguous locations. The size of an array
must be provided before storing data.
An array is a linear data structure that stores elements in sequence.
An array is defined as a collection of homogeneous data values (elements of a similar type) stored
at contiguous memory locations.
Arrays use an index-based structure, which helps identify each element in the array and makes it
easy to access each element.
In C Language array is declared as:
Datatype Array_Name[size];
Ex: int A[5]; // Declares an array of type integer named A that can hold 5 integer values.
Single-subscripted variables are called linear or one-dimensional arrays (A[5], B[9]). Arrays can
also handle complex data by storing it in two-subscripted variables, called two-dimensional
arrays (A[5][9], B[3][3]).
Advantages of arrays:
1. Constant-time Access: Arrays allow for constant-time access to elements by using their index, making them a
good choice for implementing algorithms that need fast access to elements.
2. Memory Allocation: Arrays are stored in contiguous memory locations, which makes the memory
allocation efficient.
3. Easy to Implement: Arrays are easy to implement and can be used with basic programming constructs like
loops and conditionals.
Disadvantages of arrays:
1. Fixed Size: Arrays have a fixed size, so once they are created, the size cannot be changed. This can lead
to memory waste if an array is too large or dynamic resizing overhead if an array is too small.
2. Slow Insertion and Deletion: Inserting or deleting elements in an array can be slow, especially if the
operation needs to be performed in the middle of the array. This requires shifting all elements to make
room for the new element or to fill the gap left by the deleted element.
3. Cache Misses: Arrays can suffer from cache misses if elements are not accessed in sequential order,
which can lead to poor performance.
4. In summary, arrays are a good choice for problems where constant-time access to elements and efficient
memory allocation are required, but their disadvantages should be considered for problems where
dynamic resizing and fast insertion/deletion operations are important.
Address Calculation in a One-Dimensional Array:
The address of an element of an array, say A[ I ], is calculated using the following formula:
Address of A [ I ] = B + W * ( I – LB )
Where,
B = Base address
W = Storage Size of one element stored in the array (in byte)
I = Subscript of element whose address is to be found
LB = Lower limit / Lower Bound of subscript, if not specified assume 0 (zero)
Example:
Given the base address of an array B[1300…..1900] as 1020 and size of each element is 2 bytes in the memory.
Find the address of B[1700].
Solution:
The given values are: B = 1020, LB = 1300, W = 2, I = 1700
Address of A [ I ] = B + W * ( I – LB )
= 1020 + 2 * (1700 – 1300)
= 1020 + 2 * 400
= 1020 + 800
= 1820 [Ans]
Address of an element of any array say “A[ I ][ J ]” is calculated in two forms as given:
(1) Row Major System (2) Column Major System
Row Major System:
The address of a location in Row Major System is calculated using the following formula:
Address of A [ I ][ J ] = B + W * [ N * ( I – Lr ) + ( J – Lc ) ]
Column Major System:
The address of a location in Column Major System is calculated using the following formula:
Address of A [ I ][ J ] Column Major Wise = B + W * [( I – Lr ) + M * ( J – Lc )]
Where,
B = Base address
I = Row subscript of element whose address is to be found
J = Column subscript of element whose address is to be found
W = Storage Size of one element stored in the array (in byte)
Lr = Lower limit of row/start row index of matrix, if not given assume 0 (zero)
Lc = Lower limit of column/start column index of matrix, if not given assume 0 (zero)
M = Number of row of the given matrix
N = Number of column of the given matrix
Important: Usually the number of rows and columns of a matrix are given (like A[20][30] or A[40][60]), but if it is given
as A[Lr … Ur, Lc … Uc], the number of rows and columns are calculated using the following
methods:
Number of rows (M) will be calculated as = (Ur – Lr) + 1
Number of columns (N) will be calculated as = (Uc – Lc) + 1
And rest of the process will remain same as per requirement (Row Major Wise or Column Major Wise).
Examples:
Q 1. An array X [-15……….10, 15……………40] requires one byte of storage. If beginning location is 1500
determine the location of X [15][20].
Solution:
As you see here the number of rows and columns are not given in the question. So they are calculated as:
Number of rows, say M = (Ur – Lr) + 1 = [10 – (–15)] + 1 = 26
Number of columns, say N = (Uc – Lc) + 1 = [40 – 15] + 1 = 26
(i) Column Major Wise Calculation of above equation
The given values are: B = 1500, W = 1 byte, I = 15, J = 20, Lr = -15, Lc = 15, M = 26
Address of A [ I ][ J ] = B + W * [ ( I – Lr ) + M * ( J – Lc ) ]
= 1500 + 1 * [(15 – (-15)) + 26 * (20 – 15)] = 1500 + 1 * [30 + 26 * 5] = 1500 + 1 * [160] = 1660 [Ans]
(ii) Row Major Wise Calculation of above equation
The given values are: B = 1500, W = 1 byte, I = 15, J = 20, Lr = -15, Lc = 15, N = 26
Address of A [ I ][ J ] = B + W * [ N * ( I – Lr ) + ( J – Lc ) ]
= 1500 + 1* [26 * (15 – (-15))) + (20 – 15)] = 1500 + 1 * [26 * 30 + 5] = 1500 + 1 * [780 + 5] = 1500 + 785
= 2285 [Ans]
Linked List
A linked list is a linear data structure (like an array) where each element is a separate object. A linked list is a
collection of nodes, where each node stores the data and the address of the next node; the linear order is
maintained by pointers.
1. Singly Linked List: In this type of linked list, each node stores the data and a single reference to the next
node, so traversal is possible in one direction only.
2.Doubly Linked List: In this type of Linked list, there are two references associated with each node, One of
the reference points to the next node and one to the previous node. The advantage of this data structure is that
we can traverse in both directions and for deletion, we don’t need to have explicit access to the previous
node.
3. Circular Linked List: Circular linked list is a linked list where all nodes are connected to form a circle.
There is no NULL at the end. A circular linked list can be a singly circular linked list or a doubly circular
linked list. The advantage of this data structure is that any node can be made as starting node. This is useful in
the implementation of the circular queues in the linked list.
4. Circular Doubly Linked List: The circular doubly linked list is a combination of the doubly linked list
and the circular linked list. It means that this linked list is bidirectional and contains two pointers and the last
pointer points to the first pointer.
Accessing time of an element : O(n)
Search time of an element : O(n)
Insertion of an Element : O(1) [If we are at the position where we have to insert an element]
Deletion of an Element : O(1) [If we know address of node previous to the node to be deleted]
Example: Consider the earlier example where we made an array of marks of students. Now if a new
subject is added to the course, its marks must also be added to the array of marks. But the size of the array
was fixed and it is already full, so no new element can be added. If we make the array much larger than
the number of subjects, it is possible that most of the array will remain empty. To reduce this space wastage,
a linked list is used, which adds a node only when a new element is introduced. Insertions and deletions
also become easier with a linked list.
One big drawback of a linked list is, random access is not allowed. With arrays, we can access i’th element in
O(1) time. In the linked list, it takes Θ(i) time.
Advantages of Linked Lists:
1. Dynamic Size: Linked lists are dynamic in size, so they can grow or shrink as needed without wasting
memory.
2. Efficient Insertion and Deletion: Linked lists provide efficient insertion and deletion operations, as only
the pointers to the previous and next nodes need to be updated.
3. No Pre-allocation: Memory for a node is allocated only when the node is actually needed, so no space is
reserved in advance. (Unlike arrays, however, linked lists are generally not cache-friendly, since nodes
may be scattered in memory.)
Disadvantages of Linked Lists:
1. Slow Access: Linked lists do not allow for constant-time access to elements by index, so accessing an
element in the middle of the list can be slow.
2. More Memory Overhead: Linked lists require more memory overhead compared to arrays, as each
element in the list is stored as a node, which contains a value and a pointer to the next node.
3. Harder to Implement: Linked lists can be harder to implement than arrays, as they require the use of
pointers and linked data structures.
4. In summary, linked lists are a good choice for problems where dynamic size and efficient
insertion/deletion operations are important, but their disadvantages should be considered for problems
where constant-time access to elements is necessary.
Stack
A stack or LIFO (last in, first out) is an abstract data type that serves as a collection of elements, with two
principal operations: push, which adds an element to the collection, and pop, which removes the last element
that was added. In stack both the operations of push and pop take place at the same end that is top of the
stack. It can be implemented by using both array and linked list.
It is defined as an ordered collection of elements, represented by a real physical stack or pile. It is a linear data
structure in which insertion and deletion of items take place at one end, called the top of the stack. These
concepts and structures are used throughout programming.
Insertion : O(1)
Deletion : O(1)
Access Time : O(n) [Worst Case]
Insertion and Deletion are allowed on one end known as TOP.
Applications of Stacks:
1. Stacks are used for maintaining function calls (the last called function must finish execution
first), we can always remove recursion with the help of stacks.
2. Stacks are also used in cases where we have to reverse a word.
3. Check for balanced parenthesis.
4. Solving mathematical equations (Polish Notations)
5. In editors, where the last word you typed is the first to be removed when you use the undo
operation.
6. Similarly, to implement back functionality in web browsers.
7. Recently open Files in any application. (Like Word, Excel etc)
Advantages of Stacks:
1. LIFO (Last-In, First-Out) Order: Stacks allow for elements to be stored and retrieved in a LIFO order,
which is useful for implementing algorithms like depth-first search.
2. Efficient Operations: Stacks provide efficient push-and-pop operations, as only the top element needs to
be updated.
3. Easy to Implement: Stacks can be easily implemented using arrays or linked lists, making them a simple
data structure to understand and use.
Disadvantages of Stacks:
1. Fixed Size: Array-based stacks have a fixed size, so they can suffer from overflow if too many elements
are pushed; any stack can underflow if pop is called when it is empty.
2. Limited Operations: Stacks only allow for push, pop, and peek (accessing the top element) operations, so
they are not suitable for implementing algorithms that require constant-time access to elements or
efficient insertion and deletion operations.
3. Unbalanced Operations: Stacks can become unbalanced if push and pop operations are performed
unevenly, leading to overflow or underflow.
4. In summary, stacks are a good choice for problems where LIFO order and efficient push and pop
operations are important, but their disadvantages should be considered for problems that require dynamic
resizing, constant-time access to elements, or more complex operations.
Queue
A queue or FIFO (first in, first out) is an abstract data type that serves as a collection of elements, with two
principal operations: enqueue, the process of adding an element to the collection. (The element is added from
the rear side) and dequeue the process of removing the first element that was added. (The element is removed
from the front side).
It can be implemented by using both array and linked list. A queue is defined as a linear data structure that is
open at both ends, here addition of elements always takes place at REAR End and Deletion of Elements
take place at FRONT End.
Insertion : O(1)
Deletion : O(1)
Access Time : O(n) [Worst Case]
Examples include CPU scheduling, Disk Scheduling, Printing command operations. Another application of
queue is when data is transferred asynchronously (data not necessarily received at the same rate as sent)
between two processes. Examples include IO Buffers, pipes, file IO, etc.
Basic Operations on Queue:
void enqueue (int data): Inserts an element at the end of the queue i.e. at the rear end.
int dequeue(): This operation removes and returns an element that is at the front end of the queue.
Types of Queues:
Simple Queue: Simple queue also known as a linear queue is the most basic version of a queue. Here,
insertion of an element i.e. the Enqueue operation takes place at the rear end and removal of an element
i.e. the Dequeue operation takes place at the front end.
Circular Queue: In a circular queue, the elements of the queue act as a circular ring. The working of a
circular queue is similar to the linear queue except for the fact that the last element is connected to the
first element. Its advantage is that the memory is utilized in a better way. This is because if there is an
empty space i.e. if no element is present at a certain position in the queue, then an element can be easily
added at that position.
Priority Queue: This queue is a special type of queue. Its specialty is that it arranges the elements in a
queue based on some priority. The priority can be something where the element with the highest value has
the priority so it creates a queue with decreasing order of values. The priority can also be such that the
element with the lowest value gets the highest priority so in turn it creates a queue with increasing order
of values.
Deque: A deque is also known as a Double-Ended Queue. As the name double-ended suggests, an element
can be inserted or removed from both ends of the queue, unlike the other queues in which this can be
done only at one end. Because of this property, it may not obey the First In First Out
property.
Advantages of Queues:
1. FIFO (First-In, First-Out) Order: Queues allow for elements to be stored and retrieved in a FIFO order,
which is useful for implementing algorithms like breadth-first search.
2. Efficient Operations: Queues provide efficient enqueue and dequeue operations, as only the front and rear
of the queue need to be updated.
3. Dynamic Size: Queues can grow dynamically, so they can be used in situations where the number of
elements is unknown or can change over time.
Disadvantages of Queues:
1. Limited Operations: Queues only allow for enqueue, dequeue, and peek (accessing the front element)
operations, so they are not suitable for implementing algorithms that require constant-time access to
elements or efficient insertion and deletion operations.
2. Slow Random Access: Queues do not allow for constant-time access to elements by index, so accessing
an element in the middle of the queue can be slow.
3. Cache Unfriendly: Queues can be cache-unfriendly, as elements are retrieved in a different order than
they are stored, which can lead to poor cache utilization and performance.
Advantages of Linear Data Structures:
1. Efficient data access: Elements can be easily accessed by their position in the sequence.
2. Dynamic sizing: Linear data structures can dynamically adjust their size as elements are added or
removed.
3. Ease of implementation: Linear data structures can be easily implemented using arrays or linked lists.
4. Versatility: Linear data structures can be used in various applications, such as searching, sorting, and
manipulation of data.
5. Simple algorithms: Many algorithms used in linear data structures are simple and straightforward.
Disadvantages of Linear Data Structures:
1. Limited data access: Accessing elements not stored at the end or the beginning of the sequence can be
time-consuming.
2. Memory overhead: Maintaining the links between elements in linked lists and pointers in stacks and
queues can consume additional memory.
3. Complex algorithms: Some algorithms used in linear data structures, such as searching and sorting, can
be complex and time-consuming.
4. Inefficient use of memory: Linear data structures can result in inefficient use of memory if there are gaps
in the memory allocation.
5. Unsuitable for certain operations: Linear data structures may not be suitable for operations that require
constant random access to elements, such as searching for an element in a large dataset.
ARRAY OPERATIONS
Inserting Element into Array: In this scenario, we are given the exact location (index) of an array
where a new data element (value) needs to be inserted. First we check whether the array is full; if it is
not, we move all data elements from that given location one step forward, starting from the last
element. This makes room for the new data element.
Algorithm
We assume A is an array with N elements. The maximum numbers of elements it can store is defined
by MAX.
Begin
   IF N = MAX, return
   ELSE
      N = N + 1
      SEEK Location index
      For All Elements from A[N-1] down to A[index]
         Move each to the next adjacent location
      A[index] = New_Element
End
Implementation in C
#include <stdio.h>
#define MAX 5
int main() {
   int array[MAX] = {1, 2, 4, 5};
   int N = 4;        // number of elements currently stored
   int index = 2;    // location where the new element is inserted
   int value = 3;    // the new element
   int i;
   printf("Printing array before insertion -\n");
   for (i = 0; i < N; i++)
      printf("array[%d] = %d\n", i, array[i]);
   // shift elements one step forward, starting from the last one
   for (i = N - 1; i >= index; i--)
      array[i + 1] = array[i];
   array[index] = value;   // add new element at the given position
   N = N + 1;
   printf("Printing array after insertion -\n");
   for (i = 0; i < N; i++)
      printf("array[%d] = %d\n", i, array[i]);
   return 0;
}
Output
If we compile and run the above program, it will produce the following result −
Printing array before insertion −
array[0] = 1
array[1] = 2
array[2] = 4
array[3] = 5
Printing array after insertion −
array[0] = 1
array[1] = 2
array[2] = 3
array[3] = 4
array[4] = 5
Implementation of Linear Search in C
#include <stdio.h>
// Returns the index of x in arr[0..N-1], or -1 if it is not present.
int search(int arr[], int N, int x)
{
   for (int i = 0; i < N; i++)
      if (arr[i] == x)
         return i;
   return -1;
}
int main()
{
   int arr[] = { 2, 3, 4, 10, 40 };
   int N = sizeof(arr) / sizeof(arr[0]);
   int x = 10;
   // Function call
   int result = search(arr, N, x);
   if (result == -1)
      printf("Element is not present in array");
   else
      printf("Element is present at index %d", result);
   return 0;
}
Complexity Analysis of Linear Search:
Time Complexity:
Best Case: In the best case, the key might be present at the first index. So the best case complexity is
O(1)
Worst Case: In the worst case, the key might be present at the last index i.e., opposite to the end from
which the search has started in the list. So the worst-case complexity is O(N) where N is the size of the
list.
Average Case: O(N)
Auxiliary Space: O(1) as except for the variable to iterate through the list, no other variable is used.
In this algorithm,
Divide the search space into two halves by finding the middle index “mid”:
mid = low + (high – low) / 2
Compare the middle element of the search space with the key.
If the key is found at middle element, the process is terminated.
If the key is not found at middle element, choose which half will be used as the next search space.
If the key is smaller than the middle element, then the left side is used for next search.
If the key is larger than the middle element, then the right side is used for next search.
This process is continued until the key is found or the total search space is exhausted.
#include <stdio.h>
// An iterative binary search function.
// Returns the index of x in the sorted array arr[low..high], or -1.
int BinarySearch(int arr[], int low, int high, int x)
{
   int mid;
   while (low <= high)
   {
      mid = low + (high - low) / 2;   // middle index, avoids overflow
      if (arr[mid] == x)
         return mid;                  // key found
      else if (arr[mid] < x)
         low = mid + 1;               // search the right half
      else
         high = mid - 1;              // search the left half
   }
   return -1;                         // search space exhausted
}
int main()
{
   int arr[] = { 2, 3, 4, 10, 40 };
   int n = sizeof(arr) / sizeof(arr[0]);
   int x = 10;
   int result = BinarySearch(arr, 0, n - 1, x);
   if (result == -1) printf("Element is not present in array");
   else printf("Element is present at index %d", result);
   return 0;
}