DSA Minor
List vs Array
- A list is used to collect items that usually consist of elements of multiple data types. An array is also a vital component, but it collects several items of the same data type.
- A list cannot manage arithmetic operations. An array can manage arithmetic operations.
- A list consists of elements that belong to different data types. An array consists of elements that belong to the same data type.
- When it comes to flexibility, the list is perfect as it allows easy modification of data. The array is not suitable as it does not allow easy modification of data.
- In a list, the complete list can be accessed without any specific looping. In an array, a loop is mandatory to access the components of the array.
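The contrast can be seen directly in Python, where a list accepts mixed types while the array module enforces a single type code (a small illustrative sketch):

```python
import array

# A list freely mixes data types...
mixed = [1, "two", 3.0]

# ...while an array is restricted to one type code ('i' = signed int)
nums = array.array('i', [1, 2, 3])

try:
    nums.append("four")          # not an int, so the array rejects it
except TypeError:
    print("array rejects values of a different type")
```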
Data Type vs Data Structure
- A data type, in general, refers to the type of value a variable holds, its size and, in some cases, its range. Data structures are user-defined, hence the type of data to hold depends on the user requirement.
- A data type is one of the forms of a variable to which a value of a given type only can be assigned; this value can be used throughout the program. A data structure is a collection of data of the same or different data types; this collection can be represented using an object and used throughout the program.
- The implementation of a data type is known as an abstract implementation. The implementation of a data structure is known as a concrete implementation.
- A data type can hold a value but not data (i.e., multiple values); therefore, we can say it is data-less. A data structure can hold multiple types of data within a single object.
- In the case of a data type, a value can be assigned directly to a variable. In the case of a data structure, some operations are used to assign the data to the data structure object.
- With a data type there is no concern about time complexity. When we deal with a data structure object, time complexity plays an important role.
- Examples of data types are int, float, and char. Examples of data structures are stack, queue, tree, and graph.
Linear Search
What is Search?
Search is the process of finding a value in a list of values. In other words, searching is the process
of locating a given value's position in a list of values.
Example
Consider the following list of elements and the element to be searched...
Unordered Linear Search
import array as k
a=k.array('i')
n=int(input("Number of elements in array:"))
for i in range(0,n):
    l=int(input())
    a.append(l)
e=int(input("Enter the element to be searched:"))
pos=0
for i in range(0,n):
    if(a[i]==e):
        pos=i+1
        break
if(pos==0):
    print("Element not found")
else:
    print("Element found")
    print("Its first occurrence is at the position",pos)
Time Complexity
o Best Case Complexity - In linear search, the best case occurs when the element we are finding is at the
first position of the array. The best-case time complexity of linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are looking for is
present at the end of the array, or when the target element is not present in the given array at all and
we have to traverse the entire array. The worst-case time complexity of linear search is O(n).
Ordered Linear Search
As long as the element being searched for is greater than the element we are traversing, keep moving.
When the element being searched for is equal to the traversing element at position i, we have
found the element successfully and return the position.
If it is smaller than the current element, it cannot occur later in the sorted list, so the search stops early.
Program:
import array as k
a=k.array('i')
n=int(input("Number of elements in array:"))
print(" Enter ", n , " elements in sorted order")
for i in range(0,n):
    l=int(input())
    a.append(l)
e=int(input("Enter the element to be searched:"))
pos=0
for i in range(0,n):
    if(a[i]==e):
        pos=i+1
        break
    elif(e<a[i]):
        break
if(pos==0):
    print("Element not found")
else:
    print("Element found")
    print("Its first occurrence is at the position",pos)
Time Complexity:
The best case time complexity is O(1) while the worst case time complexity is O(n)
Binary Search
The binary search algorithm finds a given element in a list of elements with O(log n) time
complexity, where n is the total number of elements in the list.
It uses Divide and Conquer rule.
The binary search algorithm can be used with only a sorted list of elements. That means
the binary search is used only with a list of elements that are already arranged in an
order.
The binary search can not be used for a list of elements arranged in random order.
Binary search is implemented using following steps...
This search process starts comparing the search element with the middle element in the
list.
If both are matched, then the result is "element found". Otherwise, we check whether
the search element is smaller or larger than the middle element in the list.
If the search element is smaller, then we repeat the same process for the left sublist of
the middle element.
If the search element is larger, then we repeat the same process for the right sublist of
the middle element.
We repeat this process until we find the search element in the list or until we are left with a
sub-list of only one element.
And if that element also doesn't match with the search element, then the result
is "Element not found in the list".
Example
Consider the following list of elements and the element to be searched...
Program:
import array as k
a=k.array('i')
n=int(input("Enter the size of the array:"))
print("Enter " , n , "elements in sorted order")
for i in range(0,n):
    l=int(input())
    a.append(l)
e=int(input("Enter the element to be searched:"))
pos=0
low=0
high=n-1
while(low<=high):
    mid=int((low+high)/2)
    if(e==a[mid]):
        pos=mid+1
        break
    elif(e<a[mid]):
        high=mid-1
    else:
        low=mid+1
if(pos==0):
    print("Element not found")
else:
    print("Element found")
    print("Its position is at:",pos)
Time Complexity
The best case time complexity is O(1), while the average and worst case time complexity is O(log n).
Program (recursive version):
def binarysearch(a,e,low,high):
    # search range is empty: element not present
    if(low>high):
        return 0
    mid=int((low+high)/2)
    if(e==a[mid]):
        return mid+1      # position counted from 1
    elif(e<a[mid]):
        return binarysearch(a,e,low,mid-1)
    else:
        return binarysearch(a,e,mid+1,high)
n=int(input("Enter the size of the array:"))
print("Enter " , n , "elements in sorted order")
a=[0]*n
for i in range(0,n):
    a[i]=int(input())
e=int(input("Enter the element to be searched:"))
low=0
high=n-1
pos=binarysearch(a,e,low,high)
if(pos==0):
    print("Element not found")
else:
    print("Element found")
    print("Its position is at:",pos)
Selection Sort
Step 1: Select the first element of the list (i.e., the element at the first position in the list).
Step 2: Compare the selected element with all the other elements in the list.
Step 3: In every comparison, if any element is found smaller than the selected
element (for ascending order), then both are swapped.
Step 4: Repeat the same procedure with the element in the next position in the list till
the entire list is sorted.
Complexity of the Selection Sort Algorithm
To sort an unsorted list with 'n' number of elements, we need to make ((n-1)+(n-2)+(n-
3)+......+1) = (n(n-1))/2 number of comparisons in the worst case. Even if the list is already sorted,
the same comparisons are made, so the time complexity is O(n²) in all cases.
Program
import array as k
a=k.array('i')
n=int(input("Number of elements in array:"))
print(" Enter ", n , " elements ")
for i in range(0,n):
    l=int(input())
    a.append(l)
print("Before sorting, array is:")
print(a)
for i in range(0,n):
    for j in range(i+1,n):
        if(a[i]>a[j]):
            temp=a[i]
            a[i]=a[j]
            a[j]=temp
print("After sorting, array is:")
print(a)
Merge Sort
Merge sort algorithm uses Divide & Conquer technique.
In Divide & Conquer algorithm design paradigm, we divide the problems in sub-
problems recursively then solve the sub-problems, & at last combine the solutions to
find the final result.
One thing to keep in mind while dividing the problem into sub-problems is that the
structure of the sub-problems should not change from that of the original problem.
Divide & Conquer algorithm has 3 steps:
1. Divide: Breaking the problem into subproblems
2. Conquer: Recursively solving the subproblems
3. Combine: Combining the solutions to get the final result
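As a tiny illustration of these three steps (the function name and example are mine, not from the notes), the maximum of a list can be found in the same divide-and-conquer shape:

```python
def dcmax(a, low, high):
    # Base case: a single element is its own maximum
    if low == high:
        return a[low]
    mid = (low + high) // 2
    left = dcmax(a, low, mid)         # Divide + Conquer: left half
    right = dcmax(a, mid + 1, high)   # Divide + Conquer: right half
    return left if left >= right else right  # Combine the two answers

print(dcmax([7, 2, 9, 4, 6], 0, 4))  # → 9
```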
In Merge sort, we divide the array recursively into two halves, until each sub-array contains a
single element, and then we merge the sub-arrays in a way that results in a sorted array.
The merge() function merges two sorted sub-arrays into one; it assumes that array[low .. mid]
and array[mid+1 .. high] are sorted, and is invoked as merge(array, low, mid, high).
Merge sort is one of the efficient & fastest sorting algorithms, with the time complexity
given at the end of this section.
Example:
1. Divide the unsorted array recursively until 1 element in each sub-array remains.
2. Recursively, merge sub-arrays to produce sorted sub-arrays until all the sub-array merges
and only one array remains.
Program
def mergesort(a,low,high):
    if(low<high):
        mid=int((low+high)/2)
        mergesort(a,low,mid)
        mergesort(a,mid+1,high)
        merge(a,low,mid,high)
def merge(a,low,mid,high):
    # b is the auxiliary list defined below, used to hold the merged output
    i=low
    j=mid+1
    k=low
    while(i<=mid and j<=high):
        if(a[i]<=a[j]):
            b[k]=a[i]
            k=k+1
            i=i+1
        else:
            b[k]=a[j]
            k=k+1
            j=j+1
    while(i<=mid):
        b[k]=a[i]
        k=k+1
        i=i+1
    while(j<=high):
        b[k]=a[j]
        k=k+1
        j=j+1
    i=low
    while(i<k):
        a[i]=b[i]
        i+=1
n=int(input("Number of elements in array:"))
a=[0]*n
b=[0]*n   # auxiliary list used by merge()
print(" Enter ", n , " elements ")
for i in range(0,n):
    a[i]=int(input())
print("Before sorting, array is:")
print(a)
low=0
high=n-1
mergesort(a,low,high)
print("After sorting, array is:")
print(a)
Time Complexity
Merge Sort is a recursive algorithm and its time complexity can be expressed by the following
recurrence relation:
T(n) = 2T(n/2) + O(n)
The solution of the above recurrence is O(n log n).
The list of size n is divided into a maximum of log n levels, and merging all sublists at each
level takes O(n) time, so the worst-case run time of this algorithm is O(n log n).
Best Case Time Complexity: O(n log n)
Worst Case Time Complexity: O(n log n)
Average Time Complexity: O(n log n)
The time complexity of Merge Sort is O(n log n) in all 3 cases (worst, average and best),
as merge sort always divides the array into two halves and takes linear time to merge
the two halves.
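The O(n log n) bound can be checked empirically by counting comparisons during the merges; the wrapper below is a sketch of such an experiment (the counter is my own instrumentation, not part of the program above):

```python
import random

def mergesort_count(a):
    """Sort a copy of a; return (sorted_list, number_of_comparisons)."""
    count = 0
    def sort(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1                      # one comparison per merge step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
    return sort(list(a)), count

data = [random.randrange(1000) for _ in range(256)]
result, comparisons = mergesort_count(data)
# T(n) = 2T(n/2) + O(n) predicts at most n*log2(n) = 256*8 = 2048 comparisons
print(result == sorted(data), comparisons <= 2048)  # → True True
```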
Bubble Sort
In the algorithm given below, suppose arr is an array of n elements. The assumed swap
function in the algorithm will swap the values of the given array elements.
1. begin BubbleSort(arr)
2. for all array elements
3. if arr[i] > arr[i+1]
4. swap(arr[i], arr[i+1])
5. end if
6. end for
7. return arr
8. end BubbleSort
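The notes give bubble sort only in pseudocode; a runnable Python version in the style of the other programs here, with an early exit when a pass makes no swaps, might look like this:

```python
def bubblesort(arr):
    n = len(arr)
    for i in range(n - 1):             # one pass per iteration
        swapped = False
        for j in range(n - 1 - i):     # the largest element bubbles to the end
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                # already sorted: best case O(n)
            break
    return arr

print(bubblesort([32, 13, 26, 35, 10]))  # → [10, 13, 26, 32, 35]
```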
First Pass
Sorting will start from the initial two elements. Let us compare them to check which is greater.
Here, 32 is greater than 13 (32 > 13), so it is already sorted. Now, compare 32 with 26.
Here, 26 is smaller than 32. So, swapping is required. After swapping, the new array will look like -
Second Pass
Here, 10 is smaller than 32. So, swapping is required. After swapping, the array will be -
Third Pass
Here, 10 is smaller than 26. So, swapping is required. After swapping, the array will be -
Fourth Pass
Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, that
is, neither properly ascending nor properly descending. The average case time
complexity of bubble sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be
sorted in reverse order. That means, suppose you have to sort the array elements in
ascending order, but its elements are in descending order. The worst-case time complexity of
bubble sort is O(n²).
Quick Sort
Quick sort is a widely used sorting algorithm that makes O(n log n) comparisons in the average
case for sorting an array of n elements.
It is a faster and highly efficient sorting algorithm.
This algorithm follows the divide and conquer approach. Divide and conquer is a technique
of breaking down the algorithms into subproblems, then solving the subproblems, and
combining the results back together to solve the original problem.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two sub-
arrays such that each element in the left sub-array is less than or equal to the pivot element and
each element in the right sub-array is larger than the pivot element.
Conquer: Recursively, sort two subarrays with Quicksort.
Combine: Combine the already sorted array.
Quicksort picks an element as pivot, and then it partitions the given array around the picked
pivot element. In quick sort, a large array is divided into two arrays in which one holds
values that are smaller than the specified value (Pivot), and another array holds the values
that are greater than the pivot.
After that, left and right sub-arrays are also partitioned using the same approach. It will
continue until the single element remains in the sub-array.
In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24, a[right] =
27 and a[pivot] = 24.
Since, pivot is at left, so algorithm starts from right and move towards left.
Now, a[pivot] < a[right], so algorithm moves forward one position towards left, i.e. -
Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.
Because, a[pivot] > a[right], so, algorithm will swap a[pivot] with a[right], and pivot moves to right,
as -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since, pivot is at right, so algorithm starts from
left and moves to right.
As a[pivot] > a[left], so algorithm moves one position to right as -
Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], so algorithm moves one
position to right as -
Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], so, swap a[pivot] and a[left],
now pivot is at left, i.e. -
Since, pivot is at left, so algorithm starts from right, and move to left. Now, a[left] = 24, a[right] =
29, and a[pivot] = 24. As a[pivot] < a[right], so algorithm moves one position to left, as -
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap a[pivot] and
a[right], now pivot is at right, i.e. -
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm starts from left
and move to right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are pointing the same
element. It represents the termination of procedure.
Element 24, which is the pivot element is placed at its exact position.
Elements that are right side of element 24 are greater than it, and the elements that are left side of
element 24 are smaller than it.
Now, in a similar manner, quick sort algorithm is separately applied to the left and right sub-arrays.
After sorting gets done, the array will be -
Program
def quicksort(a,low,high):
    if (low<high):
        pivot=partition(a,low,high)
        quicksort(a,low,pivot-1)
        quicksort(a,pivot+1,high)
def partition(a,low,high):
    pivot=a[low]
    i=high+1
    for j in range(high,low,-1):
        if (a[j] > pivot):
            i=i-1
            t = a[i]
            a[i] = a[j]
            a[j] = t
    t=a[i-1]
    a[i-1]=a[low]
    a[low]=t
    return(i-1)
n=int(input("Number of elements in array:"))
a=[0]*n
print(" Enter ", n , " elements ")
for i in range(0,n):
    a[i]=int(input())
print("Before sorting, array is:")
print(a)
quicksort(a,0,n-1)
print("After sorting, array is:")
print(a)
Quicksort Complexity
Time Complexity
The best and average case time complexity is O(n log n), while the worst case (for example,
an already sorted array when the first element is chosen as pivot) is O(n²).
Stacks:
A stack is a linear data structure in which an element may be inserted or deleted at only one
end called top of the stack.
Example: a stack of dishes, a stack of coins
Stacks are also called last-in-first-out (LIFO) lists, meaning the elements are removed
from a stack in the reverse order of that in which they were inserted into the stack.
They are also called piles or push-down lists.
Special terminology used for two basic operations associated with stack are:
o Push is the term used to insert an element into the stack
o Pop is term used to delete an element from the stack
The terms are used only with stacks and not with other data structures.
Operations:
Create():
allocate memory block for the stack(array)
implemented by using constructors
isempty():
if top=-1,return 1 else return 0
complexity is Θ(1)
isfull():
if top=maxsize-1 return 1 else return 0
complexity is Θ(1)
top()/ peek():
return top most element in the list i.e., the element that is pointed by the top
complexity is Θ(1)
push():
if stack is full return a message stack overflow else increment top by 1 and push the
element into the list.
Complexity is Θ(1)
pop():
if stack is empty return a message stack underflow else remove the element pointed by top
and decrement top by 1.
Complexity is Θ(1)
Overflow:
When the stack is full of elements and we try to push a new element into the stack then stack
overflow occurs.
Underflow:
If we try to remove an element from a stack, which contains no elements in it then it is called
stack underflow.
class stack:
    def __init__(self):
        # capacity assumed as 5 so the sixth push in the driver below demonstrates overflow
        self.MAXSIZE=5
        self.stack=[0]*self.MAXSIZE
        self.top=-1        # top==-1 means the stack is empty
    def isempty(self):
        if self.top==-1:
            print("\n Stack is empty")
    def isfull(self):
        if self.top==self.MAXSIZE-1:
            print("\n Stack is full")
    def length(self):
        if self.top==-1:
            print("\n Stack is empty")
        else:
            print(" \n Stack contains ",self.top+1,"elements")
    def push(self):
        if self.top==self.MAXSIZE-1:
            print("\n Stack overflow")
        else:
            value=int(input("\n Enter the value to be pushed into the stack"))
            self.top+=1
            self.stack[self.top]=value
    def pop(self):
        if self.top==-1:
            print("\n Stack underflow")
        else:
            print("\n Value popped from the stack is=",self.stack[self.top])
            self.top-=1
    def display(self):
        if self.top==-1:
            print("\n Stack is empty")
        else:
            print("\n Stack contains=")
            for i in range(self.top,-1,-1):
                print(self.stack[i]," ",end='')
s=stack()
s.isempty()
s.push()
s.push()
s.push()
s.push()
s.push()
s.push()
s.display()
s.length()
s.pop()
s.display()
Characteristics of a Stacks:
LIFO ordering: The last item pushed onto the stack is the first item popped off the stack.
Push and Pop Operations: The two primary operations that can be performed on a stack
are "push," which adds an item to the top of the stack, and "pop," which removes the item
from the top of the stack.
Top Element Access: A stack allows access only to the top element, which is the most
recently added item. Other elements are not directly accessible and must be removed first.
Limited Accessibility: Stacks do not allow access to elements in the middle of the stack. To
access a specific element, all elements on top of it must be removed first.
Dynamic Size: Stacks can grow or shrink dynamically as items are added or removed.
Contiguous Memory Allocation: Stacks typically use contiguous memory allocation,
meaning that elements are stored in adjacent memory locations.
Stack Overflow: A stack has a finite amount of memory allocated to it. When the stack
exceeds this memory limit, a stack overflow occurs.
Stack Underflow: A stack underflow occurs when an attempt is made to remove an element
from an empty stack.
These characteristics make stacks useful for many applications, such as implementing
recursive function calls, undo-redo operations, and parsing expressions.
Applications of Stack
There are many applications of a stack. Some of them are:
o Expression conversion and evaluation (infix, prefix, postfix)
o Function call management and recursion
o Undo/redo operations
o Parenthesis matching in compilers
o Backtracking (e.g., maze solving)
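One classic application is checking bracket balance while parsing expressions; the sketch below (function and variable names are mine) uses a Python list as the stack, pushing openers and popping on each closer:

```python
def balanced(expr):
    """Check bracket balance using a Python list as a stack."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)            # push an opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False            # mismatched closer or stack underflow
    return not stack                    # leftover openers mean unbalanced

print(balanced("(a+b)*[c-{d/e}]"))  # → True
print(balanced("(a+b]"))            # → False
```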
Queues:
A Queue is a linear data structure in which additions and deletions take place at different
ends.
The end at which new elements are added is called the rear
The end at which the old elements are deleted is called the front.
The terms front and rear are used in describing a linear list only when it is implemented as
a queue.
Queues are also called first-in-first-out (FIFO) lists.
Thus, in a queue, the order in which the elements enter is the order in which
they leave.
Ex: people waiting in line at a bank.
Operations
Create():
allocate memory block for the queue(array)
implemented by using constructors
isempty():
if front=-1,return 1 else return 0
complexity is Θ(1)
isfull():
if rear=maxsize-1 return 1 else return 0
complexity is Θ(1)
insert():
if queue is full return a message queue overflow else increment rear by 1 and insert the
element into the queue.
Complexity is Θ(1)
delete():
if queue is empty return a message queue underflow else remove the element pointed by
front and increment front by 1.
Complexity is Θ(1)
Overflow:
When the queue is full of elements and we try to insert a new element into the queue then
queue overflow occurs.
Underflow:
If we try to remove an element from a queue, which contains no elements in it then it is called
queue underflow.
class queue:
    def __init__(self):
        # capacity assumed as 5 so the sixth insert in the driver below demonstrates overflow
        self.MAXSIZE=5
        self.queue=[0]*self.MAXSIZE
        self.front=-1      # front==-1 means the queue is empty
        self.rear=-1
    def isempty(self):
        if self.front==-1:
            print("\n Queue is empty")
    def isfull(self):
        if self.rear==self.MAXSIZE-1:
            print("\n Queue is full")
    def length(self):
        if self.front==-1:
            print("\n Queue is empty")
        else:
            print(" \n Queue contains ",self.rear-self.front+1,"elements")
    def insert(self):
        if(self.rear==self.MAXSIZE-1):
            print("Queue Overflow")
        else:
            value=int(input("\n Enter the value to be pushed into the queue"))
            if self.front==-1:
                self.front+=1
                self.rear+=1
            else:
                self.rear+=1
            self.queue[self.rear]=value
    def delete(self):
        if self.front==-1:
            print("\n Queue underflow")
        else:
            print("\n Value deleted from the queue is=",self.queue[self.front])
            if self.front==self.rear:
                self.front=self.rear=-1
            else:
                self.front+=1
    def display(self):
        if self.front==-1:
            print("\n Queue is empty")
        else:
            print("\n Queue contains=")
            for i in range(self.front,self.rear+1):
                print(self.queue[i]," ",end='')
s=queue()
s.isempty()
s.insert()
s.insert()
s.insert()
s.insert()
s.insert()
s.insert()
s.display()
s.length()
s.delete()
s.display()
s.delete()
s.display()
Applications of Queue:
Railroad car Rearrangement
Wire Routing
Image-Component Labeling
Machine Shop Simulation
Circular Queue
The array implementation of a queue suffers from one limitation:
when the rear pointer reaches the end, insertion is denied even if sufficient space is
available at the front.
To overcome this limitation we can implement the queue as a circular queue.
To overcome the above limitation we can implement the queue as a circular queue.
A circular queue is a queue in which all locations are treated as circular such that the first
location q[0] follows the last location q[max-1]
class cqueue:
    def __init__(self):
        # capacity assumed as 5; all index arithmetic wraps around modulo MAXSIZE
        self.MAXSIZE=5
        self.cq=[0]*self.MAXSIZE
        self.front=-1      # front==-1 means the queue is empty
        self.rear=-1
    def isempty(self):
        if self.front==-1:
            print("\n Queue is empty")
        else:
            print("\n Queue is not empty")
    def isfull(self):
        if ((self.rear+1)%self.MAXSIZE==self.front):
            print("\n Queue is full")
        else:
            print("\n Queue is not full")
    def length(self):
        if self.front==-1:
            print("\n Queue is empty")
        else:
            if(self.front<=self.rear):
                print(" \n Queue contains ",self.rear-self.front+1,"elements")
            else:
                print(" \n Queue contains ",self.MAXSIZE-self.front+self.rear+1,"elements")
    def insert(self):
        if ((self.rear+1)%self.MAXSIZE==self.front):
            print("Queue Overflow")
        else:
            value=int(input("\n Enter the value to be pushed into the queue"))
            if self.front==-1:
                self.front+=1
                self.rear+=1
            else:
                self.rear=(self.rear+1)%self.MAXSIZE
            self.cq[self.rear]=value
    def delete(self):
        if self.front==-1:
            print("\n Queue underflow")
        else:
            print("\n Value deleted from the circular queue is=",self.cq[self.front])
            if self.front==self.rear:
                self.front=self.rear=-1
            else:
                self.front=(self.front+1)%self.MAXSIZE
    def display(self):
        if self.front==-1:
            print("\n Queue is empty")
        else:
            print("\n Queue contains=")
            if(self.front<=self.rear):
                for i in range(self.front,self.rear+1):
                    print(self.cq[i]," ",end='')
            else:
                for i in range(self.front,self.MAXSIZE):
                    print(self.cq[i]," ",end='')
                for i in range(0,self.rear+1):
                    print(self.cq[i]," ",end='')
s=cqueue()
s.isempty()
s.insert()
s.insert()
s.insert()
s.insert()
s.insert()
s.insert()
s.display()
s.length()
s.delete()
s.display()
s.delete()
s.display()
s.insert()
s.display()
s.length()
Linked Lists
Operations
Create():
create an empty list
implemented by using constructor.
isempty():
return true if the list is empty, false otherwise
the complexity of isempty() is Θ(1)
length():
return the list size i.e., number of elements in the list
complexity of length() is Θ(n)
find(k,x):
return the kth element of the list in x; return false if there is no kth element.
Complexity of find() is O(k)
Search(x):
return the position of x in the list
return 0 if x is not present in the list
complexity of search() is O(n)
Delete(k,x):
delete kth element and return it in x.
this function returns the modified list
complexity of Delete() is O(k)
Insert(k,x):
insert x just after the k th element.
This function returns the modified list
complexity of Insert() is O(k)
Output():
output the list from left to right
Complexity of output() is Θ(length)
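The operation list above can be realized with a minimal singly linked list; the sketch below (class and method bodies are my own, written to match the driver that follows) inserts at the end and deletes from the front:

```python
class node:
    def __init__(self, data):
        self.data = data
        self.next = None

class linkedlist:
    def __init__(self):
        self.head = None           # empty list: no head node

    def isempty(self):
        print("List is empty" if self.head is None else "List is not empty")

    def length(self):
        # O(n): walk the chain counting nodes
        count, cur = 0, self.head
        while cur:
            count += 1
            cur = cur.next
        print("List contains", count, "elements")

    def insert(self):
        # insert at the end (appending keeps the example simple)
        value = int(input("Enter the value to be inserted: "))
        new = node(value)
        if self.head is None:
            self.head = new
        else:
            cur = self.head
            while cur.next:
                cur = cur.next
            cur.next = new

    def delete(self):
        # delete from the front
        if self.head is None:
            print("List underflow")
        else:
            print("Value deleted from the list is=", self.head.data)
            self.head = self.head.next

    def display(self):
        if self.head is None:
            print("List is empty")
        else:
            cur = self.head
            while cur:
                print(cur.data, " ", end='')
                cur = cur.next
            print()
```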
l=linkedlist()
l.display()
l.insert()
l.insert()
l.insert()
l.display()
l.isempty()
l.length()
l.delete()
l.display()