ADSA Lecture Unit II Ch1
UNIT-I (15h)
Introduction to Basic Data Structures: Importance and need of good data structures and algorithms, Introduction
to linear and non-linear data structures and their importance, Algorithm Complexity and Analysis. [3]
Linear and Non-Linear Data Structures: Arrays, Linked Lists, Queues, Trees and related algorithms [6]
Advanced Data Structures: AVL Trees (Insertion, Deletion, Searching), Red-Black Trees, B-Trees, B+ Trees,
Heaps, Data structures for disjoint sets, Augmented data structures. [6]
UNIT-II (15h)
Searching and Sorting: Internal and External Sorting algorithms: Linear Search, Binary Search, Bubble Sort,
Selection Sort, Insertion Sort, Shell Sort, Quick Sort, Heap Sort, Merge Sort, Counting Sort, Radix Sort,
analysis of their complexities, and Hashing. [4]
Graphs & Algorithms: Representation, Types of Graphs, Depth-first and breadth-first traversals, Planar graphs,
isomorphism, graph coloring, covering and partition, Minimum Spanning Tree: Prim's and Kruskal's
algorithms. [6]
String Matching Algorithms: Naïve String Matching, Suffix arrays, Suffix trees, Rabin-Karp, Knuth-Morris-Pratt,
Boyer-Moore algorithm [5]
UNIT-III (15h)
Approximation algorithms: Need of approximation algorithms: Introduction to P, NP, NP-Hard and NP-Complete;
Greedy Approach, Dynamic Programming Approach, Knapsack, Huffman Coding, TSP, All-pairs shortest path, Longest
Common Subsequence Problem, Matrix Chain Multiplication. [7]
Randomized algorithms: Introduction, types of randomized algorithms, Quick sort, min cut [4]
Online Algorithms: Introduction, Online Paging Problem, k-server Problem. Data compression: Huffman coding,
BWT, LZW [4]
Recommended Books:
1. Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.
2. Horowitz, Sahni and Rajasekaran, “Fundamentals of Computer Algorithms”, University Press (India),
2nd edition.
3. Aho, Hopcroft and Ullman, “The Design and Analysis of Computer Algorithms”, Pearson Education
India.
Sorting
Types Of Sorting
• Bubble sort
• Insertion sort
• Selection sort
• Merge sort
• Quick sort
• Heap sort
• Shell sort
• Cycle sort
• Counting sort
• Radix Sort
Bubble Sort
• Exchange two adjacent elements if they are out of order. Repeat until the array is sorted.
• Example:
Take the array "5 1 4 2 8" and sort it from the lowest number to the
greatest number using bubble sort. In each step, a pair of adjacent
elements is compared and swapped if out of order; three passes are required.
Bubble Sort
BUBBLE(DATA, N)
Here DATA is an array with N elements. This algorithm sorts the elements in DATA.
1. Repeat steps 2 and 3 for K = 1 to N – 1
2. Set PTR = 1 [Initialize pass pointer PTR]
3. Repeat while PTR ≤ N – K [Executes pass]
   a) If DATA[PTR] > DATA[PTR + 1], then
      Interchange DATA[PTR] and DATA[PTR + 1]
      [End of If structure]
   b) Set PTR = PTR + 1
   [End of inner loop]
   [End of step 1 outer loop]
4. Exit
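A minimal runnable sketch of this procedure in Python (the name bubble_sort is illustrative; indices are 0-based, unlike the 1-based pseudocode above):

def bubble_sort(data):
    # Translation of BUBBLE(DATA, N): N - 1 passes; each pass bubbles the
    # largest remaining element to the end of the unsorted region.
    n = len(data)
    for k in range(1, n):            # outer loop: K = 1 to N - 1
        for ptr in range(0, n - k):  # inner loop: compare positions ptr, ptr + 1
            if data[ptr] > data[ptr + 1]:
                data[ptr], data[ptr + 1] = data[ptr + 1], data[ptr]
    return data

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]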
Insertion sort
Insertion sort is a simple sorting algorithm that works the way we sort
playing cards in our hands: each element is picked up and inserted into its
correct position among the already-sorted elements to its left.
Algorithm
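A minimal Python sketch of the card-insertion idea (the name insertion_sort is illustrative):

def insertion_sort(data):
    # Insert each element into the sorted prefix to its left.
    for i in range(1, len(data)):
        key = data[i]              # the "card" being placed
        j = i - 1
        while j >= 0 and data[j] > key:
            data[j + 1] = data[j]  # shift larger elements one slot right
            j -= 1
        data[j + 1] = key
    return data

print(insertion_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]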
Selection Sort
• Selection sort is a simple sorting algorithm. This sorting algorithm is an in-place
comparison-based algorithm in which the list is divided into two parts, the
sorted part at the left end and the unsorted part at the right end. Initially, the
sorted part is empty and the unsorted part is the entire list.
• The smallest element is selected from the unsorted array and swapped with the
leftmost element, and that element becomes part of the sorted array. This
process continues, moving the unsorted array boundary one element to the
right each time.
• This algorithm is not suitable for large data sets, as its average and worst case
complexities are O(n²), where n is the number of items.
Algorithm
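A minimal Python sketch of selection sort as described above (the name selection_sort is illustrative):

def selection_sort(data):
    n = len(data)
    for i in range(n - 1):                 # boundary of the sorted part
        min_idx = i
        for j in range(i + 1, n):          # find the smallest unsorted element
            if data[j] < data[min_idx]:
                min_idx = j
        data[i], data[min_idx] = data[min_idx], data[i]  # grow the sorted part
    return data

print(selection_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]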
Heap Sort
• Heap sort is one of the sorting algorithms used to arrange a list of elements in
order.
• The heapsort algorithm uses a tree concept called a Heap Tree.
• In this sorting algorithm, we use a Max Heap to arrange a list of elements in
descending order and a Min Heap to arrange them in ascending order.
• Step 1 - Construct a Binary Tree with given list of Elements.
• Step 2 - Transform the Binary Tree into Max Heap.
• Step 3 - Delete the root element from Max Heap using Heapify method.
• Step 4 - Put the deleted element into the Sorted list.
• Step 5 - Repeat steps 3 and 4 until the Max Heap becomes empty.
• Step 6 - Display the sorted list.
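A minimal Python sketch of these steps using the standard heapq module, which provides a min-heap (so repeatedly deleting the root yields ascending order, as noted above):

import heapq

def heap_sort(data):
    heap = list(data)
    heapq.heapify(heap)                  # Step 2: transform the list into a heap
    out = []
    while heap:                          # Steps 3-5: delete the root until empty
        out.append(heapq.heappop(heap))  # root of a min-heap is the minimum
    return out

print(heap_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]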
Merge Sort
Divide and Conquer
• Divide and Conquer cuts the problem in half each time, but uses the result of both
halves:
• cut the problem in half until the problem is trivial
• solve for both halves
• combine the solutions
• A divide-and-conquer algorithm:
• Divide the unsorted array into 2 halves until the sub-arrays only contain one element
• Merge the sub-problem solutions together:
• Compare the sub-arrays' first elements
• Remove the smallest element and put it into the result array
• Continue the process until all elements have been put into the result array
Algorithm
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
   middle m = (l + r) / 2
2. Call mergeSort for the first half:
   Call mergeSort(arr, l, m)
3. Call mergeSort for the second half:
   Call mergeSort(arr, m + 1, r)
4. Merge the two halves sorted in steps 2 and 3:
   Call merge(arr, l, m, r)
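A minimal Python sketch of the same idea; for brevity this version returns a new list rather than sorting arr[l..r] in place as in the pseudocode:

def merge_sort(arr):
    if len(arr) <= 1:                 # trivial problem: nothing to split
        return arr
    m = len(arr) // 2                 # 1. find the middle point
    left = merge_sort(arr[:m])        # 2. sort the first half
    right = merge_sort(arr[m:])       # 3. sort the second half
    return merge(left, right)         # 4. merge the two sorted halves

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # take the smaller front element
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])           # append whatever remains
    result.extend(right[j:])
    return result

print(merge_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]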
Quick Sort
Quicksort picks an element as the pivot, and then partitions the given array around the
picked pivot element. In quick sort, a large array is divided into two sub-arrays: one
holds values that are smaller than the specified value (the pivot), and the other holds
values that are greater than the pivot.
After that, the left and right sub-arrays are partitioned using the same approach. This
continues until each sub-array contains a single element.
Quick Sort
Choosing the pivot
Picking a good pivot is necessary for a fast implementation of quicksort; however, it is
difficult to determine a good pivot in advance. Some of the ways of choosing a pivot are as follows:
• Pivot can be random, i.e. select a random pivot from the given array.
• Pivot can be either the rightmost element or the leftmost element of the given array.
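A minimal Python sketch using the rightmost element as the pivot (the Lomuto partition scheme; names are illustrative):

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sub-array of values smaller than the pivot
        quick_sort(arr, p + 1, high)  # sub-array of values greater than the pivot
    return arr

def partition(arr, low, high):
    pivot = arr[high]                 # rightmost element as the pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:            # move smaller values to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # place pivot in its final spot
    return i + 1

print(quick_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]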
Shell Sort
• Shell sort is an algorithm that first sorts the elements far apart from each
other and successively reduces the interval between the elements to be
sorted.
• It is a generalized version of insertion sort.
• In shell sort, elements at a specific interval are sorted. The interval
between the elements is gradually decreased based on the sequence used.
• The performance of the shell sort depends on the type of sequence used
for a given input array.
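A minimal Python sketch, assuming the simple gap sequence n/2, n/4, ..., 1 (other sequences are possible and affect performance, as noted above):

def shell_sort(data):
    n = len(data)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):           # gapped insertion sort
            key = data[i]
            j = i
            while j >= gap and data[j - gap] > key:
                data[j] = data[j - gap]   # shift elements that are `gap` apart
                j -= gap
            data[j] = key
        gap //= 2                         # gradually reduce the interval
    return data

print(shell_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]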
Counting Sort
• In Counting sort, the frequencies of the distinct elements of the array to be sorted are
counted and stored in an auxiliary array, using each element's value as an index into the
auxiliary array.
Algorithm:
• Let's assume that array A of size N needs to be sorted.
• Initialize the auxiliary array Aux[] to 0.
Note: The size of this array should be ≥ max(A[]) + 1.
• Traverse array A and store the count of occurrences of each element at the
appropriate index of the Aux array, i.e., execute Aux[A[i]]++ for
each i, where i ranges over [0, N−1].
• Initialize the empty array sortedA[].
• Traverse array Aux and copy i into sortedA[] Aux[i] times,
where 0 ≤ i ≤ max(A[]).
Counting Sort
Example:
Say A = {5, 2, 9, 5, 2, 3, 5}.
• Aux will be of size 9 + 1, i.e. 10.
• Aux = {0, 0, 2, 1, 0, 3, 0, 0, 0, 1}.
Notice that Aux[2] = 2, which represents the number of occurrences
of 2 in A[]. Similarly, Aux[5] = 3, which represents the number of occurrences
of 5 in A[].
• After applying the counting sort algorithm, sortedA[] will be {2, 2, 3, 5, 5, 5, 9}.
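A minimal Python sketch of the steps above for non-negative integer keys:

def counting_sort(a):
    aux = [0] * (max(a) + 1)          # size >= max(A[]) + 1, initialized to 0
    for x in a:
        aux[x] += 1                   # Aux[A[i]]++
    sorted_a = []
    for value, count in enumerate(aux):
        sorted_a.extend([value] * count)  # copy `value` into sortedA count times
    return sorted_a

print(counting_sort([5, 2, 9, 5, 2, 3, 5]))  # [2, 2, 3, 5, 5, 5, 9]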
HASHING
A Hash Table is a data structure in which data is stored in an array format, and
each data value has its own unique index value. Access to data becomes very
fast if we know the index of the desired data.
1. The hash function converts the item into a small integer or hash value. This integer is used as an index to
store the original data.
2. It stores the data in a hash table. You can use a hash key to locate data quickly.
hash = hashfunc(key)
index = hash % array_size
Example: store the key-value pairs (1,20), (2,70), (42,80), (4,25), (12,44),
(14,32), (17,11), (13,78), (37,98) in a hash table of size 20, using
hash = key % 20:

Sr.No.   Key   Hash           Array Index
1        1     1 % 20 = 1     1
2        2     2 % 20 = 2     2
3        42    42 % 20 = 2    2
4        4     4 % 20 = 4     4
5        12    12 % 20 = 12   12
6        14    14 % 20 = 14   14
7        17    17 % 20 = 17   17
8        13    13 % 20 = 13   13
9        37    37 % 20 = 17   17

Keys 42 and 37 map to indices already occupied by keys 2 and 17; such
collisions are resolved by the techniques described below.
A good hash function should be:
• Efficiently computable.
• Able to uniformly distribute the keys (each table position equally likely for
each key).
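A minimal Python sketch of the index computation from the example above (using the identity function as a hash for small integer keys is an assumption for illustration):

array_size = 20

def hashfunc(key):
    return key                        # trivial hash for small integer keys

for key in [1, 2, 42, 4, 12, 14, 17, 13, 37]:
    index = hashfunc(key) % array_size
    print(key, "->", index)           # 42 -> 2 and 37 -> 17 collide with keys 2 and 17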
Open Hashing
Chaining Method
In chaining, if a hash function produces the same index for multiple
elements, these elements are stored in the same index by using a linked
list.
To search for a key K, we linearly traverse the linked list at K's index.
If the key of any entry equals K, we have found our entry. If we reach
the end of the linked list without finding it, the entry does not exist.
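A minimal Python sketch of chaining, using Python lists in place of linked lists (class and method names are illustrative):

class ChainedHashTable:
    def __init__(self, size=20):
        self.slots = [[] for _ in range(size)]  # one chain per index

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:                   # key already present: update it
                chain[i] = (key, value)
                return
        chain.append((key, value))         # collision: append to the chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:  # linear traversal
            if k == key:
                return v                   # found the entry
        return None                        # end of chain: entry does not exist

t = ChainedHashTable()
t.put(2, 70); t.put(42, 80)                # both keys map to index 2
print(t.get(42))                           # 80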
Linear probing
This technique is known to be the easiest way to resolve
collisions in hash tables.
In linear probing, the hash table is searched sequentially, starting
from the original hash location. If the location we get is already
occupied, we check the next location.
The main idea of open addressing is to keep all the data in the
same table; to achieve this, we probe alternative slots in the
hash table until an empty one is found.
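A minimal Python sketch of insertion with linear probing (it assumes the table is not full, otherwise the probe loop would not terminate):

def insert_linear_probing(table, key, value):
    size = len(table)
    index = hash(key) % size
    while table[index] is not None:    # slot occupied: check the next location
        index = (index + 1) % size     # wrap around at the end of the table
    table[index] = (key, value)

table = [None] * 20
insert_linear_probing(table, 17, 11)   # goes to index 17
insert_linear_probing(table, 37, 98)   # collides at 17, placed at index 18
print(table[17], table[18])            # (17, 11) (37, 98)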
Application of Hashing
Password Verification
Thank you