CS III Sem Complete Material
B.SC.
COMPUTER SCIENCE
PAPER-III
DATA STRUCTURES
STUDY MATERIAL
CONTENT
UNIT –I
CHAPTER-1: FUNDAMENTAL CONCEPTS
1. Introduction to Data Structures
2. Types of Data Structures
3. Introduction to Algorithms
4. Pseudocode
5. Flowcharts
6. Analysis of Algorithms
CHAPTER-2: LINEAR DATA STRUCTURES USING ARRAYS
1. Arrays
2. One-Dimensional Arrays
3. Two-Dimensional Arrays
4. N-Dimensional Arrays
5. Pros and Cons of Arrays
6. Strings and String Manipulations
CHAPTER-3: STACK
1. Concept of Stack
2. Representation of Stack
3. Stack Operations
4. Applications of Stack
5. Expression Evaluation and Conversion
UNIT-II
CHAPTER-1: RECURSION
1. Introduction to Recursion
2. Uses of Stack in Recursion
3. Variants of Recursion
4. Iteration versus Recursion
CHAPTER-2: QUEUE
1. Concept of Queue
2. Representation of Queue
3. Queue Operations
4. Circular Queue and Operations
5. Deque
6. Applications of Queue
CHAPTER-3: LINKED LIST
1. Concept of Linked List
2. Representation of Linked List
3. Types of Linked List
4. Single Linked List ADT
5. Representation of Stack Using Linked List
6. Representation of Queue Using Linked List
UNIT-III
CHAPTER-1: TREE
1. Tree Introduction and Basic Terminology
2. Types of Tree
3. Binary Tree
4. Representation of Binary Tree
5. Binary Tree Abstract Data Type
6. Binary Tree Traversal
7. Applications of Binary Tree
CHAPTER-2: GRAPH
1. Introduction to Graphs and Basic Terminology
2. Representation of Graphs
3. Graph Abstract Data Type
4. Graph Traversal (BFS, DFS)
5. Spanning Tree (Prim's and Kruskal's Algorithms)
CHAPTER-3: HASHING
1. Introduction to Hashing and Key Terminology
2. Hash Functions
3. Collision Resolution Strategies
UNIT-IV
CHAPTER-1: SEARCHING AND SORTING
1. Introduction to Search Techniques
2. Sequential Search (Linear Search)
3. Binary Search
4. Introduction to Sorting Techniques
5. Bubble Sort
6. Selection Sort
7. Insertion Sort
8. Quick Sort
9. Merge Sort
CHAPTER-2: HEAPS
1. Concept of Heaps
2. Implementation of Heap
3. Heap Abstract Data Type
4. Heap Sort
APPENDIX A: LAB PROGRAMS
APPENDIX B: MGU Previous Question Papers
Data Structures using C++ Prepared By S. NAGESH MCA, NET
UNIT - I
CHAPTER – 1
FUNDAMENTAL CONCEPTS
1. INTRODUCTION TO DATA STRUCTURES
Computer science includes the study of data, its representation, and its processing
by computers. Hence, it is essential to study the terms associated with data and its
representation.
Data
Data is a collection of facts about an object. Data can be a number, a string, or a set of
many numbers and strings. Data may be atomic or composite.
Atomic and Composite Data
Atomic data is data that we choose to consider as a single, non-decomposable
entity. Composite data can be broken down into subfields that have meaning. Composite
data is also referred to as structured data and can be implemented using a structure or a
class in C++.
Data Type
Data type is a term that specifies the kind of data a variable may hold in a
programming language. A variable may take any value permitted by its data type. There
are two kinds of data types.
Built-in data types are the primitive data types of a programming language, for
example int, float, and char. User-defined data types are defined by the programmer; for
example, structures, unions, and classes are user-defined data types.
Data Structure
Data structures refer to data and the representation of data objects within a program.
A data structure is a collection of atomic and composite data types into a set with defined
relationships.
A data structure is "a combination of elements and a set of associations or relationships
involving the combined elements". A data structure organizes data by specifying a set of data
elements and a set of operations that are applied to this data.
Abstract Data Type
Data abstraction is the separation of logical properties of the data from details of
how the data is represented.
An ADT includes declaration of data, implementation of operations, and
encapsulation of data and operations. We encapsulate the data and the operations on this
data and hide them from the user.
C++ provides the 'class' construct for defining an ADT from which objects are
created. In C++, functions that operate on the data members of a class are called member
functions. An ADT is a way of defining a data structure so that we know what it does but
not how it does it.
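As a small, hypothetical illustration (not from the syllabus text), the following C++ sketch shows an ADT defined with a class: the data member is hidden and can be manipulated only through the member functions.
#include <iostream>
using namespace std;

// A hypothetical Counter ADT: the data member 'count' is hidden
// (encapsulated), and users work only with the public operations.
class Counter
{
private:
    int count;                       // representation detail, not visible to users
public:
    Counter() { count = 0; }         // create an empty counter
    void increment() { count++; }    // operation defined by the ADT
    int value() { return count; }    // observe the current value
};

int main()
{
    Counter c;
    c.increment();
    c.increment();
    cout << "Counter value = " << c.value() << endl;   // prints 2
    return 0;
}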
3. INTRODUCTION TO ALGORITHMS
An algorithm is simply a set of rules for carrying out some task. A programmer
should first solve the problem in a step-by-step manner. This step-by-step solution is called
an algorithm.
Characteristics of Algorithms
An algorithm is an ordered finite set of unambiguous and effective steps that
produces a result and terminates. The following are the characteristics of algorithms:
• Input: An algorithm is supplied with zero or more external quantities as input.
• Output: An algorithm must produce a result, that is, an output.
• Unambiguous steps: Each step in an algorithm must be clear and unambiguous.
• Finiteness: An algorithm must have a finite number of steps.
• Effectiveness: Every instruction must be simple enough to execute and must produce an accurate result.
Algorithmics
Algorithmics is a field of computer science defined as the study of algorithms. Its goal
is to understand the complexity of algorithms.
Algorithmics is the science that allows us to evaluate the various available
algorithms, so that we can choose the best one.
4. PSEUDOCODE
Pseudocode is a tool used to define an algorithm. Pseudocode is an English-like
presentation of the code required for an algorithm. It is partly English and partly structured
code in the style of a programming language.
Pseudocode Notations
The pseudocode uses various notations. They are as follows.
a. Algorithm Header: A header includes the name of the algorithm, the parameters, and
the list of pre and post conditions. The header makes the pseudocode readable.
b. Purpose: The purpose is a brief description about what the algorithm does.
c. Condition and Return Statements
The precondition states the requirements on the parameters, if any.
The postcondition identifies any action taken or result produced.
If a value is returned, it is identified by a return condition.
d. Statement Numbers: The statements in an algorithm are numbered sequentially. Any
label system such as the decimal, roman numbers or even alphabets can be used to label the
statements.
e. Variables: Variables are needed in algorithms. We need not define every variable used in
the algorithm, but it is suggested to use meaningful variable names.
f. Statement Constructs: There are three statement constructs used for developing an
algorithm. They are Sequence, Decision and Looping.
Sequence: A sequence is a series of instructions that are executed one after another, in order.
Decision: Decision constructs are used to select one of two flows based on a condition, for
example if, if-else, and switch-case.
Looping: Looping constructs are used to execute a set of instructions repeatedly.
g. Sub-algorithms: In structured programming, the problem solution is described in the
form of smaller modules. This modular design breaks an algorithm into smaller units called
sub-algorithms. In programming languages, these sub-algorithms are referred to as functions,
subroutines, procedures, methods, or modules.
5. FLOWCHARTS
A very effective tool to show the logic flow of a program is the flowchart. A
flowchart is a graphical representation of an algorithm. It hides all the details of an
algorithm by giving a picture.
A flowchart visualizes the execution of code within a program and shows the
sequence of problem-solving steps independently of any programming language. Certain
standard symbols are used in drawing flowcharts. They are as follows.
[Flowchart symbols: terminal (START/STOP), process box, decision box, and connector.]
[Example flowchart: START; READ x; compute r = x % 2; decision if (r == 0) with yes/no branches; STOP.]
6. Analysis of algorithms
There can be several ways to write algorithms for a given problem. The difficulty is
in deciding which algorithm is the best. We can compare one algorithm with the other and
choose the best. For comparison, we need to analyze the algorithms. Analysis involves
measuring the performance of an algorithm. Performance is measured in terms of the
programmer's time, space complexity, and time complexity.
a. Space Complexity
Space complexity is the amount of computer memory required during program
execution as a function of the input size. Space complexity measurement can be performed
at two different times: Compile time and Run time
Compile Time Space Complexity: Compile time space complexity is defined as the storage
requirement of a program at compile time. This storage requirement can be computed
during compile time. This includes memory requirement before execution starts.
Run-time Space Complexity: Run time space complexity is defined as the storage
requirement of a program while execution. If the program is recursive or uses dynamic
variables or dynamic data structures, then there is a need to determine space complexity at
run-time.
b. Time Complexity
Time complexity of an algorithm is a measure of how much time is required to
execute an algorithm for a given number of inputs. Time complexity T (P) is the time taken
by a program P, that is, the sum of its compile and execution times. This is system-
dependent. Another way to compute it is to count the number of algorithm steps.
Computing Time Complexity of an Algorithm
The total time taken by the algorithm or program is calculated using the sum of the
time taken by each of the executable statements in an algorithm or a program. The time
required by each statement depends on the following:
1. The time required for executing it once
2. The number of times the statement is executed
The product of these two parameters gives the time required for that particular
statement. Compute the execution time of all executable statements; the summation of these
execution times is the total time required for the algorithm or program. Expressed as a
function of the input size, this total is a polynomial.
Big-O Notation
Big O notation is used to describe performance or complexity of an algorithm. Big O
notation is used to classify algorithms according to how their running time or space
requirements grow as the input size grows.
When we sum up the frequency counts of all the statements, we get a polynomial. In
analysis, we are interested in the order of magnitude of an algorithm. The order of
magnitude can be expressed in big-O notation, as in "on the order of", and written, for
example, as O(n).
The big-O notation can be derived from f(n) using the following steps:
1. Remove all coefficients.
2. Remove the smaller (lower-order) terms.
For example, when analyzing an algorithm, one may find that the time it takes to complete
a problem of size n is given by T(n) = 4n^2 - 2n + 2. If we ignore constants and slower-
growing terms, we can say that "T(n) grows on the order of n^2" and write T(n) = O(n^2).
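As a rough illustration of frequency counting (a sketch, not part of the original text), the single loop below executes on the order of n times and the nested loops on the order of n^2 times, so the dominant term gives O(n^2):
#include <iostream>
using namespace std;

int main()
{
    int n = 100;
    long steps = 0;

    // Single loop: the body executes n times -> contributes c1*n steps.
    for (int i = 0; i < n; i++)
        steps++;

    // Nested loops: the body executes n*n times -> contributes c2*n^2 steps.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            steps++;

    // The total frequency count has the form c2*n^2 + c1*n.
    // Dropping coefficients and the smaller term gives O(n^2).
    cout << "Approximate step count for n = " << n << " is " << steps << endl;
    return 0;
}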
CHAPTER – 2
LINEAR DATA STRUCTURE USING ARRAYS
1. ARRAYS
An array is a finite ordered collection of homogeneous data elements. Arrays enable
us to organize more than one element in consecutive memory locations and to store a group
of data together in a sequential manner in the computer's memory. Arrays
support direct access to any of those data items just by specifying the name of the array and
its index as the item’s position.
Arrays are the most general and easy to use of all the data structures. An array as a
data structure is defined as a set of pairs (index, value) such that with each index, a value is
associated.
index—indicates the location of an element in an array
value—indicates the actual value of that data element
Index allows the direct addressing (or accessing) of any element of an array. Most of
the time, an array is implemented by using continuous or consecutive memory locations.
The common terms associated with arrays are as follows:
• Size of array: The maximum number of elements that would be stored in an array is the
size of that array.
• Base Address: The base address of an array is the memory location where the first
element of an array is stored. It is decided at the time of execution of a program.
• Data type of an array: The data type of an array indicates the data type of elements
stored in that array.
• Index: A user can access the elements of an array by using subscripts such as Name[0],
Name[1], ..., Name[i]. This subscript is called the index of an element.
• Range of index: If N is the size of an array, then in C++, the range of index is 0 to (N -
1). The range is language dependent.
For example a one dimensional array can be declared as follows.
int a[20];
Here the size of the array is 20, the base address is the address of the first element a[0],
the data type is int, and the range of the index is 0 to 19.
Example: Consider an integer array int a[5] = {10, 6, 45, 3, 4} in C++. If the base address is
1056, find the address of the element a[3].
Solution: In C++, the index starts from 0, and the base address is 1056.
Address of a[3] = base + (number of elements before the 3rd element) X (size of an element)
= 1056 + 3 X 2 (taking the size of an integer as 2 bytes)
= 1062
Index   Value   Address
0       10      1056-1057
1       6       1058-1059
2       45      1060-1061
3       3       1062-1063
4       4       1064-1065
3. TWO-DIMENSIONAL ARRAYS
In two-dimensional arrays, the elements are arranged in memory in one of the following two ways.
Row-major Representation
In row-major representation, the elements of an m X n matrix M are stored row by row:
the elements of the 0th row, then the 1st row, the 2nd row, and so on, up to the (m-1)th row.
The address of the element in the ith row and jth column of a matrix of size m X n
can be calculated as
Address of A[i][j] = Base address + (i X n X size of element) + (j X size of element)
Here, the base is the address of A[0][0]. There are i rows before the ith row and j elements
before the jth element in its row, and each row contains n elements.
Column-major Representation
In column-major representation, m X n elements of a two-dimensional array A are
stored as one single row of columns. The elements are stored in the memory as a sequence,
first the elements of column 0, then the elements of column 1, and so on, till the elements of
column n - 1.
The address of A[i][ j] is computed as
Address of A[i][j] = Base address + (j X m X size of element) + (i X size of element)
Here, the base is the address of A[0][0]. There are j columns before the jth column and i
elements before the ith element in its column, and each column contains m rows.
Example: Consider an integer array, int A[3][4] in C++. If the base address is 1050, find the
address of the element A[1][2] with row-major and column-major representation of the
array.
Solution: For C++, the LB of index is 0, and we have m = 3, n = 4, and Base = 1050.
Row-major representation:
Address of A[i][j] = Base + (i × n × size of element) + (j × size of element )
Address of A[1][2] = 1050 + (1 × 4 ×2) + (2×2 ) (since size of element = 2 for integers)
= 1050 + 8 + 4
= 1062
Row-major layout of A[3][4] (addresses):
        col 0   col 1   col 2   col 3
row 0   1050    1052    1054    1056
row 1   1058    1060    1062    1064
row 2   1066    1068    1070    1072
Column-major representation:
Address of A[i][j] = Base + (j × m × size of element) + (i × size of element)
Address of A[1][2] = 1050 + (2 × 3×2) + (1×2) (since size of element = 2 for integers)
= 1050 + 12 + 2
= 1064
Column-major layout of A[3][4] (addresses):
        col 0   col 1   col 2   col 3
row 0   1050    1056    1062    1068
row 1   1052    1058    1064    1070
row 2   1054    1060    1066    1072
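The two formulas can be checked with a short sketch (the variable names are illustrative; the element size of 2 bytes follows the example above):
#include <iostream>
using namespace std;

int main()
{
    int base = 1050;        // base address of A[0][0]
    int m = 3, n = 4;       // rows and columns
    int size = 2;           // size of each element in bytes (as in the example)
    int i = 1, j = 2;       // element A[1][2]

    // Row-major: skip i complete rows of n elements, then j elements.
    int rowMajor = base + (i * n * size) + (j * size);

    // Column-major: skip j complete columns of m elements, then i elements.
    int colMajor = base + (j * m * size) + (i * size);

    cout << "Row-major address of A[1][2]    = " << rowMajor << endl;   // 1062
    cout << "Column-major address of A[1][2] = " << colMajor << endl;   // 1064
    return 0;
}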
4. N-DIMENSIONAL ARRAYS
An n-dimensional m1 x m2 x m3 x ... x mn array is a collection of elements in which
each element is specified by a list of n indices k1, k2, k3, ..., kn. The element with subscripts
k1, k2, ..., kn is denoted by a[k1][k2]...[kn], where 0 <= k1 <= m1-1, 0 <= k2 <= m2-1, ...,
0 <= kn <= mn-1.
Characteristics of Arrays
The characteristics of an array are as follows:
1. An array is a finite ordered collection of homogeneous data elements.
2. In an array, successive elements are stored at a fixed distance apart.
3. An array is defined as a set of pairs—index and value.
4. An array allows direct access to any element.
5. In an array, insertion and deletion of elements in-between positions require data
movement.
6. An array provides static allocation, which means the space allocation done once during
the compile time cannot be changed during run-time.
Advantages of Arrays
The various merits of the array as a data structure are as follows:
1. Arrays permit efficient random access in constant time O(1).
2. Arrays are most appropriate for storing a fixed amount of data and also for high
frequency of data retrievals as data can be accessed directly.
3. Arrays are among the most compact data structures; if we store 100 integers in an array,
it takes only as much space as the 100 integers themselves.
4. Arrays are well known in applications such as searching, hash tables, matrix operations,
and sorting.
5. Wherever there is a direct mapping between the elements and their position, such as an
ordered list, arrays are the most suitable data structures.
6. Ordered lists such as polynomials are most efficiently handled using arrays.
7. Arrays can be used to represent strings, stacks, and queues.
Disadvantages
Some of the disadvantages of arrays are as follows:
1. Arrays provide static memory management. Hence, during execution, the size can
neither be grown nor shrunk.
2. Static allocation in an array is a problem associated with implementation in many
programming languages.
3. An array is inefficient when often data is inserted or deleted as insertion or deletion of an
element in an array needs a lot of data movement.
4. A drawback due to the simplicity of arrays is the possibility of referencing a non existent
element by using an index outside the valid range. This is known as exceeding the array
bounds. The result is a program working with incorrect data. In the worst case, the whole
system can crash.
Applications of Arrays
The following list indicates where arrays are most beneficial:
1. Arrays form the basis for several more complex data structures such as heaps and
hash tables.
2. Arrays can be used to represent strings, stacks, and queues.
3. Arrays can be used to store two-dimensional data when represented as matrix and
matrix operations.
4. They can also be used for indexing, searching, and sorting keys.
6. STRINGS AND STRING MANIPULATIONS
A string is stored as an array of characters. Each string is terminated by a special
character, the null character '\0', which indicates the end of the string.
One common string manipulation is checking whether a string is a palindrome. There
are two approaches:
1. We first find the reverse of the string and then compare it with the original string. If they
match, then the string is a palindrome; otherwise, it is not. This approach needs n
comparisons if the string length is n and an additional array to store the reversed string.
2. The other approach does not need n comparisons but just n/2 comparisons. We can
compare the first character with the last. If they match, then again match the second
character with the second last. Continue this process till the middle of the string. We can set
two indices from both the ends and compare till the indices do not overlap. The mismatch
of characters indicates that the string is not a palindrome. This approach does not need an
additional data structure
Palindrome program:
#include<iostream.h>
class String
{
char Str[30];
public:
String(){};
void getdata();
void display();
int Length();
void palindrome();
};
void String::getdata()
{
cout<<"Enter a string";
cin>>Str;
}
void String::display()
{
for(int i=0;i<Length();i++)
cout<<Str[i];
cout<<endl;
}
int String :: Length()
{
int length = 0, i;
for(i = 0; Str[i] != '\0'; i++)
length++;
return(length);
}
// The original listing omits the body of palindrome(); the following definition
// follows the second approach described above (compare characters from both ends).
void String::palindrome()
{
int i = 0, j = Length() - 1, flag = 1;
while(i < j)
{
if(Str[i] != Str[j])
{
flag = 0;
break;
}
i++;
j--;
}
if(flag == 1)
cout<<"The string is a palindrome"<<endl;
else
cout<<"The string is not a palindrome"<<endl;
}
void main()
{
String a;
a.getdata();
a.palindrome();
}
CHAPTER – 3
STACK
1. CONCEPT OF STACK
A stack is a linear list where all insertions and deletions are made at only one end. This
end is called the top.
Consider a stack of books on a table. We can easily put a new book on the top of the
stack, and similarly, we can easily remove the topmost book. In the same way, only the
topmost element of a stack can be accessed. The direct access of other intermediate
positions is not feasible. Elements may be added to or removed from only one end, called
the top of a stack. Elements are removed from stack in the reverse order of the insertion
sequence. So a stack is called Last In First Out (LIFO) data structure.
Each stack abstract data type (ADT) has a data member, named as top, which points
to the topmost element in the stack. There are two basic operations push and pop that can
be performed on a stack. Insertion of an element in the stack is called push and deletion of
an element from the stack is called pop.
[Figure: a stack containing A, B, C, with C on top and A at the bottom.]
Here A is the bottommost element and C is the topmost element.
Stack ADT:
Stack ADT describe data members of Stack and operations to be performed on data.
A stack class definition defines following members.
• Data members:
o One dimensional array
o Size variable
o Top variable
• Member Functions:
o Create: Creation of empty stack. Constructor is used to create empty
stack automatically when the object is created.
o Push: Insertion of an element into stack. It takes one parameter.
o Pop: Deletion of an element from stack. It always returns topmost
element of stack.
o Traverse: Visiting each element of stack. We can visit forward or
backward.
The definition of class stack is as follows:
class Stack
{
private:
int s[50],size,top;
public:
Stack(int x)
{
size=x;
top=-1;
}
void push(int Element);
int pop();
void display();
};
3. STACK OPERATIONS
Stack is a linear data structure in which insertions and deletions are made only at one end,
called the top. Stack is a LIFO data structure. The three basic stack operations are Push, Pop, and
Traverse.
• Push
The push operation inserts an element on the top of the stack. The most recently added
element is always at the top of the stack. Before every push, we must check whether there
is space for a new element. When there is no space for a new element, the stack is said
to be full: if top == size-1, the stack is full and no push operation can be performed. The
push operation increments the top value and adds the new element at the top position.
[Figure: successive push operations on an empty stack. Initially top = -1 (stack empty); after push(A), top = 0; after push(B), top = 1; after push(C), top = 2, with C on top.]
Algorithm Push(int element)
1. if (top == size-1)
2.    print "Stack is full"
3. else
4.    increment top by 1 (top++)
5.    add the element at the top: S[top] = element
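The member-function bodies for class Stack are not listed at this point in the text; the following is a hedged, self-contained sketch of an array-based stack consistent with the declarations and the push algorithm above (returning -1 from pop() on an empty stack is an assumption of this sketch):
#include <iostream>
using namespace std;

class Stack
{
private:
    int s[50], size, top;
public:
    Stack(int x) { size = x; top = -1; }
    void push(int Element);
    int pop();
    void display();
};

void Stack::push(int Element)
{
    if (top == size - 1)
        cout << "Stack is full" << endl;    // overflow check
    else
        s[++top] = Element;                 // increment top, then store
}

int Stack::pop()
{
    if (top == -1)
    {
        cout << "Stack is empty" << endl;   // underflow check
        return -1;                          // sentinel when the stack is empty (assumption)
    }
    return s[top--];                        // return the topmost element, then decrement top
}

void Stack::display()
{
    if (top == -1)
        cout << "Stack is empty" << endl;
    else
        for (int i = top; i >= 0; i--)      // print from top to bottom
            cout << s[i] << "\t";
    cout << endl;
}

int main()
{
    Stack st(5);
    st.push(10);
    st.push(20);
    st.push(30);
    st.display();                            // 30  20  10
    cout << "Popped: " << st.pop() << endl;  // 30
    st.display();                            // 20  10
    return 0;
}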
4. APPLICATIONS OF STACK
A stack may also be used to keep track of parentheses matching. Whenever a left
parenthesis is encountered, it is pushed onto the stack, and whenever a right parenthesis is
encountered, the stack is examined: if the stack is empty, the string is declared to be invalid;
otherwise, one left parenthesis is popped. In addition, when the end of the string is reached,
the stack must be empty; otherwise, the string is declared to be invalid.
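A minimal sketch of the parentheses check described above, using a character array as the stack (the function name balanced() is illustrative):
#include <iostream>
using namespace std;

// Returns true if every '(' has a matching ')' in the given expression.
bool balanced(const char *expr)
{
    char stack[100];
    int top = -1;
    for (int i = 0; expr[i] != '\0'; i++)
    {
        if (expr[i] == '(')
            stack[++top] = '(';          // push each left parenthesis
        else if (expr[i] == ')')
        {
            if (top == -1)
                return false;            // right parenthesis with empty stack: invalid
            top--;                       // pop the matching left parenthesis
        }
    }
    return top == -1;                    // the stack must be empty at the end
}

int main()
{
    cout << balanced("(a+b)*(c-d)") << endl;  // 1 (valid)
    cout << balanced("((a+b)")      << endl;  // 0 (invalid)
    return 0;
}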
Processing of Function Calls
One natural application of stacks, which arises in computer programming, is the
processing of function calls and their terminations. The program must remember the place
where the call was made so that it can return there after the function is complete. Suppose
we have three functions, say, A, B, and C, and one main program. Let the main invoke A, A
invoke B, and B in turn invoke C. Then, B will not have finished its work until C has
finished and returned. Similarly, main is the first to start work, but it is the last to be
finished, not until sometime after A has finished and returned. Thus, the sequence in which
function calls complete follows the LIFO (or FILO) property.
It can be observed that the main program is invoked first but finished last, whereas
the function C is invoked last but finished first.
Reversing a String with a Stack
A string can be reversed by pushing its characters onto a stack one by one and then
popping them off; the characters are obtained in reverse order.
5. EXPRESSION EVALUATION AND CONVERSION
For example, the expression (A+B) x C is an infix expression. In postfix notation, the
operator is written after its operands, whereas in prefix notation, the operator precedes its
operands.
Infix: (A+B) x C
Prefix: x+ABC
Postfix: AB+Cx
Infix to postfix conversion using stack
To convert an infix expression into a postfix expression using a stack, we can use the
following steps:
1. Read the symbols one by one from left to right in the given infix expression.
2. If the symbol read is an operand, add it directly to the postfix expression string.
3. If the symbol read is a left parenthesis '(', push it onto the stack.
4. If the symbol read is a right parenthesis ')', pop from the stack until the matching left
parenthesis is popped, and append each popped operator to the result string (the
parentheses themselves are discarded).
5. If the symbol read is an operator, pop the operators already on the stack that have higher
or equal precedence than the current operator, append them to the result string, and then
push the current operator onto the stack.
6. When the end of the expression is reached, pop all remaining operators from the stack
and append them to the result string.
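A compact sketch of these conversion steps for single-character operands and the operators +, -, *, / (the precedence values assigned in precedence() are an assumption of this sketch):
#include <iostream>
#include <cctype>
using namespace std;

int precedence(char op)                       // assumed precedence values
{
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return 0;                                 // '(' gets the lowest precedence here
}

void infixToPostfix(const char *infix, char *postfix)
{
    char stack[100];
    int top = -1, k = 0;
    for (int i = 0; infix[i] != '\0'; i++)
    {
        char c = infix[i];
        if (isalnum(c))
            postfix[k++] = c;                 // operands go straight to the output
        else if (c == '(')
            stack[++top] = c;                 // push left parenthesis
        else if (c == ')')
        {
            while (top != -1 && stack[top] != '(')
                postfix[k++] = stack[top--];  // pop until the matching '('
            top--;                            // discard the '(' itself
        }
        else                                  // an operator
        {
            while (top != -1 && precedence(stack[top]) >= precedence(c))
                postfix[k++] = stack[top--];  // pop higher/equal precedence operators
            stack[++top] = c;
        }
    }
    while (top != -1)
        postfix[k++] = stack[top--];          // pop any remaining operators
    postfix[k] = '\0';
}

int main()
{
    char result[100];
    infixToPostfix("(A+B)*C", result);
    cout << result << endl;                   // prints AB+C*
    return 0;
}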
UNIT-II
CHAPTER – 1
RECURSION
1. INTRODUCTION TO RECURSION
Recursion means a function calling itself. A function that calls itself is called a recursive
function. A recursive function definition contains at least one statement that calls the function
itself; recursion thus provides a way of implementing repetition.
Recursive function always consists of two main parts.
• A terminating case that indicates when the recursion will finish (exit condition).
• A call to itself that must make progress towards the terminating case.
To solve a recursive problem using functions, the problem must have an end
condition that can be stated in non-recursive terms.
Let us consider an example of computing the factorial of a number. The factorial of a
number, say n, is equal to the product of all the integers from 1 to n. The factorial of n is
denoted as
n! = 1 x 2 x 3 x 4 x …….x (n-1) x n
We can give a recursive definition for the factorial of n:
n! = n x (n-1)!, where 1! = 1
This recursive definition of factorial has two steps, as follows:
1. If n <= 1, then factorial of n = 1
2. Otherwise, factorial of n = n x Factorial of (n - 1)
The C++ code for a recursive function to find the factorial of a number is as follows.
int factorial(int n)
{
if(n<=1) // end condition
return 1;
else
return factorial(n - 1) * n;
}
// Write a program to find factorial of a number using recursive function
#include<iostream.h>
long fact(int n)
{
if(n==0||n==1)
return (1);
else
return n*fact(n-1);
}
void main()
{
int n;
long f;
cout<<"Enter a number"<<endl;
cin>>n;
f=fact(n);
cout<<"Factorial="<<f;
}
3. VARIANTS OF RECURSION
The recursive functions are categorized as direct, indirect, tail, linear, and tree recursions.
a. Direct Recursion
Recursion is said to be direct when a function calls itself directly. The Factorial()
function is an example of direct recursion.
int Factorial(int n)
{
if(n<=1) // end condition
return 1;
else
return Factorial(n - 1) * n;
}
b. Indirect Recursion
A function is said to be indirectly recursive if it calls another function, which in
turn calls it. The following code is an example of indirect recursion.
void xyz(int);    // forward declaration so that abc() can call xyz()
int abc(int x)
{
// ...
xyz(x);
// ...
}
void xyz(int x)
{
// ...
abc(x);
// ...
}
c. Tail Recursion
A recursive function is said to be tail recursive if there are no pending operations to
be performed on return from a recursive call. Tail recursion is also used to return the
value of the last recursive call as the value of the function. The following code is an
example for Tail Recursion. The Binary search function is Tail recursion.
int Binary_Search(int A[], int low, int high, int key)
{
int mid;
if(low <= high)
{
mid = (low + high)/2;
if(A[mid] == key)
return mid;
else if(key < A[mid])
return Binary_Search(A, low, mid - 1, key);
else
return Binary_Search(A, mid + 1, high, key);
}
return -1;
}
d. Linear Recursion
Depending on the way the recursion grows, it is classified as linear or tree recursion. A
recursive function is said to be linearly recursive when no pending operation involves
another recursive call. This is the simplest form of recursion and occurs when an action has
a simple repetitive structure consisting of some basic steps followed by the action again.
The Factorial() function shown earlier is an example of linear recursion.
e. Tree Recursion
In a recursive function, if there is another recursive call in the set of operations to
be completed after the recursion is over, this is called a tree recursion. Examples of tree
recursive functions are the quick sort and merge sort algorithms, the FibSeries algorithm,
and so on.
The Fibonacci Series is 0, 1, 1, 2, 3, 5, 8, 13 ……
The Fibonacci function Fibseries() is defined as
FibSeries(0) = 0
FibSeries(1)=1
FibSeries(2)=0+1= 1 = FibSeries(0)+FibSeries(1)
FibSeries (3)=1+1=2= FibSeries(1)+FibSeries(2)
FibSeries(4)=1+2=3= FibSeries(2)+FibSeries(3)
Similarly, FibSeries(n) = FibSeries(n-2) + FibSeries(n-1).
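A short sketch of the tree-recursive FibSeries function defined above:
#include <iostream>
using namespace std;

// Tree recursion: each call (for n >= 2) makes two further recursive calls.
long fibSeries(int n)
{
    if (n == 0) return 0;     // FibSeries(0) = 0
    if (n == 1) return 1;     // FibSeries(1) = 1
    return fibSeries(n - 2) + fibSeries(n - 1);
}

int main()
{
    for (int i = 0; i < 8; i++)
        cout << fibSeries(i) << " ";   // prints 0 1 1 2 3 5 8 13
    cout << endl;
    return 0;
}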
CHAPTER – 2
QUEUES
1. CONCEPT OF QUEUE
A queue is a linear list or an ordered list where data can be inserted at one end and
deleted from the other end. The end at which data is inserted is called the rear, and the end
from which data is deleted is called the front. The queue data structure guarantees that data
is processed in the sequence in which it is entered, so a queue is a first in, first out (FIFO),
or equivalently last in, last out (LILO), structure.
Consider an ordered list L = {a1, a2, a3, a4, ..., an}. If we assume that L represents a
queue, then a1 is the front-end element and an is the rear-end element.
2. REPRESENTATION OF QUEUES USING ARRAYS
A queue can be implemented using either arrays or linked lists. A one-dimensional
array can be used to represent a queue: the first element is stored at position 0, the second
element at position 1, the third at position 2, and so on; the nth element is stored at position
(n-1). We declare a one-dimensional array and two variables, front and rear.
Queue ADT: Queue ADT describes data members of Queue and operations to be
performed on data. A Queue class definition defines following members.
• Data members:
▪ One dimensional array
▪ Size variable, front and rear variable
• Member Functions:
▪ Create: Creation of empty Queue. Constructor is used to create empty Queue
automatically when the object is created.
▪ Insert: Insertion of an element into Queue. It takes one parameter.
▪ Deletion: Deletion of an element from queue. It always returns front element of
Queue.
▪ Traverse: Visiting each element of Queue. We can visit forward or backward.
The class definition of queue is as follows:
class Queue
{
int q[50],size,front,rear;
public:
Queue(int x)
{
size=x;
front=-1;
rear=-1;
}
void insertion(int);
void deletion();
void display();
};
3. QUEUE OPERATIONS
A Queue is a linear list or an ordered list where data can be inserted at one end and
deleted from the other end. A queue is a FIFO data structure. We can perform three basic
operations on a queue: insertion, deletion, and traversal.
• Insertion:
The insertion operation inserts an element into the queue if it is not full. If rear == size-1,
the queue is full and we cannot add an element. Otherwise, we increment rear and insert
the element at the rear position.
[Figure: array of size 5. After insertion(15), position 0 holds 15; after insertion(20) and insertion(35), positions 0-2 hold 15, 20, 35, with rear advancing each time.]
• Deletion:
The deletion operation removes an element from the front of the queue if it is not
empty. If rear == front, the queue is empty and we cannot delete an element. Otherwise,
we increment front and remove the element at the front position.
[Figure: after Deletion(), front advances past 15; the elements 20 and 35 remain in the queue.]
• Traverse:
Traversing means visiting each element from front to rear and displaying them. In the
queue, elements are available from position front+1 to rear.
void display()
{
if(front==rear)
cout<<"Queue is empty";
else
{
cout<<"Queue elements are"<<endl;
for(int i=front+1;i<=rear;i++)
cout<<q[i];
}
}
Write a program to implement insert and deletion operation on queue using array
#include<iostream.h>
class Queue
{
int q[50],n,f,r;
public:
Queue(int x)
{
n=x;
f=r=-1;
}
void insert(int);
void deletion();
void display();
};
void main()
{
Queue q(5);
q.insert(10);
q.insert(20);
q.insert(30);
q.display();
q.deletion();
q.display();
}
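The program above declares insert(), deletion(), and display() without showing their bodies; one possible completion, consistent with the class members q, n, f, and r, is sketched below (this completion is not from the original listing):
void Queue::insert(int x)
{
    if (r == n - 1)
        cout << "Queue is full" << endl;       // no room at the rear
    else
        q[++r] = x;                            // advance rear, store the element
}

void Queue::deletion()
{
    if (f == r)
        cout << "Queue is empty" << endl;      // nothing to delete
    else
        cout << "Deleted " << q[++f] << endl;  // advance front, report the element
}

void Queue::display()
{
    if (f == r)
        cout << "Queue is empty" << endl;
    else
    {
        cout << "Queue elements are" << endl;
        for (int i = f + 1; i <= r; i++)       // elements lie between front+1 and rear
            cout << q[i] << "\t";
        cout << endl;
    }
}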
Limitation of Queue:
The problem with a linear queue is that it is not possible to insert an element at a deleted
position. The insert operation on a linear queue reports that the queue is full even though
there is enough space at the front of the queue. There are two solutions to this problem: one
is to shift the elements towards the front whenever an element is deleted; the other is to use
a circular queue.
4. CIRCULAR QUEUES
A circular queue is a queue in which the elements are arranged in a circular fashion. The
technique of wrapping the queue around from the end to the start upon reaching the end of
the array is called a circular queue. When we add elements to the queue and reach the end
of the array, the next element is stored in the first slot of the array if that slot is empty.
[Figure: circular queue in an array of size n = 4, with rear index r, front index f, and element count c. insert(10), insert(20), insert(30) set r = 0, 1, 2 with f = -1 and c = 1, 2, 3. deletion() advances f to 0 and reduces c to 2. insert(40) sets r = 3 (c = 3); insert(50) wraps the rear around, r = (r + 1) % n = 0 (c = 4). A further deletion advances f to 1 and reduces c to 3.]
void main()
{
CQueue q(5);
q.insert(10);
q.insert(20);
q.insert(30);
q.display();
q.deletion();
q.display();
q.insert(40);
q.insert(50);
q.insert(60);
q.display();
}
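The CQueue class used by this main() is not listed at this point; a hedged sketch of one possible definition, placed before main() and using a count c of stored elements to distinguish a full queue from an empty one (as in the figure above), is:
#include <iostream>
using namespace std;

class CQueue
{
    int q[50], n, f, r, c;     // array, capacity, front, rear, count of elements
public:
    CQueue(int x) { n = x; f = r = -1; c = 0; }
    void insert(int x)
    {
        if (c == n)
            cout << "Queue is full" << endl;
        else
        {
            r = (r + 1) % n;   // wrap the rear index around the array
            q[r] = x;
            c++;
        }
    }
    void deletion()
    {
        if (c == 0)
            cout << "Queue is empty" << endl;
        else
        {
            f = (f + 1) % n;   // wrap the front index around the array
            cout << "Deleted " << q[f] << endl;
            c--;
        }
    }
    void display()
    {
        if (c == 0)
            cout << "Queue is empty" << endl;
        else
        {
            int i = f;
            for (int k = 0; k < c; k++)   // print c elements starting after front
            {
                i = (i + 1) % n;
                cout << q[i] << "\t";
            }
            cout << endl;
        }
    }
};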
5. DEQUE
The word deque is a short form of double-ended queue. A deque is a data structure in
which elements can be added or deleted at either the front end or the rear end. Thus, a deque
is a generalization of both a stack and a queue; it supports both stack-like and queue-like
capabilities. It is a sequential container that is optimized for fast index-based access and
efficient insertion at either of its ends. A deque can be implemented either as a contiguous
(array-based) deque or as a linked deque.
The deque ADT combines the characteristics of stacks and queues. Similar to stacks
and queues, a deque permits the elements to be accessed only at the ends. However, a
deque allows elements to be added at and removed from either end. The four operations
associated with a deque are insertion at the front, insertion at the rear, deletion from the
front, and deletion from the rear.
6. Applications of Queues
Queues are a useful representation of problems for different applications. The
common applications of Queue are as follows:
• Queues are useful in a time-sharing computer system where many users share
the system simultaneously.
• A queue is used to share a resource among multiple requests on a first come,
first served basis.
• Queues are used for finding a path using the breadth first search of graphs.
• Queues are used in job scheduling algorithms.
CHAPTER – 3
LINKED LIST
1. CONCEPT OF LINKED LIST
A linked list is an ordered collection of data in which each element (node) contains a
data item and one or two link fields. These link fields reference the node's successor (and/or
predecessor). In a linked list, before adding any element to the list, memory space for that
node must be allocated. A link is made from each item to the next item in the list.
[Figure: a linked list of three nodes, each with a Data field and a Link field: X1 -> X2 -> X3 -> NULL.]
Each node of the linked list has at least the following two elements:
1. The data member(s) being stored in the list.
2. A pointer or link to the next element in the list.
The last node in the list contains a null pointer to indicate that it is the end of the list
Deletion of a Node:
In Single linked list, we can delete a node from the beginning, middle or from the
end of the list.
Let us consider a situation when the node is deleted from beginning of the list.
Step 1: Create a temp node and assign Head node to temp node
Node *temp;
temp=Head;
Step 2: If the temp node is NULL, then the list is empty; otherwise go to Step 3.
Step 3: The first node (the Head node) is the node to be deleted. Assign the link of the temp
node to Head, display the data of the temp node, and delete the temp node.
if(temp==NULL)
cout<<"Empty List";
else
{
Head=temp->link;
cout<<"Deleted"<<temp->data<<endl;
delete temp;
}
A program to implement insert, deletion and traverse operation on single linked list.
#include<iostream.h>
class Node
{
public:
int data;
Node *link;
};
class List
{
private:
Node *Head,*Tail;
public:
List()
{
Head=Tail=NULL;
}
void insert(int);
void display();
void deletion();
};
void List::insert(int x)
{
Node *NewNode;
NewNode=new Node;
NewNode->data=x;
NewNode->link=NULL;
if(Head==NULL)
{
Head=NewNode;
Tail=NewNode;
}
else
{
Tail->link=NewNode;
Tail=NewNode;
}
}
void List::display()
{
Node *temp;
temp=Head;
if(temp==NULL)
{
cout<<"Empty List";
}
else
{
while(temp!=NULL)
{
int x=temp->data;
cout<<x<<"\t";
temp=temp->link;
}
}
cout<<endl;
}
void List::deletion()
{
Node *temp;
temp=Head;
if(temp==NULL)
cout<<"Empty List";
else
{
Head=temp->link;
cout<<"Deleted"<<temp->data<<endl;
delete temp;
}
}
void main()
{
List l;
l.insert(10);
l.insert(30);
l.display();
l.insert(60);
l.insert(70);
l.insert(80);
l.insert(90);
l.display();
l.deletion();
l.display();
}
5. Representation of Stack using Single Linked List
A stack implemented using a linked list is also called linked stack. Each element of the
stack will be represented as a node of the list. The addition and deletion of a node will be
only at one end. The first node is considered to be at the top of the stack, and it will be
pointed to by a pointer called top. The last node is the bottom of the stack, and its link field
is set to Null. An empty stack will have Top = Null.
A linked stack with elements (X, Y, Z) in order (X on top) may be represented as
shown below.
[Figure: Top -> X -> Y -> Z -> NULL.]
A CPP program to implement push and pop operations on stack using linked list
#include<iostream.h>
class node
{
public:
int data;
node *link;
};
class stack
{
node *top;
public:
stack()
{
top=NULL;
}
void push(int);
void pop();
void display();
};
void stack::push(int x)
{
node *temp=new node();
temp->data=x;
temp->link=top;
top=temp;
}
void stack::pop()
{
if(top==NULL)
cout<<"stack is empty"<<endl;
else
{
node *temp=top;
cout<<"deleted data is "<<temp->data<<endl;
top=top->link;
delete temp;   // free the removed node
}
}
void stack::display()
{ node *temp=top;
if(temp==NULL)
cout<<"stack is empty";
else
{ cout<<"Stack elements are\n";
while(temp!=NULL)
{ cout<<temp->data<<"\t";
temp=temp->link;
}
}
cout<<endl;
}
void main()
{
stack s;
s.push(10);
s.push(20);
s.push(30);
s.display();
s.pop();
s.display();
}
6. Representation of Queues using Single linked list
We can also represent queues using single linked list. The linked list which
represents queue data structure is known as linked queue. In linked queue each element of
the queue will be represented as a node of the list. We need two pointers front and rear. The
addition will be done at one end and deletions are done from another end. We can add a
node at the rear and delete a node from the front. The empty queue will have front = rear =
NULL.
[Figure: front -> X -> Y -> Z -> NULL, with rear pointing to the last node Z.]
Here front always points to the first node of queue and rear points to the last node of
queue. Queue empty condition is simply checked by comparing the front or rear with
NULL.
A CPP program to perform insert and deletion operations on queue using single linked
list.
#include<iostream.h>
class node
{
public:
int data;
node *link;
};
class queue
{
node *front, *rear;
public:
queue()
{
front=rear=NULL;
}
void insert(int x);
void deletion();
void display();
};
void queue::insert(int x)
{
node *temp=new node;
temp->data=x;
temp->link=NULL;
if(rear==NULL)
{ front=temp;
rear=temp;
}
else
{ rear->link=temp;
rear=temp;
}
}
void queue::deletion()
{
if(rear==NULL)
cout<<"Queue is empty"<<endl;
else
{
node *temp=front;
cout<<"deleted element is \t"<<temp->data<<endl;
front=front->link;
delete temp;   // free the removed node
if(front==NULL)
rear=NULL;
}
}
void queue::display()
{
if(rear==NULL)
cout<<"Queue is empty"<<endl;
else
{
cout<<"Queue elements are\n";
node *temp=front;
while(temp!=NULL)
{ cout<<temp->data<<"\t";
temp=temp->link;
}
}
cout<<endl;
}
void main()
{
queue q;
q.insert(10);
q.insert(20);
q.insert(30);
q.insert(40);
q.display();
q.deletion();
q.display();
}
UNIT - III
CHAPTER –1
TREES
1. TREE DEFINITION AND BASIC TERMINOLOGY
A tree is a non-linear data structure. A tree is a finite set of elements arranged in a
hierarchical manner. The topmost element of the tree is known as the root element.
[Figure: a sample tree with nodes B, C, D on one level and E, F, G on the level below them.]
Root Node: The node at the top of the tree is called root node. There is only one root node
in a tree.
Child Node: The nodes connected directly below a given node are called its child nodes;
the nodes connected directly below the root are the child nodes of the root.
Siblings: Children of the same parent are called siblings.
Leaf Node: A node with no children is called a leaf node.
Degree of a Node: The degree of a node is the number of child nodes it has.
Degree of a Tree: The degree of a tree is the maximum degree of any node in the tree.
Height of a Tree: The number of levels of a tree, excluding the root node, is known as the
height of the tree.
2. TYPES OF TREE:
Binary Tree: A binary tree is a tree in which each node has at most two children. In a binary
tree, a node may have 0, 1, or 2 child nodes, so the degree of each node is 0, 1, or 2.
[Figure: four example binary trees, each node having 0, 1, or 2 children.]
Full Binary Tree: A binary tree is a full binary tree if it contains the maximum possible
number of nodes at all levels. In a full binary tree, each node has either two children or no
child at all. The total number of nodes in a full binary tree of height h is 2^(h+1) - 1.
[Figure: a full binary tree with root A, children B and C, and leaves D, E, F, G.]
Complete Binary Tree: A complete binary tree is a binary tree in which every level except
possibly the last has the maximum number of nodes, and all nodes in the last level are as far
left as possible.
[Figure: a complete binary tree with root A, children B and C, and nodes D, E, F filling the last level from the left.]
Binary Search Tree: A binary search tree is a binary tree in which the values in the left
subtree of a node are less than the node's value and the values in the right subtree are
greater than or equal to the node's value.
[Figure: a binary search tree with root E, left child B (children A and C), and right child G (children F and H).]
3. BINARY TREE
A binary tree is a tree in which each node has at most two children. In a binary tree, a
node may have 0, 1, or 2 child nodes, so the degree of each node is 0, 1, or 2.
[Figure: four example binary trees, each node having 0, 1, or 2 children.]
Properties of a Binary Tree:
Let T be a tree. Then the following are the properties of binary tree.
1. A Binary tree may be empty
2. The degree of each node may be 0,1 or 2
3. There exists a unique path between every two nodes.
4. The maximum number of nodes at level n of a binary tree is 2^(n-1), where n >= 1.
5. The maximum number of nodes in a binary tree with h levels is 2^h - 1.
4. REPRESENTATION OF BINARY TREE
Array Representation of Binary Tree: In the array (sequential) representation, the nodes of
the tree are stored level by level in a one-dimensional array.
[Figure: a binary tree with root A, children B and C, B's left child D, and C's children E and F, together with its array representation:]
Index: 0 1 2 3 4    5 6
Value: A B C D NULL E F
Array representation of binary tree has certain drawbacks. There will be a lot of
unused space. Then there is memory wastage for empty nodes.
Linked Representation of Binary Tree
In a linked representation, all the nodes are created dynamically. A structure with two
links per node, similar to a doubly linked list, is used to represent binary trees. Each node
contains one data field and two link fields: one link references the left child node and the
other references the right child node. For leaf nodes, the link fields contain NULL values.
The root of the tree is stored in the data member root of the tree. This data member
provides an access pointer to the tree.
A tree can be represented as collection of nodes. A node can be represented as
follows.
class Node
{
public:
char data;
Node *lc;
Node *rc;
};
[Figure: a binary tree and its linked representation; each node stores a data value and pointers to its left and right children, with N marking a NULL pointer.]
6. BINARY TREE TRAVERSAL
Traversing a binary tree means visiting each node of the tree exactly once. The three
standard traversals are preorder, inorder, and postorder.
• Preorder Traversal: (Root-Left-Right)
In preorder traversal, the root is visited first, followed by the left subtree in preorder
and then the right subtree in preorder:
1. Visit the root node
2. Traverse the left subtree of the root node in preorder
3. Traverse the right subtree of the root node in preorder
The C++ Code to traverse a binary tree using preorder technique is as follows.
void preorder(Node *temp)
{
if(temp!=NULL)
{
cout<<temp->data;
preorder(temp->lc);
preorder(temp->rc);
}
}
• Inorder Traversal: (Left-Root-Right)
In Inorder traversal, the left sub tree is visited first in Inorder followed by the root
and then the right sub tree in Inorder. This can be defined as the following:
1. Traverse Left sub tree of the root node in Inorder
2. Visit Root Node
3. Traverse Right sub tree of the root node in Inorder
Example:
Let us consider following binary tree traversal.
[Figure: root A with left child B and right child C.]
Inorder Traversal: B → A → C
The C++ Code to traverse a binary tree using Inorder technique is as follows.
void inorder(Node *temp)
{
if(temp!=NULL)
{
inorder(temp->lc);
cout<<temp->data;
inorder(temp->rc);
}
}
• Postorder Traversal: (Left-Right-Root)
In Postorder traversal, the left sub tree is visited first in Postorder followed by the
right sub tree in post order and then the root. This can be defined as following:
1. Traverse Left sub tree of root node in Postorder traversal
2. Traverse Right sub tree of root node in Postorder traversal
3. Visit Root Node
Example
Let us consider following binary tree traversal.
[Figure: root A with left child B and right child C.]
Postorder Traversal: B→C→ A
The C++ Code to traverse a binary tree using Postorder techniques is as follows.
void postorder(Node *temp)
{
if(temp!=NULL)
{
postorder(temp->lc);
postorder(temp->rc);
cout<<temp->data;
}
}
Let us consider the following binary tree and write its preorder, inorder, and postorder traversals.
[Figure: root A; left child B with left child D; right child C with children E and F.]
Preorder Traversal: A→ B→ D→ C→ E→ F
Inorder Traversal: D→ B → A → E → C→ F
Postorder Traversal: D→ B→E→F→C→A
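A small self-contained sketch that builds this example tree with the Node class idea given earlier (a constructor is added for convenience) and runs the three traversals:
#include <iostream>
using namespace std;

class Node
{
public:
    char data;
    Node *lc;   // left child
    Node *rc;   // right child
    Node(char d) { data = d; lc = rc = nullptr; }
};

void preorder(Node *t)  { if (t) { cout << t->data; preorder(t->lc); preorder(t->rc); } }
void inorder(Node *t)   { if (t) { inorder(t->lc); cout << t->data; inorder(t->rc); } }
void postorder(Node *t) { if (t) { postorder(t->lc); postorder(t->rc); cout << t->data; } }

int main()
{
    // Build the example tree: root A, left child B (with left child D),
    // right child C (with children E and F).
    Node *root = new Node('A');
    root->lc = new Node('B');
    root->rc = new Node('C');
    root->lc->lc = new Node('D');
    root->rc->lc = new Node('E');
    root->rc->rc = new Node('F');

    preorder(root);  cout << endl;   // ABDCEF
    inorder(root);   cout << endl;   // DBAECF
    postorder(root); cout << endl;   // DBEFCA
    return 0;
}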
7. APPLICATIONS OF BINARY TREE
• Expression tree
An expression tree is a binary tree whose internal nodes are operators and whose leaves are operands.
[Figure: an expression tree with operator nodes x and -, and operand leaves A, B, C, and D.]
• Decision tree
We can use trees to arrange the outcomes of various decisions in the required order.
We can denote these actions in the form of a tree called a decision tree.
A decision tree is a classifier in the form of a tree where each node is either a branch
node or a leaf node. A branch node is a decision node that denotes some test to be carried
out, and a leaf node denotes the final value based on the result of the condition.
[Figure: a decision node testing a condition, with Yes and No branches each leading to a leaf value.]
CHAPTER –2
GRAPHS
1. INTRODUCTION AND BASIC TERMINOLOGY
Graphs: A graph G is a finite set of vertices V and edges E. The set of vertices is a non
empty set of nodes and set of edges contains pairs of vertices.
G=(V,E)
V={v1,v2,v3,v4,……………..}
E={(v1,v2),(v1,v3),(v2,v4),………}
Directed Graph: In a graph, an edge that is directed from one node to another is called a
directed edge. A graph in which every edge is directed is called a directed graph or digraph.
A city map showing only the one-way streets is an example of a directed graph.
A graph G = (V, E)
[Figure: a directed graph on the vertices 1, 2, 3, 4.]
Here V = {1, 2, 3, 4}
E = {(1,2), (2,3), (3,4), (4,1)}
Undirected Graph: In a graph, an edge that has no specific direction is called an
undirected edge. A graph in which every edge is undirected is called an undirected graph.
A graph G = (V, E)
[Figure: an undirected graph on the vertices 1, 2, 3, 4.]
Here V = {1, 2, 3, 4}
E = {(1,2), (1,4), (2,1), (2,3), (3,2), (3,4), (4,1), (4,3)}
Weighted Graph: A graph where weights are assigned to every edge is called a
weighted graph.
Degree of vertex:
In a directed graph, the number of edges incident from a vertex is its outdegree and
the number of edges incident to it is an indegree. The sum of indegree and outdegree is the
degree of a vertex.
[Figure: a directed graph on the vertices 1, 2, 3, 4.]
Degree of Vertex 1 = indegree + outdegree = 1 + 2 = 3
Degree of Vertex 2 = 2 + 0 = 2
Degree of Vertex 3 = 1 + 1 = 2
Degree of Vertex 4 = 1 + 2 = 3
In an undirected graph, the number of edges incident to a vertex is called as degree
of a vertex.
[Figure: an undirected graph on the vertices 1, 2, 3, 4.]
Degree of Vertex 1 = 2
Degree of Vertex 2 = 3
Degree of Vertex 3 = 2
Degree of Vertex 4 = 3
2. REPRESENTATION OF GRAPHS
A Graph is a finite set of vertices and edges. There are two standard representation
of a graph. They are as follows.
a. Adjacency Matrix (Sequential representation)
b. Adjacency List (Linked Representation)
a. Adjacency Matrix:
Adjacency Matrix is a square, two-dimensional array with one row and one column
for each vertex in the graph. For a graph with ‘n’ vertices a two dimensional array of size
n X n is required to represent a graph.
An entry into this two dimensional array is based on the following property.
A[i][j] = 1 if there exists an edge between vertices i and j
A[i][j] = 0 if there is no edge between vertices i and j
Eg: Let us consider the following directed graph.
[Figure: a directed graph on the vertices 1, 2, 3, 4 and its adjacency matrix, where A[i][j] = 1 for each edge (i, j) and 0 otherwise.]
b. Adjacency List
An adjacency list is a collection of linked lists, one list for each vertex of the graph. In this
representation, n linked lists are required to represent a graph with n vertices. Each list is
headed by a vertex, and the nodes in the list represent the vertices that are adjacent to the
head vertex. Each node of a list has one data field that stores a vertex and one link field.
Eg: Let us consider following directed graph
[Figure: a directed graph on the vertices 1, 2, 3, 4.]
The above directed graph can be represented using an adjacency list as follows:
1 -> 2 -> 3 -> N
2 -> N
3 -> 4 -> N
4 -> 1 -> 2 -> N
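A minimal sketch of both representations for this example graph; the edge set (1,2), (1,3), (3,4), (4,1), (4,2) is read off from the adjacency list above and is an assumption of this sketch:
#include <iostream>
using namespace std;

const int N = 4;   // vertices are numbered 1..4, stored at indices 0..3

int main()
{
    // Adjacency matrix: adj[i][j] = 1 if there is an edge from vertex i+1 to j+1.
    int adj[N][N] = {0};
    int edges[][2] = { {1,2}, {1,3}, {3,4}, {4,1}, {4,2} };   // taken from the adjacency list above
    int e = sizeof(edges) / sizeof(edges[0]);

    for (int k = 0; k < e; k++)
        adj[edges[k][0] - 1][edges[k][1] - 1] = 1;

    cout << "Adjacency matrix:" << endl;
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
            cout << adj[i][j] << " ";
        cout << endl;
    }

    // Adjacency list printed from the matrix: one line per vertex.
    cout << "Adjacency list:" << endl;
    for (int i = 0; i < N; i++)
    {
        cout << (i + 1) << " -> ";
        for (int j = 0; j < N; j++)
            if (adj[i][j])
                cout << (j + 1) << " ";
        cout << endl;
    }
    return 0;
}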
3. GRAPH ABSTRACT DATA TYPE
The graph ADT includes operations such as the following.
Delete Vertex
The delete vertex operation deletes a vertex and all the edges incident on that vertex and
returns the modified graph.
Insert Edge
The insert edge operation adds an edge incident between two vertices. In an
undirected graph, for adding an edge, the two vertices u and v are to be specified, and for a
directed graph along with vertices, the start vertex and the end vertex should be known.
Delete Edge
The delete edge operation removes one edge from the graph. Let the graph G be
G(V, E). Now, deleting the edge (u, v) from G deletes the edge incident between vertices u
and v and keeps the incident vertices u, v.
Is_empty
The is_empty operation checks whether the graph is empty. If the set of vertices V is
null then the graph is empty. If graph is empty it returns true otherwise it returns false.
4. GRAPH TRAVERSAL
Graph traversal is also known as searching through a graph. It means systematically
passing through the edges and visiting the vertices of the graph. There are two types of
graph traversal techniques. They are as follows:
a. Depth First Search (DFS)
b. Breadth First Search (BFS)
a. Depth First Search (DFS)
In DFS, from the currently visited vertex in the graph, we keep searching deeper
whenever possible. Depth first search can be implemented using a stack. The DFS
traversal procedure is as follows:
1. Select a vertex X and mark it as visited.
2. Select an unvisited vertex Y that is adjacent to X.
3. Repeat steps 1 and 2 with Y as the current vertex until all its adjacent vertices are visited.
4. On reaching a vertex whose adjacent vertices have all been visited, go back to the
last visited vertex that still has an unvisited adjacent vertex.
5. Go back to step 1.
6. Terminate the search when all vertices are visited.
[Figure: a sample graph on the vertices 1 to 8.]
DFS: 1 → 2 → 5 → 7 → 8 → 3 → 4 → 6
b. Breadth First Search (BFS)
In BFS, the search starts from a vertex V and first visits all vertices adjacent to V. At
each level, it visits all the adjacent vertices of the vertices selected at the previous level, and
this process continues until the entire graph has been traversed. Breadth first search can be
implemented using a queue. The BFS traversal procedure is as follows:
1. Select a vertex X and mark it as visited.
2. Visit all unvisited vertices adjacent to X, then repeat the process for each of them in
the order in which they were visited.
[Figure: the same sample graph on the vertices 1 to 8.]
BFS: 1 → 2 → 3 → 5 → 4 → 6 → 7 → 8
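A hedged sketch of BFS using a queue over an adjacency matrix; the five-vertex graph used here is illustrative and is not the eight-vertex graph from the figure:
#include <iostream>
#include <queue>
using namespace std;

const int N = 5;

void bfs(int adj[N][N], int start)
{
    bool visited[N] = {false};
    queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty())
    {
        int v = q.front();
        q.pop();
        cout << v << " ";                    // visit the vertex
        for (int w = 0; w < N; w++)          // enqueue all unvisited neighbours
            if (adj[v][w] && !visited[w])
            {
                visited[w] = true;
                q.push(w);
            }
    }
    cout << endl;
}

int main()
{
    // An undirected sample graph: 0-1, 0-2, 1-3, 2-4.
    int adj[N][N] = {0};
    int edges[][2] = { {0,1}, {0,2}, {1,3}, {2,4} };
    for (int k = 0; k < 4; k++)
    {
        adj[edges[k][0]][edges[k][1]] = 1;
        adj[edges[k][1]][edges[k][0]] = 1;
    }
    bfs(adj, 0);    // prints 0 1 2 3 4
    return 0;
}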
5. SPANNING TREE
A spanning tree is a subgraph of a graph that includes all the vertices of the graph, is
connected, and contains no cycle; there is a single path between any two vertices.
A minimum spanning tree of a weighted graph is a spanning tree whose edge weights
sum to the minimum. There can be more than one minimum spanning tree for a graph.
[Figure: a weighted graph, one of its spanning trees, and a minimum spanning tree whose edge weights sum to the minimum.]
The minimum spanning trees are useful in many applications. The two popular
methods used to find minimum spanning tree of a graph are as follows:
a. Prim’s Algorithm
b. Kruskal’s Algorithm
a. Prim’s Algorithm
Prim's algorithm was discovered by R.C. Prim in the 1950s. Prim's algorithm starts
from one vertex and grows the rest of the tree by adding one vertex at a time, along with
the associated edge. The algorithm builds the tree by iteratively adding edges: at each
iteration, the minimum-cost edge that connects a new vertex to the tree without forming a
cycle is added.
Let us consider the following weighted graph
[Figure: a weighted graph on the vertices 1, 2, 3, 4 with edges (1,2) of weight 4, (3,4) of weight 5, (2,4) of weight 6, (1,4) of weight 10, and (2,3) of weight 15.]
Using Prim's algorithm, we get a minimum spanning tree for this graph in the
following steps:
1. Let 1 be the start vertex. The adjacent vertices of 1 are 2 and 4. The nearest vertex is 2,
reached by edge (1,2) with weight 4. Add edge (1,2).
2. Among the vertices adjacent to 1 and 2, vertex 4 is the nearest, reached by edge (2,4)
with weight 6. Add edge (2,4).
3. Among the vertices adjacent to 1, 2, and 4, vertex 3 is the nearest, reached by edge (4,3)
with weight 5. Add edge (4,3).
As all the vertices are now added, the algorithm ends. The resultant spanning tree contains
the edges (1,2), (2,4), and (4,3), with a total weight of 15.
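A compact sketch of Prim's algorithm on this four-vertex example (vertices 1-4 are stored at indices 0-3, and 0 in the weight matrix means no edge):
#include <iostream>
using namespace std;

const int N = 4;
const int INF = 1000000;

int main()
{
    // Weight matrix for the example graph: (1,2)=4, (3,4)=5, (2,4)=6, (1,4)=10, (2,3)=15.
    int w[N][N] = {
        { 0,  4,  0, 10},
        { 4,  0, 15,  6},
        { 0, 15,  0,  5},
        {10,  6,  5,  0}
    };

    bool inTree[N] = {false};
    inTree[0] = true;                    // start from vertex 1 (index 0)
    int total = 0;

    for (int step = 0; step < N - 1; step++)
    {
        int bestU = -1, bestV = -1, best = INF;
        // Find the minimum-weight edge joining a tree vertex to a non-tree vertex.
        for (int u = 0; u < N; u++)
            if (inTree[u])
                for (int v = 0; v < N; v++)
                    if (!inTree[v] && w[u][v] != 0 && w[u][v] < best)
                    {
                        best = w[u][v];
                        bestU = u;
                        bestV = v;
                    }
        inTree[bestV] = true;
        total += best;
        cout << "Add edge (" << bestU + 1 << "," << bestV + 1 << ") with weight " << best << endl;
    }
    cout << "Total weight = " << total << endl;   // 15
    return 0;
}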
b. Kruskal’s Algorithm
Kruskal's algorithm was discovered by J.B. Kruskal in the 1950s. Kruskal's algorithm
starts with all the vertices, each vertex forming its own connected component. At each step,
we examine the edges in increasing order of weight and add an edge to the minimum
spanning tree only if it does not form a cycle. At the end of the algorithm, all vertices are
connected with no cycle and with minimum total weight.
Let us consider the following weighted graph.
[Figure: the same weighted graph on the vertices 1, 2, 3, 4 with edges (1,2) of weight 4, (3,4) of weight 5, (2,4) of weight 6, (1,4) of weight 10, and (2,3) of weight 15.]
Let us arrange the edges in an increasing order of their weights: (1,2), (3,4), (2,4),
(1,4), and (2,3) with weights 4,5,6,10 and 15 respectively.
Let us start Kruskal's algorithm with an initial forest that contains all four vertices and
no edges.
1. Select edge (1,2) with weight 4. As its addition does not form a cycle, the edge is added.
2. Select edge (3,4) with weight 5. As the addition of this edge to the existing forest does
not form a cycle, the edge is added.
3. Select edge (2,4) with weight 6. As the addition of this edge does not form a cycle, the
edge is added.
The remaining edges, (1,4) and (2,3), would each form a cycle and are therefore rejected.
The resultant minimum spanning tree contains the edges (1,2), (3,4), and (2,4), with a total
weight of 15.
CHAPTER – 3
HASHING
1. INTRODUCTION AND KEY TERMS
Hashing is a technique used for storing and retrieving information associated with a
key value. Hashing is a method of directly computing the address of a record with the
help of a key, by using a suitable mathematical function called the hash function.
The hash function computes an address in the hash table. The resulting address is used
as the basis for storing and retrieving records, and this address is called the home address of
the record.
To store a record in a hash table, the hash function is applied to the key of the record
being stored, returning an index within the range of the hash table. The record is then stored
in the table at that index position. To retrieve a record from a hash table, the same scheme
that was used to store it is followed.
Basic Terminology
Hash function: Hash function is a mathematical function that maps a key into an index
(or address) in the hash table for storing and retrieving records.
Hash Table: Hash table is an array of size M.
Bucket: A bucket is an index position in a hash table that can store more than one record.
The size of a bucket may be 1, 2, or more; if the bucket size is 2, then two keys that hash to
the same index can be stored at that position.
Collision: The result of two keys hashing into the same address is called collision.
Synonym: Keys that hash to the same address are called synonyms.
Full table: A full table is a hash table in which all locations are occupied.
Rehashing: When a collision occurs, we use a strategy to choose a sequence of alternative
locations within the hash table in which to place the record. This is known as rehashing.
2. HASH FUNCTIONS
There are many methods of implementing hash function. The popular hash
functions are as follows:
a. Division Method
b. Multiplication Method
c. Extraction Method
d. Mid-Square Hashing
e. Folding Technique
a. Division Method:
The division method is one way to create hash functions. The functions take the
following form
Hash(key)=key % M, Where M is the size of the hash table.
When the key is divided by M, then the remainder is in the range of 0 to M-1 and
this remainder is used as the hash address.
b. Multiplication Method
The multiplication method is another way to create hash functions. The
multiplication method works as follows
i. Multiply the key KEY by a constant A, where 0 < A < 1, and extract the fractional
part of KEY x A.
ii. Multiply this fractional value by M and take the floor of the result, where M is the size
of the hash table.
The function takes the form
Hash(KEY) = floor(M x ((KEY x A) mod 1))
where (KEY x A) mod 1 is the fractional part of KEY x A. Since 0 <= (KEY x A) mod 1 < 1,
the range of Hash(KEY) is 0 to M - 1. A commonly used value is A = (sqrt(5) - 1)/2 = 0.618034.
c. Extraction Method
When a portion of the key is used for address calculation, the technique is called as
the extraction method. In digit extraction, a few digits are selected, extracted from the key
and are used as the address.
We can select, for example, the odd-position digits or the even-position digits to form the
hash address. For example, for the key 345678, if we select the digits in the odd positions,
the address is 357.
d. Mid-Square Hashing:
Mid-square hashing suggests taking the square of the key and extracting the middle
digits of the squared key as the address. This becomes difficult when the key is large, so
mid-square hashing is used when the key size is less than or equal to 4 digits.
For example, for the key 2134, the square of the key is 4553956 and the hashed address
is 539.
e. Folding Technique
In this technique, the key is subdivided into subparts that are then combined (folded)
to form the address. For a key with several digits, we can subdivide the digits into parts,
add them up, and use the result as the address. The size of each subpart of the key is the
same as that of the address.
There are two types of folding methods.
i. Fold Shift: The key value is divided into several parts of the size of the address, and the
left, middle and right parts are added.
For example, if the key is 984634312, the subparts are 984, 634 and 312. The sum is
984 + 634 + 312 = 1930. Now discard the digit 1 and the address is 930.
ii. Fold Boundary: The key value is divided into parts of the size of the address. The parts
are folded (reversed) and then added.
For example, if the key is 984634312, the subparts are 984, 634 and 312. The sum
of the reversed parts is 489 + 436 + 213 = 1138. Now discard the digit 1 and the address is
138.
3. COLLISION RESOLUTION STRATEGIES
If two keys hash to the same address and the bucket size is 1, the result is called a
collision. If Hash(key1) = Hash(key2), then key1 and key2 are synonyms, and if the
bucket size is 1, we say that a collision has occurred. The record with key2 must then be
stored at some other location using one of the several collision resolution strategies.
The collision resolution strategies are as follows:
a. Open Addressing
b. Chaining
a. Open Addressing
In open addressing, when a collision occurs, it is resolved by finding an available
empty location other than the home address. If the location Hash(key) is not empty, other
positions are probed in a defined sequence until an empty location is found.
Three techniques are commonly used to compute the probe sequences required for
open addressing: linear probing, quadratic probing and double hashing.
Linear Probing
A hash table in which a collision is resolved by placing the item in the next empty
place following the home address uses linear probing. This strategy looks for the next
free location until one is found. Linear probing uses the following function to find the next
empty location:
Hash1(key) = (Hash(key) + i) % M, where M is the size of the hash table and i = 1, 2, 3, ...
Let M be 100, Key1 be 945 and Key2 be 2645. Key1 hashes to location 45. Key2 also
maps to address 45, and a collision occurs because address 45 is already occupied.
Hash1(2645) = (Hash(2645) + 1) % 100 = (45 + 1) % 100 = 46
The location 46 is empty, so the key 2645 is stored there.
Index   Key
0       -
...
45      945
46      2645
...
99      -
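A minimal C++ sketch of insertion with linear probing, assuming an integer table in which -1 marks an empty slot; the names and the full-table handling are illustrative assumptions.
// Insert a key into a hash table using linear probing; -1 marks an empty slot.
// Returns the index used, or -1 if the table is full.
int insertLinear(int table[], int M, int key)
{
    int home = key % M;                 // home address
    for (int i = 0; i < M; i++)
    {
        int pos = (home + i) % M;       // probe the next location
        if (table[pos] == -1)
        {
            table[pos] = key;
            return pos;
        }
    }
    return -1;                          // table is full
}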
Quadratic Probing:
In quadratic probing, we add the offset as the square of the collision probe number.
In quadratic probing, the empty location is searched by using the following formula:
Hash1(key) = (Hash(key) + i²) % M, where 1 <= i <= (M-1)/2 and M is the size of the hash table.
When M is a prime number, the first (M-1)/2 probe locations are all distinct, so quadratic
probing covers at least half of the buckets in the table before repeating.
For example, let M = 7 and insert the keys 12, 15, 32, 24, 20 and 13.
Hash(12) = 12 % 7 = 5
Hash(15) = 15 % 7 = 1
Hash(32) = 32 % 7 = 4
Hash(24) = 24 % 7 = 3
Hash(20) = 20 % 7 = 6
Hash(13) = 13 % 7 = 6, which collides with 20, so quadratic probing places 13 at (6 + 1²) % 7 = 0.
Index   0    1    2    3    4    5    6
Key     13   15   -    24   32   12   20
b. Chaining
Chaining uses the concept of a linked list inside the hash table. The chaining technique
chains together all the records that hash to the same address. Instead of relocating a record,
a linked list of synonyms is created whose head pointer is stored at the home address of the
synonyms. We need to maintain pointers to form the chain of synonyms, so extra memory
is needed for storing the pointers.
For the same keys (12, 15, 32, 24, 20 and 13) with M = 7, chaining gives:
Index 0: empty
Index 1: 15
Index 2: empty
Index 3: 24
Index 4: 32
Index 5: 12
Index 6: 20 → 13
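A minimal C++ sketch of insertion with chaining, assuming a small table of singly linked lists whose head pointers start as NULL; the node structure, table size and names are illustrative assumptions.
struct ChainNode
{
    int key;
    ChainNode *link;
};

const int M = 7;                       // assumed table size, as in the example above
ChainNode *table[M];                   // head pointers; a global array starts as all NULL

void insertChain(int key)
{
    int index = key % M;               // home address of the key
    ChainNode *temp = new ChainNode;   // create a node for the new record
    temp->key = key;
    temp->link = table[index];         // chain it in front of the existing synonyms
    table[index] = temp;
}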
UNIT – IV
CHAPTER – 1
SEARCHING AND SORTING
1. SEARCH TECHNIQUES
The process of locating target data is known as searching. Depending on the way data
is scanned for searching a particular record, the search techniques are categorized as
follows:
1. Sequential search
2. Binary search
3. Fibonacci search
4. Index sequential search
5. Hashed search
2. SEQUENTIAL SEARCH
Sequential search is a method for finding a target value within a list. It sequentially
checks each element of the list for the target value from starting to ending until a match is
found or all the elements have been searched. Sequential search is also called linear
search. Sequential search is suitable when the data is stored in an unordered manner and
there is no way to directly access the data elements.
Let us consider an array of size 6. The key value to search is 11.
Here a[ ]={10,15,9,11,16,20} n=6, and key=11.
The sequential search begins by comparing a[0] to the key value; if they do not match,
compare a[1] to the key value, then a[2], and so on. Continue this process until a match is
found or all the elements have been searched.
void seqsearch(int a[ ],int n,int k)
{
    int flag=0,p;
    for(int i=0;i<n;i++)          // scan all n elements
    {
        if(a[i]==k)
        {
            flag=1;               // key found
            p=i+1;                // position (1-based)
            break;
        }
    }
    if(flag==1)
        cout<<k<<" found at \t"<<p<<endl;
    else
        cout<<k<<" is not found"<<endl;
}
Time complexity of Sequential search:
Here 1 comparison is required if the key is placed at the first location, two comparisons
are required if the key is placed at the 2nd location, and similarly n comparisons are
required if the key is placed at the nth location.
Best case = 1
Worst case = n
Average case = (1 + 2 + 3 + ... + n) / n = (n + 1) / 2
The binary search algorithm can be implemented using non recursive and recursive
functions.
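As an illustration, a minimal C++ sketch of the non-recursive version is given below; it assumes the array is already sorted in ascending order, and the function name and parameters are illustrative assumptions.
// Non-recursive binary search on a sorted array a[0..n-1].
// Returns the index of key k, or -1 if k is not present.
int binsearch(int a[], int n, int k)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;   // middle index of the current range
        if (a[mid] == k)
            return mid;               // key found
        else if (a[mid] < k)
            low = mid + 1;            // search the right half
        else
            high = mid - 1;           // search the left half
    }
    return -1;                        // key not found
}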
4. INTRODUCTION TO SORTING TECHNIQUES
Internal sorting: Any sort algorithm that uses main memory exclusively during the sorting
is called an internal sort algorithm. Internal sorting is faster than external sorting. The
various internal sorting techniques are the following:
various internal sorting techniques are the following:
1. Bubble sort
2. Insertion sort
3. Selection sort
4. Quick sort
5. Heap sort
6. Shell sort
7. Bucket sort
8. Radix sort
9. File sort
External Sorting: Any sort algorithm that uses external memory, such as tape or disk,
during the sorting is called an external sort algorithm. Merge sort is commonly used as an
external sorting technique.
5. BUBBLE SORT
The bubble sort is the simplest sorting technique. It is also the slowest. The bubble
sort works by comparing each item in the list with the item next to it and swapping them if
required. The algorithm repeats this process until all the elements are in the correct order.
In bubble sort, to arrange elements in ascending order, the process is as follows:
The 0th element is compared with the 1st element. If the 0th element is greater, they are
interchanged. Then the 1st element is compared with the 2nd element, and if the 1st is
greater, they are interchanged. In this way every element is compared with the next element
and interchanged if required. At the end of the first iteration the largest element is placed at
the last position.
Similarly, in the 2nd iteration the 2nd largest element is placed at the second last position.
This process continues until all the elements in the list are placed in the correct order. To
sort an array of size n, n-1 iterations are required.
For example consider an array of 5 elements as shown below.
a[5]={50,40,16,20,22}
The above array can be sorted as follows
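The swaps proceed pass by pass as shown below; each pass pushes the next largest element to its final position.
Pass 1: 50 40 16 20 22 → 40 50 16 20 22 → 40 16 50 20 22 → 40 16 20 50 22 → 40 16 20 22 50 (50 reaches the last position)
Pass 2: 40 16 20 22 50 → 16 40 20 22 50 → 16 20 40 22 50 → 16 20 22 40 50 (40 reaches its position)
Pass 3: 16 20 22 40 50 (no swaps)
Pass 4: 16 20 22 40 50 (no swaps); the array is sorted.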
6. SELECTION SORT
Selection sort is another simple sorting technique. The selection sort algorithm
constructs the sorted sequence one element at a time by adding elements to the sorted
sequence in order. At each step, the next element to be added to the sorted sequence is
selected from the remaining elements.
In selection sort, there are two steps. In the first step, find the smallest element in the
list. In the second step, swap the smallest element with the element at the first position.
Then find the next smallest element and swap it with the element at the second position.
Repeat these steps until all elements are arranged at their proper positions.
To find the minimum value, first take the 0th element as the minimum. Compare this
minimum with the next element; if the next element is smaller, it becomes the new
minimum. Continue comparing the current minimum with the remaining elements. After
comparing with all elements, swap the minimum element with the element at the first position.
Given array (index 0 to 4): 50 40 16 20 22

Iteration 1 (the minimum index m starts at 0, value 50):
Step 1: 40 < 50 → true, so m = 1
Step 2: 16 < 40 → true, so m = 2
Step 3: 20 < 16 → false, no change
Step 4: 22 < 16 → false, no change
Step 5: swap the elements at index 0 and index 2 → 16 40 50 20 22

Iteration 2 (m starts at 1, value 40):
Step 1: 50 < 40 → false, no change
Step 2: 20 < 40 → true, so m = 3
Step 3: 22 < 20 → false, no change
Step 4: swap the elements at index 1 and index 3 → 16 20 50 40 22

Iteration 3 (m starts at 2, value 50):
Step 1: 40 < 50 → true, so m = 3
Step 2: 22 < 40 → true, so m = 4
Step 3: swap the elements at index 2 and index 4 → 16 20 22 40 50

Iteration 4 (m starts at 3, value 40):
Step 1: 50 < 40 → false, no change
Step 2: no swapping is required as m did not change

Finally, the sorted array is 16 20 22 40 50
7. INSERTION SORT
Insertion sort is a simple sorting algorithm. It inserts each item into its proper place in
the final list. The array is virtually split into a sorted and an unsorted part. Values from the
unsorted part are picked and placed at the correct position in the sorted part.
To sort an array in ascending order the following steps are required (a sketch of these
steps is given after the list):
1. Iterate from arr[1] to arr[n-1].
2. Compare the current element (the key) to its predecessor.
3. If the key element is smaller than its predecessor, compare it to the elements
before it. Move the greater elements one position up to make space for the key
element, and then place the key in the freed position.
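A minimal C++ sketch of these steps; the function and variable names are illustrative assumptions.
// Insertion sort in ascending order.
void insertionSort(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = a[i];           // current element to insert
        int j = i - 1;
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];      // shift the greater element one position up
            j--;
        }
        a[j + 1] = key;           // place the key at its correct position
    }
}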
For example consider an array of 5 elements as shown below.
a[5]={50,40,16,20,22}
The above array can be sorted as follows
Iteration 1: Consider the first element as the sorted part and insert the second element 40 into it.
Before:  50 | 40 16 20 22    (sorted | unsorted, index 0 | 1 2 3 4)
After:   40 50 | 16 20 22    (sorted | unsorted, index 0 1 | 2 3 4)
The remaining iterations insert 16, 20 and 22 in the same way, giving 16 40 50 20 22,
then 16 20 40 50 22, and finally the sorted array 16 20 22 40 50.
8. QUICK SORT
Quick sort selects a pivot element and partitions the array so that all elements smaller
than the pivot lie on its left and all larger elements lie on its right; the two parts are then
sorted recursively.
For example, consider the following array of 5 elements.
Index   0    1    2    3    4
Array   22   40   16   20   30
Now select pivot = a[0] = 22.
Let us find an element larger than the pivot 22 from the left side: if a[lower] < pivot,
increment lower; otherwise the element at lower is larger. Here 40 < 22 is false, so the
element at index 1 is larger than the pivot.
Let us find an element smaller than the pivot 22 from the right side: if a[upper] > pivot,
decrement upper; otherwise the element at upper is smaller. Here 30 > 22 is true, so upper
becomes 3; checking again, 20 > 22 is false, so the element at index 3 is smaller than the pivot.
Now swap the elements at index 1 and index 3.
Index 0 1 2 3 4
Array 22 40 16 20 30
Index 0 1 2 3 4
Array 22 20 16 40 30
Let us again scan from both directions to find a larger item from the left side and a
smaller item from the right side. Whenever the lower and upper indices cross, swap the
pivot with the element at upper.
Here lower = 1: 20 < 22 → true, so lower = 2; 16 < 22 → true, so lower = 3; 40 < 22 →
false, so stop incrementing.
Here upper = 3: 40 > 22 → true, so upper = 2; 16 > 22 → false, so stop decrementing.
Now lower = 3 and upper = 2, so the indices have crossed; swap the pivot with the
element at index 2.
Index 0 1 2 3 4
Array 22 20 16 40 30
Index 0 1 2 3 4
Array 16 20 22 40 30
Recursively applying similar steps to each sub part on the left and right side of the
pivot, the list is sorted in ascending order.
Index 0 1 2 3 4
Array 16 20 22 30 40
Algorithm (a minimal code sketch is given after these steps)
Repeat the following process while low < high:
1. Select pivot = a[low]
2. Set lower = low + 1 and upper = high
3. While a[lower] <= pivot, increment lower
4. While a[upper] > pivot, decrement upper
5. If lower < upper, swap a[lower] with a[upper]
6. Repeat steps 3, 4 and 5 until lower and upper cross (lower > upper)
7. Swap the pivot a[low] with a[upper]
8. Call quicksort(low, upper-1)
9. Call quicksort(upper+1, high)
10. Stop
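A minimal C++ sketch of this algorithm, using the first element as the pivot; the function and variable names are illustrative assumptions.
// Quick sort of a[low..high] in ascending order, pivot taken as the first element.
void quicksort(int a[], int low, int high)
{
    if (low >= high)
        return;                                        // zero or one element: already sorted
    int pivot = a[low];
    int lower = low + 1, upper = high;
    while (lower <= upper)
    {
        while (lower <= high && a[lower] <= pivot)
            lower++;                                   // find an element larger than the pivot
        while (a[upper] > pivot)
            upper--;                                   // find an element not larger than the pivot
        if (lower < upper)
        {
            int t = a[lower]; a[lower] = a[upper]; a[upper] = t;
        }
    }
    int t = a[low]; a[low] = a[upper]; a[upper] = t;   // place the pivot at its final position
    quicksort(a, low, upper - 1);                      // sort the left part
    quicksort(a, upper + 1, high);                     // sort the right part
}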
9. MERGE SORT
The most common sorting technique used in external sorting is merge sort. Merging is
the process of combining two or more sorted files into a third sorted file. Merge sort uses
the technique of merging two sorted lists. Divide and conquer is the general algorithm
design strategy used for merge sort. Merge sort has three steps to sort an input sequence S
with n elements.
o Divide – Partition S into two sequences S1 and S2 of about n/2 elements
o Recur – Recursively sort S1 and S2
o Conquer – Merge S1 and S2 into a sorted sequence
Algorithm
1. If (size == 1)
   return S
2. Find the middle point to divide the array into two halves:
   middle = (lower + upper) / 2
3. Call merge sort for the first half:
   mergesort(array, lower, middle)
4. Call merge sort for the second half:
   mergesort(array, middle+1, upper)
5. Merge the two halves sorted in steps 3 and 4:
   merge(array, lower, middle, upper)
For example, consider the array 10 9 18 3 16 25 5 (index 0 to 6).
Divide: (10 9 18 3) (16 25 5)
Divide: (10 9) (18 3) (16 25) (5)
Divide: (10) (9) (18) (3) (16) (25) (5)
Merge:  (9 10) (3 18) (16 25) (5)
Merge:  (3 9 10 18) (5 16 25)
Merge:  3 5 9 10 16 18 25
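A minimal C++ sketch of merge sort following these steps; the fixed-size temporary array (matching the a[50] arrays used in the lab programs) and the names are illustrative assumptions.
// Merge the two sorted halves a[lower..middle] and a[middle+1..upper].
void merge(int a[], int lower, int middle, int upper)
{
    int temp[50];                         // assumed maximum size
    int i = lower, j = middle + 1, k = 0;
    while (i <= middle && j <= upper)     // take the smaller front element each time
    {
        if (a[i] <= a[j])
            temp[k++] = a[i++];
        else
            temp[k++] = a[j++];
    }
    while (i <= middle)                   // copy any remaining elements of the left half
        temp[k++] = a[i++];
    while (j <= upper)                    // copy any remaining elements of the right half
        temp[k++] = a[j++];
    for (i = lower, k = 0; i <= upper; i++, k++)
        a[i] = temp[k];                   // copy the merged result back
}

void mergesort(int a[], int lower, int upper)
{
    if (lower >= upper)
        return;                           // one element is already sorted
    int middle = (lower + upper) / 2;
    mergesort(a, lower, middle);          // sort the first half
    mergesort(a, middle + 1, upper);      // sort the second half
    merge(a, lower, middle, upper);       // merge the two sorted halves
}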
CHAPTER – 2
HEAPS
1. CONCEPT OF HEAPS
Heap sort is a sorting technique based on a data structure called a heap. A heap is a
binary tree having the following properties:
1. It is a complete binary tree, that is, each level of the tree is completely filled, except
possibly the bottom level, where it is filled from left to right.
2. It satisfies the heap-order property, that is, the key value of each node is greater
than or equal to the key values of its children, or the key value of each node is less
than or equal to the key values of its children.
The following binary trees are heaps.
• A max-heap with root 50 and children 45 and 40, where 45 has children 30 and 38, and
40 has children 32 and 10.
• A min-heap with root 50 and children 70 and 80, where 70 has children 78 and 83.
Min – Heap:
In a Min – Heap the key value of each node is less than or equal to the key values of its
children. The second tree above is a min-heap.
Max – Heap:
In a Max – Heap the key value of each node is greater than or equal to the key values of its
children. In general, the term heap refers to a max-heap.
For example, the first tree above (root 50, children 45 and 40, leaves 30, 38, 32 and 10)
is a max-heap.
2. IMPLEMENTATION OF HEAP
Like binary trees, heaps can be implemented using arrays or linked lists.
Implementing a heap using an array is an easy task.
We simply number the nodes in the heap from top to bottom, and from left to right
at each level. The root of the tree is numbered 0 and every ith node is stored in the ith
location of the array: the root is stored at index 0, its left child at index 1, its right child
at index 2, and so on. To represent a heap of height n, a one-dimensional array of size
2^(n+1) - 1 is required.
The following rules can be used to decide the location of any ith node of a tree with
n nodes (a small code sketch follows the list):
• The parent of node i is at index (i-1)/2; if i = 0, then it is the root and has no parent.
• The left child of node i is at index 2i+1; if 2i+1 >= n, then node i has no left child.
• The right child of node i is at index 2i+2; if 2i+2 >= n, then node i has no right child.
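A minimal C++ sketch of these index rules, where n is the number of nodes currently stored in the heap array; the function names are illustrative assumptions.
// Index calculations for an array-based heap with n nodes; -1 means "does not exist".
int parent(int i)            { return (i == 0) ? -1 : (i - 1) / 2; }
int leftChild(int i, int n)  { return (2 * i + 1 < n) ? 2 * i + 1 : -1; }
int rightChild(int i, int n) { return (2 * i + 2 < n) ? 2 * i + 2 : -1; }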
Let us consider the heap tree with root 50, children 45 and 40, and leaves 30, 38, 32 and 10.
The implementation of this heap tree requires a one-dimensional array of size 7:
Index   0    1    2    3    4    5    6
Value   50   45   40   30   38   32   10
3. HEAP ABSTRACT DATA TYPE
The basic heap operations are reheapUp, reheapDown, insert and delete.
ReheapUp
When the last node of a complete binary tree violates the heap-order property with
respect to its parent, the reheapUp operation repairs the heap by repeatedly exchanging the
node with its parent until the property is restored.
For example, consider the heap with root 45, children 38 and 40, and leaves 27, 22 and
30, and suppose 44 is added as the new last node (the right child of 40). Here, 44 is greater
than its parent 40, hence it is an invalid heap. We therefore exchange 40 and 44 and call
reheapUp again to test the current position in the heap. The node is now at the correct
position, so the operation stops, and we obtain the heap with root 45, children 38 and 44,
and leaves 27, 22, 30 and 40.
ReheapDown
When we have a complete binary tree that satisfies the heap-order-property except
in the root position, we need the reheapDown operation. Reheapdown operation is
required when the root is deleted from the tree, leaving two disjointed heaps.
The reheapDown operation compares the root with its children and selects the larger of
the two to exchange with the root; this continues down the tree until the heap-order
property is restored.
Let us consider a broken heap with root 32, children 38 and 40, and leaves 27, 22 and 30.
Here, the root 32 is smaller than its children, so it is not a heap. We apply the
reheapDown operation to create a heap: we compare the two children and select the larger
of the two, which is 40, to exchange with the root. After exchanging 32 and 40 we obtain
the heap with root 40, children 38 and 32, and leaves 27, 22 and 30.
Insert:
The insert operation places a new element in the last empty position of the heap and then
calls reheapUp to restore the heap-order property.
Let us consider the heap with root 45, children 38 and 40, and leaves 27, 22 and 30:
Index   0    1    2    3    4    5
Value   45   38   40   27   22   30
Let us consider that the element 44 is to be inserted. Initially, 44 is stored at the last empty
location (index 6). Now the heap is as follows:
Index   0    1    2    3    4    5    6
Value   45   38   40   27   22   30   44
Here, 44 is greater than its parent 40, hence it is an invalid heap. We therefore
exchange 40 and 44 and call reheapUp to test the current position in the heap. The node is
now at the correct position, so the operation stops. We get the heap shown below.
Index   0    1    2    3    4    5    6
Value   45   38   44   27   22   30   40
Delete:
The delete operation removes the root element from the tree, leaving the heap
without a root. To reconstruct the heap, we move the data in the last heap node to
the root and perform reheapDown.
Let us consider the heap tree with root 45, children 42 and 44, and leaves 27, 22, 30 and 40:
Index   0    1    2    3    4    5    6
Value   45   42   44   27   22   30   40
When the delete operation is performed, the element at the root, i.e. 45, is deleted and
the last node 40 is moved to the root. Now the heap is as follows:
Index   0    1    2    3    4    5
Value   40   42   44   27   22   30
Here, 40 is smaller than its children, so it is not a heap. Now reheapDown is performed
to reconstruct the heap. The reconstructed heap is shown below.
Index   0    1    2    3    4    5
Value   44   42   40   27   22   30
4. HEAP SORT
The steps of heap sort are as follows:
1. Build the heap tree.
2. Repeatedly perform the delete-heap operation, storing each deleted element at the
end of the heap array.
If we want the elements to be sorted in ascending order, we need to build the
Max Heap tree.
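As an illustration, the following is a minimal C++ sketch of heap sort. It builds the max-heap bottom-up using reheapDown (rather than by the repeated insertions shown in the walkthrough below), and the function and variable names are illustrative assumptions.
// Repair the subtree rooted at 'root', assuming its children are already heaps.
void reheapDown(int a[], int root, int last)
{
    int largest = root;
    int left = 2 * root + 1, right = 2 * root + 2;
    if (left <= last && a[left] > a[largest])
        largest = left;                        // left child is larger
    if (right <= last && a[right] > a[largest])
        largest = right;                       // right child is larger
    if (largest != root)
    {
        int t = a[root]; a[root] = a[largest]; a[largest] = t;
        reheapDown(a, largest, last);          // continue repairing downwards
    }
}

void heapsort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        reheapDown(a, i, n - 1);               // build the max-heap
    for (int i = n - 1; i > 0; i--)
    {
        int t = a[0]; a[0] = a[i]; a[i] = t;   // move the current maximum to the end
        reheapDown(a, 0, i - 1);               // restore the heap for the remaining elements
    }
}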
Consider the array as 12,18,5,7,25,20
Build the Heap Tree:
The heap tree for the above array is built by inserting the elements one by one and
applying reheapUp after each insertion:
Add 12: 12
Add 18: 18 is greater than its parent 12, reheapUp → 18 12
Add 5:  18 12 5
Add 7:  18 12 5 7
Add 25: 25 is greater than 12 and then greater than 18, reheapUp → 25 18 5 7 12
Add 20: 20 is greater than its parent 5, reheapUp → 25 18 20 7 12 5
The final heap as an array is:
Index   0    1    2    3    4    5
Value   25   18   20   7    12   5
Heap sort:
The following steps illustrate sorting by performing the delete operation until the heap is
empty. After each deletion, the deleted (maximum) element is stored at the end of the array
and reheapDown is applied to the remaining elements.

Delete top element 25:
Delete the root element 25, move the last element 5 to the top and place 25 at the last position.
Index   0    1    2    3    4    5
Value   5    18   20   7    12   25
Perform reheapDown to reconstruct the heap:
Value   20   18   5    7    12   25

Delete top element 20:
Delete the root element 20, move the last heap element 12 to the top and place 20 at the
last heap position.
Value   12   18   5    7    20   25
Perform reheapDown to reconstruct the heap:
Value   18   12   5    7    20   25

Delete top element 18:
Delete the root element 18, move the last heap element 7 to the top and place 18 at the
last heap position.
Value   7    12   5    18   20   25
Perform reheapDown to reconstruct the heap:
Value   12   7    5    18   20   25

Delete top element 12:
Delete the root element 12, move the last heap element 5 to the top and place 12 at the
last heap position.
Value   5    7    12   18   20   25
Perform reheapDown to reconstruct the heap:
Value   7    5    12   18   20   25

Delete top element 7:
Delete the root element 7 and place it after 5. The heap is now empty and the array is sorted:
Index   0    1    2    3    4    5
Value   5    7    12   18   20   25
7. A program to implement insert and deletion operations on a Circular Queue using an array
#include<iostream.h>
class CQueue
{
int q[50],n,f,r,c;
public:
CQueue(int x)
{
n=x;
f=r=-1;
c=0;
}
void insert(int);
void deletion();
void display();
};
void CQueue :: insert(int x)
{
    if(c==n)
        cout<<"Queue is full";
    else
    {
        r++;
        q[r%n]=x;                         // store at the next position, wrapping around
        c++;
    }
}
void CQueue :: deletion()
{
    if(c==0)
        cout<<"Queue is empty";
    else
    {
        f++;
        cout<<"Deleted "<<q[f%n];         // remove from the front, wrapping around
        c--;
    }
}
void CQueue :: display()
{
    if(c==0)
        cout<<"Queue is empty";
    else
    {
        cout<<"Queue elements are\n";
        for(int i=1;i<=c;i++)
            cout<<q[(f+i)%n]<<"\t";       // print the c elements following the front index
    }
    cout<<endl;
}
void main()
{
CQueue q(5);
q.insert(10);
q.insert(20);
q.insert(30);
q.display();
q.deletion();
q.display();
}
void Single::display()
{
Node *temp;
temp=Head;
if(temp==NULL)
{
cout<<"Empty List";
}
else
{
while(temp!=NULL)
{
cout<< temp->data <<"\t";
temp=temp->link;
}
}
cout<<endl;
}
void Single::deletion()
{
Node *temp;
temp=Head;
if(temp==NULL)
cout<<"Empty List";
else
{
Head=temp->link;
cout<<"Deleted"<<temp->data<<endl;
delete temp;
}
}
void main()
{
Single s;
s.insert(10);
s.insert(30);
s.display();
s.insert(60);
s.insert(70);
s.insert(80);
s.display();
s.deletion();
s.display();
}
9. A program to implement Push and Pop operations on a Stack using a linked list
#include<iostream.h>
class node
{
public:
int data;
node *link;
};
class Lstack
{
node *top;
public:
Lstack()
{
top=NULL;
}
void push(int);
void pop();
void display();
};
void Lstack::push(int x)
{
node *temp=new node;
temp->data=x;
temp->link=top;
top=temp;
}
void Lstack::pop()
{
    if(top==NULL)
        cout<<"stack is empty"<<endl;
    else
    {
        node *temp=top;
        cout<<"deleted data is "<< top->data <<endl;
        top=top->link;
        delete temp;                      // free the removed node
    }
}
void Lstack::display()
{ node *temp=top;
if(temp==NULL)
cout<<"stack is empty";
else
{
cout<<"Stack elements are\n";
while(temp!=NULL)
{ cout<<temp->data<<"\t";
temp=temp->link;
}
}
cout<<endl;
}
void main()
{
Lstack s;
s.push(10);
s.push(20);
s.push(30);
s.display();
s.pop();
s.display();
}
10. A program to implement insert and deletion operations on a Queue using a linked list
#include<iostream.h>
class node
{ public:
int data;
node *link;
};
class Lqueue
{
node *front, *rear;
public:
Lqueue()
{
front=rear=NULL;
}
void insert(int x);
void deletion();
void display();
};
void Lqueue::insert(int x)
{
node *temp=new node;
temp->data=x;
temp->link=NULL;
if(rear==NULL)
{ front=temp;
rear=temp;
}
else
{ rear->link=temp;
rear=temp;
}
}
void Lqueue::deletion()
{
    if(rear==NULL)
        cout<<"Queue is empty"<<endl;
    else
    {
        node *temp=front;
        cout<<"deleted element is \t"<<front->data<<endl;
        front=front->link;
        if(front==NULL)
            rear=NULL;
        delete temp;                      // free the removed node
    }
}
void Lqueue::display()
{
if(rear==NULL)
cout<<"Queue is empty"<<endl;
else
{
cout<<"Queue elements are\n";
node *temp=front;
while(temp!=NULL)
{ cout<<temp->data<<"\t";
temp=temp->link;
}
}
cout<<endl;
}
void main()
{
Lqueue q;
q.insert(10);
q.insert(20);
q.insert(30);
q.insert(40);
q.display();
q.deletion();
q.display();
}
else
cout<<k<<"is not found"<<endl;
}
void main()
{ int a[50],n,k;
cout<<"Enter size of the array\n";
cin>>n;
cout<<"Enter "<<n<<"values"<<endl;
for(int i=0;i<n;i++)
cin>>a[i];
cout<<"Enter element to search"<<endl;
cin>>k;
seqsearch(a,n,k);
}
13. A program to sort the given list of numbers in ascending order using bubble sort
#include<iostream.h>
void main()
{
int a[50],n,i,j;
cout<<"Enter size of the array\n";
cin>>n;
cout<<"Enter "<<n<<"values"<<endl;
for(i=0;i<n;i++)
cin>>a[i];
for(i=1;i<n;i++)
{
for(j=0;j<n-i;j++)
{
if(a[j]>a[j+1])
{
int t=a[j];
a[j]=a[j+1];
a[j+1]=t;
}
}
}
cout<<"sorted List is\n";
for(i=0;i<n;i++)
cout<<a[i]<<endl;
}
14. A program to sort the given list of numbers in ascending order using selection sort
#include<iostream.h>
void main()
{
int a[50],n,i,j;
cout<<"Enter size of the array\n";
cin>>n;
cout<<"Enter "<<n<<"values"<<endl;
for(i=0;i<n;i++)
cin>>a[i];
for(i=0;i<n-1;i++)
{
for(j=i+1;j<n;j++)
{
if(a[i]>a[j])
{
int t=a[i];
a[i]=a[j];
a[j]=t;
}
}
}
cout<<"sorted List is\n";
for(i=0;i<n;i++)
cout<<a[i]<<endl;
}
15. A program to sort the given list of numbers in ascending order using insertion sort
#include<iostream.h>
void main()
{
int a[50],n,i,j,k,t;
cout<<"Enter size of the array\n";
cin>>n;
cout<<"Enter "<<n<<"values"<<endl;
for(i=0;i<n;i++)
cin>>a[i];
for(i=1;i<n;i++)
{ for(j=0;j<i;j++)
{
if(a[j]>a[i])
{
t=a[j];
a[j]=a[i];
for(k=i;k>j;k--)
{
a[k]=a[k-1];
}
a[k+1]=t;
}
}
}
cout<<"sorted List is\n";
for(i=0;i<n;i++)
cout<<a[i]<<endl;
}
* * *
FACULTY OF SCIENCE
B.Sc. (CBCS) II-Year (III-Sem) Backlog Examinations, Sep/Oct 2021
Computer Science-III
(Data Structures Using C++)
Time: 2 Hours                                          Max. Marks: 80
Answer any four questions from the following.          (4 x 20 = 80 Marks)
Describe the detailed process of evaluating a postfix expression.
Explain the concept of Circular Queue and its operations using a C++ program.
Write about recursive function calls using a C++ example program and illustrations.
Develop an algorithm for preorder tree traversal along with neat step-by-step illustrations.
Write an algorithm for Quick Sort and demonstrate it using sample data elements.
3. Explain the concept of Queue and its operations using a C++ program.
4. Write about single linked list and its operations using C++ code snippets.
5. Develop an algorithm for inorder tree traversal along with neat step-by-step illustrations.
7. Write an algorithm for Merge Sort and demonstrate it using sample data elements.
(4 x 20 = 80 Marks)
4. Explain the concept of linked list ADT and the data structure of a node.
7. Write the algorithm and program for merge sort with an example.
FACULTY OF SCIENCE
B.Sc. II Year (III Semester) Regular Examinations, November/December 2017
COMPUTER SCIENCE
(Data Structures)
[Time: 3 Hours]                                        [Marks: 80]
Part - A (4 x 5 = 20 Marks)
1. Explain about reversing a string.
2. What is the purpose of a stack in implementing a recursive procedure?
3. Construct the binary tree from the following:
   Inorder: 8, 4, 10, 9, 11, 2, 5, 1, 6, 3, 7
   Find the postorder traversal.
4. Sort the elements using Quick Sort: 52, 38, 81, 22, 48, 13, 65, 93
Part - B (4 x 15 = 60 Marks)
5. (a) Write short notes on the following:
   (i) Memory representation and address calculation of 1-D and 2-D arrays
   (ii) Different types of Data Structures
Or
   (b) Explain the properties of a binary search tree. Construct a binary search tree for the
   following: 12, 47, 88, 57, 85, 43, 15, 20
8. (a) Explain and write an algorithm for merge sorting. Sort the elements using merge
   sort: 38, 52, 13, 60, 42, 13, 41, 78, 85
Or