MP-ANS
Section – A
(10 × 2 = 20 Marks)
2. What is recursion?
Recursion is a programming technique in which a function calls itself, directly or indirectly, to
solve a problem by breaking it into smaller instances of the same problem.
Every recursive function needs a base case that stops the chain of calls.
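A minimal illustrative C example (not part of the original answer), computing a factorial recursively:
C
#include <stdio.h>

// Illustrative sketch; function name and sample value are assumptions
// Recursive factorial: the base case (n <= 1) stops the recursion
long factorial(int n) {
    if (n <= 1) return 1;            // base case
    return n * factorial(n - 1);     // recursive case: the function calls itself
}

int main() {
    printf("5! = %ld\n", factorial(5));   // prints 5! = 120
    return 0;
}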
3. Define a stack.
A stack is a linear data structure that follows the Last-In, First-Out (LIFO) principle.
Elements are added and removed only from one end, called the "top" of the stack.
4. Define a binary search tree.
A binary search tree (BST) is a rooted binary tree data structure where each node has at most
two children, referred to as the left child and the right child.
It maintains the property that for any given node, all values in its left subtree are less than the
node's value, and all values in its right subtree are greater than the node's value.
5. Define a graph.
A graph is a non-linear data structure consisting of a set of vertices (or nodes) and a set of
edges (or arcs) connecting pairs of these vertices.
It is used to represent relationships between discrete objects.
6. What is a circular queue?
A circular queue is a linear data structure that operates similarly to a regular queue but with
the last element connected to the first element, forming a circle.
This arrangement allows for efficient use of memory by reusing dequeued spaces at the front
of the queue.
7. Differentiate between a stack and a queue.
Stack: Follows LIFO (Last-In, First-Out) principle. Operations are Push (add) and Pop
(remove) from the top.
Queue: Follows FIFO (First-In, First-Out) principle. Operations are Enqueue (add) at the rear
and Dequeue (remove) from the front.
8. What is a non-linear data structure?
A non-linear data structure is one where data elements are not arranged sequentially or
linearly.
Each element can be connected to multiple other elements, allowing for complex
relationships (e.g., trees, graphs).
9. List the binary tree traversal methods.
In-order Traversal: Visits the left subtree, then the root, then the right subtree (Left-Root-
Right).
Pre-order Traversal: Visits the root, then the left subtree, then the right subtree (Root-Left-
Right).
Post-order Traversal: Visits the left subtree, then the right subtree, then the root (Left-
Right-Root).
10. What is a priority queue?
A priority queue is an abstract data type similar to a regular queue, but each element has a
"priority" associated with it.
Elements with higher priority are served before elements with lower priority, or elements
with the same priority are served according to their order in the queue.
11. What is dynamic memory allocation?
Dynamic memory allocation is the process of allocating memory at runtime (during program
execution) rather than at compile time.
This allows programs to manage memory more flexibly, requesting memory as needed and
releasing it when no longer required (e.g., using malloc(), calloc(), realloc(), free() in
C).
Section – B
(6 × 5 = 30 Marks)
13. Explain tree traversal methods (Inorder, Preorder, Postorder) with an example.
Key Points:
Tree traversal refers to the process of visiting each node in a tree data structure exactly once.
There are three common depth-first traversal methods.
Example Tree:
        A
       / \
      B   C
     / \
    D   E
1. Inorder Traversal (Left -> Root -> Right)
o Explanation: Traverse the left subtree, then visit the root node, and finally traverse the right
subtree. For Binary Search Trees, Inorder traversal yields elements in non-decreasing order.
o Output for Example: D B E A C
2. Preorder Traversal (Root -> Left -> Right)
o Explanation: Visit the root node first, then traverse the left subtree, and finally traverse the
right subtree. This is useful for creating a copy of the tree or for prefix expressions.
o Output for Example: A B D E C
3. Postorder Traversal (Left -> Right -> Root)
o Explanation: Traverse the left subtree, then traverse the right subtree, and finally visit the
root node. This is useful for deleting a tree or for postfix expressions.
o Output for Example: D E B C A
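The following C sketch is an illustrative addition: it builds the example tree above and prints the three traversal orders (the node structure and function names are assumptions, not from the question paper).
C
#include <stdio.h>
#include <stdlib.h>

// Illustrative sketch; node layout and helper names are assumptions
typedef struct Node {
    char data;
    struct Node *left, *right;
} Node;

Node *newNode(char data) {
    Node *n = malloc(sizeof(Node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

void inorder(Node *r)   { if (r) { inorder(r->left);  printf("%c ", r->data); inorder(r->right); } }
void preorder(Node *r)  { if (r) { printf("%c ", r->data); preorder(r->left); preorder(r->right); } }
void postorder(Node *r) { if (r) { postorder(r->left); postorder(r->right); printf("%c ", r->data); } }

int main() {
    // Build the example tree: A is the root, B and C its children, D and E under B
    Node *root = newNode('A');
    root->left = newNode('B');
    root->right = newNode('C');
    root->left->left = newNode('D');
    root->left->right = newNode('E');

    printf("Inorder:   "); inorder(root);   printf("\n");   // D B E A C
    printf("Preorder:  "); preorder(root);  printf("\n");   // A B D E C
    printf("Postorder: "); postorder(root); printf("\n");   // D E B C A
    return 0;
}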
14. Explain the Bubble Sort algorithm with its pseudocode and time complexity.
Key Points:
Definition: Bubble sort is a simple sorting algorithm that repeatedly steps through the list,
compares adjacent elements, and swaps them if they are in the wrong order. The pass through
the list is repeated until no swaps are needed, which indicates that the list is sorted.
Working Principle: In each pass, the largest unsorted element "bubbles up" to its correct
position at the end of the unsorted portion.
Algorithm (Pseudocode):
BubbleSort(arr, n)
    FOR i FROM 0 TO n-2 DO                  // Outer loop for passes
        swapped = FALSE
        FOR j FROM 0 TO n-2-i DO            // Inner loop for comparisons and swaps
            IF arr[j] > arr[j+1] THEN
                // Swap arr[j] and arr[j+1]
                temp = arr[j]
                arr[j] = arr[j+1]
                arr[j+1] = temp
                swapped = TRUE
            END IF
        END FOR
        IF swapped == FALSE THEN            // Optimization: if no swaps in a pass, the array is sorted
            BREAK
        END IF
    END FOR
END BubbleSort
Explanation:
1. The outer loop runs n-1 times (where n is the number of elements), representing the number
of passes needed.
2. The inner loop iterates through the unsorted part of the array. The n-2-i ensures that already
sorted elements at the end are not re-compared.
3. It compares arr[j] with arr[j+1]. If arr[j] is greater, they are swapped.
4. A swapped flag is used for optimization: if a pass completes without any swaps, it means the
array is already sorted, and the algorithm can terminate early.
Time Complexity:
o Best Case: O(n) (already sorted)
o Average Case: O(n^2)
o Worst Case: O(n^2)
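An illustrative C implementation of the pseudocode above (the sample array in main() is an assumption):
C
#include <stdio.h>

// Illustrative sketch of the pseudocode above; sample data is an assumption
// Bubble sort with the early-exit optimization
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {      // adjacent pair out of order
                int temp = arr[j];          // swap arr[j] and arr[j+1]
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped) break;                // no swaps in this pass: already sorted
    }
}

int main() {
    int a[] = {5, 1, 4, 2, 8};
    int n = sizeof(a) / sizeof(a[0]);
    bubbleSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   // prints 1 2 4 5 8
    printf("\n");
    return 0;
}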
15. Explain the working of a circular queue with a diagram.
Key Points:
Concept: A circular queue is a linear data structure, similar to a regular queue, but with the
last element connected to the first element, forming a circle. This allows for more efficient
use of memory space.
Pointers: It uses two pointers: front (or head) and rear (or tail).
o front points to the starting element of the queue.
o rear points to the last element of the queue.
Working:
o When an element is enqueued, rear is incremented circularly.
o When an element is dequeued, front is incremented circularly.
o The front and rear pointers "wrap around" to the beginning of the array when they reach
the end.
Conditions:
o Queue Empty: front == -1 (initial state) or front == rear + 1 (after all elements are
dequeued and rear wraps around). Some implementations use front == rear and
differentiate with a counter or boolean flag.
o Queue Full: When the next position for rear (calculated circularly) becomes equal to front.
For example, (rear + 1) % MAX_SIZE == front.
Advantages:
o Efficient memory utilization as unused slots at the beginning of the array can be reused.
o Avoids the "queue full" problem of linear queues where rear might reach the end of the
array even if front has advanced, leaving empty space at the beginning.
Diagram:
Initial Empty Queue (Size 5):
[ ] [ ] [ ] [ ] [ ]
0 1 2 3 4
front = -1, rear = -1
After Enqueue(10), Enqueue(20), Enqueue(30):
[10][20][30][ ] [ ]
0 1 2 3 4
front = 0, rear = 2
After Dequeue() (10 is removed):
[ ] [20][30][ ] [ ]
0 1 2 3 4
front = 1, rear = 2
After Enqueue(40), Enqueue(50):
[ ] [20][30][40][50]
0 1 2 3 4
front = 1, rear = 4
After Enqueue(60) (Wraps around):
[60][20][30][40][50]
0 1 2 3 4
front = 1, rear = 0 (wrapped)
Queue Full Condition:
The queue above (front = 1, rear = 0) is now full. If we try to Enqueue(70):
(rear + 1) % MAX_SIZE = (0 + 1) % 5 = 1, which is equal to front, so "Queue Overflow" is
reported and the insertion is rejected.
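An illustrative C sketch of the circular enqueue and dequeue operations described above, using the same MAX_SIZE of 5 and the same full/empty conditions (function names and the calls in main() are assumptions):
C
#include <stdio.h>

#define MAX_SIZE 5

// Illustrative sketch; array and function names are assumptions
int cq[MAX_SIZE];
int front = -1, rear = -1;

void enqueue(int item) {
    if ((rear + 1) % MAX_SIZE == front) {   // queue full
        printf("Queue Overflow\n");
        return;
    }
    if (front == -1) front = 0;             // first element
    rear = (rear + 1) % MAX_SIZE;           // circular increment
    cq[rear] = item;
}

int dequeue(void) {
    if (front == -1) {                      // queue empty
        printf("Queue Underflow\n");
        return -1;
    }
    int item = cq[front];
    if (front == rear)                      // last element removed: reset
        front = rear = -1;
    else
        front = (front + 1) % MAX_SIZE;     // circular increment
    return item;
}

int main() {
    enqueue(10); enqueue(20); enqueue(30);
    printf("Dequeued: %d\n", dequeue());    // 10
    enqueue(40); enqueue(50); enqueue(60);  // 60 wraps around to index 0
    enqueue(70);                            // reports Queue Overflow: queue is full
    return 0;
}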
16. Explain the dynamic memory allocation functions in C with examples.
Key Points:
Concept: Dynamic memory allocation in C allows programs to request and release memory
during runtime. This is crucial when the size of data structures is not known at compile time
or when memory needs to vary.
Header: These functions are typically declared in the <stdlib.h> header file.
Functions:
1. malloc() (Memory Allocation)
Purpose: Allocates a block of memory of a specified size in bytes. It returns a void* pointer
to the beginning of the allocated block, or NULL if allocation fails. The allocated memory is
uninitialized (contains garbage values).
Syntax: void* malloc(size_t size);
Example:
C
int *ptr;
ptr = (int *) malloc(5 * sizeof(int));  // Allocates space for 5 integers
if (ptr == NULL) { /* handle error */ }
// Use ptr to access memory
free(ptr);  // Release memory
2. calloc() (Contiguous Allocation)
Purpose: Allocates memory for an array of elements (a given count, each of a given size) and
initializes all bytes to zero. It returns a void* pointer to the block, or NULL if allocation fails.
Syntax: void* calloc(size_t num, size_t size);
Example:
C
int *arr;
arr = (int *) calloc(10, sizeof(int));  // Allocates space for 10 integers, all initialized to 0
if (arr == NULL) { /* handle error */ }
// Use arr
free(arr);  // Release memory
3. realloc() (Re-allocation)
Purpose: Changes the size of an already allocated block of memory. It can either extend or
shrink the block. The contents of the old block are preserved up to the minimum of the old
and new sizes. Returns void* or NULL.
Syntax: void* realloc(void* ptr, size_t new_size);
Example:
C
int *ptr;
ptr = (int *) malloc(5 * sizeof(int)); // Allocate for 5 ints
// ... use ptr ...
ptr = (int *) realloc(ptr, 10 * sizeof(int)); // Reallocate for 10 ints
if (ptr == NULL) { /* handle error */ }
// Use updated ptr
free(ptr); // Release memory
4. free() (Deallocation)
Purpose: Deallocates the memory block previously allocated by malloc(), calloc(), or
realloc(). Releasing memory is crucial to prevent memory leaks.
Syntax: void free(void* ptr);
Example: (See examples above for malloc, calloc, realloc where free is used)
17. Explain the adjacency matrix representation of a graph with an example.
Key Points:
Definition: An adjacency matrix is a square matrix used to represent a finite graph. The rows
and columns are labeled with the graph's vertices.
Structure: If a graph has n vertices, its adjacency matrix will be an n x n matrix, say Adj.
Representation for Undirected Graph:
o Adj[i][j] = 1 if there is an edge between vertex i and vertex j.
o Adj[i][j] = 0 if there is no edge between vertex i and vertex j.
o For an undirected graph, the matrix is symmetric (Adj[i][j] == Adj[j][i]).
Representation for Directed Graph:
o Adj[i][j] = 1 if there is a directed edge from vertex i to vertex j.
o Adj[i][j] = 0 if there is no directed edge from vertex i to vertex j.
o The matrix may not be symmetric.
Representation for Weighted Graph:
o Instead of 1, Adj[i][j] stores the weight of the edge between vertex i and vertex j.
o 0 or infinity (or a very large number) can be used to indicate no edge.
Example (Undirected Graph):
Vertices: 0, 1, 2, 3
Edges: (0,1), (0,2), (1,2), (2,3)
Adjacency Matrix (4x4):
        0  1  2  3
    0 | 0  1  1  0
    1 | 1  0  1  0
    2 | 1  1  0  1
    3 | 0  0  1  0
Advantages:
o Simple to implement and understand.
o Checking for an edge between two specific vertices (e.g., Adj[i][j]) is O(1) time
complexity.
o Adding or removing an edge is O(1).
Disadvantages:
o Space Complexity: O(V^2) where V is the number of vertices. This can be very inefficient
for sparse graphs (graphs with few edges) as it stores many zeros.
o Finding all neighbors of a vertex requires iterating through an entire row or column (O(V)
time).
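An illustrative C sketch (not part of the original answer) that builds the adjacency matrix for the example undirected graph above from its edge list and prints it:
C
#include <stdio.h>

#define V 4   // number of vertices: 0, 1, 2, 3

int main() {
    // Illustrative sketch; variable names are assumptions
    int adj[V][V] = {0};                      // all entries start at 0 (no edge)
    int edges[][2] = { {0,1}, {0,2}, {1,2}, {2,3} };
    int e = sizeof(edges) / sizeof(edges[0]);

    for (int k = 0; k < e; k++) {             // undirected: set both Adj[i][j] and Adj[j][i]
        int i = edges[k][0], j = edges[k][1];
        adj[i][j] = 1;
        adj[j][i] = 1;
    }

    printf("Adjacency matrix:\n");
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }

    // Checking for an edge is O(1):
    printf("Edge between 1 and 3? %s\n", adj[1][3] ? "yes" : "no");   // no
    return 0;
}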
18. Write a C program to evaluate a postfix expression.
Key Points:
Concept: Postfix expressions (Reverse Polish Notation) are evaluated using a stack.
Operands are pushed onto the stack, and when an operator is encountered, the top two
operands are popped, the operation is performed, and the result is pushed back onto the stack.
Algorithm:
1. Create an empty stack.
2. Scan the postfix expression from left to right.
3. If the scanned character is an operand, push it onto the stack.
4. If the scanned character is an operator:
Pop two operands from the stack (operand2 first, then operand1).
Perform the operation (operand1 operator operand2).
Push the result back onto the stack.
5. After scanning the entire expression, the value remaining on the stack is the result.
C Program:
C
#include <stdio.h>
#include <ctype.h>   // For isdigit, isspace
#define MAX_STACK_SIZE 100
// Stack structure
typedef struct {
    int items[MAX_STACK_SIZE];
    int top;
} Stack;
void push(Stack *s, int value) { s->items[++(s->top)] = value; }
int pop(Stack *s) { return s->items[(s->top)--]; }
// Evaluates a postfix expression with single-digit operands
int evaluatePostfix(const char *expr) {
    Stack s; s.top = -1;
    for (int i = 0; expr[i] != '\0'; i++) {
        char c = expr[i];
        if (isspace((unsigned char)c)) continue;   // skip blanks
        if (isdigit((unsigned char)c)) {
            push(&s, c - '0');                     // operand: push its value
        } else {                                   // operator: pop two operands, apply, push result
            int op2 = pop(&s), op1 = pop(&s);
            if (c == '+') push(&s, op1 + op2);
            else if (c == '-') push(&s, op1 - op2);
            else if (c == '*') push(&s, op1 * op2);
            else if (c == '/') push(&s, op1 / op2);
        }
    }
    return pop(&s);                                // final result
}
int main() {
    char expression[] = "231*+9-";          // (2 + (3 * 1)) - 9 = 5 - 9 = -4
    // char expression[] = "5 3 + 6 2 / *"; // (5 + 3) * (6 / 2) = 8 * 3 = 24
    printf("%s = %d\n", expression, evaluatePostfix(expression));
    return 0;
}
19. Write the algorithms for insertion and deletion operations on a linear queue.
Key Points:
Concept: A linear queue is a FIFO (First-In, First-Out) data structure where elements are
inserted at one end (rear) and deleted from the other end (front).
Pointers: Uses front and rear pointers.
o front: Points to the first element.
o rear: Points to the last element.
o Initially, front = -1 and rear = -1 for an empty queue.
Assumptions:
o The queue is implemented using an array queue[MAX_SIZE].
o MAX_SIZE is the maximum capacity of the queue.
1. Enqueue (Insertion) Algorithm:
o Purpose: To add an element to the rear of the queue.
o Algorithm:
1. Check for Queue Full (Overflow): If rear == MAX_SIZE - 1, print "Queue Overflow" and
return.
2. Handle Empty Queue: If front == -1 (queue is initially empty), set front = 0.
3. Increment Rear: Increment rear by 1 (rear = rear + 1).
4. Insert Element: Place the ITEM at queue[rear].
o Pseudocode:
ENQUEUE(queue, front, rear, MAX_SIZE, ITEM)
    IF rear == MAX_SIZE - 1 THEN
        PRINT "Queue Overflow"
        RETURN
    END IF

    IF front == -1 THEN
        front = 0
    END IF

    rear = rear + 1
    queue[rear] = ITEM
END ENQUEUE
2. Dequeue (Deletion) Algorithm:
o Purpose: To remove an element from the front of the queue.
o Algorithm:
1. Check for Queue Empty (Underflow): If front == -1 or front > rear, print "Queue
Underflow" and return an error/NULL.
2. Retrieve Element: Get the element at queue[front] into a temporary variable (e.g.,
DELETED_ITEM).
3. Increment Front: Increment front by 1 (front = front + 1).
4. Handle Last Element Deleted: If front > rear after incrementing, it means the last
element has been deleted. Reset front = -1 and rear = -1 to signify an empty queue.
5. Return Element: Return DELETED_ITEM.
o Pseudocode:
DEQUEUE(queue, front, rear)
    IF front == -1 OR front > rear THEN
        PRINT "Queue Underflow"
        RETURN ERROR_VALUE (or NULL)
    END IF

    DELETED_ITEM = queue[front]
    front = front + 1

    IF front > rear THEN    // If queue becomes empty after deletion
        front = -1
        rear = -1
    END IF

    RETURN DELETED_ITEM
END DEQUEUE
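An illustrative C sketch of the two algorithms above (the MAX_SIZE of 5 and the calls in main() are assumptions):
C
#include <stdio.h>

#define MAX_SIZE 5

// Illustrative sketch mirroring the pseudocode; names and sample calls are assumptions
int queue[MAX_SIZE];
int front = -1, rear = -1;

void enqueue(int item) {
    if (rear == MAX_SIZE - 1) {            // overflow check
        printf("Queue Overflow\n");
        return;
    }
    if (front == -1) front = 0;            // first insertion
    queue[++rear] = item;
}

int dequeue(void) {
    if (front == -1 || front > rear) {     // underflow check
        printf("Queue Underflow\n");
        return -1;
    }
    int item = queue[front++];
    if (front > rear) front = rear = -1;   // queue became empty: reset pointers
    return item;
}

int main() {
    enqueue(10); enqueue(20); enqueue(30);
    printf("Dequeued: %d\n", dequeue());   // 10
    printf("Dequeued: %d\n", dequeue());   // 20
    return 0;
}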
Section – C
(3 × 10 = 30 Marks)
22. What is a binary search tree? Explain insertion and deletion operations with an
example and diagrams.
Key Points:
Definition: A Binary Search Tree (BST) is a node-based binary tree data structure that has
the following properties for every node:
o The value of all nodes in the left subtree is less than the value of the node.
o The value of all nodes in the right subtree is greater than the value of the node.
o Both the left and right subtrees must also be binary search trees.
o There are no duplicate nodes (though some definitions allow duplicates, placing them
typically in the right subtree).
Importance: BSTs allow for efficient searching, insertion, and deletion operations with an
average time complexity of O(log n), making them suitable for dynamic datasets.
Insertion Operation
Concept: To insert a new node into a BST, we traverse the tree from the root, comparing the
new node's value with the current node's value to decide whether to go left or right.
Algorithm:
1. If the tree is empty, the new node becomes the root.
2. Otherwise, start from the root.
3. Compare the ITEM to be inserted with the current_node->data:
If ITEM < current_node->data, go to the left child.
If ITEM > current_node->data, go to the right child.
4. Repeat step 3 until you reach a NULL child pointer.
5. Insert the new node as the left or right child of the current_node based on the last
comparison.
Example & Diagram: Insert elements 50, 30, 70, 20, 40, 60, 80 into an empty BST.
1. Insert 50 (Root):
       50
2. Insert 30: (30 < 50, go left; insert as left child of 50)
       50
      /
    30
3. Insert 70: (70 > 50, go right; insert as right child of 50)
       50
      /  \
    30    70
4. Insert 20: (20 < 50, go left; 20 < 30, go left; insert as left child of 30)
         50
        /  \
      30    70
     /
   20
5. Insert 40: (40 < 50, go left; 40 > 30, go right; insert as right child of 30)
         50
        /  \
      30    70
     /  \
   20    40
6. Insert 60: (60 > 50, go right; 60 < 70, go left; insert as left child of 70)
         50
        /  \
      30    70
     /  \   /
   20   40 60
7. Insert 80: (80 > 50, go right; 80 > 70, go right; insert as right child of 70)
         50
        /  \
      30    70
     /  \  /  \
   20  40 60  80
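An illustrative C sketch of the insertion operation described above (the function names are assumptions); inserting the same keys and printing the inorder traversal confirms the BST property:
C
#include <stdio.h>
#include <stdlib.h>

// Illustrative sketch; node layout and helper names are assumptions
typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

Node *newNode(int data) {
    Node *n = malloc(sizeof(Node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

// Recursive BST insertion: go left for smaller keys, right for larger keys
Node *insert(Node *root, int item) {
    if (root == NULL) return newNode(item);       // empty spot found: place the new node here
    if (item < root->data)
        root->left = insert(root->left, item);
    else if (item > root->data)
        root->right = insert(root->right, item);
    return root;                                  // duplicates are ignored
}

void inorder(Node *root) {                        // prints keys in sorted order
    if (root) { inorder(root->left); printf("%d ", root->data); inorder(root->right); }
}

int main() {
    int keys[] = {50, 30, 70, 20, 40, 60, 80};
    Node *root = NULL;
    for (int i = 0; i < 7; i++)
        root = insert(root, keys[i]);
    inorder(root);                                // 20 30 40 50 60 70 80
    printf("\n");
    return 0;
}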
Deletion Operation
Concept: Deleting a node from a BST is more complex as it must maintain the BST
properties. There are three cases for deletion:
1. Case 1: Node to be deleted is a leaf node (no children).
2. Case 2: Node to be deleted has one child.
3. Case 3: Node to be deleted has two children.
Algorithm:
1. Search for the node: First, find the node to be deleted.
2. Case 1 (No children):
Simply remove the node. Set its parent's child pointer to NULL.
3. Case 2 (One child):
Replace the node with its single child. Link the parent of the deleted node to the child of the
deleted node.
4. Case 3 (Two children):
Find the inorder successor (smallest node in the right subtree) OR the inorder predecessor
(largest node in the left subtree).
Copy the value of the inorder successor (or predecessor) to the node that needs to be deleted.
Delete the inorder successor (or predecessor) from its original position (this deletion will be
either Case 1 or Case 2).
Example & Diagram: Consider the BST formed above:
          50
        /    \
      30      70
     /  \    /  \
   20    40 60   80
1. Delete 20 (Case 1: Leaf Node):
   Node 20 has no children. Simply remove it and set 30's left child pointer to NULL.
          50
        /    \
      30      70
        \    /  \
        40  60   80
2. Delete 30 (Case 2: One Child):
   Node 30 now has a single child (40). Link its parent (50) directly to 40.
          50
        /    \
      40      70
             /  \
            60   80
3. Delete 50 (Case 3: Two Children):
   Node 50 has two children. Its inorder successor is 60 (the smallest node in the right subtree).
   Copy 60 into the node, then delete the original node 60, which is now a Case 1 deletion.
          60
        /    \
      40      70
                \
                 80
23. Explain Breadth-First Search (BFS) and Depth-First Search (DFS) with examples.
Key Points:
Graph Traversal: Both BFS and DFS are algorithms used to explore or search a graph (or
tree). They define the order in which nodes are visited.
Breadth-First Search (BFS)
Concept: BFS explores all the neighbor nodes at the current depth level before moving on to
the nodes at the next depth level. It uses a queue data structure to keep track of the nodes to
be visited.
Analogy: Like ripples in a pond, expanding outwards from a central point.
Algorithm:
1. Start by putting any one of the graph's vertices (the source) at the back of a queue.
2. Mark the source vertex as visited.
3. While the queue is not empty:
Dequeue a vertex V.
Visit/process V.
Enqueue all unvisited neighbors of V and mark them as visited.
Example: Consider the following undirected graph. Start BFS from node 'A'.
A -- B
| / |
C -- D
o Vertices: {A, B, C, D}
o Edges: (A,B), (A,C), (B,C), (B,D), (C,D)
BFS Steps (starting from A):
1. Enqueue A and mark it visited. Queue: [A]
2. Dequeue A and visit it. Enqueue its unvisited neighbors B and C and mark them visited. Queue: [B, C]
3. Dequeue B and visit it. Its only unvisited neighbor D is enqueued and marked visited. Queue: [C, D]
4. Dequeue C and visit it. All of its neighbors are already visited. Queue: [D]
5. Dequeue D and visit it. The queue is now empty, so the traversal ends.
Traversal Order: A -> B -> C -> D
Depth-First Search (DFS)
Concept: DFS explores as far as possible along each branch before backtracking. It uses a
stack data structure (or implicitly, the call stack of recursion) to keep track of the path.
Analogy: Like navigating a maze by always going forward until a dead end, then
backtracking to the last choice point to try another path.
Algorithm (Recursive):
1. Mark the current vertex V as visited and process it.
2. For each unvisited neighbor U of V:
Recursively call DFS on U.
Algorithm (Iterative using Stack):
1. Start by pushing any one of the graph's vertices (the source) onto a stack.
2. While the stack is not empty:
Pop a vertex V from the stack.
If V has not been visited:
Mark V as visited and process it.
Push all unvisited neighbors of V onto the stack (order might matter for specific traversal
paths, typically reversed to maintain order).
Example: Consider the same undirected graph. Start DFS from node 'A'.
A -- B
| / |
C -- D
o Vertices: {A, B, C, D}
o Edges: (A,B), (A,C), (B,C), (B,D), (C,D)
DFS Steps (starting from A):
DFS(A):
    Visit A. Mark A visited.
    A's neighbors: B, C.
    DFS(B):
        Visit B. Mark B visited.
        B's neighbors: A (visited), C, D.
        DFS(C):
            Visit C. Mark C visited.
            C's neighbors: A (visited), B (visited), D.
            DFS(D):
                Visit D. Mark D visited.
                D's neighbors: B (visited), C (visited). No unvisited neighbors.
            Return from DFS(D).
        Return from DFS(C).
    Return from DFS(B).
Return from DFS(A).
Traversal Order: A -> B -> C -> D
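An illustrative C sketch (not part of the original answer) of both traversals for the example graph above, using an adjacency matrix; starting from vertex A, both print A B C D:
C
#include <stdio.h>

#define V 4   // vertices A=0, B=1, C=2, D=3

// Illustrative sketch; variable and function names are assumptions
int adj[V][V] = {
    // A  B  C  D
    {  0, 1, 1, 0 },   // A
    {  1, 0, 1, 1 },   // B
    {  1, 1, 0, 1 },   // C
    {  0, 1, 1, 0 }    // D
};
int visited[V];
char name[] = {'A', 'B', 'C', 'D'};

void dfs(int v) {
    visited[v] = 1;
    printf("%c ", name[v]);                 // process the vertex
    for (int u = 0; u < V; u++)             // recurse into each unvisited neighbor
        if (adj[v][u] && !visited[u])
            dfs(u);
}

void bfs(int start) {
    int queue[V], front = 0, rear = 0;
    int seen[V] = {0};
    queue[rear++] = start; seen[start] = 1; // enqueue the source and mark it visited
    while (front < rear) {
        int v = queue[front++];             // dequeue
        printf("%c ", name[v]);             // process
        for (int u = 0; u < V; u++)
            if (adj[v][u] && !seen[u]) {    // enqueue unvisited neighbors
                queue[rear++] = u;
                seen[u] = 1;
            }
    }
}

int main() {
    printf("DFS from A: "); dfs(0);
    printf("\nBFS from A: "); bfs(0);
    printf("\n");
    return 0;
}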
24. Write a menu-driven C program to perform all stack operations using arrays.
Key Points:
Stack Implementation: Using a fixed-size array and a top variable to keep track of the
topmost element.
Operations:
o push(): Adds an element to the top. Checks for overflow.
o pop(): Removes and returns the top element. Checks for underflow.
o peek(): Returns the top element without removing it. Checks for underflow.
o isEmpty(): Checks if the stack is empty.
o isFull(): Checks if the stack is full.
Menu-Driven: Provides options for the user to select desired stack operations.
C Program:
C
#include <stdio.h>
#include <stdlib.h> // For exit()

#define MAX_SIZE 100

int stack[MAX_SIZE];
int top = -1;

int isEmpty() { return top == -1; }
int isFull()  { return top == MAX_SIZE - 1; }

void push(int value) {
    if (isFull()) { printf("\nStack Overflow! Cannot push %d.\n", value); return; }
    stack[++top] = value;
    printf("\n%d pushed onto the stack.\n", value);
}

void pop() {
    if (isEmpty()) { printf("\nStack Underflow! Nothing to pop.\n"); return; }
    printf("\n%d popped from the stack.\n", stack[top--]);
}

void peek() {
    if (isEmpty()) printf("\nStack is empty.\n");
    else printf("\nTop element is %d.\n", stack[top]);
}

void display() {
    if (isEmpty()) { printf("\nStack is empty.\n"); return; }
    printf("\nStack (top to bottom): ");
    for (int i = top; i >= 0; i--) printf("%d ", stack[i]);
    printf("\n");
}

int main() {
    int choice, value;
    while (1) {
        printf("\n----- Stack Menu -----\n");
        printf("1. Push\n2. Pop\n3. Peek\n4. Is Empty?\n5. Is Full?\n6. Display\n7. Exit\n");
        printf("Enter your choice: ");
        scanf("%d", &choice);

        switch (choice) {
            case 1:
                printf("Enter value to push: ");
                scanf("%d", &value);
                push(value);
                break;
            case 2:
                pop();   // pop() prints the removed value itself
                break;
            case 3:
                peek();  // peek() prints the top value itself
                break;
            case 4:
                if (isEmpty()) {
                    printf("\nStack is empty.\n");
                } else {
                    printf("\nStack is NOT empty.\n");
                }
                break;
            case 5:
                if (isFull()) {
                    printf("\nStack is full.\n");
                } else {
                    printf("\nStack is NOT full.\n");
                }
                break;
            case 6:
                display();
                break;
            case 7:
                printf("Exiting program. Goodbye!\n");
                exit(0); // Exit the program
            default:
                printf("Invalid choice. Please try again.\n");
        }
    }
    return 0;
}
25. Explain Stack using Array and Linked List with operations and code.
Stack using Array Implementation
(5 Marks)
Key Points:
1. Definition: A Stack is a linear data structure that follows the LIFO (Last-In, First-
Out) principle for insertion and deletion operations.
2. Array Representation:
o A stack can be implemented using a one-dimensional array.
o It uses a TOP variable (or pointer) to keep track of the topmost element in the stack.
Initially, TOP is often set to -1 (indicating an empty stack).
o The size of the stack is fixed when implemented with an array, meaning it can store
only a predefined number of elements.
3. Operations:
o PUSH (Insertion): Adds an element to the top of the stack.
Algorithm:
Step 1: Check if the stack is full (Stack Overflow condition). If TOP is equal to
MAX_SIZE - 1, the stack is full.
Step 2: If not full, increment TOP.
Step 3: Insert the new ITEM at the Stack[TOP] position.
Pseudocode:
PUSH(Stack, TOP, MAX_SIZE, ITEM)
    IF TOP == MAX_SIZE - 1 THEN
        PRINT "Stack Overflow"
        RETURN
    ELSE
        TOP = TOP + 1
        Stack[TOP] = ITEM
    END IF
END PUSH
o POP (Deletion): Removes the topmost element from the stack.
Algorithm:
Step 1: Check if the stack is empty (Stack Underflow condition). If TOP is equal to -1,
the stack is empty.
Step 2: If not empty, retrieve the element at Stack[TOP].
Step 3: Decrement TOP.
Pseudocode:
POP(Stack, TOP)
    IF TOP == -1 THEN
        PRINT "Stack Underflow"
        RETURN NULL
    ELSE
        ITEM = Stack[TOP]
        TOP = TOP - 1
        RETURN ITEM
    END IF
END POP
o PEEK/TOP: Returns the topmost element without removing it.
o ISEMPTY: Checks if the stack is empty.
o ISFULL: Checks if the stack is full (only for array implementation).
4. Advantages: Simple to implement.
5. Disadvantages: Fixed size (static), leading to potential Stack Overflow if the number
of elements exceeds the array capacity.
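A compact, illustrative C sketch of the array-based PUSH and POP operations above (the MAX_SIZE of 5 and the calls in main() are assumptions; the full menu-driven program in Question 24 covers the remaining operations):
C
#include <stdio.h>

#define MAX_SIZE 5

// Illustrative sketch; names and sample calls are assumptions
int stack[MAX_SIZE];
int top = -1;

void push(int item) {
    if (top == MAX_SIZE - 1) {        // overflow check
        printf("Stack Overflow\n");
        return;
    }
    stack[++top] = item;
}

int pop(void) {
    if (top == -1) {                  // underflow check
        printf("Stack Underflow\n");
        return -1;
    }
    return stack[top--];
}

int main() {
    push(10); push(20); push(30);
    printf("Popped: %d\n", pop());    // 30
    printf("Popped: %d\n", pop());    // 20
    return 0;
}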
Stack using Linked List Implementation
(5 Marks)
Key Points:
1. Definition: A stack can be efficiently implemented using a linked list, where elements
(nodes) are dynamically allocated.
2. Linked List Representation:
o Each node in the linked list contains data and a next pointer (or link) to the next
node.
o A TOP pointer keeps track of the last inserted element (the top of the stack).
o The TOP pointer always points to the first node of the linked list (which represents
the top of the stack).
3. Operations:
o PUSH (Insertion): Inserts a new element as the top element of the stack.
Algorithm:
Step 1: Create a new node PTR and allocate memory for it.
Step 2: Assign the ITEM to PTR->INFO.
Step 3: Set PTR->LINK = TOP.
Step 4: Update TOP = PTR.
Pseudocode:
PUSH(TOP, ITEM)
    PTR = new Node()
    PTR->INFO = ITEM
    PTR->LINK = TOP
    TOP = PTR
END PUSH
o POP (Deletion): Removes the topmost node from the stack (the node pointed to by
TOP).
Algorithm:
Step 1: Check if the stack is empty (Stack Underflow condition). If TOP is NULL, the
stack is empty.
Step 2: If not empty, store the INFO of the TOP node into a temporary variable N.
Step 3: Move TOP to TOP->LINK.
Step 4: Free the memory of the original TOP node.
Pseudocode:
POP(TOP)
    IF TOP == NULL THEN
        PRINT "Stack is Empty (Underflow)"
        RETURN NULL
    ELSE
        N = TOP->INFO
        TEMP = TOP
        TOP = TOP->LINK
        FREE(TEMP)
        RETURN N
    END IF
END POP
o PEEK/TOP: Returns the data of the TOP node without removing it.
o ISEMPTY: Checks if TOP is NULL.
4. Advantages:
o Dynamic size: The stack can grow or shrink as needed, avoiding the "Stack Overflow"
issue due to fixed capacity.
o Efficient memory utilization.
5. Disadvantages:
o Requires more memory per element (due to storing pointers).
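A compact, illustrative C sketch of the linked-list PUSH and POP operations above (field names follow the pseudocode; the calls in main() are assumptions):
C
#include <stdio.h>
#include <stdlib.h>

// Illustrative sketch; node layout follows the pseudocode, sample calls are assumptions
typedef struct Node {
    int info;
    struct Node *link;
} Node;

Node *top = NULL;                       // TOP pointer: NULL means the stack is empty

void push(int item) {
    Node *ptr = malloc(sizeof(Node));   // Step 1: create a new node
    ptr->info = item;                   // Step 2: store the item
    ptr->link = top;                    // Step 3: link it in front of the old top
    top = ptr;                          // Step 4: update TOP
}

int pop(void) {
    if (top == NULL) {                  // underflow check
        printf("Stack is Empty (Underflow)\n");
        return -1;
    }
    Node *temp = top;
    int item = temp->info;
    top = top->link;                    // move TOP to the next node
    free(temp);                         // release the removed node
    return item;
}

int main() {
    push(10); push(20); push(30);
    printf("Popped: %d\n", pop());      // 30
    printf("Popped: %d\n", pop());      // 20
    return 0;
}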