
SECTION A

UNIT 1

a) Define the term Algorithm with its properties.

Definition: An algorithm is a step-by-step procedure or formula for solving a problem or


accomplishing a task. It is a finite set of instructions that, when executed, produces a
result.

Properties of an Algorithm:

1. Finiteness: The algorithm must terminate after a finite number of steps.


2. Definiteness: Each step must be clearly defined and unambiguous.
3. Input: An algorithm takes zero or more inputs.
4. Output: It produces one or more outputs as a result.
5. Effectiveness: All operations in the algorithm must be basic enough to be executed
within a finite amount of time.

b) Describe the various asymptotic notations used in the growth of


functions.

Asymptotic Notations:

1. Big-O (O): Represents the upper bound of an algorithm's time complexity, indicating the worst-case scenario.
   a. Example: O(n^2), O(n log n).
2. Omega (Ω): Represents the lower bound, indicating the best-case scenario.
   a. Example: Ω(n), Ω(1).
3. Theta (Θ): Represents the tight bound, where the function is bounded both above and below by the same growth rate.
   a. Example: Θ(n^2).
4. Little-o (o): Represents a strict upper bound, where the function grows strictly slower than the given bound.
   a. Example: f(n) = o(n^2).
5. Little-omega (ω): Represents a strict lower bound, where the function grows strictly faster than the given bound.
   a. Example: f(n) = ω(n).

c) Discuss the term complexity of an algorithm.

Definition: Complexity of an algorithm refers to the measure of the resources required


(time and space) to execute the algorithm as a function of the input size.

Types of Complexity:

1. Time Complexity: Measures the amount of time an algorithm takes to complete.


2. Space Complexity: Measures the amount of memory space required to run the
algorithm.

d) Discuss the various cases of complexity analysis.

1. Best Case: The minimum time taken by the algorithm for any input of size n. Represented using Ω notation.
2. Worst Case: The maximum time taken by the algorithm for any input of size n. Represented using O notation.
3. Average Case: The expected time taken by the algorithm, averaged over all possible inputs. Represented using Θ notation.

e) Can we show 2n + 1 = O(2^n)?

Yes, we can show that 2n + 1 = O(2^n).

Proof:

By definition, f(n) = O(g(n)) if there exist constants c > 0 and n0 > 0 such that f(n) ≤ c · g(n) for all n ≥ n0.

For f(n) = 2n + 1 and g(n) = 2^n:

• 2n + 1 grows linearly, while 2^n grows exponentially as n → ∞.
• Take c = 3 and n0 = 1: at n = 1, 2n + 1 = 3 ≤ 3 · 2^1 = 6, and with each step from n to n + 1 the left side grows by 2 while the right side doubles, so 2n + 1 ≤ 3 · 2^n holds for all n ≥ 1.
• Hence, 2n + 1 = O(2^n).

f) Write the algorithm of bucket sort and discuss its complexity.

Algorithm:

1. Create k empty buckets.
2. Distribute the input elements into these buckets.
3. Sort each bucket individually using a sorting algorithm.
4. Concatenate all the buckets to get the sorted array.

Complexity:

• Best Case: O(n + k), assuming uniform distribution across k buckets.
• Worst Case: O(n^2), if all elements land in one bucket and it is sorted using a quadratic sorting algorithm.
• Average Case: O(n + k).
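Below is a minimal Python sketch of bucket sort; it assumes the inputs lie in [0, 1) and uses k equal-width buckets with Python's built-in sort per bucket (both are illustrative choices, not fixed by the algorithm):

def bucket_sort(arr, k=10):
    # Assumes all values lie in [0, 1); bucket i holds values in [i/k, (i+1)/k).
    buckets = [[] for _ in range(k)]
    for x in arr:
        buckets[int(x * k)].append(x)      # step 2: distribute
    result = []
    for b in buckets:
        result.extend(sorted(b))           # steps 3-4: sort each bucket, concatenate
    return result

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47]))
# [0.23, 0.25, 0.32, 0.42, 0.47, 0.52]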

g) Analyze the complexity of shell sort in all cases.

Time Complexity:

1. Best Case: O(n log n), when the array is already sorted.
2. Worst Case: O(n^{3/2}), depending on the gap sequence.
3. Average Case: Depends on the gap sequence; often approximated as O(n^{5/4}).
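For reference, a short Python sketch of shell sort using the simple n/2, n/4, ... gap sequence (one of several possible sequences, which is why the bounds above depend on the gaps):

def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: sorts every gap-th subsequence.
        for i in range(gap, n):
            temp, j = arr[i], i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2
    return arr

print(shell_sort([23, 12, 1, 8, 34, 54, 2, 3]))  # [1, 2, 3, 8, 12, 23, 34, 54]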

h) Describe the substitution method for solving recurrence relations.

Substitution Method: A technique to solve recurrence relations by guessing a solution and


using mathematical induction to prove it.

Steps:

1. Guess the form of the solution.


2. Substitute the guess into the recurrence relation.
3. Solve for constants and prove the guess is correct.

Example: Solve T(n) = 2T(n/2) + n:

Guess T(n) = c·n log n. Substituting gives T(n) = 2·c·(n/2) log(n/2) + n = c·n log n − c·n + n ≤ c·n log n for c ≥ 1, which confirms the guess, so T(n) = O(n log n).

i) Write a short note on Radix sort and discuss its complexity.

Radix Sort: A non-comparative sorting algorithm that processes integer digits from least
significant to most significant.

Steps:

1. Find the maximum number to determine the number of digits.


2. Sort the array using counting sort for each digit, starting from the least significant
digit.

Complexity:

• Time Complexity: O(d · (n + b)), where d is the number of digits and b is the base.
• Space Complexity: O(n + b).
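A compact Python sketch of LSD radix sort for non-negative integers, using a stable counting sort per digit (base 10 here, matching b in the complexity above):

def counting_sort_by_digit(arr, exp, base=10):
    # Stable counting sort keyed on the digit at position exp (1, 10, 100, ...).
    count, output = [0] * base, [0] * len(arr)
    for x in arr:
        count[(x // exp) % base] += 1
    for i in range(1, base):
        count[i] += count[i - 1]           # cumulative counts
    for x in reversed(arr):                # reversed traversal keeps it stable
        d = (x // exp) % base
        count[d] -= 1
        output[count[d]] = x
    return output

def radix_sort(arr):
    exp = 1
    while max(arr) // exp > 0:             # one pass per digit, least significant first
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]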

j) Apply the master method to solve the recurrence relation


T(n) = T(2n/3) + 1.

Recurrence Relation: T(n) = T(2n/3) + 1.

1. Compare with the general form T(n) = aT(n/b) + f(n), where a = 1, b = 3/2, and f(n) = 1.
2. Calculate p = log_b(a) = log_{3/2}(1) = 0.
3. Compare f(n) = 1 with n^p = n^0 = 1. Since f(n) = Θ(n^p log^k n) with k = 0, this falls under case 2 of the master theorem.

Solution: T(n) = Θ(log n).


UNIT 2

a) Define the term BST with its properties.

Definition: A Binary Search Tree (BST) is a binary tree where each node has the following
properties:

1. The left subtree of a node contains only nodes with values less than the node's key.
2. The right subtree of a node contains only nodes with values greater than the node's
key.
3. Both left and right subtrees must also be BSTs.

Properties:

1. In-order traversal of a BST gives a sorted sequence.


2. Search, insertion, and deletion operations have an average-case complexity of O(log n), but the worst-case complexity can be O(n) for a skewed tree.
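A minimal Python sketch of a BST with insert and search, mirroring the definition above (the recursive form is one of several common implementations):

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Smaller keys go into the left subtree, larger keys into the right.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    if root is None or root.key == key:
        return root
    return search(root.left, key) if key < root.key else search(root.right, key)

root = None
for k in [50, 30, 70, 20, 40]:
    root = insert(root, k)
print(search(root, 40) is not None)  # True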

b) Define the term R-B tree with its properties.

Definition: A Red-Black Tree is a self-balancing binary search tree where each node has an
additional attribute called color, which can be either red or black.

Properties:

1. Each node is either red or black.


2. The root is always black.
3. Red nodes cannot have red children (no two consecutive red nodes).
4. Every path from a node to its descendant NULL nodes has the same number of
black nodes.
5. The height of the tree is O(log n).
c) How is an R-B tree different from a BST?

Feature           | Binary Search Tree (BST)           | Red-Black Tree (R-B Tree)
Balancing         | No self-balancing mechanism.       | Maintains balance using color properties.
Worst-case height | Can degrade to O(n) (skewed tree). | Always O(log n).
Insert/Delete     | No rebalancing required.           | Rebalancing may occur to maintain properties.
Structure         | Simple binary structure.           | Additional color property and rules.

d) Explain B-tree. What is the significance of degree in B-tree?

Definition: A B-tree is a self-balancing search tree where nodes can have multiple keys
and children. It is designed to work well on disk-based systems by minimizing disk I/O
operations.

Significance of Degree:

• The degree (t) determines the minimum and maximum number of keys a node can hold:
  o Minimum keys per node: t − 1 (except the root).
  o Maximum keys per node: 2t − 1.
• The degree also affects the branching factor: the number of children is at most 2t.

e) Write the properties of B-tree.

1. All leaves are at the same level.


2. A node can have at most 2t − 1 keys and at least t − 1 keys (except the root).
3. The number of children of an internal node is one more than its number of keys.
4. The keys within a node are sorted in ascending order.
5. The tree is balanced, with height O(log_t n).
f) Define Binomial tree with example.

Definition: A Binomial Tree B_k is a recursive structure:

1. B_0 consists of a single node.
2. B_k is formed by linking two B_{k−1} trees, where the root of one becomes a child of the root of the other.

Example:

• B_0: A single node.
• B_1: Two nodes, with one being the root and the other its child.
• B_2: Four nodes; the root has a B_1 and a B_0 as children.

g) Discuss Binomial heap with example.

Definition: A Binomial Heap is a collection of binomial trees that satisfies the min-heap
property (parent node ≤ child nodes). Each binomial tree in the heap has a unique degree.

Example:

• A binomial heap with elements [1, 3, 7, 9, 12] could consist of a B_0 tree with root 1 and a B_2 tree with root 3 (since 5 = 2^2 + 2^0).

h) Analyze the cases used in Binomial heap union.

Union of two binomial heaps involves:

1. Merging: Combine the two heaps into a single sorted list of binomial trees based on
degree.
2. Linking: Merge trees of the same degree by linking one as a child of the other.
3. Reordering: Ensure no two binomial trees of the same degree exist.

Complexity: O(log n), where n is the total number of elements.


i) Define Fibonacci heap.

Definition: A Fibonacci Heap is a collection of heap-ordered trees that supports insertion, finding the minimum, union, and decrease-key in O(1) amortized time, and extract-min and deletion in O(log n) amortized time. It uses lazy merging to improve efficiency.

Key Features:

1. Trees are not strictly binomial but are loosely organized.


2. Operations like decrease-key and delete are efficient due to structural flexibility.

j) Give the difference between Binomial & Fibonacci heap.

Feature             | Binomial Heap                          | Fibonacci Heap
Structure           | Collection of binomial trees.          | Collection of unordered trees.
Operations          | O(log n) for decrease-key and delete.  | Amortized O(1) for decrease-key and insert.
Efficiency          | Less efficient for dynamic operations. | More efficient due to lazy merging.
Complexity of Union | O(log n).                              | O(1).
UNIT 3

a) Explain Divide & Conquer algorithms.

Definition: Divide and Conquer is an algorithmic paradigm that solves a problem by


dividing it into smaller sub-problems, solving them recursively, and combining their
solutions.

Steps:

1. Divide: Break the problem into smaller sub-problems of the same type.
2. Conquer: Solve the sub-problems recursively.
3. Combine: Merge the solutions of the sub-problems to solve the original problem.

Examples: Merge Sort, Quick Sort, Binary Search, etc.

b) Describe the convex hull problem with its solution.

Definition: The convex hull of a set of points is the smallest convex polygon that can
enclose all the given points.

Solution:

Algorithms to solve the convex hull problem:

1. Graham's Scan: Sort points by polar angle, then construct the hull using a stack. Time complexity: O(n log n).
2. Jarvis March (Gift Wrapping): Start with the leftmost point and wrap around the points. Time complexity: O(nh), where h is the number of hull vertices.

c) Discuss the concept of greedy approach to solve problems.

Definition: The greedy approach involves solving problems by making the locally optimal
choice at each step, assuming that this will lead to the globally optimal solution.
Steps:

1. Choose the best possible option at the current step.


2. Repeat for subsequent steps until the problem is solved.

Examples:

• Kruskal's and Prim's algorithms for Minimum Spanning Tree (MST).


• Fractional Knapsack problem.

d) Analyze the knapsack problem by greedy approach.

Fractional Knapsack Problem:

1. Problem: Given n items with weights and values, maximize the total value for a given weight capacity.
2. Approach:
   a. Calculate the value-to-weight ratio for each item.
   b. Sort items by this ratio in descending order.
   c. Pick items with the highest ratio (taking a fraction of the last item if needed) until the capacity is filled.

Complexity: O(n log n) due to sorting.

Note: The greedy approach does not work for the 0/1 Knapsack problem.

e) Discuss the matrix multiplication problem.

Problem: Multiply two matrices A (p × q) and B (q × r) to get a resultant matrix C (p × r).

Traditional Method: Requires O(pqr) operations (O(n^3) for square matrices).

Optimized Approaches:

1. Strassen's Algorithm: Reduces the complexity to O(n^{2.81}).


2. Divide and Conquer: Breaks matrices into smaller sub-matrices for recursive
multiplication.
f) Discuss the concept of spanning tree.

Definition: A spanning tree of a graph is a subgraph that includes all vertices of the graph,
is connected, and has no cycles.

Properties:

1. A spanning tree of a graph with V vertices has V − 1 edges.
2. A Minimum Spanning Tree (MST) minimizes the total edge weight.

Applications:

• Network design, circuit layout, and clustering.

g) Give the name of any two algorithms to find MST.

1. Kruskal's Algorithm: Sort edges by weight and add them to the MST if they don’t
form a cycle.
2. Prim's Algorithm: Start with a single vertex and grow the MST by adding the
smallest edge connecting a vertex in the MST to one outside it.

h) How to evaluate single-source shortest path problem?

Evaluation:

1. Use an algorithm that computes the shortest path from a source vertex to all other vertices in a weighted graph.
2. Ensure there are no negative edge weights if using Dijkstra's algorithm; Bellman-Ford handles negative weights as long as no negative cycle exists.

i) Which algorithms are used to solve single-source shortest path


problems?

1. Dijkstra's Algorithm: Works with non-negative edge weights. Complexity: O(V^2), or O(E + V log V) using a priority queue.
2. Bellman-Ford Algorithm: Handles graphs with negative weights. Complexity: O(VE).
j) Discuss the concept of optimal reliability allocation.

Definition: Optimal reliability allocation involves distributing resources across


components in a system to maximize the system’s overall reliability while minimizing cost.

Approach:

1. Identify reliability requirements for each subsystem.


2. Use optimization techniques to allocate reliability such that the total cost is
minimized while meeting the reliability constraint.

Applications: Network design, fault-tolerant systems, and critical infrastructure planning.

UNIT 4

a) Explain Dynamic algorithms to solve problems.

Definition: Dynamic Programming (DP) is an optimization technique used to solve


problems by breaking them into overlapping subproblems, solving each subproblem once,
and storing its solution for future use (memoization or tabulation).

Steps:

1. Define the subproblem.


2. Write a recurrence relation for the problem.
3. Use a bottom-up or top-down approach to solve the problem.

Examples:

• Fibonacci sequence, Longest Common Subsequence (LCS), and Matrix Chain


Multiplication.
b) Analyze the knapsack problem by dynamic approach.

0/1 Knapsack Problem:

1. Problem: Given n items with weights and values, maximize the total value for a given weight capacity W, where each item is either included or excluded.
2. Dynamic Programming Approach:
   a. Let dp[i][w] represent the maximum value obtainable using the first i items with weight limit w.
   b. Recurrence relation:
      dp[i][w] = dp[i−1][w]                                    if w_i > w
      dp[i][w] = max(dp[i−1][w], dp[i−1][w − w_i] + v_i)       otherwise
   c. Base case: dp[0][w] = 0.

Time Complexity: O(nW), where n is the number of items and W is the capacity.

c) How to evaluate all-pair shortest path problem?

Evaluation:

1. Use algorithms like the Floyd-Warshall or repeated application of Dijkstra's


algorithm to compute the shortest paths between all pairs of vertices in a weighted
graph.
2. Represent the graph using an adjacency matrix or list for efficient computations.

d) Analyze an algorithm to solve all-pair shortest path problem.

Algorithm: Floyd-Warshall Algorithm

1. Initialize the distance matrix: dist[i][j] = weight(i, j) if (i, j) is an edge; otherwise set dist[i][j] = ∞ (with dist[i][i] = 0).
2. For each intermediate vertex k, and for every pair of vertices (i, j):
   dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

Time Complexity: O(V^3), where V is the number of vertices.

Space Complexity: O(V^2).
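A compact Python sketch of Floyd-Warshall on an adjacency matrix (the 4-vertex graph below is an assumed example; INF marks a missing edge):

INF = float('inf')

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
print(floyd_warshall(graph))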

e) Discuss the concept of Backtracking.

Definition: Backtracking is a recursive algorithmic technique to solve problems by


exploring all possible solutions and discarding solutions that do not satisfy constraints.

Key Idea: It works by building a solution incrementally and abandoning a path


("backtracking") as soon as it determines that the path cannot lead to a valid solution.

Examples: N-Queens problem, Sudoku, Graph Coloring.

f) Explain N-queen problem by backtracking.

Problem: Place N queens on an N × N chessboard such that no two queens attack each other.

Backtracking Approach:

1. Place a queen in a row.


2. Check if it is safe (no other queen in the same row, column, or diagonal).
3. Recursively place queens in subsequent rows.
4. If placing a queen leads to a conflict, backtrack to try a different position.
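A small Python backtracking solver along these lines; representing the board as a list where board[r] is the column of the queen in row r is an implementation choice assumed here for brevity:

def solve_n_queens(n):
    solutions, board = [], [-1] * n

    def safe(row, col):
        # No earlier queen shares this column or either diagonal.
        for r in range(row):
            if board[r] == col or abs(board[r] - col) == row - r:
                return False
        return True

    def place(row):
        if row == n:
            solutions.append(board[:])
            return
        for col in range(n):
            if safe(row, col):
                board[row] = col
                place(row + 1)             # recurse into the next row
                board[row] = -1            # backtrack

    place(0)
    return solutions

print(solve_n_queens(4))  # [[1, 3, 0, 2], [2, 0, 3, 1]]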

g) Apply backtracking to solve 4-queen problem.

Steps:

1. Place the first queen in any column of the first row.


2. Proceed to the next row and place a queen in a safe column.
3. Repeat for all rows.
4. If no safe column exists, backtrack to the previous row and move the queen to the
next possible column.
Solution:

For a 4 × 4 board, the two valid solutions are:

• Solution 1 (queens at (row, column) positions (1, 2), (2, 4), (3, 1), (4, 3)):

. Q . .
. . . Q
Q . . .
. . Q .

• Solution 2 (queens at positions (1, 3), (2, 1), (3, 4), (4, 2)):

. . Q .
Q . . .
. . . Q
. Q . .

h) Apply backtracking to solve graph coloring problem.

Problem: Assign colors to vertices of a graph such that no two adjacent vertices share the
same color.

Backtracking Approach:

1. Start with the first vertex and assign it a color.


2. Proceed to the next vertex and assign the first available color that doesn’t conflict
with adjacent vertices.
3. If no color is available, backtrack and change the color of the previous vertex.

Example:

For a graph with three vertices V1, V2, V3 and edges V1–V2, V2–V3:

• A possible coloring with 2 colors: V1: Red, V2: Blue, V3: Red.
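A brief Python backtracking sketch for coloring with m colors; the adjacency list encodes the 3-vertex example above (indices 0, 1, 2 stand for V1, V2, V3):

def graph_coloring(adj, m, colors, v=0):
    # Try each of the m colors on vertex v; backtrack on conflict.
    if v == len(adj):
        return True
    for c in range(1, m + 1):
        if all(colors[u] != c for u in adj[v]):
            colors[v] = c
            if graph_coloring(adj, m, colors, v + 1):
                return True
            colors[v] = 0                  # backtrack
    return False

adj = {0: [1], 1: [0, 2], 2: [1]}          # edges V1-V2, V2-V3
colors = [0] * 3
print(graph_coloring(adj, 2, colors), colors)  # True [1, 2, 1]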

i) Discuss the concept of Branch & Bound technique.

Definition: Branch and Bound is an optimization technique used to solve combinatorial


and optimization problems. It explores all possible solutions in a systematic manner but
uses bounds to prune suboptimal solutions.

Steps:

1. Branch: Divide the problem into smaller subproblems.


2. Bound: Compute an upper or lower bound for the objective function to eliminate
suboptimal solutions.
3. Explore: Select promising branches and repeat.

Examples: Traveling Salesman Problem, 0/1 Knapsack problem.

j) Discuss the concept of Sum of Subset problem.

Definition: The Sum of Subset problem determines whether a subset of a given set of
numbers adds up to a specific target sum.

Backtracking Approach:

1. Include the current element in the subset.


2. Check if the current subset sum equals the target sum.
3. If it does, stop; otherwise, proceed to the next element.
4. Backtrack to try other possibilities.

Example:

Set: [3, 34, 4, 12, 5, 2], Target: 9.

• One solution: subset [4, 5].
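A minimal Python backtracking sketch for subset sum, run on the example above (it returns the first subset found, which need not be [4, 5]):

def subset_sum(nums, target, i=0, chosen=()):
    if target == 0:
        return list(chosen)                # found a subset summing to the target
    if i == len(nums) or target < 0:
        return None                        # dead end: backtrack
    # Either include nums[i] or skip it.
    return (subset_sum(nums, target - nums[i], i + 1, chosen + (nums[i],))
            or subset_sum(nums, target, i + 1, chosen))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [3, 4, 2]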

UNIT 5

a) Discuss the concept of string matching.

Definition: String matching is the process of finding occurrences of a pattern P in a given text T. It is commonly used in searching, text processing, and bioinformatics.

Applications:

1. Search engines.
2. DNA sequence analysis.
3. Spam filters and plagiarism detection.

b) Give the name of any two algorithms to solve the string matching
problem.

1. Knuth-Morris-Pratt (KMP) Algorithm.


2. Boyer-Moore Algorithm.

c) Compare any two string matching algorithms by their complexity.

1. Naïve Algorithm:
   a. Complexity: O(nm), where n is the length of the text and m is the length of the pattern.
   b. Works by checking all possible positions in the text.
2. KMP Algorithm:
   a. Complexity: O(n + m).
   b. Preprocesses the pattern to create a "longest prefix suffix" (LPS) array to skip unnecessary comparisons.

Comparison:

KMP is more efficient than the Naïve algorithm due to its preprocessing step that avoids
redundant comparisons.

d) Define NP class.

Definition: NP (Nondeterministic Polynomial time) is the class of decision problems for


which a solution can be verified in polynomial time by a deterministic Turing machine.

Key Properties:

1. If a problem is in NP, we can verify its solution in polynomial time.


2. Examples: Hamiltonian Path, Subset Sum, Traveling Salesman Problem (decision
version).
e) Evaluate NP problem.

A problem is in NP if a proposed solution (a certificate) can be verified in polynomial time, even though finding a solution may be hard.

• Example: The Subset Sum Problem asks if there is a subset of numbers in a given
set that sums to a specific value.
• Verification: If a solution (subset) is given, it can be verified in polynomial time by
summing the subset elements.

f) Discuss Approximation algorithms.

Definition: Approximation algorithms are used to find near-optimal solutions for


optimization problems, especially NP-hard problems, in polynomial time.

Key Features:

1. Provide guarantees on the quality of the solution (approximation ratio).


2. Do not necessarily yield the exact solution but are faster than exact algorithms.

Examples:

• Traveling Salesman Problem (TSP).


• Vertex Cover Problem.

g) Discuss Randomized algorithms.

Definition: Randomized algorithms use random numbers at some point during their
execution to make decisions.

Types:

1. Las Vegas Algorithms: Always produce a correct result but the runtime is
probabilistic. Example: Randomized Quick Sort.
2. Monte Carlo Algorithms: May produce incorrect results with a small probability.

Applications: Cryptography, game theory, and probabilistic data structures (e.g., Bloom
filters).
h) Write any approximation algorithm.

Example: Approximation Algorithm for Vertex Cover Problem:

1. Start with an empty vertex cover.


2. Pick any edge (u, v) and add both u and v to the vertex cover.
3. Remove all edges incident to u and v.
4. Repeat until no edges remain.

Approximation Ratio: 2.

Complexity: O(E), where E is the number of edges.

i) Give the comparison of Randomized & Approximation algorithms.

Aspect       | Randomized Algorithms                                      | Approximation Algorithms
Purpose      | Introduce randomness to solve problems or optimize tasks.  | Find near-optimal solutions to NP-hard problems.
Outcome      | May be probabilistic or guaranteed.                        | Always deterministic but may be suboptimal.
Applications | Cryptography, game theory.                                 | TSP, Vertex Cover, Set Cover.

j) Discuss Randomized Quick Sort.

Concept: Randomized Quick Sort selects a pivot randomly instead of using a fixed strategy (e.g., the first or last element). This helps avoid the O(n^2) worst-case behavior that a fixed pivot exhibits on poorly distributed (e.g., already sorted) inputs.

Steps:

1. Select a random pivot from the array.


2. Partition the array around the pivot.
3. Recursively sort the left and right partitions.
Complexity:

• Average Case: O(n log n).
• Worst Case: O(n^2), though rare due to randomization.
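A short Python sketch of randomized quick sort (an out-of-place version, kept simple for clarity):

import random

def randomized_quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)             # random pivot defeats adversarial inputs
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return randomized_quick_sort(left) + mid + randomized_quick_sort(right)

print(randomized_quick_sort([36, 15, 40, 1]))  # [1, 15, 36, 40]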

SECTION B

UNIT 1

a) Discuss the asymptotic notations with a diagram.

Asymptotic Notations are mathematical tools to describe the running time or space complexity of an algorithm as the input size (n) grows large.

1. Big-O Notation (O):
   a. Describes the upper bound or worst-case complexity.
   b. Example: O(n^2) for Selection Sort.
2. Theta Notation (Θ):
   a. Describes the tight bound, i.e., when an algorithm's running time is both upper and lower bounded by the same growth rate.
   b. Example: Merge Sort is Θ(n log n).
3. Omega Notation (Ω):
   a. Describes the lower bound or best-case complexity.
   b. Example: Ω(n) for Linear Search.

Diagram:

| Running Time
|
|    O(f(n))  -> Upper Bound (worst-case)
|    Θ(f(n))  -> Tight Bound
|    Ω(f(n))  -> Lower Bound (best-case)
|_________________________
         Input Size (n)

b) Analyze and evaluate the running time complexity of Selection Sort in


all cases.

Selection Sort iteratively selects the smallest element and places it at the correct
position.

1. Best Case: O(n^2)
   a. Even if the array is sorted, it compares all elements.
2. Worst Case: O(n^2)
   a. Performs (n−1) + (n−2) + ... + 1 = O(n^2) comparisons.
3. Average Case: O(n^2).

c) Write the algorithm of Insertion Sort. Sort the elements 5, 3, 8, 1, 4, 6, 2 by using insertion sort.

Algorithm:

1. Start from the second element (index 1).


2. Compare the current element with elements in the sorted portion of the array.
3. Shift larger elements to the right.
4. Insert the current element into its correct position.
5. Repeat until the entire array is sorted.

Sorting 5, 3, 8, 1, 4, 6, 2:

1. Initial Array: 5, 3, 8, 1, 4, 6, 2
2. Pass 1 (insert 3): 3, 5, 8, 1, 4, 6, 2
3. Pass 2 (insert 8): 3, 5, 8, 1, 4, 6, 2 (already in place)
4. Pass 3 (insert 1): 1, 3, 5, 8, 4, 6, 2
5. Pass 4 (insert 4): 1, 3, 4, 5, 8, 6, 2
6. Pass 5 (insert 6): 1, 3, 4, 5, 6, 8, 2
7. Pass 6 (insert 2): 1, 2, 3, 4, 5, 6, 8
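The same procedure as a runnable Python sketch:

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:     # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                   # insert key into its position
    return arr

print(insertion_sort([5, 3, 8, 1, 4, 6, 2]))  # [1, 2, 3, 4, 5, 6, 8]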
d) Write the algorithm of Counting Sort. Sort the elements 2, 5, 3, 0, 2, 3, 0, 3 by using Counting Sort.

Algorithm:

1. Count the occurrences of each element.


2. Compute the cumulative count.
3. Place elements into the sorted array based on the cumulative counts.

Sorting 2, 5, 3, 0, 2, 3, 0, 3:

1. Frequency Array: Count[0]=2, Count[1]=0, Count[2]=2, Count[3]=3, Count[4]=0, Count[5]=1.
2. Cumulative Count: Count[0]=2, Count[1]=2, Count[2]=4, Count[3]=7, Count[4]=7, Count[5]=8.
3. Sorted Array: 0, 0, 2, 2, 3, 3, 3, 5.
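The same procedure as a runnable Python sketch (k is the maximum key value, 5 in this example):

def counting_sort(arr, k):
    count = [0] * (k + 1)
    for x in arr:
        count[x] += 1                      # frequency array
    for i in range(1, k + 1):
        count[i] += count[i - 1]           # cumulative counts
    output = [0] * len(arr)
    for x in reversed(arr):                # reversed traversal preserves stability
        count[x] -= 1
        output[count[x]] = x
    return output

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]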

e) Write an algorithm to sort the given array using Quick-sort. Illustrate the operation of the PARTITION procedure on the array 36, 15, 40, 1.

Algorithm:

1. Select a pivot element.


2. Partition the array such that elements less than the pivot are on the left and greater
on the right.
3. Recursively apply Quick Sort to the left and right partitions.

PARTITION Procedure (pivot = last element):

1. Initial Array: 36, 15, 40, 1. Pivot: 1.
2. Partition Step: every element is greater than the pivot, so the pivot is swapped to the front: 1, 15, 40, 36.
3. Pivot Position: 0.

Recursively sorting the right partition gives the Sorted Array: 1, 15, 36, 40.
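A runnable Python sketch using the Lomuto partition scheme (last element as pivot), which reproduces the trace above:

def partition(arr, low, high):
    pivot = arr[high]                      # last element as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:                # move smaller elements left
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1                           # final pivot position

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)
    return arr

print(quick_sort([36, 15, 40, 1]))  # [1, 15, 36, 40]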


f) Solve the recurrence:

Given:

1. T(1) = 1
2. T(n) = 4T(n/3) + n^2 for n ≥ 2.

Solution using Master Theorem:

The recurrence follows the form T(n) = aT(n/b) + f(n), where:

• a = 4, b = 3, f(n) = n^2.

Step 1: Compute p = log_b(a):

p = log_3(4) ≈ 1.2619.

Step 2: Compare p with the degree of f(n) = n^2:

• The degree of f(n) is 2.
• Since p < 2, the solution is dominated by f(n).

Result:

T(n) = Θ(n^2).

g) Write all 3 cases of the Master Method and solve T(n) = 7T(n/3) + n^2.

Master Method Cases:

For T(n) = aT(n/b) + f(n) with f(n) = Θ(n^d), let p = log_b(a):

1. Case 1: p > d (the subproblems dominate): T(n) = Θ(n^p).
2. Case 2: p = d (balanced growth): T(n) = Θ(n^d log n).
3. Case 3: p < d (f(n) dominates): T(n) = Θ(n^d).

Given Recurrence:

• a = 7, b = 3, f(n) = n^2.
• Compute p = log_b(a) = log_3(7) ≈ 1.7712.

Compare p with d = 2:

• Since p < d, the solution is dominated by f(n) (Case 3).

Result:

T(n) = Θ(n^2).

h) Solve the recurrence T(n) = T(n/2) + 1 using the Substitution Method.

Assume: T(n) = k log_2(n) + c.

Base Case:

• T(1) = 1, so k log_2(1) + c = 1, giving c = 1.

Inductive Step:

• Substituting into the recurrence T(n) = T(n/2) + 1:
  T(n) = k log_2(n/2) + c + 1
       = k (log_2(n) − 1) + c + 1
       = k log_2(n) − k + c + 1.

Solve for k:

Comparing coefficients with the assumed form, −k + 1 = 0, so k = 1.

Final Solution:

T(n) = Θ(log_2 n) = Θ(log n).
i) Solve the recurrence T(n) = 4T(n/2) + n^2 log n using the Master Method.

Given:

• a = 4, b = 2, f(n) = n^2 log n.

Step 1: Compute p = log_b(a):

p = log_2(4) = 2.

Step 2: Compare f(n) = n^2 log n with n^p = n^2:

• f(n) is not polynomially larger than n^2 (n^2 log n is not Ω(n^{2+ε}) for any ε > 0), so case 3 does not apply.
• Instead, f(n) = Θ(n^p log^k n) with k = 1, which falls under the extended case 2 of the master theorem: T(n) = Θ(n^p log^{k+1} n).

Result:

T(n) = Θ(n^2 log^2 n).

j) Solve the recurrence T(n) = c + 3T(n−1) using the Iteration Method.

Expand the recurrence:

1. T(n) = c + 3T(n−1).
2. T(n−1) = c + 3T(n−2).
3. Substituting: T(n) = c + 3[c + 3T(n−2)] = c + 3c + 9T(n−2).
4. Generalize: T(n) = c(1 + 3 + 3^2 + ... + 3^{n−1}) + 3^n T(0).

Geometric Series:

• 1 + 3 + 3^2 + ... + 3^{n−1} = (3^n − 1)/(3 − 1) = (3^n − 1)/2.

Result:

T(n) = Θ(3^n).

UNIT 2

a) Discuss R-B tree insertion cases.

Insertion in a Red-Black (R-B) tree follows these rules to maintain its properties (balance
and coloring):

1. Case 1: New node is the root


a. Color the new root black.
2. Case 2: Parent is black
a. The tree remains balanced; no changes required.
3. Case 3: Parent is red and the uncle is red
a. Recolor the parent and uncle to black and the grandparent to red.
b. Repeat the process for the grandparent if necessary.
4. Case 4: Parent is red and the uncle is black
a. Perform rotations (left or right) to balance the tree and recolor.

b) Create an R-B tree by inserting the elements 10, 18, 7, 15, 16, 30, 25, 40, 60, 2, 1, 70.

We follow the Red-Black tree insertion rules. Below is a step-by-step outline:

1. Insert 10:
   a. Becomes the root and is colored black.
2. Insert 18:
   a. Placed as a red right child of 10.
3. Insert 7:
   a. Placed as a red left child of 10.
4. Insert 15:
   a. Parent 18 and uncle 7 are both red (Case 3): recolor 7 and 18 black; 10 becomes red but, as the root, is recolored black.
5. Insert 16:
   a. Parent 15 is red and the uncle is black (Case 4): perform a left rotation at 15 and a right rotation at 18 with recoloring, so 16 becomes the parent of 15 and 18.
6. Continue inserting 30, 25, 40, 60, 2, 1, 70, applying recoloring and rotations as needed.

c) If the number of nodes n ≥ 1, then for n keys in a B-tree of height h and minimum degree t ≥ 2, the height satisfies h ≤ log_t((n + 1)/2).

Proof Sketch:

1. Minimum number of keys: The root holds at least 1 key; every other node holds at least t − 1 keys, and there are at least 2t^{i−1} nodes at depth i. Summing over all levels, a B-tree of height h contains at least
   n ≥ 1 + (t − 1) · Σ_{i=1..h} 2t^{i−1} = 2t^h − 1 keys.
2. Solve for h: Rearranging, t^h ≤ (n + 1)/2, so h ≤ log_t((n + 1)/2).

d) Delete E, F, and M from a given B-Tree where degree (t) = 3.

Given t = 3:

1. Deletion in a B-Tree involves maintaining balance and the properties of the tree:
   a. Case 1: If the key to delete is in a leaf node, simply remove it.
   b. Case 2: If the key is in an internal node, replace it with its in-order predecessor or successor and delete that key recursively.
   c. Case 3: If removal would leave a node with fewer than t − 1 keys, borrow a key from a sibling or merge nodes to maintain balance.

Steps:

1. Delete E: If E is in a leaf, remove it directly. If not, replace it with its in-order predecessor or successor.
2. Delete F: Apply similar steps as for E.
3. Delete M: Apply the deletion process recursively, maintaining the B-Tree properties.
e) Explain the process (cases) of insertion operation in B-tree.

Insertion in a B-tree involves the following cases:

1. Case 1: The node is not full


a. Insert the key in sorted order.
2. Case 2: The node is full
a. Split the full node into two nodes.
b. Move the middle key to the parent node.
c. If the parent is also full, recursively split the parent.
3. Steps of Insertion:
a. Start at the root.
b. Traverse to the appropriate child node.
c. If the child node is full, split it.
d. Repeat until a suitable position is found.

f) Give the comparison between R-B tree & B-tree.

Feature           | Red-Black Tree                              | B-Tree
Structure         | Binary tree.                                | Multi-way tree (can have more than 2 children).
Balance           | Balanced through color rules.               | Balanced by splitting or merging nodes.
Use Case          | In-memory applications (e.g., maps, sets).  | Disk-based applications (e.g., databases).
Height            | O(log n).                                   | O(log_t n), where t is the degree.
Search Complexity | O(log n).                                   | O(log_t n).
g) Explain & write an algorithm for union of two binomial heaps. Also
discuss the time complexity for the same.

Union of Binomial Heaps:

Combines two binomial heaps into one by merging the root lists and ensuring the binomial
heap properties are maintained.

Algorithm:

1. Merge the root lists of the two heaps in increasing order of degree.
2. Traverse the merged list to combine trees of the same degree:
a. If three trees of the same degree are encountered, keep the leftmost one
separate and combine the other two.
3. Update the head pointer of the new heap.

Time Complexity:

• O(log n), where n is the number of nodes in the heap.

h) Write the algorithm for Extract-Min Key from a binomial heap.

Algorithm:

1. Find the root with the minimum key in the root list.
2. Remove this root and retrieve its children.
3. Reverse the order of the children and treat them as a separate binomial heap.
4. Union the resulting heap with the remaining heap.

Time Complexity:

• O(log n), where n is the total number of nodes.

i) Discuss the decrease key operation in Fibonacci heap with an example.

Decrease Key Operation in Fibonacci Heap:

1. Find the node where the key is to be decreased.


2. Update the key and check if it violates the min-heap property.
3. If it does, cut the node and move it to the root list.
4. Cascade Cut: Recursively cut the parent of the node if it has lost a child before.

Example:

1. Initial Fibonacci Heap:

        10
       /  \
     20    30
     /
   40

2. Decrease the key of 40 to 5:
   a. 5 is smaller than its parent 20, violating the min-heap property, so cut the node and move it to the root list.
   b. Resulting root list: 10, 5 (20 and 30 remain children of 10).

Time Complexity:

• Amortized O(1).

j) Discuss the various operations performed on Fibonacci heap.

Operations on Fibonacci Heap:

1. Insert:
   a. Add a new node to the root list.
   b. Time Complexity: O(1).
2. Find Min:
   a. Return the root with the minimum key.
   b. Time Complexity: O(1).
3. Union:
   a. Merge two heaps by concatenating their root lists.
   b. Time Complexity: O(1).
4. Extract Min:
   a. Remove the minimum root and union its children with the root list.
   b. Time Complexity: O(log n) amortized.
5. Decrease Key:
   a. Decrease the value of a node's key and perform cascading cuts if necessary.
   b. Amortized Time Complexity: O(1).
6. Delete:
   a. Decrease the key to −∞, making it the minimum.
   b. Extract the minimum.
   c. Time Complexity: O(log n) amortized.

UNIT 3

a) Discuss Strassen’s Matrix Multiplication Method. Compute the product


of the given matrices using Strassen’s formula.

Strassen's matrix multiplication method reduces the complexity of multiplying two n × n matrices from the traditional O(n^3) to approximately O(n^{2.81}). It achieves this by reducing the number of multiplications required from 8 to 7 per level of recursion.

Steps:

1. Divide the matrices A and B into four submatrices each:

   A = [A11 A12; A21 A22],  B = [B11 B12; B21 B22]

2. Compute the seven intermediate matrices:

   M1 = (A11 + A22)(B11 + B22)
   M2 = (A21 + A22) B11
   M3 = A11 (B12 − B22)
   M4 = A22 (B21 − B11)
   M5 = (A11 + A12) B22
   M6 = (A21 − A11)(B11 + B12)
   M7 = (A12 − A22)(B21 + B22)

3. Combine the results to form the product matrix:

   C11 = M1 + M4 − M5 + M7
   C12 = M3 + M5
   C21 = M2 + M4
   C22 = M1 − M2 + M3 + M6
Example Calculation (with assumed 2 × 2 matrices, so that each submatrix is a single entry):
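Let A = [1 2; 3 4] and B = [5 6; 7 8], so A11 = 1, A12 = 2, A21 = 3, A22 = 4 and B11 = 5, B12 = 6, B21 = 7, B22 = 8. Then:

M1 = (1 + 4)(5 + 8) = 65
M2 = (3 + 4) · 5 = 35
M3 = 1 · (6 − 8) = −2
M4 = 4 · (7 − 5) = 8
M5 = (1 + 2) · 8 = 24
M6 = (3 − 1)(5 + 6) = 22
M7 = (2 − 4)(7 + 8) = −30

C11 = 65 + 8 − 24 + (−30) = 19
C12 = −2 + 24 = 22
C21 = 35 + 8 = 43
C22 = 65 − 35 + (−2) + 22 = 50

So C = [19 22; 43 50], which agrees with the direct product A·B.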

b) Explain Graham’s Scan Method to Solve the Convex Hull Problem.

Graham's Scan is a method to find the convex hull of a set of points in O(n log n) time. The convex hull is the smallest convex polygon that encloses all the points.

Steps:

1. Find the pivot point: Choose the point with the lowest y-coordinate (and the lowest
x-coordinate if ties exist).
2. Sort points: Sort all points based on the polar angle they make with the pivot.
3. Process points: Use a stack to construct the convex hull:
a. Push the first three points onto the stack.
b. For each subsequent point, check the orientation of the top two points of the
stack and the current point.
c. If they form a right turn, pop the top point. Otherwise, push the current
point.
4. Continue until all points are processed.

c) What is an optimization problem? How can the greedy method solve


optimization problems?

Optimization Problem:

An optimization problem seeks to find the best solution from a set of feasible solutions.
Examples include finding the shortest path, minimal spanning tree, or maximum profit.

Greedy Method:
A greedy algorithm solves optimization problems by:

1. Choosing the best option at each step (locally optimal choice).


2. Ensuring that this choice leads to a globally optimal solution (if the problem
satisfies greedy-choice property and optimal substructure).

Examples:

• Kruskal’s Algorithm: To find a minimum spanning tree.


• Huffman Coding: For data compression.

d) Discuss the concept of the Activity Selection Problem (Optimal


Reliability Allocation).

Activity Selection Problem:

This is a classical optimization problem where we aim to select the maximum number of
activities that don't overlap, given their start and finish times.

Steps:

1. Sort the activities by their finish times.


2. Select the first activity.
3. For each subsequent activity, check if its start time is greater than or equal to the
finish time of the previously selected activity.
4. If yes, select it.

Optimal Reliability Allocation:

This involves distributing resources among components of a system to maximize overall


system reliability while meeting cost constraints.

e) Compute a schedule where the largest number of activities take place.

Given:

• S = {A1, A2, ..., A10}
• Start times S_i = {1, 2, 3, 4, 7, 8, 9, 9, 11, 12}
• Finish times F_i = {3, 5, 4, 7, 10, 9, 11, 13, 12, 14}

Steps:

1. Sort activities by their finish times:
   a. Sorted order: {A1(3), A3(4), A2(5), A4(7), A6(9), A5(10), A7(11), A9(12), A8(13), A10(14)}.
2. Select activities greedily (an activity is compatible if its start time ≥ the finish time of the last selected activity):
   a. Select A1 (finish time = 3).
   b. Select A3 (start time = 3 ≥ 3; finish time = 4).
   c. Skip A2 (start time = 2 < 4).
   d. Select A4 (start time = 4 ≥ 4; finish time = 7).
   e. Select A6 (start time = 8 ≥ 7; finish time = 9).
   f. Skip A5 (start time = 7 < 9).
   g. Select A7 (start time = 9 ≥ 9; finish time = 11).
   h. Select A9 (start time = 11 ≥ 11; finish time = 12).
   i. Skip A8 (start time = 9 < 12).
   j. Select A10 (start time = 12 ≥ 12; finish time = 14).

Selected Activities: {A1, A3, A4, A6, A7, A9, A10}.

f) Knapsack Problem: Finding the Optimal Solution

Given Data:

• Items: A, B, C, D, E, F
• Weights: {100, 50, 40, 20, 10, 10}
• Values: {40, 35, 20, 4, 10, 6}
• Knapsack Capacity: 100.

Approach: Greedy Method (Fractional Knapsack)

1. Calculate the value-to-weight ratio for each item (Ratio = Value / Weight):
   a. A: 0.4, B: 0.7, C: 0.5, D: 0.2, E: 1.0, F: 0.6.
2. Sort items by ratio (descending order):
   a. Order: E, B, F, C, A, D.
3. Add items to the knapsack:
   a. Add E (Weight = 10, Value = 10). Remaining Capacity = 90.
   b. Add B (Weight = 50, Value = 35). Remaining Capacity = 40.
   c. Add F (Weight = 10, Value = 6). Remaining Capacity = 30.
   d. Add a fraction of C (30/40 = 0.75, Value = 20 × 0.75 = 15).

Optimal Solution:

• Selected Items: E, B, F, and 3/4 of C.
• Total Value: 10 + 35 + 6 + 15 = 66.
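A short Python sketch of this greedy computation:

def fractional_knapsack(items, capacity):
    # items: list of (name, weight, value), taken in value/weight ratio order.
    items = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    total = 0.0
    for name, w, v in items:
        if capacity == 0:
            break
        take = min(w, capacity)            # whole item, or the fraction that fits
        total += v * (take / w)
        capacity -= take
    return total

items = [("A", 100, 40), ("B", 50, 35), ("C", 40, 20),
         ("D", 20, 4), ("E", 10, 10), ("F", 10, 6)]
print(fractional_knapsack(items, 100))     # 66.0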

g) Prim’s Algorithm to Generate a Minimum Cost Spanning Tree

Steps:

1. Start with any node.


2. Initialize a set TT containing the starting node.
3. Select the smallest edge connecting a node in TT to a node outside TT.
4. Add the selected edge and the new node to TT.
5. Repeat until all nodes are included.

Algorithm:

Prim(Graph, Start):
1. Initialize MST = ∅, Visited = {Start}
2. While MST contains fewer than |V| - 1 edges:
a. Find the minimum edge (u, v) where u ∈ Visited and v ∉ Visited.
b. Add edge (u, v) to MST.
c. Add v to Visited.
3. Return MST.

Complexity:

• Using a Min-Heap: O(E log V).
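A compact Python sketch of Prim's algorithm using heapq on an adjacency list (the 3-vertex graph is an assumed example; the result lists (weight, vertex) pairs in the order they join the tree):

import heapq

def prim(graph, start):
    # graph: {vertex: [(weight, neighbor), ...]}
    visited, mst = {start}, []
    heap = list(graph[start])
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)         # smallest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((w, v))
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst

graph = {'A': [(2, 'B'), (3, 'C')],
         'B': [(2, 'A'), (1, 'C')],
         'C': [(3, 'A'), (1, 'B')]}
print(prim(graph, 'A'))                    # [(2, 'B'), (1, 'C')]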

h) Kruskal’s Algorithm to Generate a Minimum Cost Spanning Tree

Steps:
1. Sort all edges by weight.
2. Initialize MST as an empty set.
3. Add edges to the MST in increasing order of weight, ensuring no cycles are formed (using Union-Find).
4. Stop when the MST contains |V| − 1 edges.

Algorithm:

Kruskal(Graph):
1. Initialize MST = ∅
2. Sort edges by weight.
3. For each edge (u, v) in sorted order:
a. If u and v are in different sets:
i. Add (u, v) to MST.
ii. Union(u, v).
4. Return MST.

Complexity:

• Sorting Edges: O(E log E).
• Union-Find operations: O(E α(V)), where α is the inverse Ackermann function.
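A minimal Python sketch of Kruskal's algorithm with a Union-Find using path compression (the edge list is an assumed example):

def kruskal(n, edges):
    # edges: list of (weight, u, v); vertices are 0..n-1.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding this edge forms no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(2, 0, 1), (3, 0, 2), (1, 1, 2)]
print(kruskal(3, edges))                   # [(1, 1, 2), (2, 0, 1)]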

i) Dijkstra’s Algorithm to Solve the Single Source Shortest Path Problem

Steps:

1. Initialize the distance d[v] for all vertices as ∞, except the source s where d[s] = 0.
2. Use a priority queue to store vertices keyed by their current distance from s.
3. Extract the vertex u with the minimum distance.
4. For each neighbor v of u, update d[v] if a shorter path is found through u (relaxation).
5. Repeat until all vertices are processed.

Algorithm:

Dijkstra(Graph, Source):
1. Initialize distances: d[v] = ∞ for all v ≠ Source, d[Source] = 0.
2. PriorityQueue = {Source}.
3. While PriorityQueue is not empty:
a. u = Extract-Min(PriorityQueue).
b. For each neighbor v of u:
i. If d[u] + weight(u, v) < d[v]:
- Update d[v].
- Add/Update v in PriorityQueue.
4. Return distances.

Complexity:

• Using a Min-Heap: O(E log V).
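The same idea as a runnable Python sketch with heapq (the small graph is an assumed example):

import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, weight), ...]}; returns shortest distances from source.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                       # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:            # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'A': [('B', 4), ('C', 1)],
         'B': [('D', 1)],
         'C': [('B', 2), ('D', 5)],
         'D': []}
print(dijkstra(graph, 'A'))                # {'A': 0, 'B': 3, 'C': 1, 'D': 4}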

j) Bellman-Ford Algorithm to Solve Single Source Shortest Path

Steps:

1. Initialize distances d[v] = ∞ for all v, and d[source] = 0.
2. Relax all edges |V| − 1 times:
   a. If d[u] + weight(u, v) < d[v], update d[v].
3. Check for negative weight cycles:
   a. If any edge still satisfies d[u] + weight(u, v) < d[v], a negative cycle exists.

Algorithm:

BellmanFord(Graph, Source):
1. Initialize distances: d[v] = ∞ for all v ≠ Source, d[Source] = 0.
2. For i = 1 to |V| - 1:
a. For each edge (u, v) with weight w:
i. If d[u] + w < d[v]:
- Update d[v].
3. For each edge (u, v) with weight w:
a. If d[u] + w < d[v]:
- Negative weight cycle detected.
4. Return distances.


Complexity:
• O(V · E).

UNIT 4

a) What is 0/1 Knapsack Problem? Solve the Problem Using Dynamic


Programming

Definition:

The 0/1 Knapsack problem is a combinatorial optimization problem where each item can
either be included (1) or excluded (0) in a knapsack. The goal is to maximize the total profit
while not exceeding the knapsack's weight capacity.

Given Data:

• Knapsack Capacity: 10.
• Profits: {1, 6, 18, 22, 28}.
• Weights: {1, 2, 5, 6, 7}.

Dynamic Programming Solution:

Define DP[i][w] as the maximum profit using the first i items with a knapsack capacity w.

Recurrence Relation:

DP[i][w] = DP[i−1][w]                                              if weight[i] > w
DP[i][w] = max(DP[i−1][w], profit[i] + DP[i−1][w − weight[i]])     otherwise

Steps:

1. Initialize DP[0][w] = 0 for all w.
2. Fill the table row by row using the recurrence relation.

Solution Table:

Item/Weight  | 0 | 1 | 2 | 3 | 4 | 5  | 6  | 7  | 8  | 9  | 10
0            | 0 | 0 | 0 | 0 | 0 | 0  | 0  | 0  | 0  | 0  | 0
1 (w=1, p=1) | 0 | 1 | 1 | 1 | 1 | 1  | 1  | 1  | 1  | 1  | 1
2 (w=2, p=6) | 0 | 1 | 6 | 7 | 7 | 7  | 7  | 7  | 7  | 7  | 7
3 (w=5, p=18)| 0 | 1 | 6 | 7 | 7 | 18 | 19 | 24 | 25 | 25 | 25
4 (w=6, p=22)| 0 | 1 | 6 | 7 | 7 | 18 | 22 | 24 | 28 | 29 | 29
5 (w=7, p=28)| 0 | 1 | 6 | 7 | 7 | 18 | 22 | 28 | 29 | 34 | 35

Result:

• Maximum Profit: DP[5][10] = 35.
• Selected Items: item 5 (weight 7), item 2 (weight 2), and item 1 (weight 1), with total weight 10 and total profit 28 + 6 + 1 = 35.
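A short Python sketch of this DP, which reproduces the table's final value of 35:

def knapsack_01(weights, profits, W):
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]        # exclude item i
            if weights[i - 1] <= w:        # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               profits[i - 1] + dp[i - 1][w - weights[i - 1]])
    return dp[n][W]

print(knapsack_01([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 10))  # 35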

b) Write Warshall’s and Floyd’s Algorithm for All-Pairs Shortest Path

Warshall’s Algorithm (For Transitive Closure)

Warshall’s algorithm finds the transitive closure of a graph.

Steps:

1. Let TC be the adjacency matrix of the graph.
2. For k = 1 to n:
   a. For i = 1 to n:
      i. For j = 1 to n:
         TC[i][j] = TC[i][j] ∨ (TC[i][k] ∧ TC[k][j]).

Complexity:

• O(V^3), where V is the number of vertices.

Floyd’s Algorithm (For All-Pairs Shortest Path)

Steps:

1. Initialize the distance matrix D:
   a. D[i][j] = weight of edge (i, j), or ∞ if no edge exists (with D[i][i] = 0).
2. For k = 1 to n:
   a. For i = 1 to n:
      i. For j = 1 to n:
         D[i][j] = min(D[i][j], D[i][k] + D[k][j]).

Complexity:

• O(V^3).

c) Algorithm for Longest Common Subsequence (LCS)

Steps:

1. Let X and Y be two strings with lengths m and n.
2. Define L[i][j] as the length of the LCS of the first i characters of X and the first j characters of Y.

Recurrence Relation:

L[i][j] = 0                               if i = 0 or j = 0
L[i][j] = L[i−1][j−1] + 1                 if X[i] = Y[j]
L[i][j] = max(L[i−1][j], L[i][j−1])       otherwise

Algorithm:

LCS(X, Y):
1. Initialize L[0][j] = L[i][0] = 0 for all i, j.
2. For i = 1 to m:
For j = 1 to n:
If X[i] == Y[j]:
L[i][j] = L[i-1][j-1] + 1
Else:
L[i][j] = max(L[i-1][j], L[i][j-1])
3. Return L[m][n].

Complexity:
• O(m · n).

d) Find at least 4 solutions for the 8-Queen problem solved by


Backtracking Approach

The 8-Queens problem is a classic combinatorial problem where the goal is to place 8
queens on a chessboard such that no two queens threaten each other. This means no two
queens can be in the same row, column, or diagonal.

Backtracking Approach:

1. Place a queen in the first row.


2. Move to the next row and try to place another queen in a position where it isn't
threatened by any previously placed queens.
3. If no valid position exists in the current row, backtrack to the previous row and move
the previously placed queen to its next possible position.
4. Repeat the process until all queens are placed.

Steps:

1. Use a recursive function to explore all possibilities.


2. Check constraints for rows, columns, and diagonals.
3. Print the solution when all 8 queens are placed successfully.

4 Solutions:

1. Solution 1:

(1, 1), (2, 5), (3, 8), (4, 6), (5, 3), (6, 7), (7, 2), (8, 4).

Chessboard:

Q . . . . . . .
. . . . Q . . .
. . . . . . . Q
. . . . . Q . .
. . Q . . . . .
. . . . . . Q .
. Q . . . . . .
. . . Q . . . .

2. Solution 2:

(1, 1), (2, 6), (3, 8), (4, 3), (5, 7), (6, 4), (7, 2), (8, 5).

Chessboard:

Q . . . . . . .
. . . . . Q . .
. . . . . . . Q
. . Q . . . . .
. . . . . . Q .
. . . Q . . . .
. Q . . . . . .
. . . . Q . . .

3. Solution 3:

(1, 2), (2, 4), (3, 6), (4, 8), (5, 3), (6, 1), (7, 7), (8, 5).

Chessboard:

. Q . . . . . .
. . . Q . . . .
. . . . . Q . .
. . . . . . . Q
. . Q . . . . .
Q . . . . . . .
. . . . . . Q .
. . . . Q . . .
4. Solution 4:

(1, 2), (2, 5), (3, 7), (4, 4), (5, 1), (6, 8), (7, 6), (8, 3).

Chessboard:

. Q . . . . . .
. . . . Q . . .
. . . . . . Q .
. . . Q . . . .
Q . . . . . . .
. . . . . . . Q
. . . . . Q . .
. . Q . . . . .

e) What is Optimal Coloring of a Graph? Give an Example

Definition:

Graph coloring is the assignment of colors to the vertices of a graph such that no two
adjacent vertices share the same color. Optimal coloring uses the minimum number of
colors to achieve this.

Applications:

• Scheduling problems.
• Register allocation in compilers.
• Map coloring.

Example:

Consider the following graph:

• Vertices: V = {A, B, C, D, E}.
• Edges: E = {(A, B), (A, C), (B, D), (C, D), (D, E)}.
Steps:

1. Start with A: assign color 1.
2. Assign color 2 to B since it is adjacent to A.
3. Assign color 2 to C (it is not adjacent to B).
4. Assign color 1 to D (adjacent to B and C).
5. Assign color 2 to E (adjacent to D).

Result:

• Colors assigned: A = 1, B = 2, C = 2, D = 1, E = 2.
• Minimum colors: 2.

f) Write a short note on Hamiltonian Cycle with an Example

A Hamiltonian Cycle is a closed path in a graph that visits each vertex exactly once and
returns to the starting vertex. It is a special case of the Hamiltonian path, where the
starting and ending points are the same.

Key Properties:

1. Every vertex is visited exactly once.


2. The cycle ends at the starting vertex.
3. A graph can have one or more Hamiltonian cycles or none at all.

Example:

Consider a graph with 5 vertices V = {A, B, C, D, E} and edges:

E = {(A, B), (B, C), (C, D), (D, E), (E, A), (A, C), (B, E)}.

A Hamiltonian Cycle in this graph is:

A → B → C → D → E → A.


Applications:

1. Solving puzzles like the Traveling Salesman Problem.


2. Optimizing circuits in networking and computer science.

g) Sum of Subset Problem Using Backtracking

Problem Statement:

Given weights W = {3, 4, 5, 6}, find a subset whose sum equals 13 using backtracking.

Backtracking Approach:

1. Start with an empty subset.


2. Explore each weight recursively by either including or excluding it.
3. If the current subset sum equals the target (13), a solution is found.
4. If the subset sum exceeds the target or no more weights are left, backtrack.

Steps:

1. Start with S = 0 and explore:
   a. Include W1 = 3, S = 3.
   b. Include W2 = 4, S = 3 + 4 = 7.
   c. Include W3 = 5, S = 7 + 5 = 12. Including W4 = 6 would give 18 > 13, so backtrack and exclude 5.
   d. Include W4 = 6, S = 7 + 6 = 13. Solution found.

Solution:

Subset: {3, 4, 6}.


h) Differentiate Between Backtracking and Branch & Bound

Feature        | Backtracking                                            | Branch & Bound
Approach       | Uses depth-first search to explore the solution space.  | Uses breadth-first or best-first search.
Solution Space | Explores all possible solutions systematically.         | Prunes subtrees that cannot lead to optimal solutions.
Optimality     | May not guarantee the optimal solution.                 | Guarantees the optimal solution.
Applications   | Problems like N-Queens, Sudoku, and Subset Sum.         | Problems like Knapsack, Traveling Salesman.
Efficiency     | Relatively less efficient for large solution spaces.    | More efficient due to pruning.

i) Travelling Salesman Problem (TSP)

The Travelling Salesman Problem (TSP) is a classic optimization problem in the field of
operations research and computer science. In this problem, a salesman is given a set of
cities to visit. The goal is to determine the shortest possible route that the salesman can
take to visit each city exactly once and return to the starting city.

Objective:

The objective is to minimize the total travel distance (or cost) while visiting each city
exactly once and returning to the origin.

Example:

If there are 4 cities, A, B, C, and D, the salesman needs to find the shortest route that
allows him to visit each city once and return to the starting point.
Approach to Solve TSP:

One common approach to solving the TSP is using the Brute Force Method. Here's how it
works:

1. Generate all possible permutations of the cities to be visited.


2. Calculate the total distance for each permutation.
3. Select the permutation with the shortest total distance.

However, this approach is computationally expensive because the number of permutations grows factorially as the number of cities increases (for n cities, there are n! permutations). As a result, this method becomes impractical for large numbers of cities.

For larger datasets, more efficient approaches are used, such as:

• Dynamic Programming (Held-Karp algorithm)


• Greedy algorithms
• Approximation algorithms (e.g., Christofides' algorithm)

These methods aim to reduce the computational complexity and find near-optimal
solutions in a more efficient manner.

j) Knapsack Problem

The Knapsack Problem is a fundamental problem in combinatorial optimization. It


involves selecting a subset of items with given weights and values to maximize the total
value while staying within a specified weight limit (the capacity of the knapsack).

There are two main types of knapsack problems:

1. 0/1 Knapsack Problem: Each item can either be taken entirely or not at all (no
fractional items allowed).
2. Fractional Knapsack Problem: Items can be divided into fractions, allowing partial
selection of items.

Given the problem with the following data:

• Weights (W): (100, 50, 40, 20, 10, 10)
• Values (V): (40, 35, 20, 4, 10, 6)
• Knapsack Capacity: 50

Approach for Fractional Knapsack Problem:

To solve the fractional knapsack problem, we use the value-to-weight ratio for each item.
The idea is to prioritize items with the highest value-to-weight ratios and add them to the
knapsack until it reaches its capacity.

1. Step 1: Calculate the value-to-weight ratio for each item (Value / Weight):
   • Item 1: 40/100 = 0.4
   • Item 2: 35/50 = 0.7
   • Item 3: 20/40 = 0.5
   • Item 4: 4/20 = 0.2
   • Item 5: 10/10 = 1.0
   • Item 6: 6/10 = 0.6
2. Step 2: Sort the items by value-to-weight ratio in descending order:
   a. Item 5: ratio 1.0
   b. Item 2: ratio 0.7
   c. Item 6: ratio 0.6
   d. Item 3: ratio 0.5
   e. Item 1: ratio 0.4
   f. Item 4: ratio 0.2
3. Step 3: Select items in this sorted order and fill the knapsack:
   a. First, take Item 5 (weight 10, value 10); the remaining capacity is 50 − 10 = 40.
   b. Next comes Item 2 (weight 50, value 35), but since only capacity 40 remains, we take the fraction 40/50 of Item 2, contributing 35 × (40/50) = 28 to the total value.

The total value of the knapsack is 10 + 28 = 38.

Thus, the optimal value that can be obtained with the given items and knapsack capacity
is 38.
UNIT 5

a) Define P & NP problems and discuss their significance.

Definition of P & NP:

• P (Polynomial Time): Problems that can be solved in polynomial time by a


deterministic Turing machine. These problems are considered "easy" or efficiently
solvable.
• NP (Nondeterministic Polynomial Time): Problems for which a solution, if
provided, can be verified in polynomial time by a deterministic Turing machine.

Significance:

1. Relation Between P and NP:
a. A central question in computer science is whether P = NP, i.e., whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time.
b. This is an open problem and one of the seven "Millennium Prize Problems."
2. Applications:
a. Understanding P and NP helps in identifying hard problems, like the Travelling Salesman Problem, Knapsack Problem, and others.
b. It is foundational to cryptography, optimization, and theoretical computer science.
3. Implications:
a. If P = NP, many complex problems, such as breaking cryptographic schemes, could be solved efficiently, revolutionizing computation.
b) What are the algorithms used to solve the string-matching problem?
Write any one algorithm.

Algorithms for String Matching:

1. Naïve String-Matching Algorithm


2. Knuth-Morris-Pratt (KMP) Algorithm
3. Rabin-Karp Algorithm
4. Boyer-Moore Algorithm
5. Z Algorithm

Example Algorithm: Knuth-Morris-Pratt (KMP) Algorithm

The KMP algorithm efficiently searches for a pattern in a given text by preprocessing the
pattern to create a longest prefix-suffix (LPS) table. This table avoids unnecessary re-
checking of characters, leading to linear time complexity.

c) Write and explain the Naïve String Matching Algorithm. Discuss its time
complexity.

Naïve String-Matching Algorithm:

The Naïve algorithm checks for the occurrence of a pattern P[0...m−1] in a text T[0...n−1] by sliding the pattern over the text one character at a time.

Steps:

1. Compare the pattern P with the substring of T starting at each position i.
2. If P[0...m−1] = T[i...i+m−1], report a match.
3. Repeat until all positions i are checked.

Pseudocode:

for i = 0 to n - m:
    j = 0
    while j < m and T[i + j] == P[j]:
        j = j + 1
    if j == m:                          // all m characters matched
        print "Pattern found at index", i

Time Complexity:

• Worst Case: O((n − m + 1) × m), when mismatches occur at every character.
• Best Case: O(n), when m = 1 or the pattern matches at the first position.

d) Write a short note on Rabin-Karp String Matching Algorithm. Also discuss its significance in solving string matching problems.

Rabin-Karp Algorithm:

The Rabin-Karp algorithm uses hashing to find a pattern P[0...m−1] in a text T[0...n−1]. It is particularly useful when multiple patterns need to be searched simultaneously.

Steps:

1. Compute the hash value of the pattern P and of the first substring of the text, T[0...m−1].
2. Slide the pattern over the text and recompute the hash value for T[i...i+m−1] using a rolling hash technique.
3. Compare the hash values:
a. If the hash values match, verify the substring character by character to
confirm the match.
4. Repeat until all positions are checked.

Significance:

• Efficient for searching multiple patterns due to the use of hashing.
• Handles spurious matches by verifying characters after hash matches.

Time Complexity:

• Average Case: O(n + m), where n is the length of the text and m is the length of the pattern.
• Worst Case: O(n × m), if all hash values match but substrings differ.

e) Rabin-Karp String Matching Algorithm

Rabin-Karp Algorithm Explanation:

The Rabin-Karp algorithm uses hashing to efficiently search for a pattern in a given text. It
computes the hash values of the pattern and the substrings of the text and uses these
hash values to find potential matches. If the hash values of a substring and the pattern
match, a character-by-character comparison is performed to confirm the match.

Steps:

1. Hashing the Pattern and Text Substrings:
a. Compute the hash value of the pattern P and the hash value of the first substring of the text T that is of the same length as the pattern.
2. Sliding the Pattern Over the Text:
a. Slide the pattern over the text, one character at a time. For each position,
compute the hash of the substring starting at that position and compare it
with the hash of the pattern.
3. Compare Hashes:
a. If the hash values match, perform a character-by-character comparison to
confirm that the pattern matches the substring (in case of hash collisions).
4. Rolling Hashing:
a. Instead of recomputing the hash for each new substring, the rolling hash
technique is used to update the hash efficiently as the window moves along
the text.

Pseudocode:

hashPattern = hash(P)                     // Compute hash of pattern P
hashText = hash(T[0...m-1])               // Hash of the first window of the text

for i = 0 to n - m:
    if hashPattern == hashText:
        if P == T[i...i+m-1]:             // Verify to rule out hash collisions
            print "Pattern found at index", i
    if i < n - m:
        hashText = roll(hashText, T[i], T[i+m])   // O(1) rolling-hash update

Time Complexity:

• Average Case: O(n + m), where n is the length of the text and m is the length of the pattern. This is because, on average, each hash update is done in constant time.
• Worst Case: O(n × m), if there are hash collisions, requiring character-by-character comparison for each match.

The algorithm is efficient when multiple patterns are being searched, as the hashing step
can be reused.

f) Knuth-Morris-Pratt (KMP) Algorithm for Pattern Matching

KMP Algorithm Explanation:

The Knuth-Morris-Pratt (KMP) algorithm is an efficient string matching algorithm that avoids unnecessary re-checking of characters by using information gathered from previous matches. It preprocesses the pattern to create an array called the Longest Prefix-Suffix (LPS) array, which helps in skipping characters during the matching phase.

Steps:

1. Preprocess the Pattern (LPS Array):


a. The LPS array is used to store the length of the longest proper prefix of the
pattern that is also a suffix for each prefix of the pattern.
b. This information is used to skip ahead in the pattern whenever a mismatch
occurs, avoiding rechecking of previously matched characters.
2. Pattern Matching Phase:
a. Compare the pattern with the text, using the LPS array to skip over sections
of the pattern that have already been matched.
b. If there is a mismatch, use the LPS array to skip some comparisons and
resume checking from a more advanced position.
Pseudocode:

1. Preprocessing to generate the LPS array:


lps[0] = 0                       // a single character has no proper prefix-suffix
j = 0                            // length of the current longest prefix-suffix
for i = 1 to m - 1:
    while (j > 0 and P[i] != P[j]):
        j = lps[j - 1]           // fall back to the next shorter prefix-suffix
    if P[i] == P[j]:
        j++
    lps[i] = j

2. Pattern Matching Phase:


i = 0 // Text index
j = 0 // Pattern index
while i < n:
if T[i] == P[j]:
i++
j++
if j == m:
print "Pattern found at index", i - m
j = lps[j - 1] // Use the LPS array to skip ahead
elif i < n and T[i] != P[j]:
if j > 0:
j = lps[j - 1]
else:
i++
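
As a concrete illustration, the LPS array for the pattern "ABABCABAB" is [0, 0, 1, 2, 0, 1, 2, 3, 4], and the two phases above translate directly into Python (a minimal sketch; function names are illustrative):

def build_lps(pattern):
    m = len(pattern)
    lps = [0] * m
    j = 0  # length of the previous longest prefix-suffix
    for i in range(1, m):
        while j > 0 and pattern[i] != pattern[j]:
            j = lps[j - 1]          # fall back to a shorter prefix-suffix
        if pattern[i] == pattern[j]:
            j += 1
        lps[i] = j
    return lps

def kmp_search(text, pattern):
    lps, matches, j = build_lps(pattern), [], 0
    for i, ch in enumerate(text):
        while j > 0 and ch != pattern[j]:
            j = lps[j - 1]          # skip ahead instead of rechecking
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):       # full match ends at index i
            matches.append(i - len(pattern) + 1)
            j = lps[j - 1]
    return matches

print(build_lps("ABABCABAB"))                         # [0, 0, 1, 2, 0, 1, 2, 3, 4]
print(kmp_search("ABABABCABABCABAB", "ABABCABAB"))    # [2, 7]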

Time Complexity:

• Preprocessing Time: O(m), where m is the length of the pattern.
• Matching Time: O(n), where n is the length of the text.

Thus, the overall time complexity is O(n + m), making KMP very efficient compared to the Naïve String-Matching Algorithm.
g) Compare NP-Hard and NP-Complete Problems

NP-Hard Problems:

• Definition: NP-Hard problems are those for which no polynomial-time solution is known, and solving any NP-Hard problem is at least as hard as solving any problem in NP. This means that an NP-Hard problem can be harder than NP-Complete problems, and it might not even belong to NP.
• Characteristics:
o NP-Hard problems are at least as difficult as the hardest problems in NP.
o They may not have a solution that can be verified in polynomial time.
o Some NP-Hard problems may not even be decision problems (i.e., they may
involve optimization).

NP-Complete Problems:

• Definition: NP-Complete problems are a subset of NP problems that are both in NP and NP-Hard. In other words, these are the hardest problems in NP, and a solution to any NP-Complete problem can be verified in polynomial time.
• Characteristics:
o They are solvable in non-deterministic polynomial time (NP).
o They are as hard as any other problem in NP (i.e., if an NP-Complete problem
can be solved in polynomial time, all problems in NP can also be solved in
polynomial time).
o Examples include the Travelling Salesman Problem, Knapsack Problem, and
Boolean satisfiability problem.

Key Differences:

Property                        | NP-Hard                                                      | NP-Complete
Belongs to NP?                  | No, may not be in NP                                         | Yes, always in NP
Verification in polynomial time | Not necessarily                                              | Yes
Solvability                     | May or may not have a solution in polynomial time            | A solution exists and can be verified in polynomial time
Examples                        | Halting problem, Traveling Salesman (optimization version)   | Boolean satisfiability, 3-SAT, Knapsack Problem

Conclusion:

• NP-Complete problems are the hardest problems in NP; if any one of them can be solved in polynomial time, then every problem in NP can be solved in polynomial time.
• NP-Hard problems are at least as hard as NP-Complete problems and may not be solvable or verifiable in polynomial time.

h) Approximation Algorithm

Approximation Algorithm Explanation:

An Approximation Algorithm is an algorithm used to find near-optimal solutions to optimization problems, where finding the exact optimal solution is either too time-consuming or computationally infeasible, especially for NP-hard problems. Instead of finding the exact optimal solution, approximation algorithms aim to provide a solution that is close to the optimal, often within a guaranteed factor of it.

Approximation algorithms are mainly used for NP-hard problems that do not have known
efficient algorithms to find the exact optimal solution. These algorithms offer a trade-off
between time complexity and the quality of the solution.

Common Characteristics of Approximation Algorithms:

1. Efficiency: Approximation algorithms typically run in polynomial time.


2. Guaranteed Approximation: These algorithms guarantee that the solution will not
be worse than a specified factor of the optimal solution.
3. Greedy Methods: Many approximation algorithms are based on greedy techniques
where the best possible choice is made at each step.

Example of Approximation Problems:

• Traveling Salesman Problem (TSP): Finding the shortest possible route to visit all cities. For the metric TSP (where distances satisfy the triangle inequality), approximation algorithms such as Christofides' algorithm guarantee a solution within 1.5 times the optimal.
• Vertex Cover Problem: An approximation algorithm for the vertex cover problem provides a solution that is at most 2 times the optimal, as sketched below.
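
As an illustration, here is a minimal Python sketch of the classic 2-approximation for vertex cover (repeatedly take both endpoints of an uncovered edge); the edge list is invented for the example:

def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        # If the edge is not yet covered, add both endpoints;
        # any optimal cover must contain at least one of them,
        # so the result is at most twice the optimal size.
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover_2approx(edges))  # {1, 2, 3, 4}; an optimal cover here is {1, 4}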

Approximation Ratio:

The approximation ratio (or approximation factor) of an approximation algorithm is the ratio between the cost of the solution found by the algorithm and the cost of the optimal solution. This ratio quantifies how close the algorithm's solution is to the optimal one.

Mathematically, the approximation ratio ρ can be defined as:

ρ = C(A) / C(O)

Where:

• C(A) is the cost of the solution produced by the approximation algorithm.
• C(O) is the cost of the optimal solution.

If ρ = 1, the algorithm is exact, i.e., it always produces the optimal solution. If ρ > 1, the algorithm's solution is at most ρ times worse than the optimal solution.

i) Introduction to Randomized Algorithms

Randomized Algorithms Explanation:

Randomized algorithms are algorithms that use random numbers or random choices
during execution to make decisions or improve performance. These algorithms are
designed to have a good expected performance, even though they may behave
unpredictably for specific inputs due to the inherent randomness.

There are two types of randomized algorithms:

1. Las Vegas Algorithms: Always produce a correct result, but their running time is
subject to randomness. The expected running time may be better than
deterministic algorithms.
2. Monte Carlo Algorithms: May produce incorrect results with a certain probability,
but their running time is fixed. The probability of error can be reduced by repeating
the algorithm multiple times.

Applications:

• Primality Testing: Randomized algorithms like the Miller-Rabin primality test provide a probabilistic method to test if a number is prime (see the sketch after this list).
• Sorting: Randomized QuickSort (explained in the next section) uses randomization to improve its performance in certain cases.
• Graph Algorithms: Randomized algorithms are used in finding minimum cuts or matchings in graphs.
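
To make the Monte Carlo idea concrete, here is a compact Python sketch in the spirit of the Miller-Rabin test; the number of rounds is an arbitrary choice, and each additional round reduces the error probability:

import random

def is_probably_prime(n, rounds=20):
    # Monte Carlo test: "composite" answers are always correct;
    # "probably prime" answers are wrong with probability at most 4**(-rounds).
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random witness candidate
        x = pow(a, d, n)                 # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # witness found: n is certainly composite
    return True                          # probably prime

print(is_probably_prime(561))     # False (a Carmichael number that fools simpler tests)
print(is_probably_prime(104729))  # True (104729 is prime)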

Advantages:

• Randomized algorithms can often provide solutions faster than deterministic algorithms.
• They may provide simpler implementations and reduce the complexity of problems.

Disadvantages:

• Since the algorithm involves random choices, there is a small probability of producing incorrect results (especially in Monte Carlo algorithms).
• Performance can vary depending on the specific input.

j) Randomized QuickSort

Randomized QuickSort Explanation:

Randomized QuickSort is a variant of the well-known QuickSort algorithm where the choice of the pivot element is randomized. The main idea behind QuickSort is to divide the array into smaller subarrays based on a pivot element, such that elements less than the pivot come before it, and elements greater than the pivot come after it. QuickSort is then recursively applied to the subarrays.
Steps for Randomized QuickSort:

1. Randomly Choose a Pivot:

Choose a pivot element randomly from the array instead of using the first element, the last
element, or the median as in traditional QuickSort.

2. Partition the Array:

Partition the array into two subarrays – one with elements smaller than the pivot and one
with elements greater than the pivot. The pivot is placed in its final position.

3. Recursively Sort Subarrays:

Recursively apply QuickSort to the subarrays formed on either side of the pivot.

Pseudocode for Randomized QuickSort:

function RandomizedQuickSort(A, low, high):
    if low < high:
        pivotIndex = RandomizedPartition(A, low, high)
        RandomizedQuickSort(A, low, pivotIndex - 1)
        RandomizedQuickSort(A, pivotIndex + 1, high)

function RandomizedPartition(A, low, high):
    pivotIndex = Random(low, high)      // Randomly choose a pivot
    swap A[pivotIndex] with A[high]     // Move pivot to the end
    return Partition(A, low, high)

function Partition(A, low, high):
    pivot = A[high]
    i = low - 1
    for j = low to high - 1:
        if A[j] < pivot:
            i = i + 1
            swap A[i] with A[j]
    swap A[i + 1] with A[high]
    return i + 1
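
A runnable Python version of the same scheme (a minimal sketch using the in-place Lomuto partition from the pseudocode above):

import random

def randomized_quicksort(a, low=0, high=None):
    if high is None:
        high = len(a) - 1
    if low < high:
        p = randomized_partition(a, low, high)
        randomized_quicksort(a, low, p - 1)
        randomized_quicksort(a, p + 1, high)

def randomized_partition(a, low, high):
    pivot_index = random.randint(low, high)       # random pivot choice
    a[pivot_index], a[high] = a[high], a[pivot_index]
    pivot, i = a[high], low - 1
    for j in range(low, high):                    # Lomuto partition
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1

data = [9, 3, 7, 1, 8, 2, 5]
randomized_quicksort(data)
print(data)  # [1, 2, 3, 5, 7, 8, 9]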
Time Complexity:

• Average Case Time Complexity: O(n log n), where n is the number of elements in the array.
o On average, the random pivot divides the array into two roughly equal halves, resulting in a logarithmic recursion depth.
• Worst Case Time Complexity: O(n²), which happens when the chosen pivot repeatedly turns out to be the smallest or largest element. (A deterministic first- or last-element pivot hits this case on already sorted input; random pivots make it unlikely for any input.)
• Best Case Time Complexity: O(n log n), when the pivot divides the array into roughly equal parts.

Why Randomization Helps:

Randomized QuickSort is less likely to encounter the worst-case scenario compared to the
standard deterministic QuickSort. The random selection of the pivot significantly reduces
the likelihood of encountering the worst-case time complexity, making it more efficient on
average.
