
DSA_Unit_05

The document outlines various algorithm design techniques including Divide and Conquer, General Method, and specific sorting algorithms like Merge Sort and Quick Sort. It also discusses advanced topics such as Strassen’s Matrix Multiplication, Greedy Methods, and Dynamic Programming, highlighting their definitions, steps, advantages, and disadvantages. Additionally, it covers shortest path algorithms like Dijkstra’s and Bellman-Ford, along with the Knapsack Problem and Huffman Coding, providing a comprehensive overview of algorithmic strategies.


Design Techniques

1. Divide and Conquer

Definition:
Divide and Conquer is a problem-solving approach in which a problem is broken down into smaller subproblems, the subproblems are solved recursively, and their solutions are combined to form the final solution.

Steps Involved:
1. Divide: Break the main problem into smaller subproblems.
2. Conquer: Solve the subproblems recursively.
3. Combine: Merge the solutions of the subproblems to obtain the final result.

Example:
• Merge Sort follows this approach by recursively dividing an array into halves, sorting each half, and then merging the halves back together.

Advantages:
• Reduces problem complexity.
• Efficient for recursively structured problems.
• Underlies efficient sorting and searching algorithms (e.g., Quick Sort, Merge Sort).

Disadvantages:
• Requires additional memory for recursive calls.
• The overhead of recursion may increase execution time for small inputs.
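
To make the three steps concrete, here is a minimal Python sketch of the pattern, using the simple task of finding the maximum element of a list (the function name and sample values are our illustration, not from the notes):

def find_max(arr, lo, hi):
    if lo == hi:                            # base case: a single element
        return arr[lo]
    mid = (lo + hi) // 2                    # Divide: split the range in half
    left_max = find_max(arr, lo, mid)       # Conquer: solve each half recursively
    right_max = find_max(arr, mid + 1, hi)
    return max(left_max, right_max)         # Combine: merge the two sub-results

print(find_max([5, 3, 8, 2, 1], 0, 4))      # prints 8
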
2. General Method

Definition:
The General Method refers to a systematic way of solving problems by applying predefined steps, ensuring consistency and correctness. It provides a structured approach to designing algorithms.

Key Components:
1. Understanding the problem: Define the input, output, and constraints.
2. Choosing an approach: Select a suitable algorithmic strategy (e.g., Divide and Conquer, Greedy, Dynamic Programming).
3. Implementation: Develop and test the algorithm.

Example:
• Designing an algorithm for finding the shortest path in a graph can use Dijkstra's algorithm, following a predefined sequence of steps.

Advantages:
• Provides a structured problem-solving approach.
• Helps in optimizing algorithms for better efficiency.

3. Strassen's Matrix Multiplication

Definition:
Strassen's Matrix Multiplication is an optimized algorithm for multiplying two square matrices. It reduces the number of required multiplications, making it faster than the conventional method.

Conventional Matrix Multiplication Complexity:
If two n × n matrices are multiplied using the standard method, the time complexity is O(n³).

Strassen's Approach:
Strassen introduced a recursive method that breaks each matrix into submatrices and performs only 7 multiplications instead of 8, reducing the complexity to O(n^2.81).

Steps Involved:
1. Divide the given matrix into 4 submatrices.
2. Compute 7 new matrices using special formulas.
3. Reconstruct the result matrix using these 7 matrices.

Example:
For two 2×2 matrices, C = A × B, Strassen computes 7 intermediate matrices using addition and subtraction operations before combining them to obtain the final matrix.

Advantages:
• Faster than the conventional method for large matrices.
• Useful in graphics and scientific computations.

Disadvantages:
• More complex than the standard method.
• Requires extra memory for storing submatrices.
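
For the 2×2 case, the seven products and their recombination can be written out directly. Below is a minimal Python sketch; the names m1–m7 and the exact formulas follow the common textbook presentation of Strassen's method, which the notes do not spell out:

def strassen_2x2(A, B):
    (a, b), (c, d) = A            # A = [[a, b], [c, d]]
    (e, f), (g, h) = B            # B = [[e, f], [g, h]]
    m1 = (a + d) * (e + h)        # the 7 Strassen products
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],          # recombine into C
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
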
Sorting Algorithms

1. Insertion Sort

Definition:
Insertion Sort is a simple sorting algorithm that builds a sorted array one element at a time by inserting each element into its correct position.

Algorithm Steps:
1. Assume the first element is sorted.
2. Pick the next element and compare it with the elements in the sorted portion.
3. Shift larger elements to the right.
4. Insert the element at the correct position.
5. Repeat for all elements.

Example:
Sorting [5, 3, 4, 1, 2] using Insertion Sort:
1. [5], 3 → insert 3 before 5 → [3, 5]
2. [3, 5], 4 → insert 4 → [3, 4, 5]
3. [3, 4, 5], 1 → insert 1 at the start → [1, 3, 4, 5]
4. [1, 3, 4, 5], 2 → insert 2 → [1, 2, 3, 4, 5]

Time Complexity:
• Best case: O(n)
• Worst case: O(n²)
• Space Complexity: O(1) (in-place sort)

Advantages:
• Simple and easy to implement.
• Efficient for small datasets or nearly sorted data.

Disadvantages:
• Inefficient for large datasets.
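
A direct Python rendering of the steps above (in-place, as described):

def insertion_sort(arr):
    for i in range(1, len(arr)):      # element 0 is trivially "sorted"
        key = arr[i]                  # pick the next element
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]       # shift larger elements right
            j -= 1
        arr[j + 1] = key              # insert at the correct position
    return arr

print(insertion_sort([5, 3, 4, 1, 2]))  # [1, 2, 3, 4, 5]
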

2. Bubble Sort

Definition:
Bubble Sort repeatedly swaps adjacent elements if they are in the wrong order, pushing the largest element to the right in each pass.

Algorithm Steps:
1. Compare adjacent elements and swap them if necessary.
2. Move to the next pair and repeat.
3. After each pass, the largest remaining element is placed at the end.
4. Repeat until no swaps are needed.

Example:
Sorting [5, 3, 8, 2, 1] using Bubble Sort:
1. [3, 5, 2, 1, 8]
2. [3, 2, 1, 5, 8]
3. [2, 1, 3, 5, 8]
4. [1, 2, 3, 5, 8] (sorted)

Time Complexity:
• Best case: O(n)
• Worst case: O(n²)
• Space Complexity: O(1)

Advantages:
• Easy to understand and implement.
• Works well on nearly sorted lists.

Disadvantages:
• Very slow for large datasets.
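
A Python sketch of the passes above, with the early-exit check implied by step 4 ("repeat until no swaps are needed"):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):           # the largest settles at the end
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                      # no swaps: already sorted
            break
    return arr

print(bubble_sort([5, 3, 8, 2, 1]))  # [1, 2, 3, 5, 8]
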
3. Quick Sort

Definition:
Quick Sort is a Divide and Conquer sorting algorithm that selects a pivot element, partitions the array around the pivot, and recursively sorts the subarrays.

Algorithm Steps:
1. Select a pivot element.
2. Partition the array into two parts:
   o Left part (elements smaller than the pivot).
   o Right part (elements greater than the pivot).
3. Recursively apply Quick Sort to both parts.
4. Combine the sorted parts.

Time Complexity:
• Best case: O(n log n)
• Worst case: O(n²)
• Space Complexity: O(log n) (recursion stack)

Advantages:
• Faster than Bubble and Insertion Sort.
• Works well for large datasets.

Disadvantages:
• Worst case O(n²) if the pivot is not chosen well.
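
A compact Python sketch of the steps above; for clarity it builds new lists rather than partitioning in place, and it assumes the last element as the pivot (the notes do not fix a pivot rule):

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]                               # 1. select a pivot
    left = [x for x in arr[:-1] if x <= pivot]    # 2. partition around it
    right = [x for x in arr[:-1] if x > pivot]
    return quick_sort(left) + [pivot] + quick_sort(right)  # 3-4. recurse and combine

print(quick_sort([5, 3, 8, 2, 1]))  # [1, 2, 3, 5, 8]
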
4. Merge Sort

Definition:
Merge Sort is a Divide and Conquer sorting algorithm that recursively divides an array into halves, sorts each half, and then merges them.

Time Complexity:
• Best case: O(n log n)
• Worst case: O(n log n)
• Space Complexity: O(n)

Advantages:
• Consistently efficient.
• Stable sort (preserves the order of equal elements).

Disadvantages:
• Requires additional memory for merging.
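
A minimal Python sketch of the divide-and-merge structure (using <= in the merge is what keeps the sort stable, as noted above):

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # divide and sort each half
    right = merge_sort(arr[mid:])
    merged = []                       # merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= preserves the order of equal elements
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 2, 1]))  # [1, 2, 3, 5, 8]
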
5. Heap Sort

Definition:
Heap Sort uses a Binary Heap data structure to sort elements by repeatedly extracting the maximum (or minimum) element.

Algorithm Steps:
1. Build a Max Heap from the input array.
2. Extract the maximum element (the root of the heap) and swap it with the last element.
3. Reduce the heap size and restore the heap property.
4. Repeat until the heap is empty.

Time Complexity:
• Best case: O(n log n)
• Worst case: O(n log n)
• Space Complexity: O(1)

Advantages:
• Efficient for large datasets.
• Works well in memory-constrained environments.

Disadvantages:
• Not as fast as Quick Sort in practice.
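
A Python sketch of the steps above, using an explicit heapify helper:

def heapify(arr, n, i):
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)             # restore the heap property below

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):      # 1. build a max heap
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]  # 2. move the max to the end
        heapify(arr, end, 0)                 # 3. shrink the heap and restore
    return arr

print(heap_sort([5, 3, 8, 2, 1]))  # [1, 2, 3, 5, 8]
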
Greedy Method and Shortest Path Algorithms

Greedy Method
The greedy method is a problem-solving strategy that makes a sequence of choices, each of which looks best at the moment. It does not consider the overall problem but focuses on the current step. The greedy method is used when a problem has an optimal substructure and follows the greedy choice property.

General Method
1. Optimal Substructure: A problem has an optimal substructure if an optimal solution can be constructed efficiently from optimal solutions to its subproblems.
2. Greedy Choice Property: If a globally optimal solution can be arrived at by making a series of locally optimal choices, then the problem can be solved using the greedy method.
3. Steps in the Greedy Method:
   o Choose the best option available at a given step.
   o Reduce the problem to a smaller subproblem.
   o Repeat the process until the problem is solved.

Greedy algorithms do not always provide the best solution for every problem, but they work efficiently for problems that exhibit the greedy properties.

Knapsack Problem

The Knapsack Problem is a classic problem in combinatorial optimization that can be solved using the greedy approach in some cases.

Fractional Knapsack Problem
• Given a set of items, each with a weight and a value, determine the fraction of each item to include in a knapsack so as to maximize the total value without exceeding the weight limit.
• The greedy approach sorts the items by value-to-weight ratio in descending order and takes as much of the highest-ratio item as possible before moving to the next.
• Time Complexity: O(n log n) due to sorting.
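
A minimal Python sketch of this greedy strategy; the (value, weight) pairs in the example are our own illustration:

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # the whole item, or the fraction left
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
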
0/1 Knapsack Problem
• Unlike the fractional knapsack problem, items cannot be divided: either the whole item is taken or it is not.
• The greedy method does not work optimally for the 0/1 knapsack, so dynamic programming is used instead (see the Dynamic Programming section below).

Huffman Coding Algorithm
Huffman coding is a lossless data compression algorithm that assigns variable-length binary codes to characters, minimizing the total length of the encoded message.

Steps in the Huffman Algorithm:
1. Count the frequency of each character.
2. Create a min-heap and insert all characters as individual nodes.
3. Extract the two nodes with the lowest frequencies.
4. Create a new node with these two as children, with a frequency equal to their sum.
5. Repeat steps 3-4 until only one node remains (the root of the Huffman tree).
6. Assign '0' to the left branch and '1' to the right branch at each level to generate the codes.

Time Complexity: O(n log n) due to heap operations.
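
A Python sketch of steps 1-6 using the standard library's heapq as the min-heap; the heap tuples carry a tie-breaking counter so the heap never has to compare tree nodes directly (that detail is ours, not from the notes):

import heapq

def huffman_codes(text):
    freq = {}
    for ch in text:                                   # 1. count frequencies
        freq[ch] = freq.get(ch, 0) + 1
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                               # 2. build the min-heap
    counter = len(heap)
    while len(heap) > 1:                              # 5. repeat until one node is left
        f1, _, left = heapq.heappop(heap)             # 3. two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))  # 4. merge them
        counter += 1
    codes = {}
    def assign(node, code):                           # 6. '0' left, '1' right
        if isinstance(node, str):
            codes[node] = code or "0"                 # single-symbol edge case
        else:
            assign(node[0], code + "0")
            assign(node[1], code + "1")
    assign(heap[0][2], "")
    return codes

print(huffman_codes("aaabbc"))  # e.g. {'a': '0', 'c': '10', 'b': '11'}
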

Single Source Shortest Path Algorithms
Single-source shortest path algorithms find the shortest paths from a single source vertex to all other vertices in a weighted graph.

Dijkstra's Algorithm
Dijkstra's algorithm finds the shortest path from a source vertex to all other vertices in a graph with non-negative edge weights.

Steps in Dijkstra's Algorithm:
1. Initialize:
   o Assign distance 0 to the source vertex and infinity to all others.
   o Create a priority queue and insert the source vertex.
2. Process the closest vertex:
   o Extract the vertex with the minimum distance from the queue.
   o Update the distances to its adjacent vertices if a shorter path is found.
   o Insert the updated vertices into the queue.
3. Repeat until all vertices are processed.

Time Complexity:
• O(V²) with an adjacency matrix.
• O((V + E) log V) with a priority queue (using a binary heap).
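
A Python sketch of the priority-queue version, giving the O((V + E) log V) behaviour. The graph is an adjacency dict {vertex: [(neighbor, weight), ...]}; the sample graph is our own illustration:

import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}   # 1. initialize distances
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)           # 2. closest unsettled vertex
        if d > dist[u]:
            continue                          # stale queue entry, skip it
        for v, w in graph[u]:
            if d + w < dist[v]:               # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist                               # 3. all vertices processed

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
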
Bellman-Ford Algorithm
The Bellman-Ford Algorithm finds the shortest path from a source vertex to all other vertices, even if the graph has negative weight edges (but no negative weight cycles).

Steps in the Bellman-Ford Algorithm:
1. Initialize:
   o Set the distance to the source as 0 and all other vertices as infinity.
2. Relax all edges V − 1 times:
   o For each edge (u, v) with weight w, update the distance if distance[u] + w < distance[v].
3. Detect negative weight cycles:
   o If a shorter path is found after V − 1 iterations, a negative cycle exists.

Time Complexity: O(VE), making it slower than Dijkstra's algorithm but useful for graphs with negative weights.
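
A Python sketch over an edge list, with the extra pass for negative-cycle detection; the sample edges are illustrative:

def bellman_ford(vertices, edges, source):
    dist = {v: float("inf") for v in vertices}   # 1. initialize
    dist[source] = 0
    for _ in range(len(vertices) - 1):           # 2. relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                        # 3. one more pass: negative cycle?
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative weight cycle")
    return dist

edges = [("A", "B", 4), ("A", "C", 5), ("B", "C", -2)]
print(bellman_ford(["A", "B", "C"], edges, "A"))  # {'A': 0, 'B': 4, 'C': 2}
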
Dynamic Programming (DP)
Dynamic Programming is an optimization technique used in computer science and mathematics to solve complex problems by breaking them down into simpler subproblems. It is mainly used for problems with the overlapping subproblems and optimal substructure properties.

General Method of Dynamic Programming
1. Characterize the Structure of an Optimal Solution:
   o Identify the subproblems and how they relate to the original problem.
   o Recognize the optimal substructure property (i.e., the optimal solution to a problem contains optimal solutions to its subproblems).
2. Define the Recursive Solution:
   o Express the solution recursively in terms of solutions to smaller subproblems.
3. Compute the Solution in a Bottom-Up Manner:
   o Avoid redundant calculations by storing results in a table (memoization or tabulation).
   o Use iterative approaches to fill the table.
4. Construct the Optimal Solution (if required):
   o Trace back through the stored values to build the final solution.

Types of Dynamic Programming Approaches:
• Top-Down Approach (Memoization): Solve the problem recursively and store intermediate results to avoid redundant computations.
• Bottom-Up Approach (Tabulation): Solve the problem iteratively by computing solutions to smaller subproblems first and building up to the final solution.
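
The two approaches are easiest to contrast on a small standard example. A Fibonacci sketch in Python (our illustration; the notes do not include one):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):                      # top-down: recurse, cache each result
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):                       # bottom-up: fill a table iteratively
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_tab(30))      # 832040 832040
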
Knapsack Problem

The Knapsack Problem is a combinatorial optimization problem where we try to maximize the value of the items placed in a knapsack of limited capacity.

Types of Knapsack Problems:
1. 0/1 Knapsack Problem: Items cannot be divided; we either take an item or leave it.
2. Fractional Knapsack Problem: Items can be divided, allowing us to take fractional amounts of an item.

0/1 Knapsack Problem (Using Dynamic Programming)

Problem Statement: Given n items, each with a weight w[i] and a value v[i], and a knapsack of capacity W, determine the maximum value that can be obtained by selecting items without exceeding the weight limit.

Recursive Formula: Let dp[i][w] be the maximum value that can be achieved with the first i items and total weight w. Then:

dp[i][w] = max(dp[i-1][w], v[i] + dp[i-1][w - w[i]])

where:
• dp[i-1][w] is the value if we do not include item i;
• v[i] + dp[i-1][w - w[i]] is the value if we include item i (provided it fits in the knapsack).
an intermediate vertex results
Algorithm (Bottom-Up Dynamic Programming):
1. Create a 2D table dp[n+1][W+1] initialized to 0.
2. Fill the table using the above formula.
3. The value in dp[n][W] gives the maximum value achievable.

Time Complexity: O(nW), where n is the number of items and W is the knapsack capacity.
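
A direct Python rendering of the three steps and the recurrence above:

def knapsack_01(values, weights, W):
    n = len(values)
    dp = [[0] * (W + 1) for _ in range(n + 1)]      # 1. (n+1) x (W+1) table of zeros
    for i in range(1, n + 1):                       # item i lives at index i-1
        for w in range(W + 1):                      # 2. fill using the recurrence
            dp[i][w] = dp[i - 1][w]                 # case: skip item i
            if weights[i - 1] <= w:                 # case: take item i if it fits
                dp[i][w] = max(dp[i][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
    return dp[n][W]                                 # 3. the answer sits in dp[n][W]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
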

All-Pair Shortest Paths: Floyd-Warshall Algorithm

The Floyd-Warshall Algorithm is a dynamic programming algorithm used to find the shortest paths between all pairs of nodes in a weighted graph.

Algorithm Idea
Instead of finding the shortest path from a single source, this algorithm finds the shortest path between every pair of nodes in the graph.

Algorithm Steps:
1. Initialization: Create a distance matrix dist[][], where dist[i][j] holds the shortest known distance from vertex i to vertex j. If there is an edge between i and j, set dist[i][j] to its weight; otherwise, set it to infinity.
2. Iteration: Use each vertex k as an intermediate point and update dist[i][j] if going through k results in a shorter path:
   dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
3. Final Result: After n iterations (where n is the number of vertices), the dist[][] matrix contains the shortest path distances between all pairs of vertices.

Time Complexity: O(n³)

Advantages: Works for both directed and undirected graphs and handles negative-weight edges (but not negative-weight cycles).
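
A Python sketch of the triple loop over an adjacency matrix, with infinity standing in for missing edges; the sample matrix is illustrative:

INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):                # try each vertex as an intermediate point
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]   # updates dist in place
    return dist

matrix = [[0,   3,   INF],
          [INF, 0,   1],
          [2,   INF, 0]]
print(floyd_warshall(matrix))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
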


Introduction to Backtracking

Backtracking is a general algorithmic technique for finding solutions to problems by exploring all possible options and discarding those that fail the constraints.

Key Concept:
Backtracking builds a solution step by step and abandons a path (backtracks) as soon as it determines that the path cannot lead to a valid solution.

Steps of Backtracking:
1. Start with an empty solution.
2. Add elements step by step.
3. If at any point the solution violates the constraints, backtrack and try a different option.
4. Repeat until a valid solution is found or all possibilities are exhausted.

Example: N-Queens Problem

The N-Queens problem requires placing N queens on an N × N chessboard such that no two queens attack each other.

Algorithm:
1. Place a queen in the first row.
2. Move to the next row and place a queen in a non-attacking position.
3. If no valid position exists, backtrack and try a different position in the previous row.
4. Repeat until all queens are placed.

Time Complexity:
• In the worst case, O(N!), as it explores all permutations.
• Pruning (constraint checking) reduces the number of possible solutions.
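
A Python sketch of the algorithm above, returning one valid board as a list of column indices, one per row (the representation is our choice):

def solve_n_queens(n):
    placement = []                            # placement[r] = queen's column in row r

    def safe(row, col):
        for r, c in enumerate(placement):
            if c == col or abs(row - r) == abs(col - c):
                return False                  # same column or same diagonal
        return True

    def place(row):
        if row == n:                          # all queens placed
            return True
        for col in range(n):
            if safe(row, col):                # prune attacked positions
                placement.append(col)         # try this position
                if place(row + 1):
                    return True
                placement.pop()               # backtrack
        return False

    return placement if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]
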

Applications of Backtracking:
• Sudoku Solver: Fills empty cells while ensuring valid placements.
• Graph Coloring: Assigns colors to vertices such that no two adjacent vertices share the same color.
• Subset Sum Problem: Finds subsets that sum to a target value.
