
Q15 Explain Strassen's matrix multiplication algorithm with formulas and example


Strassen's Matrix Multiplication algorithm is an efficient divide-
and-conquer algorithm for multiplying two matrices. It reduces
the number of multiplications required compared to the standard
matrix multiplication method, making it faster for large matrices.
Standard Matrix Multiplication
Given two n × n matrices A and B, the product C = AB is computed using:
C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}
This requires n^3 scalar multiplications.
Strassen's Algorithm
Strassen's algorithm reduces the number of scalar multiplications to O(n^{\log_2 7}) \approx O(n^{2.81}) by leveraging a divide-and-conquer approach. The algorithm works as follows:
Step 1: Divide the Matrices
For n × n matrices A and B, divide each into four submatrices:
A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad
B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}
The product C = AB becomes:
C = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}
where:
C_{11} = A_{11}B_{11} + A_{12}B_{21}, \quad C_{12} = A_{11}B_{12} + A_{12}B_{22}
C_{21} = A_{21}B_{11} + A_{22}B_{21}, \quad C_{22} = A_{21}B_{12} + A_{22}B_{22}
Step 2: Compute Strassen's 7 Products
Instead of computing all eight subproducts, Strassen computes seven intermediate products:
M_1 = (A_{11} + A_{22})(B_{11} + B_{22}), \quad M_2 = (A_{21} + A_{22})B_{11}
M_3 = A_{11}(B_{12} - B_{22}), \quad M_4 = A_{22}(B_{21} - B_{11})
M_5 = (A_{11} + A_{12})B_{22}, \quad M_6 = (A_{21} - A_{11})(B_{11} + B_{12})
M_7 = (A_{12} - A_{22})(B_{21} + B_{22})
Step 3: Combine to Get the Result
Using these intermediate products, the resulting submatrices are:
C_{11} = M_1 + M_4 - M_5 + M_7, \quad C_{12} = M_3 + M_5
C_{21} = M_2 + M_4, \quad C_{22} = M_1 - M_2 + M_3 + M_6
Step 4: Recursively Apply
For larger matrices, recursively apply Strassen's algorithm to compute each of the seven products, until the base case (matrices small enough to multiply directly) is reached.
Example
Let A and B be 2 × 2 matrices:
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad
B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}
1. Divide A and B into submatrices:
A_{11} = 1, A_{12} = 2, A_{21} = 3, A_{22} = 4, \quad
B_{11} = 5, B_{12} = 6, B_{21} = 7, B_{22} = 8
2. Compute the 7 products:
M_1 = (1+4)(5+8) = 5 \cdot 13 = 65, \quad M_2 = (3+4) \cdot 5 = 7 \cdot 5 = 35
M_3 = 1 \cdot (6-8) = -2, \quad M_4 = 4 \cdot (7-5) = 4 \cdot 2 = 8
M_5 = (1+2) \cdot 8 = 3 \cdot 8 = 24, \quad M_6 = (3-1)(5+6) = 2 \cdot 11 = 22
M_7 = (2-4)(7+8) = -2 \cdot 15 = -30
3. Compute the submatrices of C:
C_{11} = M_1 + M_4 - M_5 + M_7 = 65 + 8 - 24 + (-30) = 19
C_{12} = M_3 + M_5 = -2 + 24 = 22
C_{21} = M_2 + M_4 = 35 + 8 = 43
C_{22} = M_1 - M_2 + M_3 + M_6 = 65 - 35 + (-2) + 22 = 50
4. Combine the results:
C = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}
This matches the result of traditional matrix multiplication.
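The worked example above can be sketched in Python. This is a minimal sketch of the 2 × 2 base case only (the function name strassen_2x2 is our own, not from the source); a full implementation would apply the same seven products recursively to submatrices.

```python
def strassen_2x2(A, B):
    # A and B are 2x2 matrices given as [[a11, a12], [a21, a22]].
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Strassen's seven products
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Combine the products into the result submatrices
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Running it on the matrices from the example reproduces C = [[19, 22], [43, 50]].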
Advantages
Faster than standard multiplication for large matrices.
Useful in applications requiring high-performance computation.
Disadvantages
Overhead due to recursion and additional addition/subtraction
operations.
Not as efficient for small matrices.
Q16 Explain divide and conquer technique with example
The divide and conquer technique is a fundamental algorithm
design paradigm used to solve complex problems efficiently. It
involves breaking a problem into smaller subproblems, solving
these subproblems independently, and then combining their
solutions to solve the original problem.
The three main steps in divide and conquer are:
1. Divide: Break the problem into smaller subproblems of the
same or similar type.
2. Conquer: Solve the subproblems recursively. If a subproblem is
small enough, solve it directly.
3. Combine: Merge the solutions of the subproblems to form the
solution to the original problem.
Example: Merge Sort
Merge Sort is a classic example of the divide and conquer
technique. It sorts an array by dividing it into smaller subarrays,
sorting those, and then merging them.
Steps:
1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves into a single sorted
array.
Algorithm:
def merge_sort(arr):
    if len(arr) <= 1:  # Base case: a single element is already sorted
        return arr
    mid = len(arr) // 2  # Find the middle point to divide the array
    left_half = merge_sort(arr[:mid])  # Recursively sort the left half
    right_half = merge_sort(arr[mid:])  # Recursively sort the right half
    return merge(left_half, right_half)  # Merge the sorted halves

def merge(left, right):
    sorted_array = []
    i = j = 0
    while i < len(left) and j < len(right):  # Merge elements in sorted order
        if left[i] < right[j]:
            sorted_array.append(left[i])
            i += 1
        else:
            sorted_array.append(right[j])
            j += 1
    # Add remaining elements
    sorted_array.extend(left[i:])
    sorted_array.extend(right[j:])
    return sorted_array
Example Input:
Input: [38, 27, 43, 3, 9, 82, 10]
Execution:
1. Divide into: [38, 27, 43] and [3, 9, 82, 10]
2. Recursively sort:
[38, 27, 43] → [27, 38, 43]
[3, 9, 82, 10] → [3, 9, 10, 82]
3. Merge: [27, 38, 43] and [3, 9, 10, 82] → [3, 9, 10, 27, 38, 43,
82]
Complexity:
Time Complexity: O(n log n), where n is the size of the array.
Space Complexity: O(n) for the temporary arrays used during merging.
This approach demonstrates how divide and conquer simplifies problem-solving by reducing the problem's size recursively.

Q17 Explain pseudo code convention?


Pseudo code conventions are a set of informal guidelines used to
represent algorithms in a way that is easy to understand while
abstracting away the complexities of actual programming
languages. In Design and Analysis of Algorithms, pseudo code
serves as a tool to focus on the algorithm's logic and structure
without worrying about syntax or implementation-specific details.
Here’s an overview of pseudo code conventions:
1. General Structure
Use indented blocks to represent nested structures, such as loops
or conditional statements.
Use capitalized keywords like IF, WHILE, FOR, ELSE, RETURN, etc.,
to improve readability.
Example:
IF condition THEN
    // Block of code
ELSE
    // Block of code
END IF
2. Variables and Initialization
Variables are often initialized with simple expressions like x ← 0
(using the left arrow ← or = for assignment).
Avoid specifying data types explicitly unless necessary for clarity.
Example:
count ← 0
3. Control Structures
Conditionals: Use IF, ELSE, and ELSE IF for branching logic.
Loops: Represent iterations using FOR or WHILE loops.
Explicitly indicate the termination condition.
Example:
FOR i ← 1 TO n DO
    // Loop body
END FOR
4. Input and Output
Use INPUT and OUTPUT or similar terms to indicate interactions.
For algorithms, you can omit these details or keep them minimal.
Example:
INPUT: Array A of size n
OUTPUT: Sorted Array A
5. Functions and Procedures
Define functions or procedures with descriptive names.
Indicate parameters clearly.
Use RETURN for output.
Example:
PROCEDURE SumArray(A)
    sum ← 0
    FOR i ← 1 TO length(A) DO
        sum ← sum + A[i]
    END FOR
    RETURN sum
END PROCEDURE
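To illustrate how such pseudocode maps onto a real language, here is an equivalent Python function (the name sum_array is our own translation, not part of the convention):

```python
def sum_array(a):                 # PROCEDURE SumArray(A)
    total = 0                     # sum ← 0
    for x in a:                   # FOR i ← 1 TO length(A) DO
        total += x                #     sum ← sum + A[i]
    return total                  # RETURN sum

print(sum_array([1, 2, 3, 4]))  # → 10
```

Note how the pseudocode's ← assignment, 1-based FOR loop, and END markers become plain Python assignment, iteration, and indentation.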
6. Comments
Use comments (// or /* */) to clarify logic or assumptions in the
algorithm.
Example:
// This loop computes the factorial
result ← 1
FOR i ← 1 TO n DO
    result ← result * i
END FOR
7. Abstract Data Operations
Use simple and intuitive names for operations on data structures
(e.g., INSERT, DELETE, PUSH, POP, ENQUEUE, DEQUEUE).
Example:
PUSH(stack, value)
8. Mathematical Notation
Use mathematical symbols or concise operations where
applicable (e.g., ∞ for infinity, mod for modulus, etc.).
Example:
min ← ∞
FOR i ← 1 TO n DO
    IF A[i] < min THEN
        min ← A[i]
    END IF
END FOR
Benefits of Pseudo Code in Design and Analysis:
Language Independence: Focuses on logic rather than syntax.
Clarity: Simplifies complex ideas for easy understanding.
Communication: A common medium for discussing algorithms
across developers and researchers.
Modularity: Highlights algorithmic components and logical flow.
By adhering to these conventions, pseudo code becomes a
powerful tool for designing and analyzing algorithms.

Q18 Write an algorithm to implement quick sort using divide and conquer approach
Here is an implementation of the Quick Sort algorithm using the
divide-and-conquer approach in Python.
Algorithm Explanation
Quick Sort works by selecting a pivot element, partitioning the
array into two halves based on the pivot, and recursively sorting
the two partitions.
Algorithm Steps
1. Divide:
Choose a pivot element from the array.
Partition the array such that all elements smaller than the pivot
are on its left, and all elements greater are on its right.
2. Conquer:
Recursively apply Quick Sort on the left and right partitions.
3. Combine:
Combine the partitions (implicitly done since arrays are modified
in place).
Python Code
def quick_sort(arr):
    def partition(low, high):
        pivot = arr[high]  # Choose the last element as pivot
        i = low - 1  # Pointer for the smaller element
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1  # Increment pointer
                arr[i], arr[j] = arr[j], arr[i]  # Swap smaller element into place
        # Swap the pivot element to its correct position
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1  # Return the partition index

    def quick_sort_recursive(low, high):
        if low < high:
            # Find the partition index
            pi = partition(low, high)
            # Recursively sort the partitions
            quick_sort_recursive(low, pi - 1)
            quick_sort_recursive(pi + 1, high)

    # Start the recursive quick sort
    quick_sort_recursive(0, len(arr) - 1)
    return arr

# Example Usage
array = [10, 7, 8, 9, 1, 5]
print("Unsorted array:", array)
sorted_array = quick_sort(array)
print("Sorted array:", sorted_array)
Complexity Analysis
Time Complexity:
Best Case: O(n log n) (pivot divides the array into two equal halves)
Worst Case: O(n^2) (pivot is the smallest or largest element repeatedly)
Average Case: O(n log n)
Space Complexity:
O(log n) on average due to recursive calls (O(n) stack depth in the worst case); the sort itself is in place.

Q19 Explain asymptotic notation?


Asymptotic notation is a mathematical framework used to describe the efficiency of algorithms, particularly in terms of their time or space complexity as the input size grows large. It provides a way to express the growth rate of an algorithm's resource usage, allowing for the comparison of algorithms irrespective of hardware or implementation details.
Key Asymptotic Notations
1. Big-O Notation (O):
Represents the upper bound of an algorithm's growth rate.
Describes the worst-case scenario, giving an upper limit on the time or space required as n grows large.
Example: f(n) = O(g(n)) means f(n) \le c \cdot g(n) for some constant c > 0 when n is sufficiently large.
2. Omega Notation (Ω):
Represents the lower bound of an algorithm's growth rate.
Describes the best-case scenario or guarantees the minimum resource usage.
Example: f(n) = \Omega(g(n)) means f(n) \ge c \cdot g(n) for some constant c > 0 when n is large enough.
3. Theta Notation (Θ):
Represents the tight bound of an algorithm's growth rate.
Describes the scenario where the algorithm's growth is bounded both above and below by the same function.
Example: f(n) = \Theta(g(n)) means f(n) grows at a rate proportional to g(n), no faster and no slower.
4. Little-o Notation (o):
Represents a strict upper bound where the algorithm's growth rate is strictly less than the given function.
Example: f(n) = o(g(n)) means f(n) grows strictly slower than g(n).
5. Little-omega Notation (ω):
Represents a strict lower bound where the algorithm's growth rate is strictly greater than the given function.
Example: f(n) = \omega(g(n)) means f(n) grows strictly faster than g(n).
Importance in Algorithm Analysis
Simplification: Asymptotic notation focuses on the dominant term
and ignores constants and lower-order terms, making analysis
clearer.
Comparison: It helps compare algorithms' efficiency without
worrying about specific hardware or implementation.
Predictability: Provides a theoretical understanding of how
algorithms scale as input size increases.
Example
Consider, for example, two algorithms with time complexities T_1(n) = n^2 + 5n and T_2(n) = 50n + 1000.
Using asymptotic notation:
T_1(n) is O(n^2), as n^2 dominates for large n.
T_2(n) is O(n), as 50n dominates for large n.
This allows us to conclude that T_2 is more efficient than T_1 for large input sizes.
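A small sketch can make the comparison concrete. The cost functions t1 and t2 below are hypothetical operation counts chosen for illustration, not measurements of real algorithms:

```python
def t1(n):
    # Hypothetical cost function of an O(n^2) algorithm
    return n * n + 5 * n

def t2(n):
    # Hypothetical cost function of an O(n) algorithm
    return 50 * n + 1000

# For small n the O(n^2) algorithm can be cheaper (constants matter there),
# but the O(n) algorithm wins as the input grows.
for n in (10, 100, 10_000):
    print(n, t1(n), t2(n))
```

This also shows why asymptotic notation ignores constants and lower-order terms: they decide the winner only for small inputs.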

Q20 Explain Flow shop scheduling algorithm.


Flow Shop Scheduling is a classical problem in the field of design
and analysis of algorithms, where the goal is to schedule a set of
jobs on a sequence of machines to minimize specific objectives
like makespan (the total time required to complete all jobs), total
flow time, or lateness.
Problem Definition
Jobs: A set of n jobs, J_1, J_2, ..., J_n, each consisting of m tasks.
Machines: A set of m machines, M_1, M_2, ..., M_m, where each job must be processed in the same order on all machines.
Processing Time: Each job J_i requires a known amount of time p_{ij} on machine M_j.
The key characteristic of flow shop scheduling is that the order in which jobs are processed on the machines is the same for all jobs.
Objectives
The primary objectives for a flow shop scheduling problem are:
1. Minimize Makespan (C_max): The time required to complete all jobs.
2. Minimize Total Flow Time: The sum of completion times of all
jobs.
3. Minimize Lateness or Tardiness: Penalty for finishing jobs later
than their due dates.
Algorithms for Flow Shop Scheduling
1. Johnson's Algorithm
Use Case: A special case with 2 machines (and certain 3-machine cases).
Steps:
1. Divide the jobs into two groups:
Group 1: jobs whose processing time on the first machine is smaller than on the second.
Group 2: the remaining jobs.
2. Sequence Group 1 in ascending order of processing time on machine 1, and Group 2 in descending order of processing time on machine 2.
3. Schedule Group 1 first, followed by Group 2.
Time Complexity: O(n log n)
2. Heuristic Methods
For Larger Flow Shops:
NEH Algorithm (Nawaz-Enscore-Ham): A widely used heuristic to
minimize makespan.
1. Sort jobs in decreasing order of total processing time.
2. Iteratively build a schedule by inserting each job into the
current schedule at the position that minimizes makespan.
Simulated Annealing, Tabu Search, or Genetic Algorithms: Used
when the problem size grows and exact methods are
computationally expensive.
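The NEH heuristic described above can be sketched in Python. The 4-job, 3-machine instance at the end is hypothetical, chosen only to exercise the code:

```python
def flow_shop_makespan(p, order):
    # p[j][k] = processing time of job j on machine k
    m = len(p[0])
    finish = [0] * m  # completion time of the last scheduled job on each machine
    for j in order:
        finish[0] += p[j][0]
        for k in range(1, m):
            # A job starts on machine k once it leaves k-1 and k is free
            finish[k] = max(finish[k], finish[k - 1]) + p[j][k]
    return finish[-1]

def neh(p):
    # 1. Sort jobs in decreasing order of total processing time
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    # 2. Insert each job at the position that minimizes the partial makespan
    for j in jobs:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: flow_shop_makespan(p, s))
    return seq

# Hypothetical instance: 4 jobs, 3 machines (times are illustrative only)
p = [[5, 4, 4], [2, 4, 5], [4, 2, 3], [3, 5, 1]]
seq = neh(p)
print(seq, flow_shop_makespan(p, seq))
```

NEH carries no optimality guarantee, but in practice it produces schedules close to optimal at O(n^2) makespan evaluations.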
3. Exact Algorithms
Dynamic Programming: Solves small-sized problems exactly.
Branch and Bound: Explores all possible schedules but prunes
suboptimal branches.
4. Approximation Algorithms
Used for specific instances where approximation is acceptable,
providing near-optimal solutions within bounded error margins.
Complexity of the Problem
The general flow shop scheduling problem is NP-hard, especially when the number of machines m \ge 3.
Simplified cases (e.g., Johnson's Algorithm for 2 machines) can be solved in polynomial time.
Example: 2-Machine Flow Shop
Suppose we have 3 jobs and 2 machines M_1 and M_2, with a known processing time for each job on each machine.
Using Johnson's Rule:
1. Find the smallest processing time among all remaining jobs; schedule that job as early as possible if the time is on M_1, or as late as possible if it is on M_2.
2. Remove the scheduled job and repeat, filling the sequence inward from both ends.
3. The resulting sequence is optimal for the 2-machine case.
Compute the completion times of the final sequence to find the makespan.
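Johnson's rule for the 2-machine case can be sketched in Python as follows. The 3-job instance at the end is hypothetical, since it is only meant to illustrate the rule:

```python
def johnsons_rule(jobs):
    # jobs: dict mapping job name -> (time on M1, time on M2)
    # Jobs faster on M1 go first, in ascending order of their M1 time
    front = sorted((j for j in jobs if jobs[j][0] < jobs[j][1]),
                   key=lambda j: jobs[j][0])
    # Remaining jobs go last, in descending order of their M2 time
    back = sorted((j for j in jobs if jobs[j][0] >= jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, order):
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]                # M1 finishes job j
        t2 = max(t2, t1) + jobs[j][1]   # M2 starts once j leaves M1 and M2 is free
    return t2

# Hypothetical 3-job instance (processing times are illustrative only)
jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (2, 7)}
order = johnsons_rule(jobs)
print(order, makespan(jobs, order))  # → ['J3', 'J1', 'J2'] 17
```

Here J3 and J1 are faster on M_1, so they lead the sequence; J2 is faster on M_2, so it goes last, giving a makespan of 17.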
Flow shop scheduling is a fundamental problem in operations
research, optimization, and algorithm design, with numerous real-
world applications in manufacturing, logistics, and supply chain
management.
