
COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE


Design and Analysis of Algorithms Group Assignment for 3rd-Year Students

BY: GROUP 3 MEMBERS:

Name Id

1. Yawukal Addis………………………………………………..BULR3616/14

2. Woldie Chala…………………………………………………..BULR1667/14

3. Abinet Getahun…………………………………………………BULR0072/14

4. Beza Awoke……………………………………………………BULR0350/14

5. Tarekegn Zeleke………………………………………………..BUTR10002/14

Submitted to Instructor Amsalu T.

BONGA, ETHIOPIA

June 17, 2024


1. Shortest Path Algorithms in the Analysis of Algorithms
Q1. Compare and contrast the following shortest path algorithms, providing the best examples for each, and write an algorithm for how each works.

A. Shortest path in an unweighted graph.

B. Shortest path in a weighted graph.

C. Shortest path in a weighted graph with negative edges.

D. Shortest path in a weighted acyclic graph.

Shortest path algorithms are fundamental in graph theory and are used to find
the shortest path between two vertices in a graph. There are different types of
shortest path algorithms based on the characteristics of the graph, such as
unweighted graphs, weighted graphs with non-negative weights, and graphs
with negative weights.

A. Shortest path in an unweighted graph.

 In an unweighted graph, all edges have the same weight or cost, and
finding the shortest path involves finding the path with the minimum
number of edges between two vertices.
The two most common algorithms for finding the shortest path in an
unweighted graph are:

1. Breadth-First Search (BFS):

 Algorithm:
1. Start with the source vertex and mark it as visited.
2. Enqueue the source vertex into a queue.
3. While the queue is not empty:
- Dequeue a vertex from the queue.
- For each unvisited neighbour of the dequeued vertex:

- Mark the neighbour as visited.
- Enqueue the neighbour into the queue.
- Set the distance of the neighbour as one more than the distance of
the dequeued vertex.
4. Repeat until all reachable vertices are visited.

2. Dijkstra's Algorithm (for unweighted graphs):

Algorithm:

1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Start from the source vertex and explore all neighbouring vertices.
3. Update the distance of each neighbouring vertex as one more than the
distance of the current vertex.
4. Continue exploring vertices until all reachable vertices are visited. (On an unweighted graph every edge has the same cost, so Dijkstra's algorithm behaves exactly like BFS.)

Shortest path algorithms in unweighted graphs are essential for scenarios where
edge weights are not considered, such as network routing protocols or
determining connectivity between nodes. They provide a foundation for more
complex shortest path algorithms in weighted graphs by understanding basic
graph traversal principles.

Example: Given an unweighted graph of V nodes and E edges, a source node S, and a destination node D, we need to find the shortest path from node S to node D in the graph.

Input: V = 8, E = 10, S = 0, D = 7, edges[][] = {{0, 1}, {1, 2}, {0, 3}, {3, 4}, {4, 7}, {3, 7}, {6, 7}, {4, 5}, {4, 6}, {5, 6}}
Output: 0 3 7
Explanation: The shortest path is 0 -> 3 -> 7.
Input: V = 8, E = 10, S = 2, D = 6, edges[][] = {{0, 1}, {1, 2}, {0, 3}, {3, 4}, {4, 7}, {3, 7}, {6, 7}, {4, 5}, {4, 6}, {5, 6}}
Output: 2 1 0 3 4 6
Explanation: The shortest path is 2 -> 1 -> 0 -> 3 -> 4 -> 6.
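The following is a minimal Python sketch of this BFS procedure with parent tracking, written against the example above (the function name and structure are our own, not part of the original):

from collections import deque

def bfs_shortest_path(V, edges, S, D):
    """Return the shortest path from S to D in an unweighted graph."""
    # Build an adjacency list from the undirected edge list.
    adj = [[] for _ in range(V)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent = [-1] * V          # parent[v] = predecessor of v on the shortest path
    visited = [False] * V
    visited[S] = True
    queue = deque([S])

    while queue:
        u = queue.popleft()
        if u == D:
            break
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                parent[v] = u
                queue.append(v)

    if not visited[D]:
        return None            # D is unreachable from S

    # Walk the parent links back from D to recover the path.
    path = []
    node = D
    while node != -1:
        path.append(node)
        node = parent[node]
    return path[::-1]

edges = [(0, 1), (1, 2), (0, 3), (3, 4), (4, 7),
         (3, 7), (6, 7), (4, 5), (4, 6), (5, 6)]
print(bfs_shortest_path(8, edges, 0, 7))  # [0, 3, 7]
print(bfs_shortest_path(8, edges, 2, 6))  # [2, 1, 0, 3, 4, 6]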

B. Shortest path in a weighted graph.

 Shortest path algorithms in weighted graphs are used to find the path
with the minimum total weight or cost between two vertices. There are
several popular algorithms for finding the shortest path in weighted
graphs, depending on the characteristics of the graph and the weights
assigned to the edges. Two common algorithms for finding the shortest
path in a weighted graph are Dijkstra's Algorithm and Bellman-Ford
Algorithm.
 In a weighted graph, each edge has a weight or cost associated with it,
and finding the shortest path involves finding the path with the minimum
total weight between two vertices.

 The two most common algorithms for finding the shortest path in a
weighted graph are:

1. Dijkstra’s Algorithm:

Algorithm:

1. Initialize a distance array with infinity for all vertices except the
source (distance to source is 0).
2. Create a priority queue or min-heap to store vertices based on their
current distance from the source.
3. Start from the source vertex and explore all neighbouring vertices.
4. Update the distance of each neighbouring vertex if a shorter path is
found.
5. Repeat the process until all reachable vertices are visited.

Example: consider a weighted graph (figure omitted in this copy).
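In place of the figure, here is a minimal Python sketch of Dijkstra's algorithm using a binary-heap priority queue; the four-vertex graph in the usage lines is a hypothetical illustration of ours, not taken from the original:

import heapq

def dijkstra(adj, source):
    """Single-source shortest distances; adj[u] = list of (v, weight) pairs."""
    INF = float('inf')
    dist = [INF] * len(adj)
    dist[source] = 0
    heap = [(0, source)]                 # priority queue of (distance, vertex)

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                     # stale queue entry; skip it
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:    # relax the edge (u, v)
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical graph: 0->1 (cost 4), 0->2 (cost 1), 2->1 (cost 2), 1->3 (cost 5)
adj = [[(1, 4), (2, 1)], [(3, 5)], [(1, 2)], []]
print(dijkstra(adj, 0))  # [0, 3, 1, 8]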

2. Bellman-Ford Algorithm:

Algorithm:

1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Relax all edges repeatedly, updating the distance of each vertex if a
shorter path is found.
3. Repeat the relaxation step V-1 times, where V is the number of vertices in the graph.
4. Check for negative weight cycles by performing one more relaxation step. If any distance is updated, then there is a negative weight cycle.

Note: the Bellman-Ford algorithm can handle graphs with negative edge weights and detect negative weight cycles; a detailed example follows in part C.

 Shortest path algorithms in weighted graphs are crucial for various
applications like network routing, GPS navigation systems, and resource
allocation optimization. These algorithms help in efficiently determining
the most cost-effective paths between different locations or nodes in a
network with varying edge weights.

For example, consider a graph (figure omitted) with source 'u' being vertex 0 and destination 'v' being vertex 3, restricted to walks of exactly k = 2 edges. There are two such walks, {0, 2, 3} and {0, 1, 3}. The shorter of the two is {0, 2, 3}, and the weight of the path is 3 + 6 = 9.

C. Shortest path in a weighted graph with negative edges.

Example: the Bellman-Ford algorithm. The Bellman-Ford algorithm is designed to handle weighted graphs with negative edges. It works by maintaining a distance value for each vertex and iteratively updating the distances of its neighbours. The algorithm can also detect negative weight cycles, in which case no finite shortest path exists.

Algorithm:

1. Initialize distances to all nodes as infinity, except the initial node, which is set to 0.
2. Iterate through all edges V-1 times, where V is the number of vertices, updating distances based on the edges.
3. After V-1 iterations, check for negative weight cycles: if a shorter path is still found after V-1 iterations, then there is a negative weight cycle.
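A compact Python sketch of these steps follows; the edge list in the usage lines is our own illustration (one negative edge, no negative cycle):

def bellman_ford(V, edges, source):
    """edges = list of (u, v, w) triples; returns (distances, has_negative_cycle)."""
    INF = float('inf')
    dist = [INF] * V
    dist[source] = 0

    # Relax every edge V-1 times.
    for _ in range(V - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # One extra pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return dist, True
    return dist, False

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
print(bellman_ford(4, edges, 0))  # ([0, 4, 1, 3], False)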
Another option is the Floyd-Warshall algorithm. Although it is an all-pairs algorithm rather than a single-source one, Floyd-Warshall also handles negative edge weights, provided there is no negative cycle. It works by iteratively considering every vertex as a potential intermediate vertex on the path between each pair of vertices, updating the shortest distance between two vertices whenever routing through the intermediate vertex gives a shorter path.
 The steps of the algorithm are:
1. Initialize a 2D array to store the shortest distances between all pairs of
vertices, with the initial values being the weights of the edges if there is
an edge, and infinity if there is no edge. Also, set the diagonal elements
of the array to 0.
2. For each intermediate vertex k from 1 to V, where V is the number of
vertices, update the distance array as follows: For each pair of vertices i
and j, if the distance from i to j through vertex k is shorter than the
current distance, update the distance to the new shorter distance.

3. After the above step, the array will contain the shortest distances between
all pairs of vertices.
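A minimal Python sketch of these three steps (our own illustration of the standard triple loop):

def floyd_warshall(V, edges):
    """edges = list of (u, v, w) triples; returns the V x V distance matrix."""
    INF = float('inf')
    # Step 1: initialize with edge weights, infinity elsewhere, 0 on the diagonal.
    dist = [[INF] * V for _ in range(V)]
    for i in range(V):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)

    # Step 2: try each vertex k as an intermediate vertex.
    for k in range(V):
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    # Step 3: dist[i][j] now holds the shortest distance from i to j.
    return dist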

 The Floyd-Warshall algorithm is guaranteed to find the shortest paths between all pairs of vertices in a weighted graph, including graphs with negative edge weights, provided the graph contains no negative cycle. (Topological sorting with dynamic programming, an alternative for acyclic graphs, is described in part D below.)

A negative edge is simply an edge with a negative weight; it can arise in any context, depending on what the graph and its edges represent. For example, in a graph (figure omitted) with edges A-B of weight 2, B-C of weight 1, C-D of weight -4, D-B of weight 2, and C-E of weight 3, the edge C-D is a negative edge. Floyd-Warshall works by minimizing the weight between every pair of vertices wherever possible, so a negative-weight edge is relaxed in exactly the same way as a positive one.

The problem arises when there is a negative cycle. Ask yourself: what is the shortest path between A and E? At first it seems to be A-B-C-E, costing 6 (2 + 1 + 3). But looking more closely, there is a negative cycle, B-C-D, whose weight is 1 + (-4) + 2 = -1. While travelling from A to E, we could keep looping around B-C-D, reducing the cost by 1 each time: the path A(B-C-D)-B-C-E costs 5 (2 + (-1) + 1 + 3). Repeating the cycle indefinitely keeps reducing the cost by 1, so the "shortest" path between A and E tends to negative infinity and no finite shortest path exists.

D. Shortest path in a weighted acyclic graph.

The shortest path in a weighted acyclic graph can be found with two popular algorithms: topological sort with dynamic programming, and Dijkstra's algorithm with a priority queue (already described in part B). We focus on the first.

Topological Sort with Dynamic Programming:

This algorithm is used to find the shortest path in a weighted acyclic graph. It
leverages the topological sorting of vertices to efficiently calculate the shortest
path from a source vertex to all other vertices.

Algorithm:

1. Perform a topological sort of the vertices in the graph.


2. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
3. Iterate over the vertices in topological order.
4. For each vertex, update the distance of its neighbours based on the current
vertex's distance and edge weights.
5. Repeat this process until all reachable vertices are visited.
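A minimal Python sketch combining these steps, using Kahn's algorithm for the topological sort; the sample DAG in the usage lines is our own:

from collections import defaultdict

def dag_shortest_paths(V, edges, source):
    """edges = list of (u, v, w) triples in a DAG; returns shortest distances."""
    adj = defaultdict(list)
    indegree = [0] * V
    for u, v, w in edges:
        adj[u].append((v, w))
        indegree[v] += 1

    # Step 1: topological order via Kahn's algorithm.
    order = []
    stack = [v for v in range(V) if indegree[v] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                stack.append(v)

    # Steps 2-4: relax outgoing edges in topological order.
    INF = float('inf')
    dist = [INF] * V
    dist[source] = 0
    for u in order:
        if dist[u] == INF:
            continue                       # u is unreachable from the source
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 2), (0, 2, 6), (1, 2, 3), (2, 3, 1)]
print(dag_shortest_paths(4, edges, 0))  # [0, 2, 5, 6]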

2) Fractional Knapsack Problems.


 The fractional knapsack problem is a variant of the knapsack problem in which items may be broken into fractions in order to maximize the total profit. Because the items can be broken, it is known as the fractional knapsack problem.
 This problem can be solved using two techniques:
 Brute-force approach: the brute-force approach tries all possible solutions with all the different fractions, but it is a time-consuming approach.
 Greedy approach: in the greedy approach, we calculate the profit/weight ratio of each item and select items accordingly; the item with the highest ratio is selected first.
 The fractional knapsack problem can be solved by first sorting the items by their profit/weight ratio, which can be done in O(N log N). Starting with the item of highest ratio, we take as much of each item as possible and then move on to the next item in the sorted list; this scan runs in O(N) time.
 Therefore, the overall running time would be O(N log N) plus O(N), which equals O(N log N).
 We can say that the fractional knapsack problem can be solved much faster than the 0/1 knapsack problem.

There are basically three approaches to solving the problem:

1) The first approach is to select the item based on the maximum profit.
2) The second approach is to select the item based on the minimum
weight.
3) The third approach is to calculate the ratio of profit/weight.

Consider the following example:

Objects:     1   2   3   4   5   6   7
Profit (P):  5  10  15   7   8   9   4
Weight (w):  1   3   5   4   1   3   2
W (capacity of the knapsack): 15
n (number of items): 7

First approach: select the item with the maximum profit first.

Object   Profit            Weight   Remaining weight
3        15                5        15 - 5 = 10
2        10                3        10 - 3 = 7
6        9                 3        7 - 3 = 4
5        8                 1        4 - 1 = 3
4        7 * 3/4 = 5.25    3        3 - 3 = 0

The total profit would be equal to (15 + 10 + 9 + 8 + 5.25) = 47.25.
 Second approach: select the item with the minimum weight first.

Object   Profit          Weight   Remaining weight
1        5               1        15 - 1 = 14
5        8               1        14 - 1 = 13
7        4               2        13 - 2 = 11
2        10              3        11 - 3 = 8
6        9               3        8 - 3 = 5
4        7               4        5 - 4 = 1
3        15 * 1/5 = 3    1        1 - 1 = 0

In this case, the total profit would be equal to (5 + 8 + 4 + 10 + 9 + 7 + 3) = 46.
 Third approach:
In the third approach, we will calculate the ratio of profit/weight.

Objects: 1 2 3 4 5 6 7

Profit (P): 5 10 15 7 8 9 4

Weight(w): 1 3 5 4 1 3 2

In this case, we first calculate the profit/weight ratio.

Object 1: 5/1 = 5
Object 2: 10/3 = 3.33
Object 3: 15/5 = 3
Object 4: 7/4 = 1.75
Object 5: 8/1 = 8
Object 6: 9/3 = 3
Object 7: 4/2 = 2

P/w:   5   3.33   3   1.75   8   3   2

In this approach, we select objects based on the maximum profit/weight ratio. Since the P/w ratio of object 5 is the highest, we select object 5 first.

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14

After object 5, object 1 has the next highest profit/weight ratio, i.e., 5. So we select object 1, as shown in the table below:

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13

After object 1, object 2 has the next highest profit/weight ratio, i.e., 3.33. So we select object 2:

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10

After object 2, object 3 has the next highest profit/weight ratio, i.e., 3. So we select object 3:

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
3        15       5        10 - 5 = 5

After object 3, object 6 has the next highest profit/weight ratio, i.e., 3. So we select object 6:

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
3        15       5        10 - 5 = 5
6        9        3        5 - 3 = 2

After object 6, object 7 has the next highest profit/weight ratio, i.e., 2. So we select object 7:

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
3        15       5        10 - 5 = 5
6        9        3        5 - 3 = 2
7        4        2        2 - 2 = 0

As we can observe in the above table, the remaining weight is zero, which means the knapsack is full; we cannot add any more objects. Therefore, the total profit is (8 + 5 + 10 + 15 + 9 + 4) = 51.

In the first approach, the maximum profit is 47.25; in the second approach, 46; and in the third approach, 51. Therefore, we can say that the third approach, i.e., selecting by maximum profit/weight ratio, is the best of the three.

Algorithm for the Fractional Knapsack Problem.

FRACTIONAL_KNAPSACK(X, V, W, M)
// X = the n items, V = profits, W = weights, M = knapsack capacity
// Precondition: items are sorted in decreasing order of ratio V[i] / W[i]
S ← Φ      // set of selected items, initially empty
SW ← 0     // weight of selected items
SP ← 0     // profit of selected items
i ← 1
while i ≤ n do
    if (SW + W[i]) ≤ M then
        S ← S ∪ {X[i]}
        SW ← SW + W[i]
        SP ← SP + V[i]
    else
        frac ← (M - SW) / W[i]
        S ← S ∪ {X[i] · frac}    // add a fraction of item X[i]
        SP ← SP + V[i] · frac    // add the corresponding fraction of the profit
        SW ← SW + W[i] · frac    // the knapsack is now full
        break
    end
    i ← i + 1
end
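A minimal Python version of the same greedy procedure, checked against the worked example above (the function and variable names are our own):

def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack; returns the maximum achievable profit."""
    # Sort item indices by profit/weight ratio, highest first.
    items = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in items:
        if weights[i] <= remaining:
            total += profits[i]           # take the whole item
            remaining -= weights[i]
        else:
            total += profits[i] * remaining / weights[i]  # take a fraction
            break                         # knapsack is now full
    return total

profits = [5, 10, 15, 7, 8, 9, 4]
weights = [1, 3, 5, 4, 1, 3, 2]
print(fractional_knapsack(profits, weights, 15))  # 51.0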

3) Introduction to Probabilistic Algorithms and Parallel Algorithms.

3.1 Introduction to Probabilistic Algorithms

A probabilistic algorithm is an algorithm where the result and/or the way the
result is obtained depend on chance. These algorithms are also sometimes called
randomized algorithms. In some applications the use of probabilistic algorithms
is natural, e.g. simulating the behaviour of some existing or planned system
over time. In this case the result by nature is stochastic.

In some cases the problem to be solved is deterministic but can be transformed into a stochastic one and solved by applying a probabilistic algorithm, e.g. numerical integration or optimization. For these numerical applications the result obtained is always approximate, but its expected precision improves as the time available to run the algorithm increases. The techniques of applying probabilistic algorithms to numerical problems were originally called Monte Carlo methods.

There are also a number of discrete problems for which only an exact result is acceptable (e.g. sorting and searching), and where the introduction of randomness affects only the ease and efficiency of finding the solution. For some problems where trivial exhaustive search is not feasible, probabilistic algorithms can be applied, giving a result that is correct with a probability less than one (e.g. primality testing, string equality testing). The probability of failure can be made arbitrarily small by repeated applications of the algorithm.

3.1.1 Types of Probabilistic Algorithms:
1. Monte Carlo Algorithms: Monte Carlo algorithms use random sampling to obtain numerical results. They provide approximate solutions with a quantifiable level of confidence. Example: estimating the value of π (pi) by generating random points within a square and calculating the fraction of points falling inside a quarter circle inscribed in the square (a code sketch is given after this list).
Application:
 Estimation of integrals and areas.
 Simulation of physical and mathematical systems.
 Cryptography, such as in generating large prime numbers.
2. Las Vegas Algorithms: Las Vegas algorithms always produce a correct result, but their running time may vary. They use randomization to improve the expected performance while ensuring correctness. Example: the quicksort algorithm with randomized pivot selection.
Application:
 Sorting algorithms.
 Optimization problems.
 Graph algorithms where randomness can aid in achieving faster results.
3. Randomized Algorithms: Randomized algorithms use random inputs or random choices during computation. They are designed to deliver efficient solutions for problems that are hard to solve deterministically. Example: randomized primality testing using the Miller-Rabin algorithm.
Application:
 Graph algorithms like random walks and graph colouring.
 Computational geometry for algorithms like randomized incremental
construction.

 Optimization problems where randomness can lead to better solutions.
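As an illustration of the Monte Carlo example in item 1 above, here is a minimal Python sketch (the function name and sample count are our own choices):

import random

def estimate_pi(samples):
    """Monte Carlo estimate of pi from random points in the unit square."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:      # point falls inside the quarter circle
            inside += 1
    # (area of quarter circle) / (area of square) = pi / 4
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # approximately 3.14; precision improves with more samples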
3.1.2 Applications of Probabilistic Algorithms in DAA:

1. Approximation Algorithms for NP-Hard Problems: Probabilistic algorithms are often used to approximate solutions to NP-hard problems where finding an exact solution in polynomial time is impractical. Example: approximation algorithms for the Traveling Salesperson Problem (TSP) or the Knapsack Problem.
2. Randomized Data Structures: Data structures like skip lists and randomized search trees use probabilistic algorithms to achieve efficient average-case performance. Example: skip lists for fast searching, insertion, and deletion operations.
3. Cryptography and Security Protocols: Randomized algorithms are crucial in cryptography for generating secure keys and primes, and for ensuring secure communication protocols. Example: randomized algorithms used in RSA encryption for generating large prime numbers.
4. Machine Learning and Probabilistic Modelling: Probabilistic algorithms play a key role in machine learning for tasks like clustering, dimensionality reduction, and Bayesian inference. Example: probabilistic graphical models for probabilistic reasoning and decision making.
5. Randomized Optimization Techniques: In optimization problems, randomized algorithms can often provide solutions that are close to optimal, even when the problem is non-convex or has many local minima. Example: evolutionary algorithms and simulated annealing for global optimization.

3.1.3 Advantages:

 Efficiency: Probabilistic algorithms can sometimes provide faster solutions or better approximations compared to deterministic approaches.
 Versatility: They can handle complex problems where exact solutions are computationally prohibitive.
 Applications: Widely applicable across various domains, including mathematics, computer science, physics, and engineering.

3.2 Introduction to Parallel Algorithms

Parallel algorithms are those specially devised for parallel computers. Idealized parallel algorithms are those written for the PRAM model, in which no physical constraints or communication overheads are imposed.

In the real world, an algorithm is considered efficient only if it can be cost-effectively implemented on physical machines. In this sense, all machine-implementable algorithms must be architecture-dependent, meaning the effects of communication overhead and architectural constraints cannot be ignored.

3.2.1 Characteristics of Parallel Algorithms

There are various characteristics of parallel algorithms, which are as follows −
 Deterministic versus nondeterministic − Only deterministic algorithms are implementable on real machines. Our study is confined to deterministic algorithms with polynomial time complexity.
 Computational Granularity − Granularity decides the size of data
items and program modules used in the computation. In this sense, we
also classify algorithms as fine-grain, medium-grain, or coarse-grain.
 Parallelism profile − The distribution of the degree of parallelism in an
algorithm reveals the opportunity for parallel processing. This often
affects the effectiveness of the parallel algorithms.

 Communication patterns and synchronization requirements −
Communication patterns address both memory access and interprocessor
communications. The patterns can be static or dynamic, depending on
the algorithms. Static algorithms are more suitable for SIMD or pipelined
machines, while dynamic algorithms are for MIMD machines. The
synchronization frequency often affects the efficiency of an algorithm.
 Uniformity of the operations − This refers to the types of fundamental operations to be performed. If the operations are uniform across the data set, SIMD processing or pipelining may be more desirable. Otherwise, randomly structured algorithms are more suitable for MIMD processing. Other related issues include data types and the precision desired.
 Memory requirement and data structures − In solving large-scale
problems, the data sets may require huge memory space. Memory
efficiency is affected by data structures chosen and data movement
patterns in the algorithms. Both time and space complexities are key
measures of the granularity of a parallel algorithm.
3.2.2 Types of Parallel Algorithms
1. Task Parallelism: Task parallelism divides a task into smaller sub-tasks that can be executed concurrently. Each sub-task may operate on different data or different parts of the problem. Example: matrix multiplication where different threads compute different rows or columns concurrently.
Application:
 Parallel sorting algorithms like parallel merge sort.
 Image and video processing where different regions or frames
can be processed concurrently.

2. Data Parallelism: Data parallelism involves distributing the same operation across multiple processing units, each working on different data elements simultaneously. Example: parallel summation of array elements where each processor sums a subset of the elements (see the sketch after this list).
Application:
 Parallel matrix operations like addition, multiplication.
 Statistical computations such as parallel computations in big
data analytics.

3. Pipeline Parallelism: Pipeline parallelism breaks down a task into a sequence of sub-tasks, where each sub-task is executed concurrently by a different processor and data flows through the stages of processing. Example: video or audio processing pipelines where data streams through stages like encoding, compression, and storage concurrently.
Application:
 Streaming applications where data is processed in real-time.
 Computational fluid dynamics simulations involving multiple
stages of computation.
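As an illustration of data parallelism (item 2 above), here is a minimal Python sketch using a process pool; the worker count and chunking scheme are our own assumptions, not part of the original text:

from concurrent.futures import ProcessPoolExecutor

def parallel_sum(data, workers=4):
    """Data-parallel summation: each worker sums one chunk of the array."""
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_sums = pool.map(sum, parts)   # each chunk is summed concurrently
    return sum(partial_sums)                  # combine the partial results

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))  # 499999500000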
3.2.3 Applications in DAA:
1. Parallel Sorting Algorithms: Algorithms like parallel merge sort, parallel quicksort, and parallel radix sort distribute sorting operations across multiple processors.
Application: sorting large datasets in databases, search engines, and data analytics platforms, where sorting efficiency impacts overall performance.
2. Parallel Graph Algorithms: Algorithms such as parallel breadth-first
search (BFS), parallel depth-first search (DFS), and parallel shortest path
algorithms distribute graph traversal and computation across multiple
processors.
Application: Social network analysis, route planning in transportation
networks, and recommendation systems.

3. Parallel Matrix Operations: Algorithms for parallel matrix addition, multiplication, and inversion distribute computation of matrix elements across processors to speed up operations.
Application: scientific computing, image processing, and machine learning algorithms such as neural network training.

4. Parallel Divide and Conquer Algorithms: Algorithms like parallel merge sort and parallel quicksort divide the problem into smaller sub-problems that can be solved concurrently.
Application: efficient handling of recursive problems in computational geometry, numerical methods, and optimization algorithms.
3.2.4 Advantages and Challenges
 Performance Scaling: Parallel algorithms can significantly reduce execution time for large-scale problems by harnessing multiple processing units.
 Scalability: They allow systems to scale with the addition of more processors, enhancing throughput and handling larger datasets.
 Complexity (a challenge): Designing parallel algorithms requires managing synchronization, load balancing, and communication overhead effectively.

Conclusion

Shortest path in an unweighted graph: Breadth-First Search (BFS) is the optimal algorithm, with a time complexity of O(V+E). BFS explores the graph layer by layer, guaranteeing the shortest path.

Shortest path in a weighted graph: Dijkstra's algorithm is the most widely used,
with a time complexity of O((V+E)log V). It uses a priority queue to efficiently
explore the graph and find the shortest paths.

Shortest path in a weighted graph with negative edges: the Bellman-Ford algorithm can handle negative edge weights, with a time complexity of O(VE). It relaxes all edges repeatedly and can detect whether a negative weight cycle exists.

Shortest path in a weighted acyclic graph: Topological sort can be used to find
the shortest paths in O(V+E) time. The algorithm exploits the acyclic nature of
the graph to efficiently compute shortest paths.

Fractional Knapsack Problem: the fractional knapsack problem can be solved greedily by selecting items in decreasing order of their value-to-weight ratio. The greedy algorithm has a time complexity of O(n log n) due to the sorting step, and it always yields an optimal solution in which the items most valuable per unit weight are taken to the fullest extent possible.

Probabilistic and Parallel Algorithms: probabilistic algorithms use randomness to solve problems efficiently, often achieving better expected time complexity than deterministic algorithms. They are well suited to parallelization, as independent trials can be executed concurrently on multiple processors.

Both probabilistic and parallel algorithms have found widespread applications in areas like optimization, simulation, and large-scale data processing.
