DAA Assignment Group 3
Name                 ID
1. Yawukal Addis     BULR3616/14
2. Woldie Chala      BULR1667/14
3. Abinet Getahun    BULR0072/14
4. Beza Awoke        BULR0350/14
5. Tarekegn Zeleke   BUTR10002/14
BONGA, ETHIOPIA
Shortest path algorithms are fundamental in graph theory and are used to find
the shortest path between two vertices in a graph. There are different types of
shortest path algorithms based on the characteristics of the graph, such as
unweighted graphs, weighted graphs with non-negative weights, and graphs
with negative weights.
In an unweighted graph, all edges have the same weight or cost, and
finding the shortest path involves finding the path with the minimum
number of edges between two vertices.
The standard algorithm for finding the shortest path in an unweighted
graph is breadth-first search (BFS), which can be stated in two
equivalent formulations.
Algorithm (queue-based formulation):
1. Start with the source vertex and mark it as visited.
2. Enqueue the source vertex into a queue.
3. While the queue is not empty:
- Dequeue a vertex from the queue.
- For each unvisited neighbour of the dequeued vertex:
  - Mark the neighbour as visited.
  - Enqueue the neighbour into the queue.
  - Set the distance of the neighbour to one more than the distance of
    the dequeued vertex.
4. Repeat until all reachable vertices are visited.
Algorithm (distance-array formulation):
1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Start from the source vertex and explore all neighbouring vertices.
3. Update the distance of each neighbouring vertex as one more than the
distance of the current vertex.
4. Continue exploring vertices until all reachable vertices are visited.
Shortest path algorithms in unweighted graphs are essential for scenarios where
edge weights are not considered, such as network routing protocols or
determining connectivity between nodes. By illustrating basic graph
traversal principles, they also provide a foundation for the more
complex shortest path algorithms used in weighted graphs.
Input: V = 8, E = 10, S = 0, D = 7, edges[][] = {{0, 1}, {1, 2}, {0, 3}, {3, 4},
{4,7},{3,7},{6,7},{4,5},{4,6},{5,6}}
Output: 0 3 7
Explanation: The shortest path is 0 -> 3 -> 7.
Input: V = 8, E = 10, S = 2, D = 6, edges[][] = {{0, 1}, {1, 2}, {0, 3}, {3, 4},
{4,7},{3,7},{6,7},{4,5},{4,6},{5,6}}
Output: 2 1 0 3 4 6
Explanation: The shortest path is 2 -> 1 -> 0 -> 3 -> 4 -> 6.
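The following Python sketch implements the BFS procedure described above, with parent tracking added so the actual path can be recovered; the function name and the adjacency-list representation are illustrative choices, and the graph is assumed undirected as in the examples. Run on the two inputs above, it reproduces the expected outputs.

from collections import deque

def bfs_shortest_path(V, edges, S, D):
    # Build an undirected adjacency list.
    adj = [[] for _ in range(V)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = [-1] * V
    visited = [False] * V
    visited[S] = True
    q = deque([S])
    while q:
        u = q.popleft()
        if u == D:
            break
        for w in adj[u]:
            if not visited[w]:
                visited[w] = True
                parent[w] = u   # remember how we reached w
                q.append(w)
    # Walk back from D to S (assumes D is reachable from S).
    path, node = [], D
    while node != -1:
        path.append(node)
        node = parent[node]
    return path[::-1]

edges = [(0, 1), (1, 2), (0, 3), (3, 4), (4, 7), (3, 7), (6, 7), (4, 5), (4, 6), (5, 6)]
print(*bfs_shortest_path(8, edges, 0, 7))   # 0 3 7
print(*bfs_shortest_path(8, edges, 2, 6))   # 2 1 0 3 4 6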
Shortest path algorithms in weighted graphs are used to find the path
with the minimum total weight or cost between two vertices. There are
several popular algorithms for finding the shortest path in weighted
graphs, depending on the characteristics of the graph and the weights
assigned to the edges. Two common algorithms for finding the shortest
path in a weighted graph are Dijkstra's Algorithm and Bellman-Ford
Algorithm.
In a weighted graph, each edge has a weight or cost associated with it,
and finding the shortest path involves finding the path with the minimum
total weight between two vertices.
The two most common algorithms for finding the shortest path in a
weighted graph are:
1. Dijkstra’s Algorithm:
Algorithm:
1. Initialize a distance array with infinity for all vertices except the
source (distance to source is 0).
2. Create a priority queue or min-heap to store vertices based on their
current distance from the source.
3. Start from the source vertex and explore all neighbouring vertices.
4. Update the distance of each neighbouring vertex if a shorter path is
found.
5. Repeat the process until all reachable vertices are visited.
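A minimal Python sketch of these steps, assuming the graph is stored as an adjacency list where adj[u] holds (neighbour, weight) pairs with non-negative weights; the stale-entry check stands in for an explicit decrease-key operation on the heap.

import heapq

def dijkstra(adj, source):
    INF = float('inf')
    dist = [INF] * len(adj)
    dist[source] = 0
    pq = [(0, source)]            # min-heap ordered by current distance
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:           # stale entry; a shorter path was already found
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:   # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = [[(1, 4), (2, 1)], [(3, 1)], [(1, 2), (3, 5)], []]
print(dijkstra(adj, 0))   # [0, 3, 1, 4]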
2. Bellman-Ford Algorithm:
Algorithm:
1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Relax all edges repeatedly, updating the distance of each vertex if a
shorter path is found.
3. Repeat the relaxation step V-1 times, where V is the number of vertices
in the graph.
4. Check for negative weight cycles by performing one more relaxation
step. If any distance is updated, then there is a negative weight cycle.
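A minimal Python sketch of these steps, assuming the graph is given as a list of (u, v, weight) triples; the extra pass at the end reports a negative weight cycle.

def bellman_ford(V, edges, source):
    INF = float('inf')
    dist = [INF] * V
    dist[source] = 0
    for _ in range(V - 1):                 # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one more relaxation step
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative weight cycle")
    return dist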
Shortest path algorithms in weighted graphs are crucial for various
applications like network routing, GPS navigation systems, and resource
allocation optimization. These algorithms help in efficiently determining
the most cost-effective paths between different locations or nodes in a
network with varying edge weights.
For example, consider the following graph (figure not reproduced here). Let
source 'u' be vertex 0, destination 'v' be vertex 3, and k be 2. There are two
walks of length 2: {0, 2, 3} and {0, 1, 3}. The shorter of the two is
{0, 2, 3}, whose weight is 3 + 6 = 9.
Algorithm (Bellman-Ford):
1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Relax all edges repeatedly, updating the distance of each vertex if a
shorter path is found.
3. After V-1 iterations, check for negative weight cycles. If a shorter
path is found after V-1 iterations, then there is a negative weight cycle.
D. Shortest path in a weighted acyclic graph:
Example: Floyd-Warshall algorithm.
The Floyd-Warshall algorithm works by iteratively considering all vertices
as potential intermediate vertices in the shortest path between any pair of
vertices. It updates the shortest distance between any two vertices if a
shorter path is found by including the intermediate vertex.
The steps of the algorithm are:
1. Initialize a 2D array to store the shortest distances between all pairs of
vertices, with the initial values being the weights of the edges if there is
an edge, and infinity if there is no edge. Also, set the diagonal elements
of the array to 0.
2. For each intermediate vertex k from 1 to V, where V is the number of
vertices, update the distance array as follows: For each pair of vertices i
and j, if the distance from i to j through vertex k is shorter than the
current distance, update the distance to the new shorter distance.
3. After the above step, the array will contain the shortest distances between
all pairs of vertices.
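A minimal Python sketch of these steps, assuming the graph is given as a V x V matrix with edge weights, float('inf') where there is no edge, and 0 on the diagonal.

def floyd_warshall(W):
    V = len(W)
    dist = [row[:] for row in W]          # start from the initial weights
    for k in range(V):                    # try each vertex as an intermediate
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
W = [[0, 3, INF], [INF, 0, 1], [2, INF, 0]]
print(floyd_warshall(W))   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]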
The algorithm is guaranteed to find the shortest path between all pairs of
vertices in a weighted graph, including graphs with negative edge weights,
provided there is no negative weight cycle.

Topological sorting with dynamic programming:
Algorithm:
1. Perform topological sorting of the graph.
2. Initialize distances to all nodes as infinity except the initial node which
is set to 0.
3. Iterate through the sorted nodes and update distances based on the edges.
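A minimal Python sketch of these steps, assuming a directed acyclic graph in adjacency-list form with (neighbour, weight) pairs; the topological order is produced here with a simple depth-first search.

def dag_shortest_paths(V, adj, source):
    # Step 1: topological sort via DFS.
    order, visited = [], [False] * V
    def dfs(u):
        visited[u] = True
        for v, _ in adj[u]:
            if not visited[v]:
                dfs(v)
        order.append(u)
    for u in range(V):
        if not visited[u]:
            dfs(u)
    order.reverse()
    # Step 2: distances start at infinity, source at 0.
    INF = float('inf')
    dist = [INF] * V
    dist[source] = 0
    # Step 3: relax edges in topological order.
    for u in order:
        if dist[u] != INF:
            for v, w in adj[u]:
                dist[v] = min(dist[v], dist[u] + w)
    return dist

adj = [[(1, 1), (2, 4)], [(2, 2), (3, 6)], [(3, 3)], []]
print(dag_shortest_paths(4, adj, 0))   # [0, 1, 3, 6]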
A negative edge is simply an edge having a negative weight. It can appear in
any context, depending on what the graph and its edges represent. For example,
the edge C-D in the graph described below (figure not reproduced here) is a
negative edge. Floyd-Warshall works by minimizing the weight between every
pair of vertices of the graph where possible. So, for a negative weight edge,
you can simply perform the calculation as you would for positive weight edges.
The problem arises when there is a negative cycle. Consider the same graph and
ask: what is the shortest path between A and E? At first it seems to be
A-B-C-E, costing 6 (2 + 1 + 3). But taking a deeper look, you will observe a
negative cycle, B-C-D, whose weight is 1 + (-4) + 2 = -1. While traversing
from A to E, one could keep cycling around inside B-C-D to reduce the cost by
1 each time: the path A(BCD)BCE costs 5 (2 + (-1) + 1 + 3). Repeating the
cycle an infinite number of times keeps reducing the cost by 1 each time, so
the "shortest" path between A and E approaches negative infinity.
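The edge weights above (A->B = 2, B->C = 1, C->E = 3, C->D = -4, D->B = 2) are enough to demonstrate this in code. The following Python sketch runs the Bellman-Ford relaxation rounds with all distances started at 0, which is equivalent to adding a virtual source connected to every vertex, so that any negative cycle is detected.

def has_negative_cycle(V, edges):
    dist = [0] * V                         # virtual-source trick
    for _ in range(V - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If a V-th round still improves a distance, a negative cycle exists.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Vertices: A=0, B=1, C=2, D=3, E=4.
edges = [(0, 1, 2), (1, 2, 1), (2, 4, 3), (2, 3, -4), (3, 1, 2)]
print(has_negative_cycle(5, edges))   # True, because of the cycle B->C->D->B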
This section considers shortest path algorithms for a weighted acyclic graph,
focusing on two popular algorithms: topological sort with dynamic programming,
and Dijkstra's algorithm with a priority queue.
This algorithm is used to find the shortest path in a weighted acyclic graph. It
leverages the topological sorting of vertices to efficiently calculate the shortest
path from a source vertex to all other vertices.
Algorithm:
1. Perform a topological sort of the vertices.
2. Initialize distances to all vertices as infinity, except the source vertex,
whose distance is set to 0.
3. Process the vertices in topological order, relaxing every outgoing edge of
each vertex.

The fractional knapsack problem:
To solve the fractional knapsack problem greedily, we first sort the items in
decreasing order of their profit/weight ratio, which takes O(N log N) time.
Then we consider each item from the sorted list in turn, performing a linear
scan in O(N) time. Therefore, the overall running time is
O(N log N) + O(N) = O(N log N).
We can say that the fractional knapsack problem can be solved much
faster than the 0/1 knapsack problem. Three greedy strategies can be compared:
1) The first approach is to select the item based on the maximum profit.
2) The second approach is to select the item based on the minimum weight.
3) The third approach is to select the item based on the maximum
profit/weight ratio.
Consider the following instance:
Objects:    1   2   3   4   5   6   7
Profit (P): 5   10  15  7   8   9   4
Weight (w): 1   3   5   4   1   3   2
W (capacity of the knapsack): 15
n (number of items): 7
First approach:
Selecting by maximum profit picks object 3 (profit 15, weight 5), then
object 2 (profit 10, weight 3), then object 6 (profit 9, weight 3), then
object 5 (profit 8, weight 1). Only 3 units of capacity remain, so a ¾
fraction of object 4 (profit 7, weight 4) is taken, adding 7 * ¾ = 5.25
profit and leaving 3 - 3 = 0 capacity.
The total profit would be equal to (15 + 10 + 9 + 8 + 5.25) = 47.25.
Second approach:
The second approach is to select the item based on the minimum weight.
Objects:    1   2   3   4   5   6   7
Profit (P): 5   10  15  7   8   9   4
Weight (w): 1   3   5   4   1   3   2
Third approach:
The third approach is to select the item based on the maximum profit/weight
ratio. The ratio of each object is:
Object 1: 5/1 = 5
Object 2: 10/3 = 3.33
Object 3: 15/5 = 3
Object 4: 7/4 = 1.75
Object 5: 8/1 = 8
Object 6: 9/3 = 3
Object 7: 4/2 = 2
Object 5 has the maximum profit/weight ratio, i.e., 8, so we select object 5
first.
After object 5, object 1 has the maximum profit/weight ratio, i.e., 5. So,
we select object 1 next. After object 1, object 2 has the maximum
profit/weight ratio, i.e., 3.33. So, we select object 2, as shown in the
table below:
Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
After object 2, object 3 has the maximum profit/weight ratio, i.e., 3. So,
we select object 3 next.
Object   Profit   Weight   Remaining weight
3        15       5        10 - 5 = 5
6        9        3        5 - 3 = 2
7        4        2        2 - 2 = 0
As we can observe in the above table, the remaining weight is zero, which
means that the knapsack is full. We cannot add more objects to the knapsack.
Therefore, the total profit would be equal to (8 + 5 + 10 + 15 + 9 + 4), i.e., 51.
In the first approach, the maximum profit is 47.25. The maximum profit in the
second approach is 46. The maximum profit in the third approach is 51.
Therefore, we can say that the third approach, i.e., selecting by the maximum
profit/weight ratio, is the best approach among the three.
FRACTIONAL_KNAPSACK(X, V, W, M)
// X: the n items, sorted in decreasing order of profit/weight ratio V[i]/W[i]
// V: profits, W: weights, M: capacity of the knapsack
S ← Φ // set of selected items, initially empty
SW ← 0 // weight of selected items
SP ← 0 // profit of selected items
i ← 1
while i ≤ n and SW < M do
    if (SW + W[i]) ≤ M then
        S ← S ∪ {X[i]} // add the whole item X[i]
        SW ← SW + W[i]
        SP ← SP + V[i]
    else
        frac ← (M - SW) / W[i]
        S ← S ∪ {frac · X[i]} // add a fraction of item X[i]
        SP ← SP + V[i] * frac // add the fractional profit
        SW ← SW + W[i] * frac // the knapsack is now full
    end
    i ← i + 1
end
return S, SP
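The same procedure in runnable Python, applied to the instance used in the tables above; here the sorting step is made explicit instead of assuming pre-sorted input, and the function returns only the total profit for brevity.

def fractional_knapsack(profits, weights, capacity):
    # Greedy by profit/weight ratio; items may be taken fractionally.
    items = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total_profit = 0.0
    remaining = capacity
    for i in items:
        if remaining <= 0:
            break
        take = min(weights[i], remaining)            # whole item, or what fits
        total_profit += profits[i] * take / weights[i]
        remaining -= take
    return total_profit

profits = [5, 10, 15, 7, 8, 9, 4]
weights = [1, 3, 5, 4, 1, 3, 2]
print(fractional_knapsack(profits, weights, 15))     # 51.0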
A probabilistic algorithm is an algorithm where the result and/or the way the
result is obtained depend on chance. These algorithms are also sometimes called
randomized algorithms. In some applications the use of probabilistic algorithms
is natural, e.g. simulating the behaviour of some existing or planned system
over time. In this case the result by nature is stochastic.
There are also a number of discrete problems for which only an exact result is
acceptable (e.g. sorting and searching), and where the introduction of
randomness influences only the ease and efficiency of finding the solution.
For some problems where trivial exhaustive search is not feasible,
probabilistic algorithms can be applied, giving a result that is correct with
a probability less than one (e.g. primality testing, string equality testing).
The probability of failure can be made arbitrarily small by repeated
applications of the algorithm.
3.1.1 Types of Probabilistic Algorithms:
1. Monte Carlo Algorithms:- Monte Carlo algorithms use random
sampling to obtain numerical results. They provide approximate
solutions with a quantifiable level of confidence. Example: Estimating
the value of π (pi) by generating random points within a square and
calculating the fraction of points falling inside a quarter circle
inscribed in the square.
Application:
Estimation of integrals and areas.
Simulation of physical and mathematical systems.
Cryptography, such as in generating large prime numbers.
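A small Python sketch of the pi estimation described above; the sample count is an arbitrary illustrative choice, and the result varies from run to run.

import random

def estimate_pi(samples):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()      # point in the unit square
        if x * x + y * y <= 1.0:                     # inside the quarter circle
            inside += 1
    return 4.0 * inside / samples                    # scale the ratio pi/4 back up

print(estimate_pi(1_000_000))   # approximately 3.14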
2. Las Vegas Algorithms:- Las Vegas algorithms always produce a
correct result, but their running time may vary. They use
randomization to improve the expected performance while ensuring
correctness. Example: Quicksort algorithm with randomized pivot
selection.
Application:
Sorting algorithms.
Optimization problems.
Graph algorithms where randomness can aid in achieving faster results.
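A small Python sketch of quicksort with randomized pivot selection; the list-building formulation is chosen for clarity over the usual in-place partitioning.

import random

def quicksort(a):
    # Las Vegas: the output is always correct; only the running time is random.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                         # randomized pivot selection
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]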
3. Randomized Algorithms:- Randomized algorithms use random inputs
or random choices during computation. They are designed to deliver
efficient solutions for problems that are hard to solve
deterministically. Example: Randomized primality testing using the
Miller-Rabin algorithm.
Application:
Graph algorithms like random walks and graph colouring.
Computational geometry for algorithms like randomized incremental
construction.
Optimization problems where randomness can lead to better solutions.
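A compact Python sketch of the Miller-Rabin primality test; the number of rounds is a tunable parameter that controls the error probability.

import random

def is_probably_prime(n, rounds=20):
    # Monte Carlo: "composite" answers are always correct; "probably prime"
    # answers are wrong with probability at most 4**(-rounds).
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:                    # write n - 1 as d * 2**r with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnesses that n is composite
    return True

print(is_probably_prime(101))   # True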
3.1.2 Applications of Probabilistic Algorithms in DAA:
3.1.3 Advantages:
Efficiency: Probabilistic algorithms can sometimes provide faster
solutions or better approximations compared to deterministic approaches.
Versatility: They can handle complex problems where exact solutions are
computationally prohibitive.
Applications: Widely applicable across various domains including
mathematics, computer science, physics, and engineering.
3.2 Parallel Algorithms
Several characteristics of the problem and of the target machine shape the
design of a parallel algorithm:
Communication patterns and synchronization requirements −
Communication patterns address both memory access and interprocessor
communications. The patterns can be static or dynamic, depending on
the algorithms. Static algorithms are more suitable for SIMD or pipelined
machines, while dynamic algorithms are for MIMD machines. The
synchronization frequency often affects the efficiency of an algorithm.
Uniformity of the operations − This refers to the types of fundamental
operations to be performed. If the operations are uniform across the data
set, SIMD processing or pipelining may be more desirable. Conversely,
randomly structured algorithms are more suitable for MIMD
processing. Other related issues include data types and the precision desired.
Memory requirement and data structures − In solving large-scale
problems, the data sets may require huge memory space. Memory
efficiency is affected by data structures chosen and data movement
patterns in the algorithms. Both time and space complexities are key
measures of the granularity of a parallel algorithm.
3.2.2 Types of Parallel Algorithms
1. Task Parallelism:-Task parallelism divides a task into smaller sub-
tasks that can be executed concurrently. Each sub-task may operate on
different data or parts of the problem. Example: Matrix multiplication
where different threads compute different rows or columns concurrently.
Application:
Parallel sorting algorithms like parallel merge sort.
Image and video processing where different regions or frames
can be processed concurrently.
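A small Python sketch of this idea, using a process pool so that different workers compute different rows of the matrix product; the helper names and the pool size are illustrative choices.

from multiprocessing import Pool
from functools import partial

def row_times_matrix(row, B):
    # Compute one row of the product A x B.
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]

def parallel_matmul(A, B, workers=4):
    with Pool(workers) as pool:
        # Each worker process handles different rows concurrently.
        return pool.map(partial(row_times_matrix, B=B), A)

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))   # [[19, 22], [43, 50]]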
2. Data Parallelism:- Data parallelism applies the same operation to many
data elements simultaneously, with different processors working on
different parts of the data. Example: Parallel summation of array elements
where each processor sums a subset of array elements.
where each processor sums a subset of array elements.
Application:
Parallel matrix operations like addition, multiplication.
Statistical computations such as parallel computations in big
data analytics.
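A small Python sketch of the parallel summation example, splitting the array into chunks that worker processes sum concurrently; the chunk size and worker count are illustrative choices.

from multiprocessing import Pool

def chunk_sum(chunk):
    # Each worker sums its own slice of the array.
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partial_sums = pool.map(chunk_sum, chunks)
    return sum(partial_sums)            # combine the partial results

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 101))))   # 5050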
4. Parallel Matrix Operations:-Algorithms for parallel matrix addition,
multiplication, and inversion distribute computation of matrix
elements across processors to speed up operations.
Application: Scientific computing, image processing, and machine
learning algorithms such as neural network training.
Conclusion
Shortest path in a weighted graph: Dijkstra's algorithm is the most widely used,
with a time complexity of O((V+E)log V). It uses a priority queue to efficiently
explore the graph and find the shortest paths.
Shortest path in a weighted acyclic graph: Topological sort can be used to find
the shortest paths in O(V+E) time. The algorithm exploits the acyclic nature of
the graph to efficiently compute shortest paths.