Chapter Four

lecture note

Uploaded by

dejenehundaol91

Dynamic

Programming
Chapter Four
Introduction
• The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller sub-problems.
• But unlike divide and conquer, these sub-problems are not solved independently.
• Rather, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
• Mostly, dynamic programming algorithms are used for solving optimization problems.
• Before solving the sub-problem at hand, a dynamic programming algorithm examines the results of previously solved sub-problems.
• The solutions of the sub-problems are combined in order to achieve the optimal final solution.
• This paradigm is thus said to use a bottom-up approach.
Cont’d…
• So we can conclude that −
• The problem should be divisible into smaller overlapping sub-problems.
• The final optimal solution can be achieved by using optimal solutions of the smaller sub-problems.
• Dynamic algorithms use memoization.
Cont’d…
• However, two main properties of a problem suggest that it can be solved using Dynamic Programming. They are −
• Overlapping Sub-Problems
• Similar to the Divide-and-Conquer approach, Dynamic Programming also combines solutions to sub-problems. It is mainly used where the solution of one sub-problem is needed repeatedly.
• The computed solutions are stored in a table, so that they don’t have to be re-computed. Hence, this technique is useful where overlapping sub-problems exist.
• For example, Binary Search has no overlapping sub-problems, whereas the recursive program for Fibonacci numbers has many overlapping sub-problems.
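The Fibonacci case above can be sketched with memoization; a minimal Python sketch (function name and test values are illustrative):

```python
def fib(n, memo=None):
    """Return the n-th Fibonacci number, memoizing sub-results."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        # Each overlapping sub-problem is computed once, then reused.
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # 55
```

Without the `memo` table this recursion recomputes the same sub-problems exponentially many times; with it, each value is computed once.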
Optimal Sub-Structure
• A given problem has Optimal Substructure Property, if the optimal
solution of the given problem can be obtained using optimal solutions
of its sub-problems.
• For example, the Shortest Path problem has the following optimal
substructure property −
• If a node x lies in the shortest path from a source node u to
destination node v, then the shortest path from u to v is the
combination of the shortest path from u to x, and the shortest path
from x to v.
Steps of Dynamic Programming
Approach
• Dynamic Programming algorithm is designed using the following four
steps −
• Characterize the structure of an optimal solution.
• Recursively define the value of an optimal solution.
• Compute the value of an optimal solution, typically in a bottom-up
fashion.
• Construct an optimal solution from the computed information.
Examples
• The following computer problems can be solved using the dynamic programming approach −
• Knapsack problem
• All-pairs shortest path by Floyd-Warshall and Bellman-Ford
• Shortest path by Dijkstra
Floyd Warshall Algorithm
• The Floyd-Warshall algorithm is a graph algorithm that is deployed to
find the shortest path between all the vertices present in a weighted
graph.
• This algorithm is different from other shortest path algorithms; simply put, it uses each vertex in the graph as a pivot to check whether it provides a shorter way to travel from one point to another.
• The Floyd-Warshall algorithm works on both directed and undirected weighted graphs, as long as they do not contain any negative cycles. A negative cycle means that the sum of the edge weights around some cycle in the graph is negative.
Cont’d…
• Since the algorithm deals with overlapping sub-problems – the paths found with each vertex acting as pivot are stored and reused in later steps – it uses the dynamic programming approach.
• Floyd-Warshall algorithm is one of the methods in All-pairs shortest
path algorithms and it is solved using the Adjacency Matrix
representation of graphs.
Cont’d…
• Floyd-Warshall Algorithm
• Consider a graph, G = {V, E} where V is the set of all vertices present in
the graph and E is the set of all the edges in the graph. The graph, G,
is represented in the form of an adjacency matrix, A, that contains all
the weights of every edge connecting two vertices.
Algorithm
• Step 1 − Construct an adjacency matrix A with the costs of all edges present in the graph. If there is no edge between two vertices, mark the value as ∞.
• Step 2 − Derive another adjacency matrix A1 from A, keeping the first row and first column of the original adjacency matrix intact in A1.
• For the remaining values, say A1[i,j]: if A[i,j] > A[i,k] + A[k,j], then replace A1[i,j] with A[i,k] + A[k,j]; otherwise, do not change the value. Here, in this step, k = 1 (the first vertex acting as pivot).
• Step 3 − Repeat Step 2 for all the vertices in the graph by changing the
k value for every pivot vertex until the final matrix is achieved.
• Step 4 − The final adjacency matrix obtained is the final solution with
all the shortest paths.
Pseudocode
Floyd-Warshall(w, n){              // w: weights, n: number of vertices
    for i = 1 to n do              // initialize, D(0) = [wij]
        for j = 1 to n do {
            d[i, j] = w[i, j];
        }
    for k = 1 to n do              // compute D(k) from D(k-1)
        for i = 1 to n do
            for j = 1 to n do
                if (d[i, k] + d[k, j] < d[i, j]) {
                    d[i, j] = d[i, k] + d[k, j];
                }
    return d[1..n, 1..n];
}
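The pseudocode translates directly to Python; a sketch below, where the example matrix is assumed for illustration (it is not the graph from the slides):

```python
INF = float('inf')  # stands for "no edge" in the adjacency matrix

def floyd_warshall(w):
    """All-pairs shortest paths. w is an n x n adjacency matrix with
    w[i][j] = edge weight, INF if no edge, and 0 on the diagonal."""
    n = len(w)
    d = [row[:] for row in w]          # D(0) = [w_ij]
    for k in range(n):                 # compute D(k) from D(k-1)
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Illustrative 4-vertex directed graph (assumed data):
g = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
print(floyd_warshall(g))
```

The three nested loops give the O(V³) running time; the matrix `d` plays the role of the table that stores solutions to overlapping sub-problems.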
All-Pairs Shortest Path by Floyd’s Algorithm
(The step-by-step distance matrices for the worked example appear as figures in the original slides.)
0/1 Knapsack problem
• Here, a knapsack is like a container or a bag. Suppose we are given some items, each with a weight and a profit. We have to put items into the knapsack in such a way that the total value produces a maximum profit.
• For example, the capacity of the container is 20 kg. We have to select the items in such a way that the sum of the weights of the items is smaller than or equal to the capacity of the container, and the profit is maximum.
0/1 knapsack problem
• What is the 0/1 knapsack problem?
• The 0/1 knapsack problem means that each item is either put into the knapsack completely or not at all.
• For example, we have two items having weights 2 kg and 3 kg, respectively.
• If we pick the 2 kg item, we cannot pick only 1 kg of it (an item is not divisible); we have to pick the 2 kg item completely.
• This is the 0/1 knapsack problem: either we pick an item completely or we leave it. The 0/1 knapsack problem is solved by dynamic programming.
Example
• Initially, row 0 and column 0 of the table have profit 0.
• Take the first item only, ignoring the rest, and put its profit at the corresponding weight column; for the other cells, take the profit from the previous row (0 if none exists).
• Now include items 1 and 2, with a combined profit of w2 + w3 = 1 + 2 = 3; put this in the weight-w5 cell.
• Now compute the row for item 3: add w2 + w4 and w3 + w4. We cannot add w2 + w3 + w4 = w9, since it exceeds w8.
Cont’d…
• The remaining values are filled with the maximum profit achievable with respect to the items and the weight per column that can be stored in the knapsack.
• The formula to store the profit values is −
• V[i,w] = max{ V[i−1,w], V[i−1, w−w[i]] + P[i] }
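The recurrence above can be sketched in Python, with a trace-back step to recover which items were taken; the weights, profits, and capacity below are assumed for illustration (they are not the slide example):

```python
def knapsack01(weights, profits, capacity):
    """Bottom-up 0/1 knapsack. V[i][w] = best profit using the first i
    items with capacity w. Returns (best profit, 1-based chosen items)."""
    n = len(weights)
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            V[i][w] = V[i - 1][w]                  # skip item i, or:
            if weights[i - 1] <= w:                # take item i if it fits
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + profits[i - 1])
    # Trace back: if a cell differs from the one above, item i was taken.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if V[i][w] != V[i - 1][w]:
            chosen.append(i)
            w -= weights[i - 1]
    return V[n][capacity], sorted(chosen)

print(knapsack01([3, 4, 5, 6], [2, 3, 4, 1], 8))  # (6, [1, 3])
```

The trace-back mirrors the row-by-row argument on the following slides: a profit that first appears in row i must come from including item i.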
Example
(The filled profit table appears as figures in the original slides.)

• Profit 8 is found only in row 4, so item 4 is included (x4 = 1).
• Profit 2 is found in both the 3rd and the 2nd row; we set x3 = 0 because the profit is gained from the 2nd row (x2 = 1).
• x1 is 0 because its profit is already found in row 0.
• The total profit = {profit of x2 + profit of x4 = 2 + 8 = 10}.


Shortest Path-Dijkstra’s algorithm
• Dijkstra’s algorithm is a popular algorithm for solving single-source shortest path problems on graphs with non-negative edge weights, i.e., finding the shortest distance from a source vertex to the other vertices of a graph.
• The algorithm maintains a set of visited vertices and a set of unvisited
vertices.
• It starts at the source vertex and iteratively selects the unvisited
vertex with the smallest tentative distance from the source.
• It then visits the neighbors of this vertex and updates their tentative
distances if a shorter path is found.
Basic Requirements for Implementation of Dijkstra’s Algorithm
• Graph: Dijkstra’s Algorithm can be implemented on any graph, but it works best with a weighted directed graph with non-negative edge weights, and the graph should be represented as a set of vertices and edges.
• Source Vertex: Dijkstra’s Algorithm requires a source node, which is the starting point for the search.
• Destination Vertex: Dijkstra’s algorithm may be modified to terminate the search once a specific destination vertex is reached.
• Non-Negative Edges: Dijkstra’s algorithm works only on graphs with non-negative weights, because during the process the edge weights have to be added to find the shortest path.
Algorithm for Dijkstra’s Algorithm:
• Mark the source node with a current distance of 0 and the rest with infinity.
• Set the unvisited node with the smallest current distance as the current node.
• For each neighbor N of the current node, add the current node’s distance to the weight of the edge connecting it to N. If this is smaller than the current distance of N, set it as the new current distance of N.
• Mark the current node as visited.
• Go to step 2 if any nodes remain unvisited.
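The steps above can be sketched in Python using a min-priority queue (`heapq`); the graph below is assumed for illustration, with edge weights loosely modeled on the distances used in the example (2, 5, 10, 2):

```python
import heapq

def dijkstra(graph, source):
    """graph: {vertex: [(neighbor, weight), ...]} with non-negative weights.
    Returns the shortest distance from source to every reachable vertex."""
    dist = {source: 0}
    visited = set()
    pq = [(0, source)]                     # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)                     # smallest tentative distance is final
        for v, w in graph.get(u, []):
            nd = d + w                     # relax edge u -> v
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Illustrative 7-node directed graph (assumed data):
g = {0: [(1, 2), (2, 6)], 1: [(3, 5)], 2: [(3, 8)],
     3: [(4, 10), (5, 15)], 4: [(6, 2)], 5: [(6, 6)], 6: []}
print(dijkstra(g, 0))  # {0: 0, 1: 2, 2: 6, 3: 7, 4: 17, 5: 22, 6: 19}
```

Popping from the heap implements "pick the unvisited node with the smallest current distance"; the inner loop is the relaxation step.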
Example
Cont’d…
Cont’d…
• Step 2: Check the adjacent nodes. Now we have two choices (either choose Node 1 with distance 2 or choose Node 2 with distance 6); choose the node with the minimum distance.
• In this step Node 1 is the minimum-distance adjacent node, so mark it as visited and add up the distance.
Cont’d…
Cont’d…
• Step 3: Then move forward and check the adjacent node, which is Node 3; mark it as visited and add up the distance. Now the distance will be:
• Distance: Node 0 -> Node 1 -> Node 3 = 2 + 5 = 7
Cont’d…
• Step 4: Again we have two choices for adjacent nodes (either Node 4 with distance 10 or Node 5 with distance 15); choose the node with the minimum distance.
• In this step Node 4 is the minimum-distance adjacent node, so mark it as visited and add up the distance.
• Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 = 2 + 5 + 10 = 17
Cont’d…
Cont’d…
• Step 5: Again, move forward and check the adjacent node, which is Node 6; mark it as visited and add up the distance. Now the distance will be:
• Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 = 2 + 5 + 10 + 2 = 19
Cont’d…
Time Complexity
• Complexity Analysis of Dijkstra’s Algorithm using Priority Queue:
• Time complexity : O(E log V)
• Space Complexity: O(V²), where V is the number of vertices.
Dijkstra's Algorithm using
Tabular Method
• The minimum is 2 (at A); the destination is C.
• Start from C and compute the distances for A–H.
• The minimum is 4 (at C); start from D now.
(The distance tables for each step appear as figures in the original slides.)
Total time complexity
• The total time complexity of Dijkstra's algorithm using the tabular
method with a binary heap-based min-priority queue is O((V + E) log
V).
Depth First Search Algorithm
• A standard DFS implementation puts each vertex of the graph into one of two categories:
• Visited
• Not Visited
• The purpose of the algorithm is to mark each vertex as visited while avoiding cycles.
The DFS algorithm works as follows:
• Start by putting any one of the graph's vertices on top of a stack.
• Take the top item of the stack and add it to the visited list.
• Create a list of that vertex's adjacent nodes. Add the ones which
aren't in the visited list to the top of the stack.
• Keep repeating steps 2 and 3 until the stack is empty.
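The four steps above can be sketched as an iterative Python function; the 5-vertex graph below is assumed for illustration:

```python
def dfs(graph, start):
    """Iterative DFS; graph is an adjacency-list dict."""
    visited = []
    stack = [start]                        # step 1: push a starting vertex
    while stack:                           # step 4: repeat until empty
        vertex = stack.pop()               # step 2: take the top item...
        if vertex in visited:
            continue
        visited.append(vertex)             # ...and add it to the visited list
        # step 3: push adjacent vertices that are not yet visited
        # (reversed so lower-numbered neighbors are visited first)
        for neighbor in reversed(graph[vertex]):
            if neighbor not in visited:
                stack.append(neighbor)
    return visited

# Illustrative 5-vertex undirected graph (assumed data):
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2]}
print(dfs(g, 0))  # [0, 1, 2, 4, 3]
```

On this assumed graph the visiting order matches the walkthrough in the example that follows: 0, then 1, then 2, then 4, and finally 3.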
Depth First Search Example
• Let's see how the Depth First Search algorithm works with an
example. We use an undirected graph with 5 vertices.
Cont’d…
• We start from vertex 0. The DFS algorithm begins by putting it in the Visited list and putting all its adjacent vertices on the stack.
Cont’d…
• Next, we visit the element at the top of stack i.e. 1 and go to its
adjacent nodes. Since 0 has already been visited, we visit 2 instead.
Cont’d…
• Vertex 2 has an unvisited adjacent vertex in 4, so we add that to the
top of the stack and visit it.
Cont’d…
• Vertex 4 has an unvisited adjacent vertex in 3, so we add that to the top of the stack and visit it.
Cont’d…
• After we visit the last element 3, it doesn't have any unvisited
adjacent nodes, so we have completed the Depth First Traversal of the
graph.
Complexity of Depth First Search
• The time complexity of the DFS algorithm is represented in the form
of O(V + E), where V is the number of nodes and E is the number of
edges.
• The space complexity of the algorithm is O(V).
