
Topic 3.
Dynamic Programming

By: Er. Bhavneet Kaur
Master Subject Co-ordinator

Approach
• Dynamic Programming is also used in optimization problems. Like the
divide-and-conquer method, Dynamic Programming solves problems by
combining the solutions of sub-problems.
• A Dynamic Programming algorithm solves each sub-problem just once
and saves its answer in a table, thereby avoiding the work of
re-computing the answer every time the sub-problem recurs.
• Two main properties suggest that a given problem can be solved
using Dynamic Programming: overlapping sub-problems and optimal
substructure.
• Dynamic Programming also combines solutions to sub-problems. It is
mainly used where the solution of one sub-problem is needed
repeatedly. The computed solutions are stored in a table so that they
do not have to be re-computed.
• For example, Binary Search does not have overlapping sub-problems,
whereas the recursive program for Fibonacci numbers has many
overlapping sub-problems.
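
The Fibonacci case can be made concrete with a short sketch. The code below is a minimal Python illustration (not part of the original slides): the naive recursion recomputes the same sub-problems over and over, while the memoized version stores each answer in a table so it is computed only once.

```python
from functools import lru_cache

def fib_naive(n):
    # Recomputes fib(k) for the same k many times: exponential number of calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each fib(k) is computed once and cached, so only O(n) calls are made.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # 102334155, returned almost instantly
```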
• A given problem has the Optimal Substructure Property if an optimal
solution of the given problem can be obtained from optimal
solutions of its sub-problems.
• For example, the Shortest Path problem has the following optimal
substructure property: if a node x lies on the shortest path from a
source node u to a destination node v, then the shortest path from u
to v is the combination of the shortest path from u to x and the
shortest path from x to v. Standard shortest-path algorithms such as
Floyd-Warshall (all-pairs) and Bellman-Ford (single-source) are
typical examples of Dynamic Programming.
• A Dynamic Programming algorithm is designed using the
following four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a
bottom-up fashion.
4. Construct an optimal solution from the computed
information.
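
As a small illustration of step 3, the sketch below computes the value bottom-up by filling a table from the smallest sub-problems upward (Fibonacci numbers again; the function and variable names are chosen only for illustration):

```python
def fib_bottom_up(n):
    # Step 3: compute the value of each sub-problem bottom-up,
    # starting from the smallest instances and reusing stored answers.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))  # 55
```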
Divide & Conquer Method vs. Dynamic Programming

• Divide & Conquer involves three steps at each level of recursion:
divide the problem into a number of sub-problems; conquer the
sub-problems by solving them recursively; combine the solutions to
the sub-problems into the solution for the original problem.
Dynamic Programming involves a sequence of four steps: characterize
the structure of optimal solutions; recursively define the values of
optimal solutions; compute the value of optimal solutions in a
bottom-up fashion; construct an optimal solution from computed
information.

• Divide & Conquer is recursive. Dynamic Programming is non-recursive.

• Divide & Conquer does more work on sub-problems and hence consumes
more time. Dynamic Programming solves each sub-problem only once and
then stores the answer in a table.

• Divide & Conquer is a top-down approach. Dynamic Programming is a
bottom-up approach.

• In Divide & Conquer, the sub-problems are independent of each
other. In Dynamic Programming, the sub-problems are interdependent.

• Examples of Divide & Conquer: Merge Sort, Binary Search.
Example of Dynamic Programming: Matrix Chain Multiplication.
Applications of dynamic programming
• 0/1 knapsack problem
• All-pairs shortest path problem
• Reliability design problem
• Longest common subsequence (LCS)
• Flight control and robotics control
• Time-sharing: It schedules the job to maximize CPU
usage
0/1 Knapsack Problem
• A knapsack basically means a bag of a given capacity. We want to
pack n items in our luggage.
  o The i-th item is worth vi dollars and weighs wi pounds.
  o Take as valuable a load as possible, but the total weight cannot
    exceed W pounds.
  o vi, wi and W are integers.
• Constraints: 1. total weight ≤ W (the capacity); 2. total value → maximum.
Unbounded Knapsack (Repetition of items allowed)

• Unbounded Knapsack Problem: in this case, an item can be used an
unlimited number of times. This problem can be solved efficiently
using Dynamic Programming.
• N is always positive, i.e. greater than zero.
EXAMPLE
• Input : W = 100
val[] = {1, 30}
wt[] = {1, 50}
Output : 100
There are many ways to fill knapsack.
1) 2 instances of 50 unit weight item.
2) 100 instances of 1 unit weight item.
3) 1 instance of 50 unit weight item and 50
instances of 1 unit weight items.
We get maximum value with option 2.
Input : W = 8
val[] = {10, 40, 50, 70}
wt[] = {1, 3, 4, 5}
Output : 110
We get maximum value with one unit of
weight 5 and one unit of weight 3.
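
A minimal bottom-up sketch of the unbounded knapsack recurrence is shown below (the function name and representation are illustrative, not from the slides): dp[w] holds the best value achievable with capacity w, and every item may be reused at each capacity.

```python
def unbounded_knapsack(W, wt, val):
    # dp[w] = best value achievable with total weight at most w,
    # where each item may be chosen any number of times.
    dp = [0] * (W + 1)
    for w in range(1, W + 1):
        for weight, value in zip(wt, val):
            if weight <= w:
                dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[W]

print(unbounded_knapsack(100, [1, 50], [1, 30]))              # 100 (first example)
print(unbounded_knapsack(8, [1, 3, 4, 5], [10, 40, 50, 70]))  # 110 (second example)
```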
Bounded Knapsack Problem

• Given N items, each item having a given weight wi and a value vi,
the task is to maximize the value by selecting a maximum of K items
adding up to a maximum weight W.
0/1 Knapsack Problem

• We are given N items where each item has some weight (wi) and
value (vi) associated with it. We are also given a bag with
capacity W. The target is to put the items into the bag such that
the sum of the values associated with them is the maximum possible.
Fractional Knapsack Problem

• Given the weights and values of N items, put these items in a
knapsack of capacity W to get the maximum total value in the
knapsack. In Fractional Knapsack, we can break items to maximize
the total value of the knapsack.
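
The fractional version is typically solved greedily by value-to-weight ratio rather than with a DP table; the worked example later in this section follows the same idea. A minimal sketch (names and representation are illustrative):

```python
def fractional_knapsack(W, wt, val):
    # Greedily take items in decreasing order of value/weight ratio,
    # breaking the last item if it does not fit completely.
    items = sorted(zip(wt, val), key=lambda p: p[1] / p[0], reverse=True)
    total = 0.0
    remaining = W
    for weight, value in items:
        if remaining <= 0:
            break
        take = min(weight, remaining)      # weight of this item actually taken
        total += value * (take / weight)   # proportional value for a fraction
        remaining -= take
    return total
```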
Question?
• You are given a knapsack that can carry a maximum
weight of 60. There are 4 items with weights {20, 30,
40, 70} and values {70, 80, 90, 200}. What is the
maximum value of the items you can carry using the
knapsack?
Solution: The maximum value you can get is 160. This
can be achieved by choosing items 1 and 3, which have a
total weight of 60 (20 + 40) and a total value of 70 + 90 = 160.
• Consider the following instance of the knapsack problem: a
maximum weight of 12 is allowed in the knapsack. Find the value of
the maximum profit in the optimal solution of the fractional
knapsack problem.
• Solution:

Decreasing order of Pi/Wi: X1, X4, X3, X5, X2.
Include X1 --> profit = 15, weight = 2.
Include X4 --> profit = 15 + 16 = 31, weight = 2 + 4 = 6.
Include X3 --> profit = 31 + 9 = 40, weight = 6 + 3 = 9.
Weight left = 12 − 9 = 3; weight of X5 = 6, so half of X5 can be included.
Profit = 40 + 17/2 = 48.5.
• 2. A thief wants to rob a store. He is carrying a bag of capacity W. The store has
'n' items; their weights are given by the 'wt' array and their values by the 'val'
array. He can either include an item in his knapsack or exclude it, but cannot
take a fraction of an item. We need to find the maximum value of items that the
thief can steal.

• Solution:
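The slide's own solution is not reproduced in this text, so the following is only a minimal bottom-up 0/1 knapsack sketch in Python (function and variable names are illustrative) that answers this kind of query:

```python
def knapsack_01(W, wt, val):
    n = len(wt)
    # dp[i][w] = best value using the first i items with capacity w.
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]              # exclude item i
            if wt[i - 1] <= w:                   # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - wt[i - 1]] + val[i - 1])
    return dp[n][W]

# The earlier question: capacity 60, weights {20, 30, 40, 70}, values {70, 80, 90, 200}.
print(knapsack_01(60, [20, 30, 40, 70], [70, 80, 90, 200]))  # 160
```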
Bellman Ford Algorithm
• The Bellman-Ford algorithm is a way to find single-source shortest
paths in a graph with negative edge weights (but no negative cycles).
The second for loop in this algorithm also detects negative cycles.
• The first for loop relaxes each of the edges in the graph n − 1 times. We
claim that after n − 1 iterations, the distances are guaranteed to be
correct. Overall, the algorithm takes O(mn) time, where m is the number
of edges and n the number of vertices.
• Dynamic Programming is used in the Bellman-Ford algorithm. It begins
with a starting vertex and calculates the distances to the other
vertices that can be reached by a single edge. It then searches for
paths with two edges, and so on. The Bellman-Ford algorithm uses the
bottom-up approach.
• Bellman-Ford detects negative cycles: if there is
a negative cycle reachable from the source s, then
for some edge (u, v), d_{n−1}(v) > d_{n−1}(u) + w(u, v).
• If the graph has no negative cycles, then the
distance estimates on the last iteration are equal
to the true shortest distances, i.e. d_{n−1}(v) =
δ(s, v) for all vertices v.
• Why Should You Be Cautious With Negative Weights?
When attempting to find the shortest path, negative
weight cycles may produce an incorrect result. Shortest
path algorithms that cannot detect such a cycle, such as
Dijkstra's Algorithm, may produce incorrect results
because they may go through a negative weight cycle,
reducing the path length.
Example

• Choose path value 0 for the source vertex and infinity for all other
vertices.
• Relax every edge: if the newly calculated path length to the adjacent
vertex is less than its previous path length, update the path length of
that vertex.
• This procedure must be repeated V − 1 times, where V is the total number
of vertices. This is because, in the worst-case scenario, a vertex's path
length may be shortened up to V − 1 times before it reaches its final value.
• As a result, after V − 1 iterations you have the final path lengths and
can determine whether or not the graph contains a negative cycle.
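
Putting the procedure together, here is a compact Bellman-Ford sketch (the graph representation and names are chosen for illustration): V − 1 rounds of edge relaxation, followed by one extra pass that reports a negative cycle if any edge can still be relaxed.

```python
def bellman_ford(num_vertices, edges, source):
    # edges: list of (u, v, weight) tuples; vertices are 0 .. num_vertices - 1.
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0

    # Relax every edge V - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # One more pass: if any edge can still be relaxed, a negative cycle
    # is reachable from the source.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("Graph contains a negative-weight cycle")

    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```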
