DAA Lec 8 Dynamic Programming

By: SAJID ALI

Email: [email protected]
Cell#: 0321-8956305
DESIGN & ANALYSIS OF ALGORITHMS
FALL 2024
 Trade Space for Time
 Categorize Dynamic Programming
 Solving Problems with Dynamic Programming
 Bottom Up vs Top Down
 Concept of Dynamic Programming
 Elements of DP
 Knapsack Problem
 Examples
 Memoization
Writes down "1+1+1+1+1+1+1+1 =" on a sheet of paper. "What's that equal to?"
Counting: "Eight!"
Writes down another "1+" on the left. "What about that?"
"Nine!"
"How'd you know it was nine so fast?"
"You just added one more!"
"So you didn't need to recount because you remembered there were eight! Dynamic Programming is just a fancy way to say remembering stuff to save time later!"
The intuition behind dynamic programming is that we trade space for time. Instead of recalculating all the states, which takes a lot of time but no space, we take up space to store the results of all the sub-problems and save time later.
Let's try to understand this by taking the example of Fibonacci numbers:
Fibonacci(n) = 1; if n = 0
Fibonacci(n) = 1; if n = 1
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2); otherwise
So, the first few numbers in this series will be: 1, 1, 2, 3, 5, 8, 13, 21...
and so on!
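The "remembering stuff" idea can be sketched in Python as a memoized Fibonacci (the function name and the `memo` dictionary are illustrative choices, not from the slides):

```python
# Memoized Fibonacci: store each computed value so that no
# subproblem is ever solved twice (space traded for time).
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # reuse a remembered result
    if n <= 1:
        memo[n] = 1             # base cases: fib(0) = fib(1) = 1
    else:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print([fib(i) for i in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Without the `memo` table the recursion recomputes the same values exponentially many times; with it, each value is computed once.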
Dynamic programming problems generally fall into two categories:
1. Optimization problems.
2. Combinatorial problems.
 Optimization problems expect you to select a feasible solution so that the value of the required function is minimized or maximized.
 Combinatorial problems expect you to figure out the number of ways to do something, or the probability of some event happening.
 Show that the problem can be broken down into optimal sub-problems.
 Recursively define the value of the solution by expressing it in terms of optimal solutions for smaller sub-problems.
 Compute the value of the optimal solution in bottom-up fashion.
 Construct an optimal solution from the computed information.
Bottom Up - I'm going to learn programming. Then, I will start practicing. Then, I will start taking part in contests. Then, I'll practice even more and try to improve. After working hard like crazy, I'll be an amazing coder.

Top Down - I will be an amazing coder. How? I will work hard like crazy. How? I'll practice more and try to improve. How? I'll start taking part in contests. How? I'll practice. How? I'm going to learn programming.
 Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to sub-problems.
 On such problems, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common sub-problems.
 Dynamic programming applies when the sub-problems overlap, that is, when sub-problems share sub-sub-problems.
 "Programming" in this context refers to a tabular method, not to writing computer code. The method effectively tries all possible solutions to find an optimal one, but without redundant work.
Idea:

 A dynamic-programming algorithm solves each sub-sub-problem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it solves each sub-sub-problem.
 We typically apply dynamic programming to optimization problems.

 An optimization problem is the problem of finding the best solution from all feasible solutions.
 There are many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.
 We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.
When developing a dynamic-programming algorithm, we
follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a
bottom-up fashion.
4. Construct an optimal solution from computed information.
If we need only the value of an optimal solution, and not
the solution itself, then we can omit step 4.
The two key ingredients that an optimization problem must have in order for dynamic programming to apply:
Optimal substructure
An optimal solution to the problem contains within it optimal solutions to subproblems.
Overlapping subproblems
A recursive algorithm revisits the same subproblems repeatedly. The space of subproblems must be "small" in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems.
There are usually two equivalent ways to implement a dynamic-programming approach:

1. Tabulation (bottom-up)
Solve all related sub-problems first, typically by filling up an n-dimensional table.

2. Memoization (top-down)
Maintain a map of already-solved sub-problems; solve the "top" problem first, which typically recurses down to solve the sub-problems.

These two approaches yield algorithms with the same asymptotic running time, but the first approach often has much better constant factors.
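For contrast with the memoized version, the Fibonacci example from earlier can be written in tabulation style, filling the table from the smallest subproblems upward (the function name is illustrative):

```python
# Tabulation (bottom-up) Fibonacci: fill the table from the
# smallest subproblems upward; no recursion involved.
def fib_bottom_up(n):
    table = [1] * (n + 1)       # table[0] = table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(7))  # 21
```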
Given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W.
So we must consider the weights of items as well as their values.

Item #   Weight   Value
1        1        8
2        3        6
3        5        5
There are two versions of the problem:

1. "0-1 knapsack problem"
Items are indivisible; you either take an item or not. Can be solved with dynamic programming.

2. "Fractional knapsack problem"
Items are divisible: you can take any fraction of an item.
Given a knapsack with maximum capacity W, and a set S consisting of n items.
Each item i has some weight wi and benefit value bi (all wi and W are integer values).
Problem: How do we pack the knapsack to achieve the maximum total value of packed items?
Let’s first solve this problem with a straightforward algorithm.
Since there are n items, there are 2^n possible combinations of items.
We go through all combinations and find the one with maximum value and with total weight less than or equal to W.
The running time will be O(2^n).
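A brute-force sketch in Python (the function name and the use of `itertools.combinations` are our choices; the item data comes from the worked example later in the lecture):

```python
from itertools import combinations

# Brute force: try all 2^n subsets of the items and keep the
# best subset whose total weight does not exceed W.
def knapsack_brute_force(items, W):
    # items: list of (weight, benefit) pairs
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, b in subset)
            if weight <= W:
                best = max(best, sum(b for w, b in subset))
    return best

print(knapsack_brute_force([(2, 3), (3, 4), (4, 5), (5, 6)], 5))  # 7
```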
We can do better with an algorithm based on dynamic programming. We need to carefully identify the subproblems.

Defining a Subproblem

If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, .. k}.
This is a reasonable subproblem definition.
The question is: can we describe the final solution (Sn) in terms of subproblems (Sk)?
Unfortunately, we can’t do that.


Counterexample, with max weight W = 20:

Item #   wi   bi
1        2    3
2        4    5
3        5    8
4        3    4
5        9    10

For S4: total weight 14, maximum benefit 20 (all of items 1-4).
For S5: total weight 20, maximum benefit 26.

The solution for S4 is not part of the solution for S5!!!
 As we have seen, the solution for S4 is not part of the
solution for S5
 So our definition of a subproblem is flawed and we need
another one!
• Let’s add another parameter: w, which will represent the maximum weight for each subset of items.
• The subproblem will be to compute V[k,w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, .. k} in a knapsack of size w.
• Assuming we know V[i, j], where i = 0, 1, 2, ... k-1 and j = 0, 1, 2, ... w, how do we derive V[k,w]?


Recursive formula for subproblems:

V[k,w] = V[k-1,w]                              if wk > w
V[k,w] = max{ V[k-1,w], V[k-1,w-wk] + bk }     otherwise

It means that the best subset of Sk that has total weight w is either:
1) the best subset of Sk-1 that has total weight <= w (item k not taken), or
2) the best subset of Sk-1 that has total weight <= w-wk, plus item k.
for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0

for i = 1 to n
    for w = 0 to W
        if wi <= w                         // item i can be part of the solution
            if bi + V[i-1,w-wi] > V[i-1,w]
                V[i,w] = bi + V[i-1,w-wi]
            else
                V[i,w] = V[i-1,w]
        else
            V[i,w] = V[i-1,w]              // wi > w
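The pseudocode above translates almost line-for-line into Python (the function name `knapsack` and the list-of-lists table are our choices):

```python
# Bottom-up 0-1 knapsack: V[i][w] is the best value achievable
# using items 1..i with capacity w.
def knapsack(weights, benefits, W):
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]    # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        wi, bi = weights[i - 1], benefits[i - 1]
        for w in range(1, W + 1):
            if wi <= w and bi + V[i - 1][w - wi] > V[i - 1][w]:
                V[i][w] = bi + V[i - 1][w - wi]  # take item i
            else:
                V[i][w] = V[i - 1][w]            # skip item i
    return V

table = knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(table[4][5])  # 7
```

The returned table matches the one filled in step by step in the slides that follow.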
for w = 0 to W          // O(W)
    V[0,w] = 0
for i = 1 to n          // O(n)
    V[i,0] = 0

for i = 1 to n          // repeat n times
    for w = 0 to W      // O(W)
        < the rest of the code >

What is the running time of this algorithm?
O(n*W)
Remember that the brute-force algorithm takes O(2^n).
Let’s run our algorithm on the following data:

n = 4 (# of elements)
W = 5 (max weight)
Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)
First, initialize row 0 and column 0 to zero:

for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0

i\w   0  1  2  3  4  5
0     0  0  0  0  0  0
1     0
2     0
3     0
4     0
Items: 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6)

i = 1 (wi = 2, bi = 3):
w = 1: w - wi = -1, item 1 does not fit, so V[1,1] = V[0,1] = 0
w = 2: w - wi = 0, V[1,2] = max(V[0,2], 3 + V[0,0]) = 3
w = 3: w - wi = 1, V[1,3] = max(V[0,3], 3 + V[0,1]) = 3
w = 4: w - wi = 2, V[1,4] = max(V[0,4], 3 + V[0,2]) = 3
w = 5: w - wi = 3, V[1,5] = max(V[0,5], 3 + V[0,3]) = 3

i\w   0  1  2  3  4  5
0     0  0  0  0  0  0
1     0  0  3  3  3  3
2     0
3     0
4     0
i = 2 (wi = 3, bi = 4):
w = 1: w - wi = -2, item 2 does not fit, so V[2,1] = V[1,1] = 0
w = 2: w - wi = -1, item 2 does not fit, so V[2,2] = V[1,2] = 3
w = 3: w - wi = 0, V[2,3] = max(V[1,3], 4 + V[1,0]) = 4
w = 4: w - wi = 1, V[2,4] = max(V[1,4], 4 + V[1,1]) = 4
w = 5: w - wi = 2, V[2,5] = max(V[1,5], 4 + V[1,2]) = 7

i\w   0  1  2  3  4  5
0     0  0  0  0  0  0
1     0  0  3  3  3  3
2     0  0  3  4  4  7
3     0
4     0
i = 3 (wi = 4, bi = 5):
w = 1..3: item 3 does not fit, so row 3 copies row 2: 0, 3, 4
w = 4: w - wi = 0, V[3,4] = max(V[2,4], 5 + V[2,0]) = 5
w = 5: w - wi = 1, V[3,5] = max(V[2,5], 5 + V[2,1]) = 7

i\w   0  1  2  3  4  5
0     0  0  0  0  0  0
1     0  0  3  3  3  3
2     0  0  3  4  4  7
3     0  0  3  4  5  7
4     0
i = 4 (wi = 5, bi = 6):
w = 1..4: item 4 does not fit, so row 4 copies row 3: 0, 3, 4, 5
w = 5: w - wi = 0, V[4,5] = max(V[3,5], 6 + V[3,0]) = 7

Final table:

i\w   0  1  2  3  4  5
0     0  0  0  0  0  0
1     0  0  3  3  3  3
2     0  0  3  4  4  7
3     0  0  3  4  5  7
4     0  0  3  4  5  7
 All of the information we need is in the table.
 V[n,W] is the maximal value of items that can be placed in the knapsack.
 To find the items themselves, let i = n and k = W:

while i, k > 0
    if V[i,k] != V[i-1,k] then
        mark the ith item as in the knapsack
        i = i-1, k = k - wi
    else
        i = i-1    // the ith item is not in the knapsack
Items: 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6)

i = 4, k = 5: V[4,5] = 7 = V[3,5], so item 4 is not in the knapsack; i = 3.
i = 3, k = 5: V[3,5] = 7 = V[2,5], so item 3 is not in the knapsack; i = 2.
i = 2, k = 5: V[2,5] = 7 != V[1,5] = 3, so mark item 2; k = 5 - 3 = 2, i = 1.
i = 1, k = 2: V[1,2] = 3 != V[0,2] = 0, so mark item 1; k = 2 - 2 = 0, i = 0.
i = 0, k = 0: done.

The optimal knapsack should contain {1, 2}.

i\w   0  1  2  3  4  5
0     0  0  0  0  0  0
1     0  0  3  3  3  3
2     0  0  3  4  4  7
3     0  0  3  4  5  7
4     0  0  3  4  5  7
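The table build plus traceback can be sketched together in Python (the function name and the (value, items) return convention are our choices):

```python
# Build the DP table, then trace back through it to recover
# which items were chosen: whenever V[i][k] differs from
# V[i-1][k], item i must be in the optimal knapsack.
def knapsack_with_items(weights, benefits, W):
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            V[i][w] = V[i - 1][w]                # default: skip item i
            if weights[i - 1] <= w:
                V[i][w] = max(V[i][w],
                              benefits[i - 1] + V[i - 1][w - weights[i - 1]])
    # traceback, mirroring the pseudocode above
    items, k = [], W
    for i in range(n, 0, -1):
        if V[i][k] != V[i - 1][k]:               # item i was taken
            items.append(i)
            k -= weights[i - 1]
    return V[n][W], sorted(items)

print(knapsack_with_items([2, 3, 4, 5], [3, 4, 5, 6], 5))  # (7, [1, 2])
```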
n = 4 (# of elements), W = 5 (max weight)
Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)

n = 4 (# of elements), W = 8 (max weight)
Elements (weight, benefit): (3,2), (4,3), (5,4), (6,1)

n = 4 (# of elements), W = 8 (max weight)
Elements (weight, benefit): (1,2), (2,3), (5,4), (6,5)
Constraint: total weight must not exceed W = 8.
Objective: choose any number of items such that the profit is maximized and the constraint is satisfied.

Item #   Weight   Profit
1        2        1
2        3        2
3        4        5
4        5        6
 We write the procedure recursively in a natural manner, but
modified to save the result of each subproblem (usually in an
array or hash table).
 The procedure now first checks to see whether it has previously
solved this subproblem. If so, it returns the saved value, saving
further computation at this level; if not, the procedure computes
the value in the usual manner.
 We say that the recursive procedure has been memoized; it
“remembers” what results it has computed previously.
Goal: solve only the subproblems that are necessary, and solve each only once.

 Memoization is another way to deal with overlapping subproblems in dynamic programming.
 With memoization, we implement the algorithm recursively:
If we encounter a new subproblem, we compute and store the solution.
If we encounter a subproblem we have seen, we look up the answer.
 Most useful when the algorithm is easiest to implement recursively, especially if we do not need solutions to all subproblems.
for i = 1 to n
    for w = 1 to W
        V[i,w] = -1
for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0

MFKnapsack(i, w)
    if V[i,w] < 0
        if w < wi
            value = MFKnapsack(i-1, w)
        else
            value = max(MFKnapsack(i-1, w),
                        bi + MFKnapsack(i-1, w-wi))
        V[i,w] = value
    return V[i,w]
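A Python sketch of the same top-down idea; here `functools.lru_cache` plays the role of the V table (an implementation choice, not from the slides):

```python
import functools

# Top-down (memoized) knapsack: lru_cache remembers each
# (i, w) subproblem, so only reachable subproblems are solved.
def knapsack_memo(weights, benefits, W):
    @functools.lru_cache(maxsize=None)
    def best(i, w):
        if i == 0 or w == 0:
            return 0
        if weights[i - 1] > w:            # item i cannot fit
            return best(i - 1, w)
        return max(best(i - 1, w),        # skip item i
                   benefits[i - 1] + best(i - 1, w - weights[i - 1]))
    return best(len(weights), W)

print(knapsack_memo([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```

Unlike the bottom-up version, which fills all (n+1)*(W+1) cells, this version touches only the subproblems the recursion actually reaches.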
ANY QUERY ?