Lec08 Dynamic Programming2024

Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler subproblems, particularly useful for problems with overlapping subproblems and optimal substructure. Key examples include the Longest Common Subsequence and the Rod Cutting Problem, which illustrate the efficiency of DP over naive recursive approaches. The document also discusses the Matrix Chain Multiplication problem, emphasizing the importance of optimal parenthesization to minimize computation costs.

Uploaded by

yijoebackup

LECTURE 08

DYNAMIC PROGRAMMING
What is Dynamic Programming?
 Dynamic Programming: from Wikipedia — “method
for solving complex problems by breaking them down
into simpler subproblems”
 Used when problem breaks down into recurring small
sub-problems and the recursive solution results in
identical recursive calls that are executed more than
once
 Examples:
 Matrix chain multiplication
 Optimal binary search tree
 There can be many solutions, but we wish to find a solution with optimal value
Dynamic vs Divide & Conquer

 Both divide the problem into sub-problems:
 Divide & conquer: independent sub-problems
 Dynamic programming: overlapping sub-problems
Classification of Dynamic Programming

 Dynamic programming:
 Top-down approach
 Bottom-up approach
Steps to develop the algorithms
 Steps of dynamic programming (DP):
 Conversion of the given problem into a search problem
 The search is over a large search space
 The search space is generally segregated into various sub-spaces
 A partitioning algorithm is needed, which can be accomplished through a recursive algorithm
 Recursively define the value of an optimal solution
 Compute the optimal solution, typically bottom-up
 Construct the path of an optimal solution (if desired)
 Recursive calls are characterised and a non-recursive procedure is developed for filling the table (the calculations can be reused at a later stage)
 Values are stored in the table; if a value is needed again, no re-computation is needed
 Thus, two key ingredients of DP:
 Optimal substructure
 Overlapping subproblems
 Bottom up algorithms usually outperform top-down ones by a constant factor
Disadvantages of Dynamic Programming

 More expertise is required to solve a dynamic programming problem
 Lack of a general algorithm (the algorithm is always specifically designed to solve one problem and cannot be reused for other problems)
 Dimensionality (the number of table entries can grow very large)
LONGEST COMMON SUBSEQUENCE
Longest Common Subsequence
 Longest common subsequence (LCS): measure of similarity
between two given strings (with longer length of LCS, more
similarity could be found in the given strings)
 Can use in spellcheck & DNA matching
 A brute-force approach lists all possible subsequences of X and checks for the LCS
 Listing all possible subsequences takes O(2^n) time
 Consider two sequences (strings): X with n characters and Y with m characters (in a subsequence, the order is preserved, but the characters are not necessarily contiguous)
 Assume X = ‘CTGCTGA’ and Y = ‘TGTGT’
 X contains the subsequence TGTG, which also occurs in Y
 Look for LCS from left to right
 What is the LCS?
 It is fast if done by a human, provided the strings are not long
If solved using recursion
 (“CTGCTGA”, “TGTGT”) branches into smaller pairs such as (“CTGCT”, “TGTGT”), (“CTGC”, “TGTGT”), (“CTG”, “TGTGT”), (“CT”, “TGTGT”) … and many more combinations
 Similar subproblems may be solved again and again before the solution is reached
 Dynamic programming can help to avoid that
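The repeated calls above can be reproduced with a naive recursion on the last characters. A minimal Python sketch (the function name lcs_naive is ours, not from the slides):

```python
def lcs_naive(x, y):
    # Naive recursion on the last characters of x and y.
    # The same (prefix of x, prefix of y) pairs are recomputed
    # many times, which is why the running time is exponential.
    if not x or not y:
        return 0
    if x[-1] == y[-1]:
        return lcs_naive(x[:-1], y[:-1]) + 1
    return max(lcs_naive(x[:-1], y), lcs_naive(x, y[:-1]))

print(lcs_naive("CTGCTGA", "TGTGT"))  # → 4
```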
Longest Common Subsequence

 To solve LCS using the dynamic approach:
 Convert the problem to a search problem
 Search space: all possible strings that are subsequences of X or Y
 Goal: find the strings that are common to both X and Y and have the longest length
 There are two cases: the last characters match or do not match
 If the last characters do not match: LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
 If the last characters match, extend the LCS: LCS[i][j] = LCS[i-1][j-1] + 1
Longest Common Subsequence

 Initialise the 1st row and column to 0 values

      A  C  G  T
   0  0  0  0  0
A  0
G  0
C  0
T  0
A  0
Longest Common Subsequence

 Since A = A, take the diagonal value plus 1: LCS[1,1] = LCS[0,0] + 1 = 0 + 1 = 1

      A  C  G  T
   0  0  0  0  0
A  0  1
G  0
C  0
T  0
A  0
Longest Common Subsequence

 Since A ≠ C, take the max of the cell above and the cell to the left, i.e. LCS[1,2] = max(LCS[0,2], LCS[1,1]) = 1

      A  C  G  T
   0  0  0  0  0
A  0  1  1
G  0
C  0
T  0
A  0
Longest Common Subsequence

 Final result

      A  C  G  T
   0  0  0  0  0
A  0  1  1  1  1
G  0  1  1  2  2
C  0  1  2  2  2
T  0  1  2  2  3
A  0  1  2  2  3

The length of the LCS is 3.
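The table walk-through follows the two-case recurrence directly. A minimal bottom-up Python sketch (the name lcs_length is ours):

```python
def lcs_length(x, y):
    # L[i][j] = length of the LCS of x[:i] and y[:j].
    # Row 0 and column 0 stay 0 (empty-prefix base case).
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1            # last characters match
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # take the better neighbour
    return L[m][n]

print(lcs_length("AGCTA", "ACGT"))  # → 3, matching the table
```

Each cell is filled exactly once, so the running time is O(mn).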
Optimal substructure & Overlapping subproblems

 Optimal substructure
 Occurs when the optimal solution contains within it optimal solutions to subproblems
 Matrix-chain multiplication uses two subproblems, the two sub-chains
 Overlapping subproblems
 Occurs when a recursive algorithm revisits the same problem repeatedly
 Dynamic programming solves each subproblem only once and stores the result
 In LCS, this reduces an exponential problem to O(mn)
ROD CUTTING PROBLEM
Rod Cutting Problem
 Given a rod of length n with n-1 cutting points

 Given a revenue for each length


Length i:  1  2  3  4  5   6   7   8   9   10
Price pi:  1  5  8  9  10  17  17  20  24  30

 Challenge: find the cutting of the rod that maximises the revenue
 E.g., if the rod length is 8, cutting it into lengths 2 and 6 gives revenue 5 + 17 = 22
Rod Cutting Problem
 Possible cuttings of the rod:
 The minimum piece length is 1, so the number of cut points is n-1
 Assume no cost is incurred per cut; if n = 4, we have 2^(4-1) = 8 possible cuttings
 In general, the total number of possible cuttings is exponential: 2^(n-1)

            p1 + Vn-1
            p2 + Vn-2
 Vn = max   …
            pn-1 + V1
            pn + V0

 Subproblems can be defined as a recurrence relation to solve the rod cutting problem. However, solving it directly by recursion gives exponential time complexity.
Rod Cutting Problem ~ Recursive Tree

 cutRod(n=4) calls cutRod(3), cutRod(2), cutRod(1) and cutRod(0);
 cutRod(3) calls cutRod(2), cutRod(1), cutRod(0); and so on.

 When a rod has length 4, we can make at most n-1 = 3 cuts: 3 cuts, 2 cuts, 1 cut, or no cut at all.

 With plain recursion, the same sub-problems are solved repeatedly, which wastes time:

 Function call   Repeated (no. of times)
 cutRod(3)       1 = 2^0
 cutRod(2)       2 = 2^1
 cutRod(1)       4 = 2^2
 cutRod(0)       8 = 2^3
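The call counts in the table can be checked by instrumenting the naive recursion. A Python sketch (the names cut_rod, calls and prices are ours, not from the slides):

```python
from collections import Counter

calls = Counter()  # how many times cut_rod is invoked per length

def cut_rod(p, n):
    # Naive recursion: best revenue for a rod of length n,
    # trying every length i for the first piece.
    calls[n] += 1
    if n == 0:
        return 0
    return max(p[i] + cut_rod(p, n - i) for i in range(1, n + 1))

prices = {1: 1, 2: 5, 3: 8, 4: 9}  # prices p_i from the table above
print(cut_rod(prices, 4))          # → 10 (cut into 2 + 2: 5 + 5)
print(dict(calls))                 # cutRod(3) once, (2) twice, (1) 4x, (0) 8x
```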
Pseudocode of Rod Cutting ~ Dynamic

BOTTOM_UP_ROD_CUT(p, n)
1 let r[0…n] be a new array
2 r[0] = 0          // no rod means no revenue
3 for j = 1 to n    // find the optimal solution for each length j
4   q = -∞
5   for i = 1 to j  // i is the length of the first piece
6     q = max(q, p[i] + r[j – i])
7   r[j] = q
8 return r[n]

// Find the decomposition of each new length j that gives the best
// revenue by looking up the max revenues already stored for smaller
// lengths in the array r
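A runnable Python version of the pseudocode above (assuming p[0] is unused and p[i] is the price of a piece of length i):

```python
def bottom_up_rod_cut(p, n):
    # r[j] holds the best revenue obtainable from a rod of length j.
    r = [0] * (n + 1)                      # r[0] = 0: no rod, no revenue
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):          # i = length of the first piece
            q = max(q, p[i] + r[j - i])    # reuse the stored optimum r[j-i]
        r[j] = q
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # price table from earlier
print(bottom_up_rod_cut(prices, 8))  # → 22 (cut into lengths 2 and 6)
```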
MATRIX CHAINS
Matrix Chain
 Multiplying A (d x e) by B (e x f) requires d x e x f scalar multiplications: the result is d x f, and entry (i, j) is the product of row i of A and column j of B
 With more matrices, since matrix multiplication is associative, we can compute A(BC) or (AB)C
 However, the two orders can require different numbers of multiplications
Number of Multiplications

 Example: A is 3 x 3 and B is 3 x 2. The result AB has 6 entries, and each entry takes 3 multiplications.
 No. of multiplications for AB = 3 * 3 * 2 = 18
Matrix Chain-Products

 Matrix Chain-Product:
 Compute A=A0*A1*…*An-1
 Ai is di × di+1
 Problem: How to parenthesize?
 Example
 B is 3 × 100
 C is 100 × 5
 D is 5 × 5
 (B*C)*D takes (3 x 100 x 5) + (3 x 5 x 5) = 1500 +
75 = 1575 ops
 B*(C*D) takes (3 x 100 x 5) + (100 x 5 x 5) = 1500
+ 2500 = 4000 ops
Matrix Chain Multiplication

 We will show that the way we group matrices when multiplying A, B, C matters:
 Let A be a 2x10 matrix
 Let B be a 10x50 matrix
 Let C be a 50x20 matrix

 Consider computing A(BC):
 # multiplications for (BC) = 10x50x20 = 10000, creating a 10x20 answer matrix
 # multiplications for A(BC) = 2x10x20 = 400
 Total multiplications = 10000 + 400 = 10400

 Consider computing (AB)C:
 # multiplications for (AB) = 2x10x50 = 1000, creating a 2x50 answer matrix
 # multiplications for (AB)C = 2x50x20 = 2000
 Total multiplications = 1000 + 2000 = 3000
Matrix Chain Multiplication

 Thus, our goal today is:


 Given a chain of matrices to multiply, determine the minimum number of multiplications necessary to compute the product
 We just want to find the order of multiplying the matrices that gives the fewest multiplications
Matrix Chain Multiplication
 Formal definition of the problem:
 Let A = A0 · A1 · … · An-1
 Let Ni,j denote the minimal number of multiplications necessary to find the product Ai · Ai+1 · … · Aj
 Let di × di+1 denote the dimensions of matrix Ai
 We must determine the minimal number of multiplications necessary (N0,n-1) to find A,
 assuming that we simply do each single matrix multiplication in the standard method
Matrix Chain Multiplication

The key to solving this problem is noticing the sub-problem optimality condition:
 If a particular parenthesization of the whole product is optimal, then any sub-parenthesization in that product is optimal as well.

Say What?
 If (A (B ((CD) (EF)) ) ) is optimal
 Then (B ((CD) (EF)) ) is optimal as well
 Proof on the next slide…
Matrix Chain Multiplication
 Assume that we are calculating ABCDEF and that
the following parenthesization is optimal:
(A (B ((CD) (EF)) ) )
 Then it is necessarily the case that
(B ((CD) (EF)) )
 is the optimal parenthesization of BCDEF.

 Why is this? Let’s prove it by contradiction
 Because if it wasn't, and say ( ((BC) (DE)) F) was
better, then it would also follow that
(A ( ((BC) (DE)) F) ) was better than
(A (B ((CD) (EF)) ) ),
 contradicting its optimality!
Matrix Chain Multiplication
 Our final multiplication will ALWAYS be of the form
 (A0 A1 ... Ak)  (Ak+1 Ak+2 ... An-1)

 In essence, there is exactly one value of k for which we should "split" our work
into two separate cases so that we get an optimal result.
 Here is a list of the cases to choose from:
 (A0) · (A1 A2 ... An-1)
 (A0 A1) · (A2 A3 ... An-1)
 (A0 A1 A2) · (A3 A4 ... An-1)
 ...
 (A0 A1 ... An-3) · (An-2 An-1)
 (A0 A1 ... An-2) · (An-1)

 Basically, count the number of multiplications in each of these choices and pick
the minimum.
 One other point to notice is that you have to account for the minimum number of
multiplications in each of the two products.
Matrix Chain Multiplication
 Consider the case multiplying these 4 matrices:
 A: 2x4
 B: 4x2
 C: 2x3
 D: 3x1

 1. (A)(BCD) - This is a 2x4 multiplied by a 4x1,


 so 2x4x1 = 8 multiplications, plus whatever work it will take to multiply
(BCD).

 2. (AB)(CD) - This is a 2x2 multiplied by a 2x1,


 so 2x2x1 = 4 multiplications, plus whatever work it will take to multiply
(AB) and (CD).

 3. (ABC)(D) - This is a 2x3 multiplied by a 3x1,


 so 2x3x1 = 6 multiplications, plus whatever work it will take to multiply
(ABC).
Various Approaches to Solve Matrix Chain-Product
1. An Enumeration Approach

 Matrix Chain-Product Algorithm


 Try all possible ways to parenthesize A=A0*A1*…
*An-1
 Calculate number of ops for each one
 Pick the one that is best
 Running time:
 The number of parenthesizations is equal to the number of binary trees with n nodes
 This is exponential!
 It is counted by the Catalan number, which grows almost as fast as 4^n
 This is a terrible algorithm!
2. A Greedy Approach

 Idea #1: repeatedly select the product that uses (up)


the most operations.
 Counter-example:
 A is 10 × 5
 B is 5 × 10
 C is 10 × 5
 D is 5 × 10
 Greedy idea #1 gives (A*B)*(C*D), which takes
(10x5x10)+(10x10x10)+(10x5x10) =
500+1000+500 = 2000 ops
 A*((B*C)*D) takes (10x5x10)+(5x10x5)+(5x5x10) =
500+250+250 = 1000 ops
3. Another Greedy Approach

 Idea #2: repeatedly select the product that uses the


fewest operations.
 Counter-example:
 A is 101 × 11
 B is 11 × 9
 C is 9 × 100
 D is 100 × 99
 Greedy idea #2 gives A*((B*C)*D), which takes 9900+108900+109989 = 228789 ops
 (A*B)*(C*D) takes 9999+89100+89991 = 189090 ops
 The greedy approach is not giving us the optimal
value.
4. A “Recursive” Approach
 Define subproblems:
 Find the best parenthesization of Ai*Ai+1*…*Aj.
 Let Ni,j denote the number of operations done by this
subproblem (i to j).
 The optimal solution for the whole problem (0 to n-1) is N0,n-1.
 Subproblem optimality: The optimal solution can be defined in
terms of optimal subproblems
 There has to be a final multiplication (root of the expression
tree) for the optimal solution.
 Say, the final multiply is at index i: (A0*…*Ai)*(Ai+1*…*An-1).
 Then the optimal solution N0,n-1 is the sum of two optimal
subproblems, N0,i and Ni+1,n-1 plus the time for the last
multiply.
 If the global optimum did not have these optimal
subproblems, we could define an even better “optimal”
solution.
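This recursive definition can be implemented top-down with memoization so that each Ni,j is computed only once. A Python sketch (the name matrix_chain_memo is ours; d is the dimension sequence, an assumption about the input format):

```python
from functools import lru_cache

def matrix_chain_memo(d):
    # d has n+1 entries; matrix A_i has dimensions d[i] x d[i+1].
    # N(i, j) = min over k of N(i, k) + N(k+1, j) + d[i]*d[k+1]*d[j+1].
    n = len(d) - 1

    @lru_cache(maxsize=None)  # memoize: each (i, j) is solved only once
    def N(i, j):
        if i == j:
            return 0          # a single matrix needs no multiplications
        return min(N(i, k) + N(k + 1, j) + d[i] * d[k + 1] * d[j + 1]
                   for k in range(i, j))

    return N(0, n - 1)

# B (3x100), C (100x5), D (5x5) from the earlier slide:
print(matrix_chain_memo((3, 100, 5, 5)))  # → 1575, i.e. (B*C)*D
```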
5. A Dynamic Programming Algorithm

 Since subproblems overlap, we don’t use recursion.


 Instead, we construct optimal subproblems “bottom-
up.”
 Ni,i’s are easy, so start with them
 Then do length 2,3,… subproblems, and so on.
Example
A1 * A2 * A3 * A4 * A5
(4x10) * (10x3) * (3x12) * (12x20) * (20x7)
p0 = 4; p1 = 10; p2 = 3; p3 = 12; p4 = 20; p5 = 7

i\j   1   2     3     4      5
1     0   120   264   1080   1344
2     X   0     360   1320   1350
3     X   X     0     720    1140
4     X   X     X     0      1680
5     X   X     X     X      0

Check the work: (A1 * A2) * ((A3 * A4) * A5)
 A1 * A2: 4*10*3 = 120 multiplications, giving a 4x3 matrix
 A3 * A4: 3*12*20 = 720 multiplications, giving a 3x20 matrix
 (3x20) * (20x7): 3*20*7 = 420 multiplications, giving a 3x7 matrix
 (4x3) * (3x7): 4*3*7 = 84 multiplications, giving the final 4x7 matrix
 Total: 120 + 720 + 420 + 84 = 1344

 The selected k for M(1,5) is 2; thus we have (A1*A2), and the other group is (A3*A4*A5).
 For (A3*A4*A5), the selected k is 4, so we get (A3*A4); this forms the solution: ((A3*A4)*A5).
Example

i\j   1   2     3     4      5
1     0   120   264   1080   1344
2     X   0     360   1320   1350
3     X   X     0     720    1140
4     X   X     X     0      1680
5     X   X     X     X      0

We focus on the selected k values to determine where to put the parentheses.
Example: A1 (5x4), A2 (4x6), A3 (6x2), A4 (2x7)

 m[1,2] = 5*4*6 = 120
 m[2,3] = 4*6*2 = 48
 m[3,4] = 6*2*7 = 84

 m[1,3]: A1(A2xA3) = 48 + 5*4*2 = 88, or (A1xA2)A3 = 120 + 5*6*2 = 180.
 We want the minimum, and thus 88 is the answer.

 m[2,4]: A2x(A3xA4) = 84 + 4*6*7 = 252, or (A2xA3)xA4 = 48 + 4*2*7 = 104.
 The minimum is 104.

 m[1,4]: A1x(A2xA3xA4) = 104 + 5*4*7 = 244, or
 (A1xA2)x(A3xA4) = 120 + 84 + 5*6*7 = 414, or
 (A1xA2xA3)xA4 = 88 + 5*2*7 = 158.
 The minimum is 158.
60
Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 0 to n-1 do
    Ni,i ← 0
  for b ← 1 to n-1 do
    for i ← 0 to n-b-1 do
      j ← i+b
      Ni,j ← +infinity
      for k ← i to j-1 do
        Ni,j ← min{Ni,j , Ni,k + Nk+1,j + di·dk+1·dj+1}
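A runnable Python translation of the matrixChain pseudocode (taking the dimension sequence d instead of the matrices themselves, an assumption about the input format):

```python
def matrix_chain(d):
    # d[i] x d[i+1] are the dimensions of A_i; there are n = len(d)-1 matrices.
    # N[i][j] = fewest scalar multiplications needed for A_i * ... * A_j.
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]        # N[i][i] = 0: a single matrix
    for b in range(1, n):                  # b = j - i, the subchain "width"
        for i in range(n - b):
            j = i + b
            N[i][j] = min(N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                          for k in range(i, j))
    return N[0][n - 1]

# A1..A5 with dimensions 4x10, 10x3, 3x12, 12x20, 20x7:
print(matrix_chain((4, 10, 3, 12, 20, 7)))  # → 1344, matching the table
```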
0/1 KNAPSACK
The 0/1 Knapsack Problem
 Given: a set S of n items, with each item i having
 bi – a positive benefit
 wi – a positive weight
 Goal: choose items with maximum total benefit but with weight at most W
 If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem
 In this case, we let T denote the set of items we take
 Objective: maximize Σi∈T bi
 Constraint: Σi∈T wi ≤ W
Greedy Example
 Given: a set S of n items, with each item i having
 bi – a positive benefit
 wi – a positive weight
 Goal: choose items with maximum total benefit but with weight at most W

“Knapsack” W = 9 in

Items:         1     2     3     4     5
Weight:        4 in  2 in  2 in  6 in  2 in
Benefit:       $20   $3    $6    $25   $80
Value (b/w):   5     1.5   3     4.17  40

Solution: item 5 (2 in), item 1 (4 in), item 3 (2 in) → w = 8 in, B = $106
Greedy Example
 Given: a set S of n items, with each item i having
 bi – a positive benefit
 wi – a positive weight
 Goal: choose items with maximum total benefit but with weight at most W

“Knapsack” W = 9 in

Items:         1     2     3     4     5
Weight:        4 in  2 in  2 in  6 in  2 in
Benefit:       $20   $3    $6    $27   $80
Value (b/w):   5     1.5   3     4.5   40

Solution: item 5 (2 in), item 4 (6 in) → w = 8 in, B = $107
A 0/1 Knapsack Algorithm, First Attempt
 Greedy: repeatedly add the item with the maximum benefit/weight ratio
 Problem: the greedy method does not have subproblem optimality:
 Greedy picks {5, 2, 1}, with benefit = 35, so greedy is not optimal
 {3, 4} has the optimum benefit of 40!
 W = 11 kg

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
A 0/1 Knapsack Algorithm, Second Attempt
 Sk: set of items numbered 1 to k
 Define B[k,w] = best selection from Sk with weight at most w
 Good news: this does have subproblem optimality:

           | B[k-1, w]                            if wk > w  (not using item k)
 B[k, w] = |
           | max{ B[k-1, w], B[k-1, w-wk] + bk }  else       (skip or use item k)

 I.e., the best subset of Sk with weight at most w is either the best subset of Sk-1 with weight w, or the best subset of Sk-1 with weight w-wk plus item k.
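The recurrence can be tabulated directly, reproducing the B[k,w] tables shown below. A minimal Python sketch (the name knapsack_table is ours):

```python
def knapsack_table(weights, benefits, W):
    # B[k][w] = best benefit using the first k items with capacity w.
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]    # row 0: no items, benefit 0
    for k in range(1, n + 1):
        wk, bk = weights[k - 1], benefits[k - 1]
        for w in range(W + 1):
            if wk > w:
                B[k][w] = B[k - 1][w]                # item k does not fit
            else:
                B[k][w] = max(B[k - 1][w],           # skip item k
                              B[k - 1][w - wk] + bk) # take item k
    return B[n][W]

# Items with weights 1,2,5,6,7 kg and benefits 1,6,18,22,28; W = 11 kg:
print(knapsack_table([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 11))  # → 40
```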
The 0/1 Knapsack Algorithm
 Recall the definition of B[k,w]:

           | B[k-1, w]                            if wk > w
 B[k, w] = |
           | max{ B[k-1, w], B[k-1, w-wk] + bk }  else

 Since B[k,w] is defined in terms of B[k-1,*], we can reuse the same array
 Using DP, the running time is O(nW), where n is the number of items and W is the capacity of the knapsack
 Not a polynomial-time algorithm if W is large
 This is a pseudo-polynomial time algorithm
The 0/1 Knapsack Algorithm
W = 11 kg; k indexes the item; the value in each cell is B[k,w]

              0  1  2  3  4  5  6  7  8  9  10 11
ø             0  0  0  0  0  0  0  0  0  0  0  0
{1}           0
{1,2}         0
{1,2,3}       0
{1,2,3,4}     0
{1,2,3,4,5}   0

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
The 0/1 Knapsack Algorithm
W = 11 kg

              0  1  2  3  4  5  6  7  8  9  10 11
ø             0  0  0  0  0  0  0  0  0  0  0  0
{1}           0  1  1  1  1  1  1  1  1  1  1  1
{1,2}         0
{1,2,3}       0
{1,2,3,4}     0
{1,2,3,4,5}   0

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
The 0/1 Knapsack Algorithm
W = 11 kg

              0  1  2  3  4  5  6  7  8  9  10 11
ø             0  0  0  0  0  0  0  0  0  0  0  0
{1}           0  1  1  1  1  1  1  1  1  1  1  1
k=2, {1,2}    0  1  6  7  7  7  7  7  7  7  7  7

At w = 1, item 2 (w2 = 2) does not fit, so we carry down the 1.
At w = 2, b2 = 6, so we get 6 (meaning we take only item 2, not item 1).

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
The 0/1 Knapsack Algorithm
W = 11 kg

              0  1  2  3  4  5   6   7   8   9   10  11
ø             0  0  0  0  0  0   0   0   0   0   0   0
{1}           0  1  1  1  1  1   1   1   1   1   1   1
{1,2}         0  1  6  7  7  7   7   7   7   7   7   7
{1,2,3}       0  1  6  7  7  18  19  24  25  25  25  25

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
The 0/1 Knapsack Algorithm
W = 11 kg

              0  1  2  3  4  5   6   7   8   9   10  11
ø             0  0  0  0  0  0   0   0   0   0   0   0
{1}           0  1  1  1  1  1   1   1   1   1   1   1
{1,2}         0  1  6  7  7  7   7   7   7   7   7   7
{1,2,3}       0  1  6  7  7  18  19  24  25  25  25  25
{1,2,3,4}     0  1  6  7  7  18  22  24  28  29  29  40

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
The 0/1 Knapsack Algorithm
W = 11 kg

              0  1  2  3  4  5   6   7   8   9   10  11
ø             0  0  0  0  0  0   0   0   0   0   0   0
{1}           0  1  1  1  1  1   1   1   1   1   1   1
{1,2}         0  1  6  7  7  7   7   7   7   7   7   7
{1,2,3}       0  1  6  7  7  18  19  24  25  25  25  25
{1,2,3,4}     0  1  6  7  7  18  22  24  28  29  29  40
{1,2,3,4,5}   0  1  6  7  7  18  22  28  29  34  35  40

Items:        1  2  3   4   5
Weight (kg):  1  2  5   6   7
Benefit:      1  6  18  22  28
Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit bi and weight wi; maximum weight W
  Output: benefit of the best subset with weight at most W
  for w ← 0 to W do
    B[w] ← 0
  for k ← 1 to n do
    for w ← W downto wk do
      if B[w-wk] + bk > B[w] then
        B[w] ← B[w-wk] + bk
Exercise

 The matrixChain algorithm produces the following table for the matrices below. Find out how the total number of operations, 7925, at N0,4 is produced.
 A0: 20x35; A1: 35x15; A2: 15x5; A3: 5x10; A4: 10x12
