
292 Dynamic Programming

9. Binomial coefficient Design an efficient algorithm for computing the binomial
coefficient C(n, k) that uses no multiplications. What are the time and
space efficiencies of your algorithm?
10. Longest path in a dag
a. Design an efficient algorithm for finding the length of the longest path in a
dag. (This problem is important both as a prototype of many other dynamic
programming applications and in its own right because it determines the
minimal time needed for completing a project comprising precedence-
constrained tasks.)
b. Show how to reduce the coin-row problem discussed in this section to the
problem of finding a longest path in a dag.
11. Maximum square submatrix Given an m × n boolean matrix B, find its
largest square submatrix whose elements are all zeros. Design a dynamic
programming algorithm and indicate its time efficiency. (The algorithm may
be useful for, say, finding the largest free square area on a computer screen
or for selecting a construction site.)
12. World Series odds Consider two teams, A and B, playing a series of games
until one of the teams wins n games. Assume that the probability of A winning
a game is the same for each game and equal to p, and the probability of
A losing a game is q = 1 − p. (Hence, there are no ties.) Let P (i, j ) be the
probability of A winning the series if A needs i more games to win the series
and B needs j more games to win the series.
a. Set up a recurrence relation for P (i, j ) that can be used by a dynamic
programming algorithm.
b. Find the probability of team A winning a seven-game series if the proba-
bility of it winning a game is 0.4.
c. Write pseudocode of the dynamic programming algorithm for solving this
problem and determine its time and space efficiencies.
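As one illustrative sketch of the kind of algorithm part (c) asks for (the recurrence, function name, and table layout here are one possible choice, not the book's solution), the key observation is that A either wins the next game with probability p or loses it with probability q:

```python
def series_odds(n, p):
    """P(i, j): probability that A wins the series when A needs i more
    wins and B needs j more wins. Returns P(n, n).
    (Illustrative sketch; names and layout are an assumption.)"""
    q = 1 - p
    # P(0, j) = 1 for j > 0 (A has already won); P(i, 0) = 0 for i > 0.
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        P[0][j] = 1.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            # A wins the next game (prob p) or loses it (prob q).
            P[i][j] = p * P[i - 1][j] + q * P[i][j - 1]
    return P[n][n]
```

Both the time and space efficiencies of such a sketch are in Θ(n²); for part (b), a seven-game series corresponds to n = 4.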

8.2 The Knapsack Problem and Memory Functions


We start this section with designing a dynamic programming algorithm for the
knapsack problem: given n items of known weights w1, . . . , wn and values
v1, . . . , vn and a knapsack of capacity W , find the most valuable subset of the
items that fit into the knapsack. (This problem was introduced in Section 3.4,
where we discussed solving it by exhaustive search.) We assume here that all the
weights and the knapsack capacity are positive integers; the item values do not
have to be integers.
To design a dynamic programming algorithm, we need to derive a recurrence
relation that expresses a solution to an instance of the knapsack problem in terms
of solutions to its smaller subinstances. Let us consider an instance defined by the
first i items, 1 ≤ i ≤ n, with weights w1, . . . , wi , values v1, . . . , vi , and knapsack
capacity j, 1 ≤ j ≤ W. Let F (i, j ) be the value of an optimal solution to this
instance, i.e., the value of the most valuable subset of the first i items that fit into
the knapsack of capacity j. We can divide all the subsets of the first i items that fit
the knapsack of capacity j into two categories: those that do not include the ith
item and those that do. Note the following:
1. Among the subsets that do not include the ith item, the value of an optimal
subset is, by definition, F (i − 1, j ).
2. Among the subsets that do include the ith item (hence, j − wi ≥ 0), an optimal
subset is made up of this item and an optimal subset of the first i − 1 items
that fits into the knapsack of capacity j − wi . The value of such an optimal
subset is vi + F (i − 1, j − wi ).
Thus, the value of an optimal solution among all feasible subsets of the first i
items is the maximum of these two values. Of course, if the ith item does not fit
into the knapsack, the value of an optimal subset selected from the first i items
is the same as the value of an optimal subset selected from the first i − 1 items.
These observations lead to the following recurrence:

F(i, j) = { max{F(i − 1, j), vi + F(i − 1, j − wi)}   if j − wi ≥ 0,        (8.6)
          { F(i − 1, j)                               if j − wi < 0.
It is convenient to define the initial conditions as follows:
F (0, j ) = 0 for j ≥ 0 and F (i, 0) = 0 for i ≥ 0. (8.7)
Our goal is to find F (n, W ), the maximal value of a subset of the n given items
that fit into the knapsack of capacity W, and an optimal subset itself.
Figure 8.4 illustrates the values involved in equations (8.6) and (8.7). For
i, j > 0, to compute the entry in the ith row and the j th column, F (i, j ), we
compute the maximum of the entry in the previous row and the same column
and the sum of vi and the entry in the previous row and wi columns to the left.
The table can be filled either row by row or column by column.
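The row-by-row filling just described can be sketched in Python as follows (an illustrative translation of recurrence (8.6) with initial conditions (8.7); the function name and 0-based list indexing are assumptions of the sketch):

```python
def knapsack_table(weights, values, W):
    """Fill the dynamic programming table F bottom up, row by row."""
    n = len(weights)
    # Initial conditions (8.7): row 0 and column 0 are all zeros.
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j >= weights[i - 1]:      # item i fits: take the better option
                F[i][j] = max(F[i - 1][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
            else:                        # item i does not fit
                F[i][j] = F[i - 1][j]
    return F
```

The entry F[n][W] of the returned table is the maximal value sought.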

[Figure: an (n + 1) × (W + 1) table with row 0 and column 0 filled with 0's; the entry F(i, j) in the row for item i (weight wi, value vi) is computed from F(i − 1, j) directly above it and F(i − 1, j − wi), located wi columns to the left in the previous row; the goal entry F(n, W) is in the bottom right corner.]

FIGURE 8.4 Table for solving the knapsack problem by dynamic programming.

capacity j
i 0 1 2 3 4 5
0 0 0 0 0 0 0
w1 = 2, v1 = 12 1 0 0 12 12 12 12
w2 = 1, v2 = 10 2 0 10 12 22 22 22
w3 = 3, v3 = 20 3 0 10 12 22 30 32
w4 = 2, v4 = 15 4 0 10 15 25 30 37
FIGURE 8.5 Example of solving an instance of the knapsack problem by the dynamic
programming algorithm.

EXAMPLE 1 Let us consider the instance given by the following data:

item weight value


1 2 $12
2 1 $10 capacity W = 5.
3 3 $20
4 2 $15

The dynamic programming table, filled by applying formulas (8.6) and (8.7),
is shown in Figure 8.5.
Thus, the maximal value is F (4, 5) = $37. We can find the composition of an
optimal subset by backtracing the computations of this entry in the table. Since
F (4, 5) > F (3, 5), item 4 has to be included in an optimal solution along with an
optimal subset for filling 5 − 2 = 3 remaining units of the knapsack capacity. The
value of the latter is F (3, 3). Since F (3, 3) = F (2, 3), item 3 need not be in an
optimal subset. Since F (2, 3) > F (1, 3), item 2 is a part of an optimal selection,
which leaves element F (1, 3 − 1) to specify its remaining composition. Similarly,
since F (1, 2) > F (0, 2), item 1 is the final part of the optimal solution {item 1,
item 2, item 4}.
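The backtracing procedure of this example can be sketched as a pair of Python helpers (illustrative names; the fill step repeats the bottom-up algorithm so the sketch is self-contained):

```python
def fill_table(weights, values, W):
    """Bottom-up filling of the knapsack table, as in this section."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]
            if j >= weights[i - 1]:
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F

def backtrace(F, weights):
    """Recover one optimal subset by walking back from F(n, W)."""
    items, j = [], len(F[0]) - 1
    for i in range(len(F) - 1, 0, -1):
        if F[i][j] != F[i - 1][j]:       # item i must be in an optimal subset
            items.append(i)
            j -= weights[i - 1]          # continue with the remaining capacity
    return sorted(items)
```

On the instance of Example 1 this recovers the subset {item 1, item 2, item 4} found above.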

The time efficiency and space efficiency of this algorithm are both in Θ(nW ).
The time needed to find the composition of an optimal solution is in O(n). You
are asked to prove these assertions in the exercises.

Memory Functions
As we discussed at the beginning of this chapter and illustrated in subsequent
sections, dynamic programming deals with problems whose solutions satisfy a
recurrence relation with overlapping subproblems. The direct top-down approach
to finding a solution to such a recurrence leads to an algorithm that solves common
subproblems more than once and hence is very inefficient (typically, exponential
or worse). The classic dynamic programming approach, on the other hand, works
bottom up: it fills a table with solutions to all smaller subproblems, but each of
them is solved only once. An unsatisfying aspect of this approach is that solutions
to some of these smaller subproblems are often not necessary for getting a solution
to the problem given. Since this drawback is not present in the top-down approach,
it is natural to try to combine the strengths of the top-down and bottom-up
approaches. The goal is to get a method that solves only subproblems that are
necessary and does so only once. Such a method exists; it is based on using memory
functions.
This method solves a given problem in the top-down manner but, in addition,
maintains a table of the kind that would have been used by a bottom-up dynamic
programming algorithm. Initially, all the table’s entries are initialized with a spe-
cial “null” symbol to indicate that they have not yet been calculated. Thereafter,
whenever a new value needs to be calculated, the method checks the correspond-
ing entry in the table first: if this entry is not “null,” it is simply retrieved from the
table; otherwise, it is computed by the recursive call whose result is then recorded
in the table.
The following algorithm implements this idea for the knapsack problem. After
initializing the table, the recursive function needs to be called with i = n (the
number of items) and j = W (the knapsack capacity).

ALGORITHM MFKnapsack(i, j)
//Implements the memory function method for the knapsack problem
//Input: A nonnegative integer i indicating the number of the first
//       items being considered and a nonnegative integer j indicating
//       the knapsack capacity
//Output: The value of an optimal feasible subset of the first i items
//Note: Uses as global variables input arrays Weights[1..n], Values[1..n],
//      and table F[0..n, 0..W] whose entries are initialized with −1's
//      except for row 0 and column 0 initialized with 0's
if F[i, j] < 0
    if j < Weights[i]
        value ← MFKnapsack(i − 1, j)
    else
        value ← max(MFKnapsack(i − 1, j),
                    Values[i] + MFKnapsack(i − 1, j − Weights[i]))
    F[i, j] ← value
return F[i, j]
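The pseudocode above can be sketched in Python as follows; the function name is illustrative, and the table is passed explicitly instead of being kept global:

```python
def mf_knapsack(i, j, weights, values, F):
    """Top-down knapsack with a memory function (memoization).
    F has -1 in every uncomputed entry; row 0 and column 0 hold 0's."""
    if F[i][j] < 0:                      # entry not yet computed
        if j < weights[i - 1]:           # item i does not fit
            value = mf_knapsack(i - 1, j, weights, values, F)
        else:
            value = max(mf_knapsack(i - 1, j, weights, values, F),
                        values[i - 1] + mf_knapsack(i - 1, j - weights[i - 1],
                                                    weights, values, F))
        F[i][j] = value
    return F[i][j]

# Instance of Example 1: the call computes F(4, 5) = 37 while filling
# only the table entries it actually needs.
weights, values, W = [2, 1, 3, 2], [12, 10, 20, 15], 5
n = len(weights)
F = [[0 if i == 0 or j == 0 else -1 for j in range(W + 1)]
     for i in range(n + 1)]
best = mf_knapsack(n, W, weights, values, F)
computed = sum(1 for i in range(1, n + 1)
                 for j in range(1, W + 1) if F[i][j] >= 0)
```

Counting the entries that are no longer −1 confirms the claim of Example 2 below: only 11 of the 20 nontrivial entries get computed on this instance.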

EXAMPLE 2 Let us apply the memory function method to the instance consid-
ered in Example 1. The table in Figure 8.6 gives the results. Only 11 out of 20
nontrivial values (i.e., not those in row 0 or in column 0) have been computed.

capacity j
i 0 1 2 3 4 5
0 0 0 0 0 0 0
w1 = 2, v1 = 12 1 0 0 12 12 12 12
w2 = 1, v2 = 10 2 0 — 12 22 — 22
w3 = 3, v3 = 20 3 0 — — 22 — 32
w4 = 2, v4 = 15 4 0 — — — — 37

FIGURE 8.6 Example of solving an instance of the knapsack problem by the memory
function algorithm.

Just one nontrivial entry, F (1, 2), is retrieved rather than being recomputed. For
larger instances, the proportion of such entries can be significantly larger.

In general, we cannot expect more than a constant-factor gain in using the
memory function method for the knapsack problem, because its time efficiency
class is the same as that of the bottom-up algorithm (why?). A more significant
improvement can be expected for dynamic programming algorithms in which a
computation of one value takes more than constant time. You should also keep in
mind that a memory function algorithm may be less space-efficient than a space-
efficient version of a bottom-up algorithm.
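One such space-efficient bottom-up variant, not given in the text but offered here as an illustrative sketch, keeps only a single row of the table; this reduces the extra space to Θ(W) at the cost of losing the easy backtracing of an optimal subset:

```python
def knapsack_value(weights, values, W):
    """Return F(n, W) using a single row of the table (O(W) extra space)."""
    F = [0] * (W + 1)
    for w, v in zip(weights, values):
        # Scan capacities right to left so that F[j - w] still holds
        # the value from the previous row (i.e., without the current item).
        for j in range(W, w - 1, -1):
            F[j] = max(F[j], v + F[j - w])
    return F[W]
```

On the instance of Example 1 this returns the same maximal value, 37, as the full-table algorithm.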

Exercises 8.2
1. a. Apply the bottom-up dynamic programming algorithm to the following
instance of the knapsack problem:

item weight value


1 3 $25
2 2 $20
3 1 $15 capacity W = 6.
4 4 $40
5 5 $50

b. How many different optimal subsets does the instance of part (a) have?
c. In general, how can we use the table generated by the dynamic program-
ming algorithm to tell whether there is more than one optimal subset for
the knapsack problem’s instance?
