
Optimal Approximation for the Submodular Welfare Problem in the Value Oracle Model

Jan Vondrák
Department of Mathematics
Princeton University
Princeton, NJ 08544, USA
[email protected]

ABSTRACT

In the Submodular Welfare Problem, m items are to be distributed among n players with utility functions w_i : 2^[m] → R+. The utility functions are assumed to be monotone and submodular. Assuming that player i receives a set of items S_i, we wish to maximize the total utility Σ_{i=1}^n w_i(S_i). In this paper, we work in the value oracle model, where the only access to the utility functions is through a black box returning w_i(S) for a given set S. Submodular Welfare is in fact a special case of the more general problem of submodular maximization subject to a matroid constraint: max{f(S) : S ∈ I}, where f is monotone submodular and I is the collection of independent sets in some matroid.

For both problems, a greedy algorithm is known to yield a 1/2-approximation [21, 16]. In special cases where the matroid is uniform (I = {S : |S| ≤ k}) [20] or the submodular function is of a special type [4, 2], a (1 − 1/e)-approximation has been achieved, and this is optimal for these problems in the value oracle model [22, 6, 15]. A (1 − 1/e)-approximation for the general Submodular Welfare Problem has been known only in the stronger demand oracle model [4], where in fact 1 − 1/e can be improved [9].

In this paper, we develop a randomized continuous greedy algorithm which achieves a (1 − 1/e)-approximation for the Submodular Welfare Problem in the value oracle model. We also show that the special case of n equal players is approximation resistant, in the sense that the optimal (1 − 1/e)-approximation is achieved by a uniformly random solution. Using the pipage rounding technique [1, 2], we obtain a (1 − 1/e)-approximation for submodular maximization subject to any matroid constraint. The continuous greedy algorithm has potential for wider applicability, which we demonstrate on the examples of the Generalized Assignment Problem and the AdWords Assignment Problem.

Categories and Subject Descriptors

F.2 [Theory of computing]: Analysis of algorithms and problem complexity

General Terms

Algorithms, Economics

Keywords

Submodular functions, matroids, combinatorial auctions

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
STOC'08, May 17–20, 2008, Victoria, British Columbia, Canada.
Copyright 2008 ACM 978-1-60558-047-0/08/05 ...$5.00.

1. INTRODUCTION

A function f : 2^X → R is monotone if f(S) ≤ f(T) whenever S ⊆ T. We say that f is submodular if

f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T)

for any S, T. Usually we also assume that f(∅) = 0. Submodular functions arise naturally in combinatorial optimization, e.g. as rank functions of matroids, and in covering problems, graph cut problems and facility location problems [5, 18, 24]. Unlike minimization of submodular functions, which can be done in polynomial time [10, 23], problems involving submodular maximization are typically NP-hard. Research on problems involving maximization of monotone submodular functions dates back to the work of Nemhauser, Wolsey and Fisher in the 1970's [20, 21, 22].

1.1 Combinatorial auctions

Recently, there has been renewed interest in submodular maximization due to applications in the area of combinatorial auctions. In a combinatorial auction, n players compete for m items whose value may differ from player to player, and may also depend on the particular combination of items allocated to a given player. In full generality, this is expressed by the notion of a utility function w_i : 2^[m] → R+, which assigns a value to each set of items potentially allocated to player i. Since this is an amount of information exponential in m, we have to clarify how the utility functions are accessible to an algorithm. Unless we consider a special class of utility functions with a polynomial-size representation, we typically resort to an oracle model. An oracle answers a certain type of query about a utility function. Two types of oracles have been commonly considered:

• Value oracle. The most basic query is: What is the value of w_i(S)? An oracle answering such queries is called a value oracle.

• Demand oracle. Sometimes, a more powerful oracle is considered, which can answer queries of the following type: Given an assignment of prices to items p : [m] → R, which set S maximizes w_i(S) − Σ_{j∈S} p_j? Such an oracle is called a demand oracle.
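As a concrete illustration (not from the paper), both oracle types can be simulated for a small coverage utility. The item sets below are hypothetical example data, and the demand query is answered by brute force, which is feasible only because m is tiny:

```python
from itertools import combinations

# Hypothetical coverage utility: item -> set of covered elements.
A = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}
ITEMS = list(A)

def value_oracle(S):
    """Answer 'what is w(S)?' -- here, the size of the covered union."""
    covered = set()
    for j in S:
        covered |= A[j]
    return len(covered)

def demand_oracle(prices):
    """Answer 'which S maximizes w(S) - sum of prices?' by brute force."""
    best, best_val = set(), 0.0
    for r in range(len(ITEMS) + 1):
        for S in combinations(ITEMS, r):
            val = value_oracle(S) - sum(prices[j] for j in S)
            if val > best_val:
                best, best_val = set(S), val
    return best

# A value query and a demand query on the same utility.
assert value_oracle({0, 1}) == 3
assert demand_oracle({0: 0.5, 1: 2.5, 2: 0.5}) == {0, 2}
```

For general submodular utilities the demand query has no such cheap implementation, which is exactly why the two models differ in power.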
Our goal is to find disjoint sets S_1, ..., S_n maximizing the total welfare Σ_{i=1}^n w_i(S_i). Regardless of what type of oracle we consider, there are computational issues that make the problem very hard in general. In particular, consider players who are single-minded in the sense that each desires one particular set T_i: w_i(S) = 1 if T_i ⊆ S and 0 otherwise. Then both value and demand queries are easy to answer; however, the problem is equivalent to set packing, which is known to have no m^(−1/2+ε)-approximation unless P = NP [13, 27]. Thus, restricted classes of utility functions need to be considered if one intends to obtain non-trivial positive results.

A class of particular interest is the class of monotone submodular functions. An equivalent definition of this class is as follows: Let f_S(j) = f(S + j) − f(S) denote the marginal value of item j with respect to S. (We write S + j instead of S ∪ {j} to simplify notation.) Then f is monotone if f_S(j) ≥ 0, and f is submodular if f_S(j) ≥ f_T(j) whenever S ⊂ T. This can be interpreted as the property of diminishing returns known in economics and arising naturally in certain settings. This leads to what we call the Submodular Welfare Problem.

The Submodular Welfare Problem.
Given m items and n players with monotone submodular utility functions w_i : 2^[m] → R+, we seek a partition of the items into disjoint sets S_1, S_2, ..., S_n in order to maximize Σ_{i=1}^n w_i(S_i).

This problem was first studied by Lehmann, Lehmann and Nisan [16]. They showed that a simple on-line greedy algorithm gives a 1/2-approximation for this problem. There has been no further progress over 1/2 in the value oracle model, except for lower-order terms and special cases. On the negative side, it was proved that the Submodular Welfare Problem cannot be approximated to a factor better than 1 − 1/e unless P = NP [15]; recently, Mirrokni, Schapira and the author showed that a better than (1 − 1/e)-approximation would require exponentially many value queries, regardless of P = NP [19]. A (1 − 1/e)-approximation was developed by Dobzinski and Schapira [4]; however, their algorithm works in the demand oracle model. In fact, it turns out that a factor strictly better than 1 − 1/e can be achieved in the demand oracle model [9]. (The hardness result of [15] is circumvented by the demand oracle model, since demand queries on the hard instances are in fact NP-hard to answer.) In the value oracle model, Dobzinski and Schapira [4] gave an (on-line) n/(2n−1)-approximation, and also a (1 − 1/e)-approximation in the special case where utilities can be written explicitly as coverage functions for some set system.

The welfare maximization problem has also been considered under assumptions on utility functions somewhat weaker than submodularity, namely subadditivity^1 and fractional subadditivity^2. In the value oracle model, an m^(−1/2+ε)-approximation for these classes of utility functions is impossible with a polynomial number of value queries [19]. In contrast, the demand oracle model allows a 1/2-approximation for subadditive utility functions and a (1 − 1/e)-approximation for fractionally subadditive functions [7].

^1 f is subadditive if f(S ∪ T) ≤ f(S) + f(T) for any S, T.
^2 f is fractionally subadditive if f(S) ≤ Σ_T α_T f(T) for any α_T ≥ 0 such that ∀j ∈ S: Σ_{T : j∈T} α_T ≥ 1.

1.2 Submodular maximization subject to a matroid constraint

The Submodular Welfare Problem is in fact a special case of the following problem, considered by Fisher, Nemhauser and Wolsey in 1978 [21]. The reduction appears in [17]; we review it briefly in Section 2.

Submodular maximization subject to a matroid constraint.
Given a monotone submodular function f : 2^X → R+ (by a value oracle) and a matroid^3 M = (X, I) (by a membership oracle), find max_{S∈I} f(S).

^3 A matroid is a system of "independent sets", generalizing the notion of linear independence of vectors [24].

In [20], Nemhauser, Wolsey and Fisher studied the special case of a uniform matroid, I = {S ⊆ X : |S| ≤ k}. In this case, they showed that a greedy algorithm yields a (1 − 1/e)-approximation. Nemhauser and Wolsey also proved that a better approximation is impossible with a polynomial number of value queries [22]. Note that the Max k-cover problem, to find k sets maximizing |∪_{j∈K} A_j|, is a special case of submodular maximization subject to a uniform matroid constraint. Feige proved that 1 − 1/e is optimal even for this explicitly posed problem, unless P = NP [6].

For general matroids, the greedy algorithm delivers a factor of 1/2 [21]. Several subsequent results indicated that it might be possible to improve the factor to 1 − 1/e: apart from the case of uniform matroids, a (1 − 1/e)-approximation was also obtained for a "maximum coverage problem with group budget constraints" [3, 1], for a related problem of submodular maximization subject to a knapsack constraint [25], and most recently for any matroid constraint and a special class of submodular functions, sums of weighted rank functions [2]. A key ingredient in the last result is the technique of pipage rounding, introduced by Ageev and Sviridenko [1] and adapted to matroid optimization problems by Calinescu et al. [2]. In [2], this technique converts a fractional LP solution into a feasible integral solution. Nonetheless, since it is unclear how to write a suitable LP for a general submodular function given by a value oracle, this approach seems to apply only to a restricted class of functions.

1.3 Our results

In this paper, we design a randomized algorithm which provides a (1 − 1/e)-approximation for the Submodular Welfare Problem, using only value queries. The algorithm can be viewed as a continuous greedy process which delivers an approximate solution to a non-linear continuous optimization problem. Employing the technique of pipage rounding [1, 2], we also obtain a (1 − 1/e)-approximation for submodular maximization subject to an arbitrary matroid constraint. This settles the approximation status of these problems in the value oracle model. Interestingly, in the special case of Submodular Welfare with n equal players, the optimal (1 − 1/e)-approximation is in fact obtained by a uniformly random solution.

The continuous greedy algorithm can also be applied to problems where the underlying matroid is exponentially large. Our algorithm provides a (1 − e^(−α))-approximation, assuming that we have an α-approximation for maximizing certain linear functions over the matroid. This improves the
previously known factors of α(1 − 1/e) [11] and α/(1 + α) [2, 12] in this framework. In particular, we obtain a (1 − 1/e − o(1))-approximation for the Generalized Assignment Problem (which is suboptimal but more practical than previously known algorithms [11, 9]), and a (1 − e^(−1/2) − o(1))-approximation for the AdWords Assignment Problem (improving the previously known (1/3 − o(1))-approximation [12]).

Remark. The notion of a continuous greedy algorithm has been used before. Wolsey proposed a continuous greedy algorithm for the problem of maximizing a "real-valued submodular function" over a knapsack polytope already in 1982 [26]. We note that this algorithm is somewhat different from ours. Wolsey's algorithm increases one fractional variable at a time, the one maximizing local gain. In our setting, his algorithm in fact reduces to the standard greedy algorithm, which yields only a 1/2-approximation.

Our insight is that a suitable variant of the continuous greedy method lends itself well to non-linear optimization problems with certain analytical properties. This bears some resemblance to gradient descent methods used in continuous optimization. However, ours is not a continuous process converging to a local optimum; rather, it is a process running for a fixed amount of time resulting in a globally approximate solution. We discuss this in Section 3. In Section 4, we present a discrete version of our algorithm which, combined with pipage rounding, gives a (1 − 1/e)-approximation for submodular maximization subject to a matroid constraint. In the special case of the Submodular Welfare Problem, pipage rounding is actually not needed. We present a self-contained (1 − 1/e)-approximation algorithm in Section 5. Finally, we discuss a generalization of our framework in Section 6.

2. PRELIMINARIES

Smooth submodular functions.
A discrete function f : 2^X → R is submodular if f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T) for any S, T ⊆ X. As a continuous analogy, Wolsey [26] defines submodularity for a function F : [0,1]^X → R as follows:

F(x ∨ y) + F(x ∧ y) ≤ F(x) + F(y)    (1)

where (x ∨ y)_i = max{x_i, y_i} and (x ∧ y)_i = min{x_i, y_i}. Similarly, a function is monotone if F(x) ≤ F(y) whenever x ≤ y coordinate-wise. In particular, Wolsey works with monotone submodular functions that are piecewise linear and in addition concave. In this paper, we use a related property which we call smooth monotone submodularity.

Definition 2.1. A function F : [0,1]^X → R is smooth monotone submodular if

• F ∈ C^2([0,1]^X), i.e. it has second partial derivatives everywhere.

• For each j ∈ X, ∂F/∂y_j ≥ 0 everywhere (monotonicity).

• For any i, j ∈ X (possibly equal), ∂²F/∂y_i ∂y_j ≤ 0 everywhere (submodularity).

Thus the gradient ∇F = (∂F/∂y_1, ..., ∂F/∂y_n) is a nonnegative vector. The submodularity condition ∂²F/∂y_i ∂y_j ≤ 0 means that ∂F/∂y_j is non-increasing with respect to y_i. It can be seen that this implies (1). Also, it means that a smooth submodular function is concave along any non-negative direction vector; however, it is not necessarily concave in all directions.

Extension by expectation.
For a monotone submodular function f : 2^X → R+, a canonical extension to a smooth monotone submodular function can be obtained as follows [2]: For y ∈ [0,1]^X, let ŷ denote a random vector in {0,1}^X where each coordinate is independently rounded to 1 with probability y_j or 0 otherwise. Then, define

F(y) = E[f(ŷ)] = Σ_{R⊆X} f(R) Π_{i∈R} y_i Π_{j∉R} (1 − y_j).

This is a multilinear polynomial which satisfies

∂F/∂y_j = E[f(ŷ) | ŷ_j = 1] − E[f(ŷ) | ŷ_j = 0] ≥ 0

by monotonicity of f. For i ≠ j, we get

∂²F/∂y_i ∂y_j = E[f(ŷ) | ŷ_i = 1, ŷ_j = 1] − E[f(ŷ) | ŷ_i = 1, ŷ_j = 0] − E[f(ŷ) | ŷ_i = 0, ŷ_j = 1] + E[f(ŷ) | ŷ_i = 0, ŷ_j = 0] ≤ 0

by the submodularity of f. In addition, ∂²F/∂y_j² = 0, since F is multilinear.

Matroid polytopes.
We consider polytopes P ⊂ R^X_+ with the property that for any x, y with 0 ≤ x ≤ y, y ∈ P ⇒ x ∈ P. We call such a polytope down-monotone.

A down-monotone polytope of particular importance here is the matroid polytope. For a matroid M = (X, I), the matroid polytope is defined as

P(M) = conv{1_I : I ∈ I}.

As shown by Edmonds [5], an equivalent description is

P(M) = {x ≥ 0 : ∀S ⊆ X; Σ_{j∈S} x_j ≤ r_M(S)}.

Here, r_M(S) = max{|I| : I ⊆ S & I ∈ I} is the rank function of matroid M. From this description, it is clear that P(M) is down-monotone.

Pipage rounding.
A very useful tool that we invoke is a rounding technique introduced by Ageev and Sviridenko [1], and adapted for the matroid polytope by Calinescu et al. [2]. We use it here as the following black box.

Lemma 2.2. There is a polynomial-time randomized algorithm which, given a membership oracle for a matroid M = (X, I), a value oracle for a monotone submodular function f : 2^X → R+, and y ∈ P(M), returns an independent set S ∈ I of value f(S) ≥ (1 − o(1))E[f(ŷ)] with high probability.

The o(1) term can be made polynomially small in n = |X|. Since any fractional solution y ∈ P(M) can be converted to an integral one without significant loss in the objective function, we can consider the continuous problem max{F(y) : y ∈ P(M)} instead of max{f(S) : S ∈ I}.
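The extension F(y) = E[f(ŷ)] can be evaluated approximately by exactly the sampling its definition suggests. A minimal sketch, where the small coverage function f is a hypothetical stand-in for a value oracle:

```python
import random

# Hypothetical value oracle: coverage function over a tiny universe.
A = {0: {"a", "b"}, 1: {"b", "c"}}

def f(S):
    covered = set()
    for j in S:
        covered |= A[j]
    return len(covered)

def F_estimate(y, samples=50000, seed=0):
    """Monte Carlo estimate of F(y) = E[f(yhat)]: each coordinate is
    independently rounded to 1 with probability y_j, 0 otherwise."""
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        R = {j for j, yj in y.items() if rng.random() < yj}
        total += f(R)
    return total / samples

# Multilinearity check at y = (1/2, 1/2):
# F = (f({}) + f({0}) + f({1}) + f({0,1})) / 4 = (0 + 2 + 2 + 3)/4 = 1.75.
assert abs(F_estimate({0: 0.5, 1: 0.5}) - 1.75) < 0.05
```

The algorithm in Section 4 relies on this kind of sampling estimate rather than the exponential-size exact sum.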
Remark. In previous work [2], the pipage rounding technique was used in conjunction with linear programming to approximate max{f(S) : S ∈ I} for a special class of submodular functions. However, it is not clear how to write a linear program for a submodular function without any special structure. Therefore, we abandon this approach and attack the non-linear optimization problem max{F(y) : y ∈ P(M)} directly.

The Submodular Welfare Problem.
It is known that the Submodular Welfare Problem is a special case of submodular maximization subject to a matroid constraint [17]. To fix notation, let us review the reduction here. Let the set of n players be P, the set of m items Q, and for each i ∈ P, let the respective utility function be w_i : 2^Q → R+. We define a new ground set X = P × Q, with a function f : 2^X → R+ defined as follows: Every set S ⊆ X can be written uniquely as S = ∪_{i∈P}({i} × S_i). Then let

f(S) = Σ_{i∈P} w_i(S_i).

Assuming that each w_i is a monotone submodular function, it is easy to verify that f is also monotone submodular. The interpretation of this construction is that we make |P| copies of each item, one for each player. However, in reality we can only allocate one copy of each item. Therefore, let us define a partition matroid M = (X, I) as follows:

I = {S ⊆ X | ∀j; |S ∩ (P × {j})| ≤ 1}.

Then the Submodular Welfare Problem is equivalent to max{f(S) : S ∈ I}. Due to Lemma 2.2, we know that instead of this discrete problem, it suffices to consider the non-linear optimization problem max{F(y) : y ∈ P(M)}. We remark that in this special case, we do not need the full strength of Lemma 2.2; instead, y ∈ P(M) can be converted into an integral solution by simple randomized rounding. We return to this issue in Section 5.

3. THE CONTINUOUS GREEDY PROCESS

In this section, we present the analytic picture behind our algorithm. This section is not formally needed for our main result. Our point of view is that the analytic intuition explains why 1 − 1/e arises naturally in problems involving maximization of monotone submodular functions. We consider any down-monotone polytope P and a smooth monotone submodular function F. For concreteness, the reader may think of the matroid polytope and the function F(y) = E[f(ŷ)] defined in the previous section. Our aim is to define a process that runs continuously, depending only on local properties of F, and produces a point y ∈ P approximating the optimum OPT = max{F(y) : y ∈ P}. We propose to move in the direction of a vector constrained by P which maximizes the local gain.

The continuous greedy process.
We view the process as a particle starting at y(0) = 0 and following a certain flow over a unit time interval:

dy/dt = v(y),

where v(y) is defined as

v(y) = argmax_{v∈P} (v · ∇F(y)).

Claim. y(1) ∈ P and F(y(1)) ≥ (1 − 1/e)·OPT.

First of all, the trajectory for t ∈ [0,1] is contained in P, since

y(t) = ∫_0^t v(y(τ)) dτ

is a convex combination of vectors in P. To prove the approximation guarantee, fix a point y and suppose that x* ∈ P is the true optimum, OPT = F(x*). The essence of our analysis is that the rate of increase in F(y) is at least the deficit OPT − F(y). This kind of behavior always leads to a factor of 1 − 1/e, as we show below.

Consider a direction v* = (x* ∨ y) − y = (x* − y) ∨ 0. This is a nonnegative vector; since v* ≤ x* ∈ P and P is down-monotone, we also have v* ∈ P. By monotonicity, F(y + v*) = F(x* ∨ y) ≥ F(x*) = OPT. Note that y + v* is not necessarily in P, but this is not an issue. Consider the ray of direction v* starting at y, and the function F(y + ξv*), ξ ≥ 0. The directional derivative of F along this ray is dF/dξ = v* · ∇F. Since F is smooth submodular and v* is nonnegative, F(y + ξv*) is concave in ξ and dF/dξ is non-increasing. By the mean value theorem, there is some c ∈ [0,1] such that F(y + v*) − F(y) = dF/dξ|_{ξ=c} ≤ dF/dξ|_{ξ=0} = v* · ∇F(y). Since v* ∈ P, and v(y) ∈ P maximizes v · ∇F(y), we get

v(y) · ∇F(y) ≥ v* · ∇F(y) ≥ F(y + v*) − F(y) ≥ OPT − F(y).    (2)

Now let us return to our continuous process and analyze F(y(t)). By the chain rule and using (2), we get

dF/dt = Σ_j (∂F/∂y_j)(dy_j/dt) = v(y(t)) · ∇F(y(t)) ≥ OPT − F(y(t)).

This means that F(y(t)) dominates the solution of the differential equation dφ/dt = OPT − φ(t), φ(0) = 0, which is φ(t) = (1 − e^(−t))·OPT. This proves F(y(t)) ≥ (1 − e^(−t))·OPT.

4. SUBMODULAR MAXIMIZATION SUBJECT TO A MATROID CONSTRAINT

Let us assume now that the polytope in question is a matroid polytope P(M) and the smooth submodular function is F(y) = E[f(ŷ)]. We consider the question how to find a fractional solution y ∈ P(M) of value F(y) ≥ (1 − 1/e)·OPT. The continuous greedy process provides a guide on how to design our algorithm. It remains to deal with two issues.

• To obtain a finite algorithm, we need to discretize the time scale. This introduces some technical issues regarding the granularity of our discretization and the error incurred. We show how to handle this issue for the particular choice of F(y) = E[f(ŷ)].

• In each step, we need to find v(y) = argmax_{v∈P(M)} (v · ∇F(y)). Apart from estimating ∇F (which can be done by random sampling), observe that this amounts to a linear optimization problem over P(M). This means finding a maximum-weight independent set in a matroid, a task which can be solved easily.

In analogy with Equation (2), we use a suitable bound on the optimum. Note that ∂F/∂y_j = E[f(ŷ) | ŷ_j = 1] − E[f(ŷ) | ŷ_j = 0]; we replace this by E[f_ŷ(j)], which works as well and will be more convenient.
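The 1 − 1/e factor in the Claim of Section 3 comes from the differential inequality dF/dt ≥ OPT − F(y(t)): the gain rate equals the remaining deficit. A numeric sketch of the extremal process dφ/dt = OPT − φ (Euler discretization, with OPT normalized to 1):

```python
import math

# Euler-integrate dphi/dt = OPT - phi on [0, 1], phi(0) = 0.
# The exact solution is phi(t) = (1 - e^{-t}) * OPT.
OPT, steps = 1.0, 100000
dt, phi = 1.0 / steps, 0.0
for _ in range(steps):
    phi += dt * (OPT - phi)  # gain in one step = step size * current deficit

# After unit time, phi is within O(dt) of (1 - 1/e) * OPT ~ 0.632 * OPT.
assert abs(phi - (1 - math.exp(-1)) * OPT) < 1e-4
```

This is only the idealized dynamics; the algorithm below has to reproduce it with a finite step size and sampled gradients.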
Lemma 4.1. Let OPT = max_{S∈I} f(S). Consider any y ∈ [0,1]^X and let R denote a random set corresponding to ŷ, with elements sampled independently according to y_j. Then

OPT ≤ F(y) + max_{I∈I} Σ_{j∈I} E[f_R(j)].

Proof. Fix an optimal solution O ∈ I. By submodularity, we have OPT = f(O) ≤ f(R) + Σ_{j∈O} f_R(j) for any set R. By taking the expectation over a random R as above, OPT ≤ E[f(R) + Σ_{j∈O} f_R(j)] = F(y) + Σ_{j∈O} E[f_R(j)] ≤ F(y) + max_{I∈I} Σ_{j∈I} E[f_R(j)].

The Continuous Greedy Algorithm.
Given: matroid M = (X, I), monotone submodular function f : 2^X → R+.

1. Let δ = 1/n², where n = |X|. Start with t = 0 and y(0) = 0.

2. Let R(t) contain each j independently with probability y_j(t). For each j ∈ X, estimate

ω_j(t) = E[f_{R(t)}(j)]

by taking the average of n^5 independent samples.

3. Let I(t) be a maximum-weight independent set in M, according to the weights ω_j(t). We can find this by the greedy algorithm. Let

y(t + δ) = y(t) + δ · 1_{I(t)}.

4. Increment t := t + δ; if t < 1, go back to Step 2. Otherwise, return y(1).

The fractional solution found by the continuous greedy algorithm is a convex combination of independent sets, y(1) = δ Σ_t 1_{I(t)} ∈ P(M). In the second stage of the algorithm, we take the fractional solution y(1) and apply pipage rounding to it. Considering Lemma 2.2, it suffices to prove the following.

Lemma 4.2. The fractional solution y found by the Continuous Greedy Algorithm satisfies with high probability

F(y) = E[f(ŷ)] ≥ (1 − 1/e − o(1)) · OPT.

Proof. We start with F(y(0)) = 0. Our goal is to estimate how much F(y(t)) increases during one step of the algorithm. Consider a random set R(t) corresponding to ŷ(t), and an independently random set D(t) that contains each item j independently with probability ∆_j(t) = y_j(t + δ) − y_j(t). I.e., ∆(t) = y(t + δ) − y(t) = δ · 1_{I(t)}, and D(t) is a random subset of I(t) where each element appears independently with probability δ. It can be seen easily that F(y(t + δ)) = E[f(R(t + δ))] ≥ E[f(R(t) ∪ D(t))]. This follows from monotonicity, because R(t + δ) contains items independently with probabilities y_j(t) + ∆_j(t), while R(t) ∪ D(t) contains items independently with (smaller) probabilities 1 − (1 − y_j(t))(1 − ∆_j(t)).

Now we are ready to estimate how much F(y) gains at time t. It is important that the probability that any item appears in D(t) is very small, so we can focus on the contributions from sets D(t) that turn out to be singletons. From the discussion above, we obtain

F(y(t + δ)) − F(y(t)) ≥ E[f(R(t) ∪ D(t)) − f(R(t))]
  ≥ Σ_j Pr[D(t) = {j}] E[f_{R(t)}(j)]
  = δ(1 − δ)^{|I(t)|−1} Σ_{j∈I(t)} E[f_{R(t)}(j)]
  ≥ δ(1 − nδ) Σ_{j∈I(t)} E[f_{R(t)}(j)].

Recall that I(t) is an independent set maximizing Σ_{j∈I} ω_j(t), where ω_j(t) are our estimates of E[f_{R(t)}(j)]. By standard Chernoff bounds, the probability that the error in any estimate is more than OPT/n² is exponentially small in n (note that OPT ≥ max_{R,j} f_R(j)). Hence, w.h.p. we incur an error of at most OPT/n in our computation of the maximum-weight independent set. Then we can write

F(y(t + δ)) − F(y(t)) ≥ δ(1 − nδ) (max_{I∈I} Σ_{j∈I} E[f_{R(t)}(j)] − OPT/n)
  ≥ δ(1 − 1/n)(OPT − F(y(t)) − OPT/n)
  ≥ δ(ÕPT − F(y(t))),

using Lemma 4.1, δ = 1/n², and setting ÕPT = (1 − 2/n)OPT. From here, ÕPT − F(y(t + δ)) ≤ (1 − δ)(ÕPT − F(y(t))), and by induction, ÕPT − F(y(kδ)) ≤ (1 − δ)^k ÕPT. For k = 1/δ, we get

ÕPT − F(y(1)) ≤ (1 − δ)^{1/δ} ÕPT ≤ (1/e) ÕPT.

Therefore, F(y(1)) ≥ (1 − 1/e) ÕPT ≥ (1 − 1/e − o(1)) OPT.

Remark. By a more careful analysis, we can eliminate the error term and achieve a clean approximation factor of 1 − 1/e. We can argue as follows: Rather than R(t) ∪ D(t), we can consider R(t) ∪ D̃(t), where D̃(t) is independent of R(t) and contains each element j with probability ∆_j(t)/(1 − y_j(t)). It can be verified that R(t + δ) = R(t) ∪ D̃(t). Hence the analysis goes through with D̃(t), and we get F(y(t + δ)) − F(y(t)) ≥ Σ_j Pr[D̃(t) = {j}] E[f_{R(t)}(j)] ≥ δ(1 − nδ) Σ_{j∈I(t)} E[f_{R(t)}(j)]/(1 − y_j(t)). Observe that this is equivalent to our previous analysis when y_j(t) = 0, but we get a small gain as the fractional variables increase.

Denote ω*(t) = max_j ω_j(t). The element achieving this is always part of I(t). By Lemma 4.1 and submodularity, we know that at any time, ω*(t) ≥ (1/n)(ÕPT − F(y(t))), where ÕPT = (1 − o(1))OPT depends on the accuracy of our sampling estimates. Also, if j*(t) is the element achieving ω*(t), we know that y_{j*(t)}(t) cannot be zero all the time. Even focusing only on the increments corresponding to j*(t) (summing up to 1 overall), at most half of them can occur when y_{j*(t)}(t) < 1/(2n). Let us call these steps "bad", and the steps where y_{j*(t)}(t) ≥ 1/(2n) "good". In a good step, we have ω*(t)/(1 − y_{j*(t)}(t)) ≥ ω*(t) + (1/(2n)) ω*(t) ≥ ω*(t) + (1/(2n²))(ÕPT − F(y(t))) instead of ω*(t) in the original analysis. In good steps, the improved analysis gives

F(y(t + δ)) − F(y(t)) ≥ δ(1 − nδ)(1 + 1/(2n²))(ÕPT − F(y(t))).
By taking δ = o( n13 ), we get that F (y(t + δ)) − F (y(t)) ≥ 2. Let Ri (t) be a random set containing each item j inde-
δ(1 + nδ)(OP˜ T − F (y(t))). Then, we can conclude that in pendently with probability yij (t). For all i, j, estimate
good steps, we have the expected marginal profit of player i from item j,
˜ T − F (y(t + δ)) ≤ (1 − δ − nδ 2 ))(OP
OP ˜ T − F (y(t))), ωij (t) = E[wi (Ri (t) + j) − wi (Ri (t))]
by taking the average of (mn)5 independent samples.
while in bad steps
˜ T − F (y(t + δ)) ≤ (1 − δ + nδ 2 ))(OP
˜ T − F (y(t))). 3. For each j, let ij (t) = argmaxi ωij (t) be the preferred
OP
player for item j (breaking possible ties arbitrarily).
Overall, we get Set yij (t + δ) = yij (t) + δ for the preferred player i =
ij (t) and yij (t + δ) = yij (t) otherwise.
1 1
F (y(1)) ≥ ˜T
(1 − (1 − δ + nδ 2 ) 2δ (1 − δ − nδ 2 ) 2δ )OP
4. Increment t := t + δ; if t < 1, go back to Step 2.
≥ (1 − (1 − δ)1/δ )OP˜ T ≥ (1 − 1/e + Ω(δ))OP ˜T.
5. Allocate each item j independently, with probability
We can also make our sampling estimates accurate enough yij (1) to player i.
˜ T = (1−o(δ))OP T . We conclude that F (y(1)) ≥
so that OP
(1 − 1/e + Ω(δ))OP T . Lemma 4.2 and the discussion above imply our main result.

Theorem 5.1. The Continuous Greedy Algorithm gives


Pipage rounding. Finally, we use pipage rounding to convert y into an integral solution. Using Lemma 2.2, we obtain an independent set S of value f(S) ≥ (1 − 1/e)OPT w.h.p.

5. SUBMODULAR WELFARE

In this section, we return to the Submodular Welfare Problem. We deal here with a partition matroid M, which allows us to simplify our algorithm and avoid the technique of pipage rounding. For a set of players P and a set of items Q, the new ground set is X = P × Q; hence it is natural to associate variables yij with player-item pairs. The variable yij expresses the extent to which item j is allocated to player i. In each step, we estimate the expected marginal value ωij of element (i, j), i.e. the expected marginal profit that player i derives from adding item j. With respect to these weights, we find a maximum independent set in M; this means selecting, for each item j, a preferred player who derives the maximum marginal profit from j. Then we increase the respective variable for each item and its preferred player.

The partition matroid polytope can be written as

P(M) = {y ≥ 0 : ∀j, Σ_{i=1}^n yij ≤ 1}.

The continuous greedy algorithm finds a point y ∈ P(M) of value F(y) ≥ (1 − 1/e)OPT. Here,

F(y) = E[f(ŷ)] = Σ_{i=1}^n E[wi(Ri)],

where wi is the utility function of player i and Ri contains each item j independently with probability yij. Formally, the sets Ri should also be sampled independently for different players, but this does not affect the expectation. We can modify the sampling model and instead allocate each item independently to exactly one player, item j to player i with probability yij. This is possible because Σ_{i=1}^n yij ≤ 1, and it yields a feasible allocation of expected value F(y). We obtain the following self-contained algorithm.

The Continuous Greedy Algorithm for Submodular Welfare.

1. Let δ = 1/(mn)^2. Start with t = 0 and yij(0) = 0 for all i, j.

2. Let Ri(t) contain each item j′ independently with probability yij′(t). For each i, j, estimate the expected marginal value ωij(t) = E[wi(Ri(t) + j) − wi(Ri(t))] by random sampling.

3. For each item j, choose a preferred player i maximizing ωij(t), and set yij(t + δ) = yij(t) + δ for this player; all other variables remain unchanged.

4. Set t := t + δ; if t < 1, go back to Step 2. Otherwise, allocate each item j independently to exactly one player, item j to player i with probability yij(1).

This yields a (1 − 1/e − o(1))-approximation (in expectation) for the Submodular Welfare Problem in the value oracle model.

Remark 1. Our choices of δ and the number of random samples are pessimistic and can be improved in this special case. Again, the o(1) error term can be eliminated by a more careful analysis. We believe that our algorithm is quite practical, unlike the (1 − 1/e)-approximation using demand queries [4], or its improvement [9], both of which require the ellipsoid method.

Remark 2. In the special case where all n players have the same utility function, we do not need to run any nontrivial algorithm at all. It can be seen from the continuous greedy process (Section 3) that in such a case the greedy trajectory is yij(t) = t/n for all i, j (WLOG, out of all optimal directions, we pick the symmetric one, v = (1/n, . . . , 1/n)). Hence, the fractional solution that we obtain is yij = 1/n for all i, j, and we know that it satisfies F(y) ≥ (1 − 1/e)OPT. Thus, a (1 − 1/e)-approximation is obtained by allocating each item uniformly at random. It is interesting to note that the hardness results [15, 19] hold even in this special case. Therefore, the problem of n equal submodular utility functions is approximation resistant in the sense that it is impossible to beat a blind random solution.

5.1 Counterexample to a possible simplification of the algorithm

Here we consider a possible way to simplify our continuous greedy algorithm. Instead of applying pipage rounding or randomized rounding to the fractional solution y(1) = δ Σ_t 1_{I(t)}, it seems reasonable to compare all the independent sets I(t) in this linear combination and return the most valuable one. However, the non-linearity of the objective function defeats this intuitive approach. Quite surprisingly, the value of each I(t) can be an arbitrarily small fraction of OPT.

Example. Consider an instance arising from the Submodular Welfare Problem, where we have n players and n items. The utility of each player is equal to w(S) = min{|S|, 1}; i.e., each player wants at least one item. Obviously, the optimal solution assigns one item to each player, which yields OPT = n.

The continuous greedy algorithm (see Section 5) builds a fractional solution y.
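The dynamics on this instance can also be checked numerically. The following toy simulation (our own illustrative sketch, not part of the paper; n = 5 and δ = 0.01 are arbitrary small parameters) tracks the welfare of each independent set I(t) and of the final fractional solution:

```python
# Toy simulation of this instance (our own illustrative sketch, not from the
# paper): n players, n items, every utility w(S) = min(|S|, 1), hence OPT = n.
n = 5
delta = 0.01
steps = int(1 / delta)
y = [[0.0] * n for _ in range(n)]   # y[i][j]: fraction of item j given to player i

def empty_prob(i):
    # For w(S) = min(|S|, 1) the marginal value of any item for player i is
    # omega_ij = Pr[R_i is empty] = prod over items of (1 - y[i][j]);
    # note that it does not depend on the item j.
    p = 1.0
    for j in range(n):
        p *= 1.0 - y[i][j]
    return p

step_values = []   # welfare of the independent set I(t) chosen in each step
for _ in range(steps):
    omega = [empty_prob(i) for i in range(n)]
    pref = [max(range(n), key=lambda i: omega[i]) for _ in range(n)]  # per item
    step_values.append(len(set(pref)))  # one unit of welfare per served player
    for j in range(n):
        y[pref[j]][j] += delta

# Rounding the final fractional solution y_ij = 1/n gives expected welfare
frac_value = n * (1.0 - (1.0 - 1.0 / n) ** n)
```

In every step all items share the same preferred player (ties are broken by player index, which produces a round robin), so each entry of `step_values` equals 1, while `frac_value` = n(1 − (1 − 1/n)^n) ≈ 0.67n, in line with the analysis of this example.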
Given a partial solution after a certain number of steps, we consider the random sets Ri defined by y, with the expected marginal values given by

ωij = E[w(Ri + j) − w(Ri)] = Pr[Ri = ∅] = Π_{j′=1}^n (1 − yij′).

A preferred player is chosen for each item by selecting the largest ωij and incrementing the respective variable yij. The algorithm may run in such a way that in each step, yij(t) = βi(t) for all j, and the preferred player for all items is the same. This is true at the beginning (yij(0) = 0), and assuming inductively that yij(t) = βi for all j, we have ωij = (1 − βi)^n. The choice of a preferred player for each item is given by minimizing βi, and we can assume that the algorithm selects the same player i* for each item. In the next step, we will have yij(t + δ) = βi + δ if i = i*, and βi otherwise. Hence again, yij(t + δ) depends only on i.

Eventually, the algorithm finds the fractional solution yij = 1/n for all i, j. By randomized rounding (or pipage rounding), we obtain expected value n(1 − (1 − 1/n)^n). However, the solution found by the continuous greedy algorithm in each step consists of all items being assigned to the same player, which yields value 1.

6. OTHER APPLICATIONS

Finally, let us discuss a generalization of our framework. Consider a setting where we cannot optimize linear functions over P exactly, but only α-approximately (α < 1). In the continuous setting (Section 3), assume that in each step we are able to find a vector v(y) ∈ P such that v(y) · ∇F(y) ≥ α max_{v∈P} v · ∇F(y) ≥ α(OPT − F(y)). This leads to a differential inequality

dF/dt ≥ α(OPT − F(y(t))),

whose solution, using F(y(0)) ≥ 0, satisfies F(y(t)) ≥ (1 − e^{−αt})OPT. At time t = 1, we obtain a (1 − e^{−α})-approximation. The rest of the analysis follows as in Section 4. This has interesting applications.

The Separable Assignment Problem.
An instance of the Separable Assignment Problem (SAP) consists of m items and n bins. Each bin i has an associated collection of feasible sets Fi which is down-closed (A ∈ Fi, B ⊆ A ⇒ B ∈ Fi). Each item j has a value vij, depending on the bin i where it is placed. The goal is to choose disjoint feasible sets Si ∈ Fi so as to maximize Σ_{i=1}^n Σ_{j∈Si} vij.

Reduction to a matroid constraint. Let us review the reduction from [2]. We define X = {(i, S) | 1 ≤ i ≤ n, S ∈ Fi} and a function f : 2^X → R+,

f(S) = Σ_j max{vij : ∃(i, S) ∈ S, j ∈ S}.

It is clear that f is monotone and submodular. We maximize this function subject to a matroid constraint M = (X, I), where S ∈ I iff S contains at most one pair (i, S) for each i. Such a set S corresponds to an assignment of set S to bin i for each (i, S) ∈ S. This is equivalent to SAP: although the bins can be assigned overlapping sets in this formulation, we only count the value of the most valuable assignment for each item.

The continuous greedy algorithm for SAP. The ground set of M is exponentially large here, so we cannot use the algorithm of Section 4 as a black box. First of all, the number of steps in the continuous greedy algorithm depends on the discretization parameter δ. It can be seen that it is sufficient here to choose δ polynomially small in the rank of M, which is n. The algorithm works with variables corresponding to the ground set X; let us denote them by x_{i,S} where S ∈ Fi. Note that in each step, only n variables are incremented (one for each bin i), and hence the number of nonzero variables remains polynomial. Based on these variables, we can generate a random set R ⊂ X in each step. However, we cannot estimate all the marginal values ω_{i,S} = E[fR(i, S)], since these are exponentially many. What we do is the following. For each element j, we estimate ωij = E[fR(i, j)], where fR(i, j) = f(R + (i, j)) − f(R) is the marginal profit of adding item j to bin i, compared to its assignment in R. Then ω_{i,S} = Σ_{j∈S} ωij for any set S. Finding a maximum-weight independent set I ∈ I means finding the optimal set Si for each bin i, given the weights ω_{i,S}. This is what we call the single-bin subproblem. We use the item weights ωij and try to find a set for each bin maximizing Σ_{j∈S} ωij. If we can solve this problem α-approximately (α < 1), we can also find an α-approximate maximum-weight independent set I. Consequently, we obtain a (1 − e^{−α})-approximation for the Separable Assignment Problem. This beats both the factor α(1 − 1/e) obtained by using the Configuration LP [11] and the factor α/(1 + α) obtained by a simple greedy algorithm [2, 12].

The Generalized Assignment Problem.
Special cases of the Separable Assignment Problem are obtained by considering different types of collections of feasible sets Fi. When each Fi is given by a knapsack problem, Fi = {S : Σ_{j∈S} sij ≤ 1}, we obtain the Generalized Assignment Problem (GAP). Since there is an FPTAS for the knapsack problem, we have α = 1 − o(1), and we obtain a (1 − 1/e − o(1))-approximation for the Generalized Assignment Problem. We remark that a (1 − 1/e + ǫ)-approximation can be achieved for some very small ǫ > 0, using an exponentially large "Configuration LP" and the ellipsoid method [9]; in comparison, our new algorithm is much more practical.

The AdWords Assignment Problem.
A related problem defined in [11] is the AdWords Assignment Problem (AAP). Here, bins correspond to rectangular areas associated with keywords where certain ads can be displayed. Each bidder has a rectangular ad of given dimensions that might be displayed in multiple areas. Bidder j is willing to pay vij to have his ad displayed in area i, but overall his spending is limited by a budget Bj.

A reduction to submodular maximization subject to a matroid constraint is given in [12]. We have a ground set X = {(i, S) : S ∈ Fi}, where Fi is the collection of feasible sets of ads that fit in area i. The matroid constraint is that we choose at most one set S for each area i. The sets are not necessarily disjoint. The objective function is

f(S) = Σ_j min{ Σ_{(i,S)∈S : j∈S} vij, Bj }.
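To make this budget-capped objective concrete, the following sketch (a hypothetical toy instance of our own; the areas, ad sets, payments and budgets are invented for illustration) evaluates f and brute-forces its monotonicity and diminishing-returns properties:

```python
from itertools import chain, combinations

# Hypothetical toy AAP instance (our own illustration, not from the paper):
# two areas (0, 1), two bidders ('a', 'b'); v[(area, bidder)] is the payment
# for displaying that bidder's ad in that area, capped by the budget B[bidder].
v = {(0, 'a'): 3, (0, 'b'): 2, (1, 'a'): 4, (1, 'b'): 1}
B = {'a': 5, 'b': 2}
ground = [(0, frozenset('a')), (0, frozenset('ab')),   # elements (area, ad set)
          (1, frozenset('a')), (1, frozenset('b'))]

def f(sel):
    # f(S) = sum over bidders j of min( payments of j across chosen pairs, B_j )
    return sum(min(sum(v[(i, j)] for (i, S) in sel if j in S), B[j]) for j in B)

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Brute-force checks on this tiny ground set: monotonicity and diminishing returns.
monotone = all(f(set(A) | {x}) >= f(set(A))
               for A in subsets(ground) for x in ground)
submodular = all(f(set(A) | {x}) - f(set(A)) >= f(set(T) | {x}) - f(set(T))
                 for T in subsets(ground)
                 for A in subsets(T)
                 for x in ground if x not in T)
```

On this instance both checks succeed: capping an additive payment function at a budget preserves monotonicity and submodularity, which is what the reduction relies on.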
Again, this function is monotone and submodular. Given a random assignment R, the expected marginal value of an element (i, S) can be written as E[fR(i, S)] = Σ_{j∈S} ωij, where ωij = E[Vj(R + (i, j)) − Vj(R)] and Vj(R) is the amount spent by bidder j in R. We can estimate the values ωij by random sampling and then find a (1/2 − o(1))-approximation to max_{S∈Fi} E[fR(i, S)] by using the rectangle packing algorithm of [14] with weights ωij. Consequently, we can approximate the maximum-weight independent set within a factor of 1/2 − o(1) in each step and obtain a (1 − e^{−1/2} − o(1))-approximation for AAP. This beats the (1/2)(1 − 1/e)-approximation given in [11] and also the (1/3 − o(1))-approximation in [12].

Acknowledgements.
The author is grateful to all those who have participated in various inspiring discussions on this topic: Chandra Chekuri, Uri Feige, Mohammad Mahdian, Vahab Mirrokni and Benny Sudakov. Special thanks are due to Chandra Chekuri for introducing me to submodular maximization subject to a matroid constraint and pointing out some important connections.

7. REFERENCES
[1] A. Ageev and M. Sviridenko. Pipage rounding: a new method of constructing algorithms with proven performance guarantee, J. of Combinatorial Optimization 8 (2004), 307–328.
[2] G. Calinescu, C. Chekuri, M. Pál and J. Vondrák. Maximizing a submodular set function subject to a matroid constraint, Proc. of 12th IPCO (2007), 182–196.
[3] C. Chekuri and A. Kumar. Maximum coverage problem with group budget constraints and applications, Proc. of APPROX 2004, Lecture Notes in Computer Science 3122, 72–83.
[4] S. Dobzinski and M. Schapira. An improved approximation algorithm for combinatorial auctions with submodular bidders, Proc. of 17th SODA (2006), 1064–1073.
[5] J. Edmonds. Matroids, submodular functions and certain polyhedra, Combinatorial Structures and Their Applications (1970), 69–87.
[6] U. Feige. A threshold of ln n for approximating Set Cover, Journal of the ACM 45 (1998), 634–652.
[7] U. Feige. Maximizing social welfare when utility functions are subadditive, Proc. of 38th STOC (2006), 41–50.
[8] U. Feige, V. Mirrokni and J. Vondrák. Maximizing non-monotone submodular functions, Proc. of 48th FOCS (2007), 461–471.
[9] U. Feige and J. Vondrák. Approximation algorithms for combinatorial allocation problems: Improving the factor of 1 − 1/e, Proc. of 47th FOCS (2006), 667–676.
[10] L. Fleischer, S. Fujishige and S. Iwata. A combinatorial, strongly polynomial-time algorithm for minimizing submodular functions, Journal of the ACM 48:4 (2001), 761–777.
[11] L. Fleischer, M. X. Goemans, V. Mirrokni and M. Sviridenko. Tight approximation algorithms for maximum general assignment problems, Proc. of 17th ACM-SIAM SODA (2006), 611–620.
[12] P. Goundan and A. Schulz. Revisiting the greedy approach to submodular set function maximization, manuscript (2007).
[13] J. Håstad. Clique is hard to approximate within n^{1−ǫ}, Acta Mathematica 182 (1999), 105–142.
[14] K. Jansen and G. Zhang. On rectangle packing: maximizing benefits, Proc. of 15th ACM-SIAM SODA (2004), 204–213.
[15] S. Khot, R. Lipton, E. Markakis and A. Mehta. Inapproximability results for combinatorial auctions with submodular utility functions, Proc. of WINE 2005, Lecture Notes in Computer Science 3828, 92–101.
[16] B. Lehmann, D. J. Lehmann and N. Nisan. Combinatorial auctions with decreasing marginal utilities, ACM Conference on El. Commerce 2001, 18–28.
[17] B. Lehmann, D. J. Lehmann and N. Nisan. Combinatorial auctions with decreasing marginal utilities (journal version), Games and Economic Behavior 55 (2006), 270–296.
[18] L. Lovász. Submodular functions and convexity. In A. Bachem et al., editors, Mathematical Programming: The State of the Art, 235–257.
[19] V. Mirrokni, M. Schapira and J. Vondrák. Tight information-theoretic lower bounds for welfare maximization in combinatorial auctions, manuscript (2007).
[20] G. L. Nemhauser, L. A. Wolsey and M. L. Fisher. An analysis of approximations for maximizing submodular set functions I, Mathematical Programming 14 (1978), 265–294.
[21] M. L. Fisher, G. L. Nemhauser and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions II, Math. Programming Study 8 (1978), 73–87.
[22] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular set function, Math. Operations Research 3:3 (1978), 177–188.
[23] A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time, Journal of Combinatorial Theory, Series B 80 (2000), 346–355.
[24] A. Schrijver. Combinatorial Optimization - Polyhedra and Efficiency, Springer-Verlag Berlin Heidelberg, 2003.
[25] M. Sviridenko. A note on maximizing a submodular set function subject to knapsack constraint, Operations Research Letters 32 (2004), 41–43.
[26] L. Wolsey. Maximizing real-valued submodular functions: Primal and dual heuristics for location problems, Math. of Operations Research 7 (1982), 410–425.
[27] D. Zuckerman. Linear degree extractors and inapproximability, Theory of Computing 3 (2007), 103–128.
