
Distributed Implementations of Vickrey-Clarke-Groves Mechanisms


David C. Parkes
Division of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge MA 02138
[email protected]

Jeffrey Shneidman
Division of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge MA 02138
[email protected]

Abstract

Mechanism design (MD) provides a useful method to implement outcomes with desirable properties in systems with self-interested computational agents. One drawback, however, is that computation is implicitly centralized in MD theory, with a central planner taking all decisions. We consider distributed implementations, in which the outcome is determined by the self-interested agents themselves. Clearly this introduces new opportunities for manipulation. We propose a number of principles to guide the distribution of computation, focusing in particular on Vickrey-Clarke-Groves mechanisms for implementing outcomes that maximize total value across agents. Our solutions bring the complete implementation into an ex post Nash equilibrium.
1. Introduction

Mechanism design [18] is concerned with the design of procedures to implement an outcome with desirable properties in systems with self-interested agents that have private information about their preferences and capabilities. Mechanism design has largely focused on a special class of mechanisms in which the computation required to determine the outcome is completely centralized. These are the direct-revelation mechanisms, in which each agent reports its private information to a center that computes the outcome and reports the solution back to the agents. We introduce the fundamentally new problem of distributed implementation, in which the goal is to use the same self-interested agents to determine the outcome. It has now been over 10 years since the first infusion of ideas from mechanism design into distributed AI [12, 27]. Mechanism design has been adopted in many settings, for instance for determining a shared plan of action [16], for the allocation of shared resources [34, 26], or for structuring negotiation between agents [28]. Our hope is that the distributed implementation problem will facilitate the integration of methods for cooperative problem solving in Distributed AI with the methods to handle self-interest in computational mechanism design. Indeed, Lesser [19] recently described this unification of cooperative methods with methods for self-interested agents as one of the major challenges for multiagent systems research.

The distributed implementation of mechanisms introduces new opportunities for agent manipulation. For instance, consider distributing the winner determination of a second-price auction across bidding agents. Clearly, each agent would like to understate the maximal value of bids from other agents, both to increase its chance of winning and to decrease its payment. A distributed implementation provides each agent with an algorithm (or a specification of an algorithm). A successful (or faithful) distributed implementation must provide the right incentives, so that an agent will choose to follow the intended algorithm. We seek implementation in an ex post Nash equilibrium, such that no agent can usefully deviate from its algorithm even if it knows the private values of other agents. All our observations in this paper are quite simple, but we think quite powerful. We provide three general principles for distributed implementation: partition-based, in which computation is carefully distributed across agents; information-revelation based, in which agents only perform restricted computation, as necessary to reveal information about their local private information; and redundancy-based, in which multiple agents are asked to perform the same piece of computation, with deviations punished. We will often draw on examples and motivation from the Vickrey-Clarke-Groves mechanism, but the ideas are more general. We include stylized examples to illustrate how to combine existing algorithmic paradigms from cooperative problem solving with the principles for faithful distributed implementation.

1.1. Related Work


Feigenbaum and colleagues [14, 13] initiated the study of distributed algorithmic mechanism design (DAMD), with a focus on studying particular communication topologies and providing distributed algorithms with good computational complexity and good communication properties. However, DAMD has deemphasized incentive issues, and does not consider whether an agent will choose to follow a particular algorithm. Shneidman and Parkes [30] provided the seeds for this work, with an early definition of the concept of algorithm compatibility. More recently, Shneidman and Parkes [29] have completed a careful case study of distributed implementation for interdomain routing, bringing an earlier algorithm due to Feigenbaum et al. [13] into equilibrium. Monderer and Tennenholtz [23] have studied a simple single-item auction problem in which agents must forward messages from other agents to a center, using information hiding and redundancy to bring faithful forwarding into an equilibrium. We focus in this paper on a model in which agents can communicate with the center directly on a trusted channel, thus removing this concern. Smorodinsky and Tennenholtz [33] consider free-riding in multi-party computation by agents with costly computation, and provide incentives to elicit computational effort from agents. However, their work does not take an implementation perspective, and there is no private information.

Perhaps the closest work in the literature to ours is Brewer's computation-procuring auction [5], in which incentives are used to distribute winner determination across participants in an ascending-price combinatorial auction. Agents that can find and submit an improved solution are paid some share of the revenue improvement. Although Brewer does not provide a formal equilibrium analysis, an experimental study suggests this computation-procuring auction was effective in eliciting effort from human bidders. Similar ideas can also be traced to the use of the bid queue to store partial solutions in the AUSM mechanism [2], and (in a cooperative setting) to work on computational ecosystems [8]. Shoham and Tennenholtz [31, 32] have considered computation in a system of self-interested agents with private inputs. The agents are either reluctant to provide information, or want to know the result of computation but prefer to keep this from their peers. However, their goals are quite different. All computation is centralized, and the focus is on computation but not implementation (i.e., not on taking decisions in a world). The notion of an Efficient Learning Equilibrium [4] shares our idea of bringing algorithms into an equilibrium.

Combining redundancy with a commitment to implement a bad outcome if agents don't send the same message is well known in the literature on implementation in complete-information settings, where every agent, but not the center, knows all private information, albeit for revealing (common) type information and not for eliciting effort (see Jackson [17] for a survey). However, agents still reveal full information to the center, and the center still determines the outcome of the social-choice rule (e.g., [24]). Multi-stage game forms are used to allow equilibrium refinements that knock out undesirable equilibria, so that the outcome is implemented in all equilibria (see footnote 1), but not to facilitate distributed computation. Recent extensions have considered implementation with incomplete information, but still with centralized computation, and while adopting difficult solution concepts, for example perfect Bayesian implementation [6] and sequential equilibrium [1].

Footnote 1: We are less concerned with multiple equilibria because the center in our model can also choose to incur some computational cost and check whether agents deviate. Also, we assume that the intended algorithm (implemented in software) helps to correlate agents on a desired equilibrium, providing a focal point (see also [4]).

2. Preliminaries
We first introduce notions from traditional (centralized) mechanism design. A more leisurely introduction to mechanism design is provided by Jackson [18] and Parkes [25, chapter 2]. Dash et al. [10] provide a recent multi-agent perspective on important challenges in the field of computational mechanism design.

2.1. Mechanism Design


The standard setting for mechanism design considers a world with possible outcomes $O$, and agents $i \in I$ (with $N$ agents altogether). Agent $i$ has private type $\theta_i \in \Theta_i$, which defines the agent's utility $u_i(o; \theta_i)$ for outcome $o \in O$. A standard (direct-revelation) mechanism $M = (f, \Theta)$ defines a procedure in which agents report types $\hat{\theta} = (\hat{\theta}_1, \ldots, \hat{\theta}_N)$ and the mechanism rules select outcome $f(\hat{\theta})$. We write $\hat{\theta}$ to emphasize that agents can misreport their true types (which are not observable to the center). A mechanism defines a non-cooperative game of incomplete information because agents do not know the types of other agents. Agent $i$'s utility for report $\hat{\theta}_i$, given reports $\hat{\theta}_{-i} = (\hat{\theta}_1, \ldots, \hat{\theta}_{i-1}, \hat{\theta}_{i+1}, \ldots, \hat{\theta}_N)$, is $u_i(f(\hat{\theta}_i, \hat{\theta}_{-i}); \theta_i)$. An important concept in MD is that of incentive-compatibility (IC), which says that agents will choose to reveal their types truthfully in equilibrium. A mechanism that achieves truth-revelation in a dominant-strategy equilibrium (every agent's strategy is a best response whatever the strategies of other agents) is termed strategyproof, defined as:

$$u_i(f(\theta_i, \hat{\theta}_{-i}); \theta_i) \geq u_i(f(\hat{\theta}_i, \hat{\theta}_{-i}); \theta_i), \quad \forall \hat{\theta}_i \neq \theta_i, \ \forall \hat{\theta}_{-i}$$

Strategyproofness is particularly useful because agents do not need to model the other agents to play their best response. Finally, an IC mechanism is said to implement outcome $f(\theta)$ in equilibrium; and $f(\theta)$ is the social-choice function implemented within the mechanism.
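For small, discrete settings the strategyproofness condition can be checked by brute force. The following Python sketch is illustrative only and is not from the paper; it uses our own helper names (`second_price_outcome`, `is_strategyproof`) and a toy single-item second-price auction with integer types:

```python
from itertools import product

def second_price_outcome(bids):
    """Outcome rule f(bids): the highest bidder wins and pays the second-highest bid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(i, true_value, outcome):
    winner, price = outcome
    return true_value - price if i == winner else 0

def is_strategyproof(types=range(4), n=3):
    """Check u_i(f(theta_i, theta_-i); theta_i) >= u_i(f(theta_i', theta_-i); theta_i) for all profiles."""
    for profile in product(types, repeat=n):
        for i, theta_i in enumerate(profile):
            truthful = utility(i, theta_i, second_price_outcome(list(profile)))
            for misreport in types:
                deviated = list(profile)
                deviated[i] = misreport
                if utility(i, theta_i, second_price_outcome(deviated)) > truthful:
                    return False
    return True

print(is_strategyproof())  # True: truthful bidding is a dominant strategy for this rule
```

Enumerating all type profiles and all unilateral misreports confirms, for this toy rule and type grid, that no misreport ever strictly improves an agent's utility.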

2.2. Vickrey-Clarke-Groves Mechanisms


In particular, consider a world in which the outcome $o = (k, p)$ defines both a choice $k \in K$, for some discrete choice set $K$, and a payment $p = (p_1, \ldots, p_N)$ by each agent to the center. For example, the choice could define a set of actions to be performed by agents as part of a plan, or an allocation of items. The type of an agent now defines its value $v_i(k; \theta_i)$ for a choice $k$, and its utility is quasi-linear in value and payments, defined as $u_i(o; \theta_i) = v_i(k; \theta_i) - p_i$. In this setting, the Vickrey-Clarke-Groves (VCG) mechanism (see [18]) is strategyproof, and implements the social-welfare-maximizing (or efficient) choice. We define economy $E_N$ to include all agents, and marginal economies $\{E_{N-1}, E_{N-2}, \ldots\}$ as the economies with each agent removed in turn. The VCG mechanism defines choice rule $k^*(\hat{\theta}) = \arg\max_{k \in K} \sum_i v_i(k; \hat{\theta}_i)$, and payment rule:

$$p_{vcg,i}(\hat{\theta}) = v_i(k^*(\hat{\theta}); \hat{\theta}_i) - \{V_N - V_{N-i}\} \qquad (1)$$

where $V_N = \max_{k \in K} \sum_i v_i(k; \hat{\theta}_i)$ and $V_{N-i} = \max_{k \in K} \sum_{j \neq i} v_j(k; \hat{\theta}_j)$, i.e., the value of the efficient choice in the marginal economy without agent $i$.
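When the choice set $K$ is small enough to enumerate, the VCG choice rule and the payment rule in Equation 1 can be computed directly. The following Python sketch is ours, not the paper's; it assumes reported values are given as a dictionary mapping each agent to its value for each choice in $K$:

```python
def efficient_choice(values, agents, K):
    """Return (k*, V) maximizing total reported value over the given set of agents."""
    best = max(K, key=lambda k: sum(values[i][k] for i in agents))
    return best, sum(values[i][best] for i in agents)

def vcg(values, K):
    """Compute the efficient choice and VCG payments p_i = v_i(k*) - (V_N - V_{N-i})."""
    agents = list(values)
    k_star, V_N = efficient_choice(values, agents, K)
    payments = {}
    for i in agents:
        others = [j for j in agents if j != i]
        _, V_minus_i = efficient_choice(values, others, K)   # marginal economy E_{N-i}
        payments[i] = values[i][k_star] - (V_N - V_minus_i)
    return k_star, payments

# Toy example: two goods {a, b} and three candidate allocations.
K = ["1 gets a, 2 gets b", "1 gets both", "2 gets both"]
values = {
    1: {"1 gets a, 2 gets b": 4, "1 gets both": 7, "2 gets both": 0},
    2: {"1 gets a, 2 gets b": 5, "1 gets both": 0, "2 gets both": 6},
}
print(vcg(values, K))  # efficient choice "1 gets a, 2 gets b"; payments {1: 1, 2: 3}
```

In this toy instance the payments coincide with the usual Clarke form, $V_{N-i}$ minus the reported value of the other agents at $k^*$, as implied by Equation 1.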

3. Distributed Implementations
We now describe a distributed implementation, focusing on a setting in which there is still a center, ultimately responsible for selecting and enforcing an outcome. We will seek to off-load as much of the computation as possible onto the agents, but require that this computation is in an equilibrium. We assume that each agent can communicate directly with the center, via a trusted channel (footnote 2). The basic model of communication assumes message-passing between agents, and a state-based model for computation, with each agent maintaining an internal state, performing computation to modify that state, and sending messages that depend on state (footnote 3). A distributed mechanism $dM = (g, \Sigma, s^m)$ defines an outcome rule $g$, a feasible strategy space $\Sigma = (\Sigma_1, \ldots, \Sigma_N)$, and an intended (or suggested) strategy $s^m = (s^m_1, \ldots, s^m_N)$. We also refer to $s^m$ as the intended implementation. It is helpful to think of $s^m$ as the algorithm that the designer would like every agent to follow. Given strategy $s \in \Sigma$, it is convenient to write $s(\theta)$ to denote the complete sequence of actions taken by agents when following joint strategy $s$, given private types $\theta$. The outcome rule $g$ defines the outcome $g(s(\theta)) \in O$, selected when agents follow strategy $s$ and have types $\theta$. The center selects outcome $g(s(\theta))$ based on information provided by agents during the course of the algorithm. Taken together, this defines a non-cooperative game.

Footnote 2: Shneidman & Parkes [29] consider a more general model with no center, and with self-enforcement of the final outcome by the agents.

Footnote 3: The model can be formalized to make the games that we describe precise, for example by introducing a start state and end state, and defining state-transition functions. Such a formalism is tangential to the main thrust of this paper, and will be avoided.

A strategy $s_i \in \Sigma_i$ is a mapping from state and (private) type to an action. Actions may be internal, in which case they are computational actions, or external, in which case they are message-sending actions. An agent's local state includes its computational state, as well as a complete history of all messages ever received or sent by the agent and its model of other agents.

Definition 1 Distributed mechanism $dM = (g, \Sigma, s^m)$ is an (ex post) faithful implementation of social-choice function $g(s^m(\theta)) \in O$ when intended algorithm $s^m$ is an ex post Nash equilibrium.

Formally, strategy profile $s^* = (s^*_1, \ldots, s^*_N)$ is an ex post Nash equilibrium when:

$$u_i(g(s^*_i(\theta_i), s^*_{-i}(\theta_{-i})); \theta_i) \geq u_i(g(s'_i(\theta_i), s^*_{-i}(\theta_{-i})); \theta_i)$$

for all agents, for all $s'_i \neq s^*_i$, for every type $\theta_i$, and for all types $\theta_{-i}$ of other agents. In words, no agent would like to deviate from $s^*_i$ even with knowledge of the private type information of the other agents. As a solution concept, ex post Nash relies on the rationality of other agents, but remains useful because an agent need not model the preferences of other agents. Given distributed mechanism $dM = (g, \Sigma, s^m)$, it is useful to categorize the external actions in the intended implementation into message-passing, information-revelation, and computational actions.

Definition 2 External actions $a^e \in s^m_i(h, \theta_i)$ are message-passing actions when agent $i$ simply forwards a message received from another agent, unchanged, to one (or more) of its neighbors.

Message-passing actions are included to allow for peer-to-peer communication.

Definition 3 External actions $a^e \in s^m_i(h, \theta_i)$ are information-revelation actions when any feasible deviation from these actions by agent $i$ is entirely equivalent to following the intended implementation for some other reported type $\hat{\theta}_i$; i.e., $g(s'_i, s^m_{-i}(\theta_{-i})) = g(s^m_i(\hat{\theta}_i), s^m_{-i}(\theta_{-i}))$ for all $\theta_{-i}$, where $s'_i$ differs from $s^m_i(\theta_i)$ only in these information-revelation actions.

Informally, information-revelation actions can be executed by a "dumb" agent that only knows type $\theta_i$ and can only respond to questions about type, such as "is choice $k_1$ preferred to choice $k_2$?", "what is the value for choice $k_1$?", etc. By definition, the only role that these actions play in the implementation is in revealing private information (footnote 4).
Footnote 4: The definition carefully excludes actions in which useful computation is also smuggled within the message, for example "solve problem P1 if your value is v1, and solve problem P2 if your value is v2." This is precluded because there are presumably arbitrary deviations from computing the solution to P1 or P2 that are not performed in any intended implementation, for any private type.

Definition 4 External actions $a^e \in s^m_i(h, \theta_i)$ are computational actions when they are neither message-passing nor information-revelation actions.

Although this definition of computational actions is somewhat indirect, the point is that external actions (or messages) that we classify as computational are doing more than forwarding a message from another agent or revealing private information. Presumably, computational actions (if they have any use within the implementation) are sending results from local computation. It is important to emphasize that we have only characterized the external actions. Computational agents are continually performing internal actions (computation) to support these external actions, and these actions (or at least a specification of them) are also defined in an intended strategy. For instance, an agent must perform (internal) computation in responding to an information-revelation action such as "which bundle of goods maximizes your utility given prices p?". We can now define the important notions of incentive compatibility (IC), communication compatibility (CC), and algorithm compatibility (AC) in this context.

Definition 5 Distributed mechanism $dM$ is CC {resp. IC, AC} if an agent cannot receive higher utility by deviating from the intended message-passing {resp. information-revelation, computational} actions in an equilibrium.

CC, IC, and AC are required properties of a faithful distributed implementation. Moreover, a distributed mechanism $dM = (g, \Sigma, s^m)$ that is IC, CC and AC is a faithful implementation of $g(s^m(\theta))$ when IC, CC and AC all hold in a single equilibrium.

Remark 1 The only social-choice functions that can be implemented in an ex post Nash distributed implementation are those implementable in strategyproof direct-revelation mechanisms (this follows from the revelation principle [20]).

Remark 2 We assume that agents are self-interested but benevolent, in the sense that an agent will implement the intended strategy as long as it does not strictly prefer some other strategy. Thus, a weak ex post Nash equilibrium is sufficient for a faithful implementation. Further, a distributed mechanism may have multiple equilibria. We are content to achieve implementation in one of these equilibria, which is consistent with the mechanism design literature.

4. A Canonical Distributed Implementation

To illustrate why faithful distributed implementation can be difficult, and also to introduce a general class of distributed VCG mechanisms, consider the following canonical distributed algorithm for determining the efficient choice in economies $\{E_N, E_{N-1}, \ldots\}$:

(1) Every agent is asked to report its type $\hat{\theta}_i$ to the center. Upon receipt, the center broadcasts these types to the agents.

(2) Take your favorite distributed algorithm for computing the efficient choice, for instance: (i) distributed systematic search, such as Adopt [22], for solving constrained optimization problems; (ii) mathematical-programming based decompositions, such as Dantzig-Wolfe and column generation [15]; (iii) asynchronous cooperative problem solving with shared storage, such as blackboard models (see [7] for a recent summary) and hint-exchange models [8]. Use this algorithm to define an intended strategy, $s^m$, to determine the efficient choice in each of $\{E_N, E_{N-1}, \ldots\}$. Let $Cand$ denote these candidate choices.

(3) The center adopts choice $k^* = \arg\max_{k \in Cand} \sum_i v_i(k; \hat{\theta}_i)$ for $E_N$, and choice $k^{-i} = \arg\max_{k \in Cand} \sum_{j \neq i} v_j(k; \hat{\theta}_j)$ for each marginal economy.

Step (3), in which the maximal choice is taken from the set of candidates for each economy $\{E_N, E_{N-1}, \ldots\}$, can require the center to adopt a simple heuristic to modify a choice from one economy so that it is feasible in another. For instance, given an allocation of goods in an auction setting, the center can simply drop any allocation to agent $i$ in $k^*$ when considering the value of this solution for $E_{N-i}$.

Suppose the canonical distributed algorithm is used to define a distributed VCG mechanism, with VCG payments computed on the basis of the final choices (denoted $k^*, k^{-1}, k^{-2}, \ldots$). Fix reports $\hat{\theta}_{-i}$ by agents $\neq i$. Now, the utility $u_i(g(s^m(\theta_i, \hat{\theta}_{-i})); \theta_i)$ to agent $i$ from the intended strategy is:

$$v_i(k^*; \theta_i) + \sum_{j \neq i} v_j(k^*; \hat{\theta}_j) - \sum_{j \neq i} v_j(k^{-i}; \hat{\theta}_j)$$

In a centralized VCG the agent would choose to report its true type in equilibrium, because its report can only influence its utility indirectly through its effect on the choice selected by the mechanism. By the standard Groves argument, reporting a true type is optimal because the mechanism will then choose $k^*$ to exactly maximize $v_i(k; \theta_i) + \sum_{j \neq i} v_j(k; \hat{\theta}_j)$. In a distributed implementation, agent $i$ can also: (a) change the choice of $k^*$ through its computational, message-passing, and information-revelation actions within the distributed algorithm $s^m$; (b) change the choice of $k^{-i}$ through its actions within the distributed algorithm $s^m$. Indeed, strategy $s^m$ is not in equilibrium. To see this, notice that agent $i$ can now also influence the choice of $k^{-i}$. Agent $i$ will always prefer to understate the total value $V_{N-i}$, and thus prefer to obstruct any progress towards a good solution to this problem to the best of its ability. At best, the center will then adopt the same $k^*$ as the choice without agent $i$, so that the agent's payment is zero because it appears that there is no better choice for the other agents even if agent $i$ were not present.
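This failure can be made concrete with a small simulation. In the Python sketch below (ours; `center_outcome` and the candidate pools are hypothetical stand-ins for a canonical distributed algorithm), each agent contributes candidate solutions for every economy and the center picks the best reported candidate in each. An honest run reproduces the centralized VCG payments; if agent 1 withholds good candidates for $E_{N-1}$, its payment drops to zero, exactly as argued above:

```python
def value(choice, values, agents):
    return sum(values[j][choice] for j in agents)

def center_outcome(candidates, values):
    """Center picks, from the reported candidates only, the best choice for E_N and each E_{N-i}."""
    agents = list(values)
    k_star = max(candidates["EN"], key=lambda k: value(k, values, agents))
    payments = {}
    for i in agents:
        others = [j for j in agents if j != i]
        pool = candidates[f"EN-{i}"] + [k_star]      # k* stays feasible for E_{N-i} (drop i's share)
        k_minus_i = max(pool, key=lambda k: value(k, values, others))
        V_N_minus_i = value(k_minus_i, values, others)
        payments[i] = values[i][k_star] - (value(k_star, values, agents) - V_N_minus_i)
    return k_star, payments

K = ["1 gets a, 2 gets b", "1 gets both", "2 gets both"]
values = {1: {K[0]: 4, K[1]: 7, K[2]: 0}, 2: {K[0]: 5, K[1]: 0, K[2]: 6}}

honest = {"EN": K, "EN-1": K, "EN-2": K}
print(center_outcome(honest, values))       # payments as in the centralized VCG: {1: 1, 2: 3}

# Agent 1 deviates in the computation for E_{N-1}: it never proposes "2 gets both".
manipulated = {"EN": K, "EN-1": [K[0], K[1]], "EN-2": K}
print(center_outcome(manipulated, values))  # agent 1's payment drops from 1 to 0
```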

5. An Easy Special Case: Groves Mechanisms


If our goal was simply to implement the social-welfare-maximizing outcome, and if running a budget deficit was acceptable and the center can make a net payment to agents, then we can use the canonical approach for a faithful distributed implementation. We can use the Groves mechanism, in which payments are:

$$p_{groves,i}(\hat{\theta}) = \sum_{j \neq i} v_j(k^*(\hat{\theta}); \hat{\theta}_j) \qquad (2)$$

These are payments from the center that align the incentives of each agent with maximizing total value. The VCG payments (Equation 1) are a specialization of the Groves payments, introducing the additional payment term $V_{N-i}$ from agent $i$ to the center.

Theorem 1 Distributed mechanism $dM$ for the Groves mechanism, in which a canonical distributed algorithm is used to determine the efficient choice in $E_N$, is an (ex post) faithful implementation of the efficient choice and Groves payments.

Proof: The utility to agent $i$, given reports $\hat{\theta}_{-i}$ from other agents, is $v_i(k^*; \theta_i) + \sum_{j \neq i} v_j(k^*; \hat{\theta}_j)$, where $k^*$ solves $\max_k \sum_i v_i(k; \hat{\theta}_i)$. Agent $i$ can influence the choice of $k^*$ both through revelation and through its computational and message-passing actions. But the Groves payments align agent $i$'s incentives with the efficient choice $k^*$ in $E_N$, and the agent will follow the intended strategy when this is also pursued by other agents.

Groves mechanisms can also be easily extended to provide a faithful distributed implementation of any affine maximizer, with the choice selected to solve $\max_k \sum_i w_i v_i(k; \theta_i) + b_k$, where $w_i, b_k \geq 0$ are set by the designer.
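Theorem 1 can be illustrated numerically: under Groves payments, agent $i$'s utility equals its true value plus the total reported value of the other agents at whatever choice the computation produces, so steering the computation toward the efficient choice for $E_N$ is in agent $i$'s own interest. A small check follows, with our own hypothetical helper name (`groves_utility`); this is a sketch, not the paper's construction:

```python
def groves_utility(i, choice, true_values, reported_values):
    """u_i = v_i(k) + sum over j != i of reported v_j(k): agent i's utility under Groves payments."""
    others = sum(reported_values[j][choice] for j in reported_values if j != i)
    return true_values[i][choice] + others

K = ["k1", "k2", "k3"]
values = {1: {"k1": 4, "k2": 7, "k3": 0}, 2: {"k1": 5, "k2": 0, "k3": 6}}

# Whichever candidate agent 1 could steer the computation toward, its Groves utility is
# maximized at the choice that maximizes total reported value (here k1, with 4 + 5 = 9).
for k in K:
    print(k, groves_utility(1, k, values, values))   # k1 -> 9, k2 -> 7, k3 -> 6
```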

6. The Partition Principle: VCG Mechanisms


Now consider the problem of implementing the VCG outcome as a distributed mechanism. Unlike Groves, the VCG mechanism does not run at a deficit in many MAS problems (for example when used for a combinatorial auction [34]).

Theorem 2 (Partition Principle) Distributed mechanism $dM$ for the VCG mechanism, in which a canonical distributed algorithm is adopted to solve $\{E_N, E_{N-1}, \ldots\}$, and in which computation is partitioned so that the intended strategies $s^m_{-i}$ of the other agents allow the center to solve $E_{N-i}$ whatever the actions of agent $i$, is an (ex post) faithful distributed implementation of the efficient choice and VCG payments.

Proof: The utility to agent $i$ is $v_i(k^*; \theta_i) + \sum_{j \neq i} v_j(k^*; \hat{\theta}_j) - \sum_{j \neq i} v_j(k^{-i}; \hat{\theta}_j)$. Agent $i$ cannot influence the choice of $k^{-i}$, and once this is fixed the agent should follow $s^m$ to maximize its total utility, by the standard Groves argument.

Although we describe the partition principle in the context of the canonical distributed algorithm, with each agent reporting its type as a first step, the result trivially extends to distributed mechanisms in which the center elicits dynamic value information, as long as it finally learns the value of the choices and shares the necessary information with agents to perform the computation. Note that it is important that no agent can tamper with the reports from other agents. (An agent is paid an amount equal to the reported value of the other agents, so it would always want to overstate the value of other agents for the selected choice.) This is achieved in our model, because agents can report their type directly to the center. However, the partition principle still allows agents to send messages peer-to-peer during the implementation of a distributed algorithm. It is only the initial information-revelation that must be direct to the center along a trusted channel.

Example 1 [Distributed Systematic Search] Choose your favorite algorithm for distributed systematic search (such as Modi et al.'s DCOP algorithm [22]). First, form a search tree including all agents, and have the agents solve $E_N$. Then, form a search tree involving all agents except agent 1 and have them solve $E_{N-1}$. Do the same for agent 2, and so on until all marginal economies are solved, with the center receiving a choice from the root of the tree in each case. Finally, implement the choice reported for $E_N$, and VCG payments on the basis of the solutions to the marginal economies.

Example 2 [Cooperative Problem Solving] Agents report types to the center, which broadcasts this information and also maintains a blackboard (see [9, 7]), on which it keeps the current best solution to $\{E_N, E_{N-1}, \ldots\}$. In the intended algorithm, agents follow a best-effort strategy, searching for, and suggesting, improvements to any problem. A best-effort strategy is defined as an algorithm that will eventually find an improvement when one exists. Here, we suppose the center audits new posts, and only accepts solutions onto the blackboard that improve the current solution. (This prevents agent $i$ from scuppering progress towards solving $E_{N-i}$.) The mechanism terminates when every agent reports that $E_N$ is solved correctly and every agent except agent $i$ reports that $E_{N-i}$ is solved correctly. Finally, the center implements the VCG outcome on the basis of the final solutions. Many variations of this general blackboard-style approach are possible. For example, agents can be provided with shared scratch space to post (but not overwrite) partial solutions (similar to the hint-sharing methods proposed in the cooperative problem solving methods of Clearwater et al. [8]). A blackboard approach can also be used in an incremental-revelation mode, in which agents reveal new information about their own value in posting new solutions.

Another way to think about how to write a distributed algorithm for VCG that satisfies the partition principle is to consider algorithms with the following characteristics: (1) agents only communicate with the center by suggesting candidate choices; (2) any candidate from agent $i$ for which agent $i$ has no (reported) value is ignored by the center; (3) partitioning is static, in that the computation agent $i$ is asked to perform does not depend on results from computation by any other agent. We refer to such paradigms as static partitioning because of property (3). Property (2) is critical because it ensures that an optimal statically-partitioned algorithm can never rely on agent $i$ to provide computation that helps to solve $E_{N-i}$. With this, it is clear that these static-partitioning methods must satisfy the partition principle, and provide faithful implementations of the VCG outcome (footnote 5).

Footnote 5: We need a static partitioning to prevent results from agent $i$ being used to help in the computation by another agent in solving $E_{N-i}$. Similarly, the center must only pick across candidates, with no additional combination operators.

As an example, let $E_{N+1}$ denote the efficient-choice problem $\max_{k \in K} \sum_i v_i(k; \theta_i)$, subject to the additional constraint that $v_1(k; \theta_1) > 0$ (loosely, we say the solution must contain agent 1). Similarly, let $E_{N+1-2}$ denote the problem $\max_{k \in K} \sum_{j \neq 2} v_j(k; \theta_j)$ s.t. $v_1(k; \theta_1) > 0$.

Example 3 [Static Partitioning] Partition the computation across agents according to the following schedule: $\{E_{N+1}, E_{N+1-2}, E_{N+1-3}, \ldots\}$ to agent 1, $\{E_{N+2}, E_{N+2-1}, E_{N+2-3}, E_{N+2-4}, \ldots\}$ to agent 2, etc. Each agent can adopt any sound and complete algorithm to solve its assigned problems. Finally, the center compiles the solutions, e.g., $V_N = \max\{k^{+1}, k^{+2}, \ldots, k^{+m}\}$, where $m$ indexes the $N$th agent and $k^{+i}$ denotes the reported solution to $E_{N+i}$.
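A static-partitioning schedule in the spirit of Example 3 might be organized as follows. This is a structural sketch only, assuming hypothetical problem labels and a `value_of` placeholder that the center evaluates from the broadcast reports; no solver is shown:

```python
def schedule(agent_ids):
    """Static partition: agent i solves the constrained problems E_{N+i} and E_{N+i-j} for j != i."""
    plan = {}
    for i in agent_ids:
        tasks = [("EN+", i)]                                          # efficient choice containing agent i
        tasks += [("EN+", i, "-", j) for j in agent_ids if j != i]    # ... with agent j removed
        plan[i] = tasks
    return plan

def compile_values(reported, agent_ids, value_of):
    """Center's compilation step: V_N from the best reported candidate containing some agent;
    V_{N-j} likewise, never using a candidate produced by agent j itself."""
    V_N = max(value_of(reported[("EN+", i)]) for i in agent_ids)
    V_minus = {j: max(value_of(reported[("EN+", i, "-", j)]) for i in agent_ids if i != j)
               for j in agent_ids}
    return V_N, V_minus

# Tiny demo with two agents and scalar stand-ins whose "value" is the number itself:
reported = {("EN+", 1): 9, ("EN+", 2): 9, ("EN+", 1, "-", 2): 7, ("EN+", 2, "-", 1): 6}
print(schedule([1, 2]))
print(compile_values(reported, [1, 2], value_of=lambda k: k))  # (9, {1: 6, 2: 7})
```

Because agent $j$ is only ever assigned problems that are constrained to contain agent $j$, none of its answers enter the compilation of $V_{N-j}$, which is what the partition principle requires.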

7. Information-Revelation Principle

The information-revelation principle is a very general observation, in no way limited to distributed implementations of efficient outcomes. Rather, it applies to the distributed implementation of any strategyproof social-choice function. We need an additional property, called information-revelation consistency, which can be achieved either through checking, or through rules that constrain the feasible strategy space.

Definition 6 Information-revelation actions $a^1$ and $a^2$, by agent $i$ in states $h^1$ and $h^2$, are consistent when there is a single type $\hat{\theta}_i$ for which the intended strategy satisfies $s^m(h^1, \hat{\theta}_i) = a^1$ and $s^m(h^2, \hat{\theta}_i) = a^2$.

As an example, consider an ascending-price auction in which straightforward bidding is the intended strategy, with an agent bidding for the item while the price is no greater than its value and it is not winning. Consistency requires that no agent can retract an earlier bid and that all bids must be at the current ask price (no jump bids). No agent would want to take either action if following a straightforward bidding strategy. We say that a distributed mechanism supports consistency checking when every pair of information-revelation actions must be consistent (either through constraints, or through checking and then implementing a significantly bad outcome in case of a violation, such as excluding an agent from the system).

Theorem 3 (Information-Revelation Principle) Distributed mechanism $dM = (g, \Sigma, s^m)$ with consistency checking is an (ex post) faithful implementation when the only actions are information-revelation actions and when $f(\theta) = g(s^m(\theta))$ is strategyproof.

Proof: Since all actions are information-revelation actions, the space of possible outcomes is $g(s^m_i(\hat{\theta}_i), s^m_{-i}(\theta_{-i}))$, but $g(s^m_i(\hat{\theta}_i), s^m_{-i}(\theta_{-i})) = f(\hat{\theta}_i, \theta_{-i})$, and $u_i(f(\theta_i, \theta_{-i}); \theta_i) \geq u_i(f(\hat{\theta}_i, \theta_{-i}); \theta_i)$ for all $\theta_i$, all $\theta_{-i}$, and all $\hat{\theta}_i \neq \theta_i$, by the strategyproofness of $f(\theta)$.

We can consider the application of this information-revelation principle to a distributed VCG mechanism.

Corollary 1 Distributed mechanism $dM = (g, \Sigma, s^m)$ is an (ex post) faithful implementation of the VCG outcome when all actions are information-revelation actions and the implementation $g(s^m(\theta))$ correctly computes the efficient choice and VCG payments for all types.

The distributed mechanisms constructed around the information-revelation principle do not fall under the canonical distributed algorithms in Section 4, because the center need not know the exact value of the solutions to $\{E_N, E_{N-i}, \ldots\}$. For example, in a single-item Vickrey auction the center only needs to know that $v_1 \geq p$, $v_2 = p$ and $v_j \leq p$ for all $j \notin \{1, 2\}$ to implement the Vickrey payment $p$.

Example 4 [Ascending Auctions] The ascending-price combinatorial auctions (CAs) described in Mishra & Parkes [21] are ex post Nash distributed implementations of the VCG mechanism. The ascending-price auctions (implicitly) maintain prices $p_i(S)$ on every bundle of goods $S$, and the intended straightforward bidding strategy has each agent responding with its demand set $D_i(p) = \{S : v_i(S) - p_i(S) \geq v_i(S') - p_i(S'), \ \forall S' \neq S\}$, for all prices $p$; consistency requires that the revealed-preference information from each agent is consistent across rounds.

Decentralized optimization algorithms, such as Dantzig-Wolfe, Benders, and column generation [15, 3], have received much attention for solving large-scale structured optimization problems. A typical situation supposes that a firm needs to determine an allocation of resources across units, where individual units best understand their needs but the firm must impose global resource constraints. In the Dantzig-Wolfe decomposition, prices are published over a sequence of rounds, with units responding with preferred allocations. This information is aggregated in the center, which eventually announces an optimal global solution. These approaches are a very natural fit with the information-revelation principle:

Example 5 Adopt a decentralized optimization algorithm, such as Dantzig-Wolfe, and use it to compute the solution to $\{E_N, E_{N-1}, \ldots\}$. Ensure consistency, such that revealed preferences are consistent across rounds (this also ensures convergence). All responses from agents in Dantzig-Wolfe are information-revelation actions, and as such this provides an (ex post) faithful implementation of the VCG outcome.
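A minimal single-item ascending (clock) auction also illustrates the principle: the only actions are yes/no answers to "are you still in at price p?", and consistency checking rejects any agent that re-enters after dropping out. The Python sketch below is ours and is far simpler than the Mishra-Parkes combinatorial auctions; straightforward bidding is the intended strategy, the center simulates agents' answers via `bid_fn`, and the outcome is approximately the second-price (Vickrey) outcome:

```python
def run_clock_auction(bid_fn, values, increment=1):
    """bid_fn(i, value, price, is_active) -> bool; the intended strategy answers 'value >= price'.
    Consistency check: once an agent drops out it may not bid again."""
    price, active, dropped = 0, set(values), set()
    while len(active) > 1:
        price += increment
        still_in = set()
        for i in values:
            wants_in = bid_fn(i, values[i], price, i in active)
            if wants_in and i in dropped:
                raise ValueError(f"agent {i} violated consistency (re-entered after dropping out)")
            if wants_in and i in active:
                still_in.add(i)
        dropped |= active - still_in
        if not still_in:                          # all remaining agents quit at once: arbitrary tie-break
            still_in = {sorted(active)[0]}
        active = still_in
    (winner,) = active
    return winner, max(price - increment, 0)       # roughly the price at which the last rival dropped

values = {"a": 7, "b": 5, "c": 3}
print(run_clock_auction(lambda i, v, p, alive: alive and v >= p, values))  # ('a', 5)
```

The center never learns exact values, only the round at which each agent stops saying "yes", which is the sense in which all actions here are information-revelation actions.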

8. Redundancy Principle

The redundancy principle is another very general observation, in no way limited to distributed implementations of efficient choices. Rather, it applies to the distributed implementation of any strategyproof social-choice function in which the computation can be usefully chunked into a sequence of steps, with each step given to two or more agents. Consider the following chunk, duplicate and punish algorithmic paradigm:

(1) Agents report types $\hat{\theta} = (\hat{\theta}_1, \ldots, \hat{\theta}_N)$.
(2) Partition the distributed computation into chunked steps $s^{m1}, s^{m2}, \ldots, s^{mT}$.
(3) Give each chunked step to 2 or more agents, providing the necessary inputs to allow the computation to be performed.
(4) The center steps in and repeats the calculation if the responses differ, punishing one (or both) agents when a response differs from that in the intended algorithm. Punishment can be by removing the agent from the system for some period of time or some other punitive sanction, such as imposing a fine.

Note that the center is assumed to have the computational resources to perform a check when agents respond with two different answers (footnote 6).

Footnote 6: This prevents an agent from threatening another agent, which would happen with a simpler scheme that punished both agents under any disagreement. We can also do this checking even when there is agreement, with some small probability, to handle the remaining issue of multiple equilibria. However, as we have already argued, we think software acts as a useful correlating device from this perspective.

Theorem 4 (Redundancy Principle) Distributed mechanism $dM = (g, \Sigma, s^m)$ constructed with a chunk, duplicate and punish scheme is an (ex post) faithful implementation when social-choice function $f(\theta) = g(s^m(\theta))$ is strategyproof.

Proof: Consider agent $i$, and fix the strategy $s^m_{-i}$ of the other agents. First, whatever the information-revelation actions, agent $i$ should choose a strategy that is faithful to the intended computational strategy, because any deviation will lead to a penalty that by assumption exceeds any potential benefit. Then, we can assume w.l.o.g. that agent $i$ follows the intended computational strategy, and appeal to the information-revelation principle and the strategyproofness of $f(\theta) = g(s^m(\theta))$, because the only remaining actions are information-revelation actions.

Example 6 [Pair-wise Chunking] Collect reported types $\hat{\theta}$, and then ask any two agents to solve $E_N$, any two agents to solve $E_{N-1}$, and so on, for every agent. If the choices reported back for any problem differ, then the center can step in, determine the correct answer, and punish. Notice that this simple distributed implementation works even if agent 1 is asked to solve $V_{N-1}$.

Example 7 [Systematic Search] A more intricate example is to consider a distributed version of a systematic search algorithm, in which the center structures a search tree and allocates pairs of agents to conduct the search under nodes. For example, state-of-the-art winner-determination algorithms for CAs use branch-on-bids coupled with LP-based heuristics to determine optimal allocations [11]. Such a search could be structured to ask agents 2 and 3 to "continue to follow algorithm A and search under a particular node for 20 steps and then report back the new search tree," and so on.
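A chunk, duplicate and punish scheme in the spirit of Example 6 can be sketched as follows. The names are ours and hypothetical: `solve_fn` stands in for whatever computation an agent is asked to perform, `center_solve_fn` for the center's own check on disagreement, and `punish_fn` for the sanction:

```python
import random

def chunk_duplicate_punish(problems, agents, solve_fn, center_solve_fn, punish_fn):
    """Assign each chunked problem to two agents; on disagreement the center recomputes and punishes."""
    solutions = {}
    for problem in problems:
        a, b = random.sample(agents, 2)
        answer_a, answer_b = solve_fn(problem, a), solve_fn(problem, b)
        if answer_a == answer_b:
            solutions[problem] = answer_a
        else:
            correct = center_solve_fn(problem)            # center steps in only on disagreement
            solutions[problem] = correct
            for agent, answer in ((a, answer_a), (b, answer_b)):
                if answer != correct:
                    punish_fn(agent)                      # e.g. a fine, or temporary exclusion
    return solutions

# Toy usage: "problems" are lists to be maximized; agent 2 shirks and always answers 0.
problems = {"EN": [4, 9, 6], "EN-1": [5, 6], "EN-2": [7, 2]}
agent_answer = lambda p, agent: 0 if agent == 2 else max(problems[p])
print(chunk_duplicate_punish(list(problems), [1, 2, 3], agent_answer,
                             lambda p: max(problems[p]),
                             lambda agent: print("punishing agent", agent)))
```

Punishment here is just a callback; in the mechanism it would need to be a sanction large enough to outweigh any benefit the agent could obtain by corrupting the computation.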

9. Discussion
There are many outstanding issues and interesting directions:

Costly Computation. On one hand, we assume that computation is costly (else why would we want to distribute it across agents?), but on the other hand, we assume that computation is free (else why would an agent happily perform a computation for the center when it is indifferent about the result of the computation?). This is a tricky place to be! Future work should strive to explicitly consider an agent's computational cost within implementation.

Restricted Communication Networks. The model in this paper assumes that an agent can send a message to the center without interference from another agent. What are the implications of restricted communication networks, for example multi-agent systems in which messages can only be sent peer-to-peer [23, 29]?

Self-Enforcing Outcomes. Can we find ways to relax the assumption that the center can enforce an outcome? This has been considered in an interdomain routing setting [29], in which an agent's neighbors know the outcome (and the prescribed actions) and are able to monitor the agent's actions and report deviations to the center.

Specific Instantiations. It will be interesting to build out specific instantiations of the stylized examples provided in this paper, in an effort to begin to understand the computational effectiveness of distributed implementations of incentive-compatible mechanisms.

10. Conclusions
In addressing the implicit centralization of mechanism design theory, we have described three general principles to guide the development of faithful distributed implementations, in which self-interested agents choose to perform the computation and help the center to determine an appropriate outcome. We hope this work will start an interesting conversation between researchers in DAI familiar with methods for solving distributed problems with cooperative agents and researchers familiar with methods for handling agent self-interest through centralized techniques from mechanism design. The goal should be distributed implementations with good computational properties and good incentive properties.

Acknowledgments
Useful comments and suggestions were received during presentations of earlier versions of this work at EC'03, Stanford, MIT, and AAMAS'03. This work is supported in part by NSF grant IIS-0238147.

References
[1] S. Baliga. Implementation in economic environments with incomplete information: The use of multi-stage games. Games and Economic Behavior, 27:173-183, 1999.
[2] J. S. Banks, J. O. Ledyard, and D. Porter. Allocating uncertain and unresponsive resources: An experimental approach. The Rand Journal of Economics, 20:1-25, 1989.
[3] S. Bradley, A. Hax, and T. Magnanti. Applied Mathematical Programming. Addison-Wesley, 1977.
[4] R. Brafman and M. Tennenholtz. Efficient learning equilibrium. Artificial Intelligence, 2004. To appear.
[5] P. J. Brewer. Decentralized computation procurement and computational robustness in a smart market. Economic Theory, 13:41-92, 1999.
[6] S. Brusco. Implementing action profiles with sequential mechanisms. Review of Economic Design, 3:271-300, 1998.
[7] N. Carver. A revisionist view of blackboard systems. In Proc. 1997 Midwest Artificial Intelligence and Cognitive Science Conference, May 1997.
[8] S. H. Clearwater, B. A. Huberman, and T. Hogg. Cooperative solution of constraint satisfaction problems. Science, 254:1181-1183, 1991.
[9] D. D. Corkill, K. Q. Gallagher, and P. M. Johnson. Achieving flexibility, efficiency and generality in blackboard architectures. In Proc. 6th National Conference on Artificial Intelligence (AAAI), pages 18-23, 1987.
[10] R. K. Dash, N. R. Jennings, and D. C. Parkes. Computational mechanism design: A call to arms. IEEE Intelligent Systems, pages 40-47, November 2003. Special issue on Agents and Markets.
[11] S. de Vries and R. V. Vohra. Combinatorial auctions: A survey. INFORMS Journal on Computing, 15(3):284-309, 2003.
[12] E. Ephrati and J. S. Rosenschein. The Clarke tax as a consensus mechanism among automated agents. In Proc. 9th National Conference on Artificial Intelligence (AAAI-91), pages 173-178, July 1991.
[13] J. Feigenbaum, C. Papadimitriou, R. Sami, and S. Shenker. A BGP-based mechanism for lowest-cost routing. In Proceedings of the 2002 ACM Symposium on Principles of Distributed Computing, pages 173-182, 2002.
[14] J. Feigenbaum and S. Shenker. Distributed algorithmic mechanism design: Recent results and future directions. In Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, pages 1-13, 2002.
[15] A. M. Geoffrion. Elements of large-scale mathematical programming. Management Science, 16(11):652-691, 1970.
[16] L. Hunsberger and B. J. Grosz. A combinatorial auction for collaborative planning. In Proc. 4th International Conference on MultiAgent Systems (ICMAS-00), pages 151-158, 2000.
[17] M. O. Jackson. A crash course in implementation theory. Technical Report SSWP 1076, California Institute of Technology, 1999.
[18] M. O. Jackson. Mechanism theory. In The Encyclopedia of Life Support Systems. EOLSS Publishers, 2000.
[19] V. Lesser. Cooperative multiagent systems: A personal view of the state of the art. IEEE Transactions on Knowledge and Data Engineering, 11(1), 1999.
[20] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, 1995.
[21] D. Mishra and D. C. Parkes. Ascending price Vickrey auctions using primal-dual algorithms. Technical report, Harvard University, 2004.
[22] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. Solving distributed constraint optimization problems optimally, efficiently, and asynchronously. Artificial Intelligence Journal, 2004. Forthcoming.
[23] D. Monderer and M. Tennenholtz. Distributed games: From mechanisms to protocols. In Proc. 16th National Conference on Artificial Intelligence (AAAI-99), pages 32-37, July 1999.
[24] J. Moore and R. Repullo. Subgame perfect implementation. Econometrica, 56(5):1191-1220, September 1988.
[25] D. C. Parkes. Iterative Combinatorial Auctions: Achieving Economic and Computational Efficiency. PhD thesis, University of Pennsylvania, May 2001.
[26] D. C. Parkes and S. Singh. An MDP-based approach to Online Mechanism Design. In Proc. 17th Annual Conference on Neural Information Processing Systems (NIPS-03), 2003.
[27] J. S. Rosenschein and G. Zlotkin. Designing conventions for automated negotiation. AI Magazine, Fall 1994.
[28] T. Sandholm. Agents in electronic commerce: Component technologies for automated negotiation and coalition formation. Autonomous Agents and Multi-Agent Systems, 3(1):73-96, 2000.
[29] J. Shneidman and D. C. Parkes. Overcoming rational manipulation in mechanism implementations. Technical report, Harvard University, 2003. Submitted for publication.
[30] J. Shneidman and D. C. Parkes. Using redundancy to improve robustness of distributed mechanism implementations. In Fourth ACM Conference on Electronic Commerce (EC'03), 2003. Poster paper.
[31] Y. Shoham and M. Tennenholtz. On rational computability and communication complexity. Games and Economic Behavior, 35:197-211, 2001.
[32] Y. Shoham and M. Tennenholtz. Non-cooperative computation: Boolean functions with correctness and exclusivity. Theoretical Computer Science, 2004. To appear.
[33] R. Smorodinsky and M. Tennenholtz. Overcoming free riding in multi-party computations: The anonymous case. Technical report, Technion, 2003.
[34] M. P. Wellman, W. E. Walsh, P. R. Wurman, and J. K. MacKie-Mason. Auction protocols for decentralized scheduling. Games and Economic Behavior, 35:271-303, 2001.
