Abstract

In mechanism design it is typical to impose incentive compatibility and then derive an optimal mechanism subject to this constraint. By replacing the incentive compatibility requirement with the goal of minimizing expected ex post regret, we are able to adapt statistical machine learning techniques to the design of payment rules. This computational approach to mechanism design is applicable to domains with multi-dimensional types and situations where computational efficiency is a concern. Specifically, given an outcome rule and access to a type distribution, we train a support vector machine with a specific structure imposed on the discriminant function, such that it implicitly learns a corresponding payment rule with desirable incentive properties. We extend the framework to accommodate succinct k-wise dependent valuations, leveraging a connection with maximum a posteriori assignment on Markov networks to enable training to scale up to settings with a large number of items; we evaluate this construction in the case where k = 2. We present applications to multi-parameter combinatorial auctions with approximate winner determination, and the assignment problem with an egalitarian outcome rule. Experimental results demonstrate that the construction produces payment rules with low ex post regret, and that penalizing classification error is effective in preventing failures of ex post individual rationality.
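
To make the flavor of this construction concrete, the following is a minimal sketch rather than the paper's actual training pipeline: on a toy single-item domain with two agents and the efficient outcome rule (highest report wins), an off-the-shelf linear SVM is trained to predict agent 1's allocation, the learned discriminant is rescaled so that the agent's own value enters with coefficient 1, and the remaining, report-independent part is read off as a payment rule whose ex post regret is then measured on held-out profiles. The feature encoding, the use of scikit-learn's LinearSVC, and the post-hoc rescaling (instead of imposing the structure during training) are simplifying assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def g(t1, t2):
    """Outcome rule: 1 if agent 1 wins the item (higher report), else 0."""
    return int(t1 > t2)

# Training data: sampled type profiles labelled by the outcome rule.
T = rng.uniform(0.0, 1.0, size=(2000, 2))
y = np.array([g(t1, t2) for t1, t2 in T])

# Linear discriminant a*t1 + b*t2 + c trained to reproduce g.
clf = LinearSVC(C=10.0, max_iter=20000).fit(T, y)
a, b = clf.coef_[0]
c = float(clf.intercept_[0])

# Impose the structure "own value + learned remainder" by rescaling so that
# agent 1's value for winning (t1) enters with coefficient exactly 1; the
# remainder h_win(t2) is independent of agent 1's own report.
def h_win(t2):
    return (b * t2 + c) / a

def pay_win(t2):
    """Implied payment for winning (losing is normalised to cost 0)."""
    return -h_win(t2)

def regret(t1, t2):
    """Ex post regret of agent 1: best utility over deviations minus truthful utility."""
    u_win, u_lose = t1 - pay_win(t2), 0.0
    truthful = u_win if g(t1, t2) else u_lose
    return max(u_win, u_lose) - truthful

test = rng.uniform(0.0, 1.0, size=(5000, 2))
print("payment for winning against a report of 0.37:", pay_win(0.37))  # roughly 0.37
print("mean ex post regret:", np.mean([regret(t1, t2) for t1, t2 in test]))
```

In this toy domain the learned payment should come out close to the second price, the strategyproof payment for the efficient single-item rule; the domains targeted by the paper are the multi-dimensional ones where no exact strategyproof payment rule is available for the given outcome rule.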

Highlights

  • Mechanism design studies situations where a set of agents each hold private information about their preferences over different outcomes

  • We focus on two situations where strategyproof payment rules are not available: a greedy outcome rule for a multi-minded combinatorial auction in which each agent is interested in a constant number of bundles, and an assignment problem with an egalitarian outcome rule, i.e., an outcome rule that maximizes the minimum value of any agent

  • The barrier to using more data is not the availability of the data itself, but the time required for training, because training time scales quadratically in the size of the training set due to the use of non-linear kernels

  • The payment of an agent under the VCG-based payment rule p_vcg equals the marginal externality the agent imposes on the other agents, relative to the outcome rule in question; a sketch of a greedy outcome rule and the corresponding p_vcg payments follows these highlights
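
To illustrate the last two points, the sketch below implements one plausible greedy outcome rule for a multi-minded combinatorial auction together with the VCG-based payment rule p_vcg computed relative to it, i.e., each agent's marginal externality on the other agents. The greedy criterion (value per item, with each agent winning at most one of its bundles) and the example bids are assumptions chosen for illustration, not necessarily the exact rule evaluated in the paper.

```python
def greedy_outcome(bids):
    """Greedily accept bids in decreasing value-per-item order, skipping
    bids that overlap already-allocated items or come from an agent that
    has already won a bundle (each agent wins at most one of its bundles)."""
    allocated_items, winners = set(), {}
    for agent, bundle, value in sorted(
            bids, key=lambda bid: bid[2] / len(bid[1]), reverse=True):
        if agent in winners or allocated_items & set(bundle):
            continue
        winners[agent] = (frozenset(bundle), value)
        allocated_items |= set(bundle)
    return winners  # agent -> (bundle won, reported value for it)

def others_welfare(winners, i):
    """Sum of reported values of all winners other than agent i."""
    return sum(value for agent, (_, value) in winners.items() if agent != i)

def vcg_based_payment(bids, i):
    """p_vcg relative to the greedy rule: others' welfare when agent i's
    bids are removed, minus others' welfare under the chosen outcome."""
    chosen = greedy_outcome(bids)
    without_i = greedy_outcome([bid for bid in bids if bid[0] != i])
    return others_welfare(without_i, i) - others_welfare(chosen, i)

# Example: three agents, items {1, 2, 3}, each agent bids on two bundles.
bids = [
    (0, (1, 2), 5.0), (0, (3,), 2.0),
    (1, (2,), 3.0),   (1, (2, 3), 4.0),
    (2, (1,), 2.5),   (2, (3,), 2.5),
]
print("greedy allocation:", greedy_outcome(bids))
print("VCG-based payments:", {i: vcg_based_payment(bids, i) for i in range(3)})
```

Because the outcome rule is greedy rather than welfare-maximizing, pairing it with p_vcg is in general not strategyproof, which is exactly the kind of setting where a learned payment rule with low ex post regret is meant to help.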

Summary

Introduction

Mechanism design studies situations where a set of agents each hold private information about their preferences over different outcomes. A mechanism is administered by a center that receives claims about these preferences, selects and enforces an outcome, and optionally collects payments. The classical approach is to impose incentive compatibility, ensuring that agents truthfully report their preferences in strategic equilibrium. Subject to this constraint, the goal is to identify a mechanism, i.e., a way of choosing an outcome and payments based on agents’ reports, that optimizes a given design objective like social welfare, revenue, or some notion of fairness. This approach faces two difficulties. First, it can be analytically cumbersome to derive optimal mechanisms for domains that are “multi-dimensional” in the sense that each agent’s private information is described through more than a single number, and few results are known in this case. Second, incentive compatibility can be costly, in that adopting it as a hard constraint can preclude outcome rules with otherwise desirable properties, such as the computationally efficient or fair outcome rules considered in this work.

