Abstract

In many random assignment problems, the central planner pursues its own policy objective, such as maximizing the matching size or fulfilling minimum quotas. Several practically important policy objectives do not align with agents' preferences and are known to be incompatible with strategy-proofness. This paper demonstrates that such policy objectives can nevertheless be attained using mechanisms that satisfy Bayesian incentive compatibility within a restricted domain of von Neumann-Morgenstern utilities. We establish that a mechanism satisfies Bayesian incentive compatibility in an inverse-bounded-indifference (IBI) domain if and only if it satisfies three axioms: swap monotonicity, lower invariance, and interior upper invariance. We apply this axiomatic characterization to analyze the incentive properties of the constrained random serial dictatorship (CRSD) mechanism, which is designed to generate an individually rational assignment that optimizes the central planner's policy objective function. Since CRSD satisfies these axioms, it is Bayesian incentive compatible within an IBI domain.
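
The abstract does not spell out how CRSD operates, so the following is only a rough, hedged illustration of the general idea behind a constrained serial dictatorship, not the paper's exact definition: agents are ordered uniformly at random, and each agent in turn takes their most preferred acceptable object whose assignment keeps the planner's policy constraint satisfiable. The names `crsd_sketch`, `prefs`, and `feasible` are hypothetical.

```python
import random


def crsd_sketch(agents, objects, prefs, feasible):
    """Illustrative constrained serial dictatorship (hypothetical sketch).

    agents   : list of agent identifiers
    objects  : list of object identifiers
    prefs    : dict mapping each agent to the objects they find acceptable,
               in decreasing preference order (individual rationality: an
               agent is never assigned an object they did not list)
    feasible : predicate on a partial assignment (dict agent -> object) that
               returns True if it can still be completed while respecting
               the planner's policy constraint
    """
    order = list(agents)
    random.shuffle(order)          # draw a uniformly random serial order
    assignment = {}
    remaining = set(objects)
    for agent in order:
        # The agent takes their most preferred available object whose
        # assignment keeps the planner's constraint satisfiable.
        for obj in prefs[agent]:
            if obj in remaining and feasible({**assignment, agent: obj}):
                assignment[agent] = obj
                remaining.discard(obj)
                break
    return assignment


if __name__ == "__main__":
    # Toy run with a placeholder constraint; a real policy objective
    # (e.g., minimum quotas) would be encoded in `feasible`.
    def feasible(partial):
        return True

    agents = ["i1", "i2", "i3"]
    objects = ["a", "b"]
    prefs = {"i1": ["a", "b"], "i2": ["a"], "i3": ["a", "b"]}
    print(crsd_sketch(agents, objects, prefs, feasible))
```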
