Abstract

In a multi-party machine learning system, different parties cooperate to obtain better models by sharing data in a privacy-preserving way. A major challenge in such systems is the incentive issue: for example, if there is competition among the parties, a party may strategically withhold its data to prevent the other parties from obtaining better models. In this paper, we study this problem through the lens of mechanism design and incorporate two features of multi-party learning into our setting. First, each agent's valuation has externalities that depend on the other agents' types and actions. Second, each agent can only misreport a type lower than his true type, but not the other way around. We provide the optimal truthful mechanism in the separable-utility setting, as well as necessary and sufficient conditions for truthful mechanisms in the general case. Finally, we propose an algorithm for finding a desirable mechanism that is truthful, individually rational, efficient, and weakly budget-balanced, and we analyze its computational complexity.
