Abstract
Prediction algorithms are often designed under the assumption that the training data is provided to the algorithm, and that the algorithm has no control over the quality of the training data. In many situations, however, the training data is collected by surveying people, for instance, predicting the future demand for a product by surveying potential customers, or predicting the winner of an election by surveying potential voters. Collecting data from people is much cheaper, easier, and faster today due to the emergence of commercial crowdsourcing platforms such as Amazon Mechanical Turk. In such situations, it is possible to monetarily incentivize the respondents to provide higher-quality inputs. In any realistic setup, the responses obtained from people (“the agents”) are noisy: one cannot expect a naive customer to gauge the sales of a product accurately. Moreover, every individual has different expertise and ability, and will likely react differently to the amount of money paid per task. For example, some people may be active users of the surveyed product and therefore have a better understanding of its anticipated usage. We assume that the surveyor (“the principal”) has no knowledge of the behavior of individual agents. It is therefore important to design an appropriate incentive mechanism for the prediction procedure that exploits the heterogeneity of the agents, motivating them to participate and exert suitable levels of effort. An appropriate incentive will provide higher-quality data and, as a result, superior prediction performance. This requirement motivates the problem considered in this paper, which lies at the interface between statistical estimation and mechanism design. Compared to problems that tackle only one of prediction or mechanism design, the joint design problem poses a significantly greater challenge. From the statistical prediction point of view, the challenge is that every sample is drawn from a different distribution, whose properties are unknown a priori to the principal. From the mechanism design perspective, the challenge is that the incentivization procedure not only needs to ensure that agents report truthfully, but also that each agent exerts an effort level that minimizes the overall prediction error. In this paper, we formulate and optimally solve a “parametric” form of this joint design problem. More specifically, the principal desires to predict a parameter of a known distribution. Each agent is modeled in a parametric fashion, with her work quality (or expertise) governed by a single parameter.
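To make the statistical side of the setting concrete, below is a minimal illustrative sketch (not the paper's mechanism) of estimation from heterogeneous-quality reports: each agent's report of a parameter is corrupted by noise whose variance depends on her base noise level and exerted effort, so every sample comes from a different distribution, and the principal combines the reports with inverse-variance weights. All names and numbers (theta, sigma0, efforts) are hypothetical placeholders for this toy setup.

```python
# Toy sketch: a principal estimates a parameter theta from agents whose report
# quality is governed by a single effort/expertise parameter. Agent i's report
# has variance sigma0_i**2 / e_i, so each sample is drawn from a different
# distribution. Inverse-variance weighting gives the minimum-variance unbiased
# combination of the reports.
import numpy as np

rng = np.random.default_rng(0)

theta = 2.5                                   # parameter the principal wants to predict (unknown to her)
sigma0 = np.array([1.0, 2.0, 0.5, 3.0])       # agents' base noise levels (heterogeneous expertise)
efforts = np.array([1.0, 4.0, 0.5, 9.0])      # effort each agent exerts (higher effort -> less noise)

report_var = sigma0**2 / efforts              # per-agent report variance
reports = theta + rng.normal(0.0, np.sqrt(report_var))

# Weight each report inversely to its variance; this minimizes the variance of
# the combined estimate when the variances are known.
weights = 1.0 / report_var
weights /= weights.sum()
estimate = float(weights @ reports)

estimator_var = 1.0 / np.sum(1.0 / report_var)  # variance of the weighted estimator
print(f"estimate = {estimate:.3f}, estimator variance = {estimator_var:.3f}")
```

In the paper's setting the principal does not know the agents' quality parameters, and the payments themselves must induce the effort levels that make such low-variance estimation possible; the sketch only illustrates why heterogeneous effort translates into heterogeneous sample distributions and prediction error.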