Abstract

Data crowdsourcing has found a broad range of applications (e.g., environmental monitoring and image classification) by leveraging the “wisdom” of a potentially large crowd of “workers” (e.g., mobile users). A key metric of crowdsourcing is data accuracy, which relies on the quality of the participating workers’ data (e.g., the probability that the data equal the ground truth). However, a worker’s data quality can be its own private information (which the worker learns, e.g., based on its location) that it may have an incentive to misreport, which can, in turn, mislead the crowdsourcing requester about the accuracy of the data. This issue is further complicated by the fact that the worker can also manipulate the effort it makes in the crowdsourcing task and the data it reports to the requester, either of which can also mislead the requester. In this paper, we devise truthful crowdsourcing mechanisms for quality, effort, and data elicitation (QEDE), which incentivize strategic workers to truthfully report their private quality and data to the requester, and to exert the effort desired by the requester. The truthful design of the QEDE mechanisms overcomes the lack of ground truth and the coupling in the joint elicitation of worker quality, effort, and data. Under the QEDE mechanisms, we characterize the socially optimal and the requester’s optimal (RO) task assignments, and analyze their performance. We show that the RO assignment is determined by the largest “virtual quality” rather than the highest raw quality among workers, where a worker’s virtual quality depends on both its quality and the quality’s distribution. We evaluate the QEDE mechanisms using simulations that demonstrate their truthfulness and the performance of the optimal task assignments.
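By analogy with Myerson’s virtual valuation in auction theory, one plausible form of the virtual quality is phi(q) = q - (1 - F(q)) / f(q), where F and f are the CDF and PDF of a worker’s quality distribution; this is a hypothetical sketch for illustration only, and the paper’s exact definition may differ. The short Python snippet below, with assumed worker qualities and uniform quality distributions, shows how an assignment rule based on the largest virtual quality can select a worker whose raw quality is not the highest:

    # Hypothetical Myerson-style virtual quality (not taken from the paper):
    # phi(q) = q - (1 - F(q)) / f(q), with F/f the CDF/PDF of the quality.
    def virtual_quality(q, cdf, pdf):
        return q - (1.0 - cdf(q)) / pdf(q)

    def uniform(lo, hi):
        # Return the (CDF, PDF) pair of a uniform distribution on [lo, hi].
        return (lambda q: (q - lo) / (hi - lo), lambda q: 1.0 / (hi - lo))

    # Assumed example: worker 0 reports quality 0.80 from Uniform[0.7, 0.9];
    # worker 1 reports quality 0.84 from Uniform[0.5, 1.0].
    workers = [(0.80, *uniform(0.7, 0.9)), (0.84, *uniform(0.5, 1.0))]
    phis = [virtual_quality(q, F, f) for (q, F, f) in workers]

    # phis ~ [0.70, 0.68]: the RO-style rule assigns the task to worker 0,
    # even though worker 1 reports the higher raw quality.
    winner = max(range(len(phis)), key=lambda i: phis[i])
    print(phis, "-> assign task to worker", winner)

In this assumed example, worker 0’s tighter quality distribution yields the larger virtual quality (about 0.70 vs. 0.68), illustrating why a virtual-quality-based assignment can diverge from simply picking the highest-quality worker.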
