Abstract

Crowdsourcing systems allocate tasks to a group of workers over the Internet and have become an effective paradigm for human-powered problem solving, such as image classification, optical character recognition, and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of multi-class labeling tasks under a strict budget constraint. We profile the tasks' difficulty levels and the workers' quality, and aggregate the collected labels with a sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, we model the interaction between the workers and the platform as a reverse auction. We show that maximizing the platform utility is intractable in general, and therefore develop an incentive mechanism that determines the winning bids and payments in polynomial time. Moreover, we theoretically prove that our mechanism is truthful, individually rational, and budget feasible. Through extensive simulations, we demonstrate that our mechanism uses the budget efficiently to achieve high platform utility with polynomial computation complexity.
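To make the two key ingredients concrete, the following sketches illustrate them under simplifying assumptions; they are not the paper's actual algorithms. First, a minimal sequential Bayesian label aggregation step, assuming a one-coin worker model in which a worker of quality q answers correctly with probability q and errs uniformly over the remaining classes:

```python
import numpy as np

def bayes_update(prior, worker_quality, observed_label, num_classes):
    """One sequential Bayesian update of the posterior over a task's true label.

    Hypothetical one-coin model: the worker reports the true class with
    probability worker_quality, and each wrong class with probability
    (1 - worker_quality) / (num_classes - 1).
    """
    likelihood = np.full(num_classes, (1.0 - worker_quality) / (num_classes - 1))
    likelihood[observed_label] = worker_quality
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: a 3-class task, uniform prior, labels arriving sequentially
# from two workers of different quality.
K = 3
posterior = np.full(K, 1.0 / K)
posterior = bayes_update(posterior, worker_quality=0.9, observed_label=1, num_classes=K)
posterior = bayes_update(posterior, worker_quality=0.6, observed_label=2, num_classes=K)
print(posterior)  # mass concentrates on class 1, the high-quality worker's label
```

Second, a generic polynomial-time winner selection rule in the spirit of proportional-share budget-feasible mechanisms (Singer, 2010), shown only to indicate how truthful, budget-feasible winner determination can work; the bid ordering, value function, and stopping rule here are illustrative assumptions:

```python
def greedy_budget_feasible(bids, values, budget):
    """Greedy winner selection under a budget (illustrative sketch).

    bids[i]  : worker i's declared cost.
    values[i]: the platform's (assumed additive) utility from worker i's labels.
    A worker is admitted only if their bid does not exceed their
    proportional share of the budget.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i] / values[i])
    winners, total_value = [], 0.0
    for i in order:
        if bids[i] <= values[i] * budget / (total_value + values[i]):
            winners.append(i)
            total_value += values[i]
        else:
            break
    return winners

print(greedy_budget_feasible(bids=[2.0, 3.0, 8.0], values=[5.0, 6.0, 4.0], budget=10.0))
```

In this family of mechanisms, each winner is then paid a threshold price, the largest bid they could have declared and still won, which is what makes truthful bidding a dominant strategy while keeping total payments within the budget.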
