Abstract

Macrotasking crowdsourcing systems such as Elance and Fiverr serve as efficient platforms for requesters to outsource challenging and innovative tasks that require special skills to workers. In such systems, requesters commonly reward workers based on their own assessment of solution quality. The challenge is that a requester's assessment may not accurately reflect the intrinsic quality of a solution, due to human factors such as personal preferences or biases. In this work, we address the following questions: How can we design a mechanism that incentivizes workers to provide high-quality solutions in the presence of such human factors? How can we formally study the impact of human factors on workers' financial incentive to participate? We design an incentive mechanism that elicits high-quality contributions and is robust to human factors. It consists of a “task bundling scheme” and a “rating system”, which together reward workers based on requesters' ratings of solution quality. We propose a probabilistic model to capture human factors and quantify their impact on the incentive mechanism. We formulate an optimization framework to select appropriate rating system parameters, which can be viewed as a tradeoff between “system efficiency”, i.e., the total number of tasks that can be solved given a fixed reward budget, and “rating system complexity”, which determines the human cognitive cost and time in expressing ratings. We also formulate an optimization framework to select an appropriate bundle size, which trades off system efficiency against service delay (i.e., the waiting time to form a task bundle). Finally, we conduct experiments on a dataset from Elance. Experimental results show that our incentive mechanism achieves at least 99.95 percent of the theoretical maximum system efficiency with a service delay of at most 2.3639 hours. Furthermore, we find that the rating system in Elance is too complex and should be simplified to a binary rating system (i.e., two rating points).
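To make the bundle-size tradeoff described above more concrete, the following minimal sketch shows one way such a selection could be framed: pick the largest bundle size whose expected formation delay stays under a cap, since larger bundles improve efficiency with diminishing returns. The arrival rate, delay cap, and efficiency curve below are our own illustrative assumptions, not the paper's actual model or parameters.

```python
# Illustrative sketch of a bundle-size selection tradeoff (assumed model,
# not the paper's formulation): maximize an efficiency proxy subject to a
# cap on the expected waiting time to form a task bundle.

import math

ARRIVAL_RATE = 4.0      # assumed task arrivals per hour (hypothetical)
MAX_DELAY_HOURS = 2.5   # assumed tolerable waiting time to form a bundle


def expected_delay(bundle_size: int) -> float:
    """Average wait to collect `bundle_size` tasks at the assumed arrival rate."""
    return bundle_size / ARRIVAL_RATE


def system_efficiency(bundle_size: int) -> float:
    """Assumed efficiency curve: larger bundles help, with diminishing returns."""
    return 1.0 - math.exp(-0.8 * bundle_size)


def best_bundle_size(max_size: int = 20) -> int:
    """Choose the bundle size maximizing efficiency subject to the delay cap."""
    feasible = [k for k in range(1, max_size + 1)
                if expected_delay(k) <= MAX_DELAY_HOURS]
    return max(feasible, key=system_efficiency)


if __name__ == "__main__":
    k = best_bundle_size()
    print(f"bundle size: {k}, "
          f"efficiency: {system_efficiency(k):.4f}, "
          f"delay: {expected_delay(k):.2f} h")
```

Under these assumed numbers the procedure picks the largest bundle size that still meets the delay cap, mirroring the efficiency-versus-delay tradeoff the abstract describes; the paper's framework would replace the toy efficiency curve and delay model with its own derived expressions.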
