Abstract

Crowdsourcing involves employing a large number of workers, creating HITs (Human Intelligence Tasks), submitting them to a crowdsourcing platform, and providing a monetary reward for each HIT. One of the advantages of crowdsourcing is that the tasks can be highly parallelized, that is, the work is performed by a large number of people in a decentralized setting. The design also offers a means to cross-check the accuracy of the answers by assigning each task to more than one worker and relying on majority consensus, as well as to reward workers according to their performance and productivity. Since each worker is paid per task, the costs can increase significantly, irrespective of the overall accuracy of the results. Thus, an important question that arises when designing such crowdsourcing experiments is whether we can estimate a priori - before launching the experiment - how many workers to employ and how many tasks to assign to each worker when dealing with large numbers of tasks. The main research question we aim to answer is therefore: ‘Can we estimate, a priori, the optimal worker and task assignment to obtain maximum accuracy on all tasks?’. We introduce CrowdED, a two-stage statistical guideline for optimal crowdsourcing experimental design that estimates a priori, via simulations, the worker and task assignment that maximizes accuracy on crowdsourcing tasks. We describe the methodology, evaluate it against real-world experiments, and show that the method performs better than a random selection of parameter values.
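To make the idea concrete, below is a minimal sketch of how such a simulation-based, a-priori estimate could look, assuming a simple model in which each worker answers a task correctly with a fixed probability and the per-task result is decided by majority vote. The function name, parameters (e.g. `worker_reliability`), and the random assignment scheme are illustrative assumptions and are not the actual CrowdED procedure described in the paper.

```python
# Illustrative sketch only: the abstract does not detail CrowdED's procedure.
# All names, parameters, and the majority-vote model below are assumptions.
import random
from collections import Counter

def simulate_accuracy(n_tasks, n_workers, tasks_per_worker,
                      worker_reliability=0.7, n_choices=2, n_runs=200, seed=0):
    """Estimate expected accuracy over all tasks when each task is answered by
    several workers and the majority answer is taken as the final result."""
    rng = random.Random(seed)
    total_correct = 0
    total_tasks = 0
    for _ in range(n_runs):
        # Randomly assign a fixed number of tasks to each worker.
        assignments = [[] for _ in range(n_tasks)]
        for w in range(n_workers):
            for t in rng.sample(range(n_tasks), min(tasks_per_worker, n_tasks)):
                assignments[t].append(w)
        for workers_on_task in assignments:
            total_tasks += 1
            if not workers_on_task:
                continue  # an unanswered task counts as incorrect
            votes = Counter()
            for _w in workers_on_task:
                if rng.random() < worker_reliability:
                    votes["correct"] += 1                     # correct answer
                else:
                    votes[rng.randrange(n_choices - 1)] += 1  # one of the wrong options
            majority, _ = votes.most_common(1)[0]
            total_correct += majority == "correct"
    return total_correct / total_tasks

# Example: compare two candidate designs before spending any reward budget.
for workers, per_worker in [(20, 10), (50, 4)]:
    acc = simulate_accuracy(n_tasks=100, n_workers=workers,
                            tasks_per_worker=per_worker)
    print(f"{workers} workers x {per_worker} tasks each -> est. accuracy {acc:.2f}")
```

Running such a simulation for a grid of candidate (workers, tasks-per-worker) values lets one compare expected accuracy, and hence cost, before launching the experiment, which is the kind of a-priori estimate the abstract refers to.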
