Abstract
We consider the crowdsourcing task of learning the answers to simple multiple-choice microtasks. In order to obtain statistically significant results, one often needs to ask multiple workers to answer the same microtask. A stopping rule is an algorithm that, for a given microtask and any given set of worker answers, decides whether the system should stop and output an answer or iterate and ask one more worker. A quality score for a worker is a score that reflects that worker's historical performance. In this paper we investigate how to devise better stopping rules given such quality scores. We conduct a data analysis on a large-scale industrial crowdsourcing platform, and use the observations from this analysis to design new stopping rules that use the workers' quality scores in a non-trivial manner. We then run a simulation based on a real-world workload, showing that our algorithm outperforms more naive approaches.
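The paper's own stopping rules are not reproduced in this abstract. As an illustration only, the sketch below shows one plausible quality-aware stopping rule, assuming a simple worker model in which a worker with quality score q answers correctly with probability q and otherwise guesses uniformly among the remaining options; the rule stops once the leading answer's posterior probability clears a confidence threshold or a per-task worker budget is spent. The functions `posterior` and `should_stop`, the worker model, and all parameters are hypothetical, not the authors' algorithm.

```python
import math

def posterior(responses, qualities, num_options):
    """Posterior over the true answer under an assumed worker model:
    a worker with quality q is correct with probability q and otherwise
    picks uniformly among the remaining num_options - 1 options."""
    log_post = [0.0] * num_options  # uniform prior over options
    for answer, q in zip(responses, qualities):
        for option in range(num_options):
            p = q if option == answer else (1.0 - q) / (num_options - 1)
            log_post[option] += math.log(max(p, 1e-12))
    # Normalize in a numerically stable way.
    peak = max(log_post)
    weights = [math.exp(lp - peak) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]

def should_stop(responses, qualities, num_options,
                confidence=0.95, max_workers=10):
    """Stopping rule: stop when the most likely answer's posterior
    exceeds `confidence`, or when the worker budget is exhausted.
    Returns (stop?, current best answer)."""
    post = posterior(responses, qualities, num_options)
    best = max(range(num_options), key=post.__getitem__)
    stop = post[best] >= confidence or len(responses) >= max_workers
    return stop, best
```

For example, on a 4-option microtask where the two higher-quality workers agree, the rule already stops after three answers:

```python
# Workers with quality scores 0.9 and 0.7 chose option 2; a 0.6 worker
# chose option 0. The posterior for option 2 is about 0.97 >= 0.95.
stop, guess = should_stop([2, 2, 0], [0.9, 0.7, 0.6], num_options=4)
# stop == True, guess == 2
```

Under this kind of rule, quality scores enter the decision non-trivially: agreement among high-quality workers ends a task early, while answers from low-quality workers shift the posterior only slightly, so the system keeps asking.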