Abstract

The increasing popularity of crowdsourcing markets has made it feasible to crowdsource classification tasks. How to perform quality control in such applications, so as to obtain accurate classification results from noisy workers, is an important and challenging problem that has drawn broad research interest. However, most existing work does not exploit the label acquisition phase, and is therefore unable to allocate the labeling budget properly. Moreover, some approaches impractically assume direct control over individual workers, which common crowdsourcing platforms such as AMT and CrowdFlower do not support. To overcome these drawbacks, in this paper we devise a Dynamic Label Acquisition and Answer Aggregation (DLTA) framework for crowdsourcing classification tasks. The framework proceeds in a sequence of rounds, adaptively interleaving label inference and label acquisition. In each round, it analyzes the answers collected in previous rounds to allocate the budget, and then issues the resulting queries to the crowd. To support DLTA, we propose a generative model for the collected labels, together with corresponding strategies for label inference and budget allocation. Experimental results show that, compared with existing methods, DLTA obtains competitive accuracy in the binary case; moreover, an extended version that plugs in the state-of-the-art inference technique achieves the highest accuracy.
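The round-based acquire-then-infer loop described above can be sketched in a few lines. The sketch below is illustrative only: it substitutes majority voting for the paper's generative-model inference and a simple vote-margin heuristic for its budget-allocation strategy, and all function and parameter names (`dlta_loop`, `budget_per_round`, `query_worker`) are assumptions, not the authors' API.

```python
from collections import Counter

def infer_labels(answers):
    """Majority vote over collected answers (stand-in for generative-model inference)."""
    return {item: Counter(labs).most_common(1)[0][0]
            for item, labs in answers.items() if labs}

def most_uncertain(answers, items, k):
    """Pick the k items whose current vote margin is smallest (stand-in for
    the paper's budget-allocation strategy)."""
    def margin(item):
        counts = Counter(answers[item]).most_common()
        if not counts:
            return 0  # unlabeled items are maximally uncertain
        second = counts[1][1] if len(counts) > 1 else 0
        return counts[0][1] - second
    return sorted(items, key=margin)[:k]

def dlta_loop(items, query_worker, rounds, budget_per_round):
    """Adaptive loop: in each round, analyze previous answers, allocate the
    round's budget to the most uncertain items, and query the crowd."""
    answers = {item: [] for item in items}
    for _ in range(rounds):
        for item in most_uncertain(answers, items, budget_per_round):
            answers[item].append(query_worker(item))  # crowd query
    return infer_labels(answers)
```

In practice `query_worker` would post a HIT to a platform such as AMT; here it is any callable mapping an item to a worker's noisy label.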
