Abstract

Crowdsourcing systems are evolving into a powerful tool of choice for dealing with repetitive or lengthy human-based tasks. Prominent among them is Amazon Mechanical Turk (MTurk), in which Human Intelligence Tasks (HITs) are posted by requesters and then selected and executed by the (human) workers subscribed to the platform. These HITs often serve research purposes. In this context, a very important question is how reliable the results obtained through these platforms are, given the limited control a requester has over the workers' actions. Various control techniques have been proposed, but they are not free of shortcomings, and their use must be accompanied by a deeper understanding of the workers' behavior. In this work, we attempt to interpret the workers' behavior and reliability level in the absence of control techniques. To do so, we perform a series of experiments with 600 distinct MTurk workers, specifically designed to elicit each worker's level of dedication to a task according to the task's nature and difficulty. We show that the time a worker needs to carry out a task correlates with the task's difficulty, and also with the quality of the outcome. We also find that there are different types of workers: while some are willing to invest a significant amount of time to arrive at the correct answer, a significant fraction of workers reply with a wrong answer. For the latter, the difficulty of the task and the very short time they took to reply suggest that they intentionally did not even attempt to solve it.
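As an illustration of the kind of analysis summarized above, the following minimal sketch (not the authors' code, and using made-up placeholder records) shows how one could compute rank correlations between the time spent on a task, the task's difficulty, and the correctness of the answer with Python and SciPy:

# Minimal sketch, assuming hypothetical per-response records of the form
# (difficulty level, seconds spent, correct?). The data are placeholders.
from scipy.stats import spearmanr

responses = [
    (1, 12, True), (1, 8, True), (1, 5, False),
    (2, 30, True), (2, 22, True), (2, 6, False),
    (3, 75, True), (3, 60, False), (3, 9, False),
]

difficulty = [r[0] for r in responses]
seconds = [r[1] for r in responses]
correct = [1 if r[2] else 0 for r in responses]

# Spearman rank correlations: time vs. difficulty, and time vs. correctness.
rho_td, p_td = spearmanr(seconds, difficulty)
rho_tc, p_tc = spearmanr(seconds, correct)
print(f"time vs. difficulty:  rho={rho_td:.2f} (p={p_td:.3f})")
print(f"time vs. correctness: rho={rho_tc:.2f} (p={p_tc:.3f})")

Positive correlations in both cases would be consistent with the observation that workers who spend more time tend to be facing harder tasks and to produce better answers.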

Highlights

  • Crowdsourcing systems are intended to bring together requesters, who have tasks they need to complete, with human workers, who are willing to perform them in exchange for a payment

  • A requester announces a task on the Mechanical Turk (MTurk) platform in the form of a Human Intelligence Task (HIT), together with additional information on the task and the corresponding payment

  • Our aim is to study the crowd of MTurk in an environment free from extra monetary incentives, instructions that might guide the workers’ behavior, a priori control techniques, and HITs that might be familiar to the workers


Introduction

Crowdsourcing systems are intended to bring together requesters, who have tasks they need to complete, with human workers, who are willing to perform them in exchange for a payment. Amazon Mechanical Turk (MTurk) [1] is the leading player in this market, and it is the platform we focus on hereafter. A requester announces a task on the MTurk platform in the form of a Human Intelligence Task (HIT), together with additional information on the task and the corresponding payment.
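To make the requester's side of this workflow concrete, the sketch below shows how a HIT could be created programmatically with the AWS boto3 MTurk client. This is an illustrative example rather than the procedure used in this study; the title, reward, question URL, and sandbox endpoint are placeholder assumptions.

# Minimal sketch of publishing a HIT via the boto3 MTurk client.
# All task-specific values below are illustrative placeholders.
import boto3

# The sandbox endpoint lets a requester test HITs without paying workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers to a task page hosted by the requester;
# the URL is a placeholder.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Answer a short reasoning question",
    Description="Read the statement and select the correct answer.",
    Keywords="survey, reasoning, quick",
    Reward="0.50",                    # payment per assignment, in USD (string)
    MaxAssignments=100,               # number of distinct workers requested
    AssignmentDurationInSeconds=600,  # time a worker has to complete the task
    LifetimeInSeconds=86400,          # how long the HIT stays on the platform
    Question=question_xml,
)
print("HIT id:", response["HIT"]["HITId"])

Workers browsing the platform then see the HIT together with its payment and time limits, may accept and complete it, and the requester subsequently reviews and approves or rejects each submitted assignment.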
