Abstract

Self-correction for crowdsourced tasks is a two-stage setting that allows a crowd worker to review the task results of other workers; the worker is then given a chance to update their results according to the review. Self-correction was proposed as a complementary approach to statistical algorithms, in which workers independently perform the same task. It can provide higher-quality results at low additional cost. However, its effects have thus far only been demonstrated in simulations, and empirical evaluations are required. In addition, because self-correction provides feedback to workers, an interesting question arises: is perceptual learning observed in self-correction tasks? This paper reports our experimental results on self-correction with a real-world crowdsourcing service. We found that: (1) self-correction is effective in making workers reconsider their judgments; (2) self-correction is more effective if workers are shown the task results of higher-quality workers during the second stage; (3) a perceptual learning effect is observed in some cases, so self-correction can provide feedback that shows workers how to give high-quality answers in future tasks; and (4) a perceptual learning effect is observed particularly with workers who moderately change their answers in the second stage, which suggests that we can measure the learning potential of workers. These findings imply that requesters and crowdsourcing services can construct a positive loop for improved task results through the self-correction approach. However, (5) no long-term effects of the self-correction task transferred to other similar tasks in two different settings.

Highlights

  • Ensuring the quality of obtained data is a primary problem in crowdsourcing; numerous studies have attempted to improve the quality of task result data

  • A perceptual learning effect is observed particularly with workers who moderately change their answers in the second stage, suggesting that we can estimate the learning potential of workers. These findings imply that requesters and crowdsourcing services can construct a positive loop for improved task results through the self-correction approach

  • We show that, with self-correction, high-quality workers show more evident improvements in the quality of their task results in the second assignment


Introduction

Ensuring the quality of obtained data is a primary problem in crowdsourcing; numerous studies have attempted to improve the quality of task result data. For categorization/labeling tasks, which are considered to account for a large portion of microtasks on crowdsourcing services such as Amazon Mechanical Turk, three approaches are commonly used. On Amazon Mechanical Turk, most requesters attempt to recruit workers with high approval ratings or category masters selected by the platform. Shah and Zhou proposed a two-stage setting for crowdsourced tasks, named self-correction, which shows each worker the task results of other workers after their own results are submitted and then allows the worker to update their results (Shah and Zhou, 2016). Self-correction can be incorporated into crowdsourcing tasks performed on commercial crowdsourcing services as an external task
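
The following is a minimal Python sketch of this two-stage workflow, intended only as an illustration of the setting described above; the class and function names (Answer, first_stage, self_correction_stage) and the decision functions are our own assumptions, not an API from the paper or from any crowdsourcing platform.

```python
# Minimal sketch of the two-stage self-correction workflow (illustrative only).
from dataclasses import dataclass


@dataclass
class Answer:
    worker_id: str
    label: str          # the worker's chosen category label
    revised: bool = False


def first_stage(worker_id: str, item: str, choose_label) -> Answer:
    """Stage 1: the worker labels the item independently."""
    return Answer(worker_id=worker_id, label=choose_label(item))


def self_correction_stage(own: Answer, reference: Answer, reconsider) -> Answer:
    """Stage 2: the worker reviews a reference answer from another worker
    and may keep or update their own label."""
    new_label = reconsider(own.label, reference.label)
    if new_label != own.label:
        return Answer(own.worker_id, new_label, revised=True)
    return own


if __name__ == "__main__":
    # Example usage with trivial, hypothetical decision functions.
    ann_a = first_stage("worker_A", "image_001", choose_label=lambda item: "cat")
    ann_b = first_stage("worker_B", "image_001", choose_label=lambda item: "dog")
    # Worker A reviews worker B's answer and decides whether to update.
    final_a = self_correction_stage(ann_a, ann_b,
                                    reconsider=lambda mine, theirs: theirs)
    print(final_a)  # Answer(worker_id='worker_A', label='dog', revised=True)
```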
