Abstract

In micro-task crowdsourcing markets such as Amazon's Mechanical Turk, obtaining high-quality results within a limited budget is a central challenge. Existing theory and practice of crowdsourcing suggest that uneven task difficulty plays a crucial role in result quality, yet there is no clear method for identifying task difficulty, which hinders effective and efficient execution of micro-task crowdsourcing. This paper explores the notion of task difficulty and its influence on crowdsourcing, and presents a difficulty-based crowdsourcing method that optimizes the crowdsourcing process. We first identify the difficulty of each task using a local estimation method in a real crowdsourcing context, and then propose an optimization method that improves the accuracy of results while reducing the overall cost. A series of experimental studies shows that our difficulty-based crowdsourcing method accurately identifies task difficulty, improves the quality of task results, and significantly reduces cost, demonstrating the effectiveness of task difficulty as a task-modeling property.
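The abstract does not specify the estimation or optimization procedure, but the two-step idea it describes can be illustrated with a minimal sketch. Here, task difficulty is locally estimated as the disagreement among redundant worker answers, and a limited budget of extra assignments is then directed at the hardest tasks; the function names, the disagreement-based estimator, and the greedy allocation are all illustrative assumptions, not necessarily the paper's actual method.

```python
from collections import Counter

def estimate_difficulty(answers):
    """Locally estimate a task's difficulty from redundant worker answers.

    Uses label disagreement as a proxy: 0.0 when all workers agree,
    approaching 1.0 as answers fragment. This is one plausible local
    estimator, assumed for illustration only.
    """
    counts = Counter(answers)
    majority_share = max(counts.values()) / len(answers)
    return 1.0 - majority_share

def allocate_extra_workers(tasks, budget):
    """Spend a limited budget of extra assignments on the hardest tasks,
    so accuracy improves where it is most needed without raising total cost."""
    ranked = sorted(tasks, key=lambda t: estimate_difficulty(tasks[t]), reverse=True)
    return ranked[:budget]

# Pilot answers from three workers per labeling task (hypothetical data).
tasks = {
    "t1": ["cat", "cat", "cat"],   # easy: full agreement
    "t2": ["cat", "dog", "dog"],   # harder: split vote
    "t3": ["cat", "dog", "bird"],  # hardest: no majority
}
print(allocate_extra_workers(tasks, budget=2))  # -> ['t3', 't2']
```

Under this sketch, unanimous tasks receive no further spending, while contested tasks absorb the remaining budget, which mirrors the abstract's claim of improving accuracy while reducing overall cost.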
