Abstract

Crowd workers struggle to earn adequate wages. Given the limited task-related information provided on crowd platforms, workers often cannot estimate how long a given microtask will take to complete. Although a few third-party tools and online communities provide estimates of working times, such information is limited to microtasks that other workers have previously completed, and such tasks are usually claimed immediately by experienced workers. This paper presents a computational technique for predicting microtask working times (i.e., how much time it takes to complete a microtask) based on workers' past experience with similar tasks. Two challenges were addressed in developing the proposed predictive model: (i) collecting sufficient training data labeled with accurate working times, and (ii) evaluating and optimizing the prediction model. The paper first describes how 7,303 microtask submission records were collected using a web browser extension, installed by 83 Amazon Mechanical Turk (AMT) workers, designed to characterize the diversity of worker behavior and thereby record working times accurately. It then describes the challenges encountered in defining evaluation and objective functions based on the tolerance workers exhibit toward prediction errors. To this end, surveys were conducted on AMT asking workers how they felt about prediction errors in microtask working times simulated by an "imaginary" AI system. Based on 91,060 survey responses submitted by 875 workers, objective and evaluation functions were derived for the prediction model to reflect whether workers would tolerate the resulting prediction errors.
Evaluation results based on worker perceptions of prediction errors revealed that the proposed model predicted worker-tolerable working times in 73.6% of the tested microtask cases. Moreover, the derived objective function contributed to accurate predictions across microtasks of more diverse durations.
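The abstract mentions that a browser extension recorded workers' working times while accounting for diverse worker behavior. One common way to measure active working time, sketched below, is to sum the intervals during which the task page is in focus; note that this is only an illustrative definition, not the paper's actual measurement logic, and the `events` format is a hypothetical one.

```python
def active_working_time(events):
    """Sum the intervals during which the task page was in focus.

    `events` is an assumed format: a time-ordered list of
    (timestamp_seconds, kind) pairs with kind in {"focus", "blur"}.
    The paper's extension may handle idle periods and multi-tab
    behavior differently; this is a minimal sketch of the idea.
    """
    total = 0.0
    focus_start = None
    for t, kind in events:
        if kind == "focus" and focus_start is None:
            focus_start = t          # task page gained focus
        elif kind == "blur" and focus_start is not None:
            total += t - focus_start  # close out the focused interval
            focus_start = None
    return total
```

For example, a worker who focuses the task for 30 s, switches away, and returns for another 30 s would be credited with 60 s of active working time rather than the full wall-clock span.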

Highlights

  • Crowd workers often struggle to earn appropriate wages (Irani and Silberman, 2013; McInnis et al., 2016; Ipeirotis, 2010)

  • Prediction accuracy by method: the evaluation results show that the CrowdSense approach enables assessment of both the overall accuracy, based on workers' subjective perception of prediction errors, and the best prediction score

  • Collection of real-world data is difficult for the following reasons: (i) experimenters cannot control which pairs of predicted/actual working times are shown to workers, which makes it difficult to sample enough data for each pair; (ii) a worker would need to use some working-time prediction system to see a predicted time and then complete the microtask to record the actual time, which is too much effort for collecting a single sample; and (iii) other factors, such as requester preferences or microtask content, add noise or bias to the data, making CrowdSense less generalizable


Summary

INTRODUCTION

Crowd workers often struggle to earn appropriate wages (Irani and Silberman, 2013; McInnis et al., 2016; Ipeirotis, 2010). There exist online platforms and worker tools proposed by researchers (Callison-Burch, 2014; Hanrahan et al., 2015) that leverage working records collected from users to calculate working times, thereby suggesting which microtasks are likely the most lucrative. However, such working-time calculation is not possible when no worker has previously completed the microtask. In this study, 91,060 data samples were collected from 875 unique workers to build CrowdSense, a set of evaluation results capturing workers' perception of whether they would accept the prediction error between the predicted and actual working times for a hypothetical microtask.
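The evaluation described above judges a prediction by whether workers would tolerate its error rather than by raw numerical deviation. A minimal sketch of such a tolerance-based metric is shown below; the relative thresholds (`underestimate_jnd`, `overestimate_jnd`) are illustrative placeholders, not the values derived from the CrowdSense survey data, and the asymmetry between under- and overestimates is an assumption for the sake of the example.

```python
def within_tolerance(predicted_s, actual_s,
                     underestimate_jnd=0.2, overestimate_jnd=0.4):
    """Hypothetical check of whether a prediction error would be tolerated.

    Thresholds are relative (just-noticeable-difference style) and purely
    illustrative. Underestimates (predicted shorter than actual) are assumed
    to be tolerated less, since the task then takes longer than the worker
    was led to expect.
    """
    ratio = predicted_s / actual_s
    if ratio < 1:  # prediction shorter than the actual working time
        return (1 - ratio) <= underestimate_jnd
    return (ratio - 1) <= overestimate_jnd


def tolerable_accuracy(pairs):
    """Fraction of (predicted, actual) pairs whose error falls within the
    assumed worker tolerance."""
    hits = sum(within_tolerance(p, a) for p, a in pairs)
    return hits / len(pairs)
```

Under this kind of metric, a model is rewarded for staying inside the worker-perceived tolerance band around each actual working time, rather than for minimizing a symmetric squared error.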

RELATED WORK
Unfair Pay in Crowd Markets
Estimating Working Time of Microtasks
Subjective Perception Measurement
TRAINING DATA COLLECTION
Defining and Measuring Working Time
Data Collection with a Web Browser Script
Data Description
TRAINING AND EVALUATION OF PROPOSED WORKING TIME PREDICTION MODEL
Strategy For Estimating JNDs
Microtask Survey Design
Defining An Evaluation Function Based on Collected Results
Defining An Objective Function
EXPERIMENT
Settings
Objective
Results
LIMITATIONS AND FUTURE WORK
CONCLUSIONS
