In microtask crowdsourcing systems such as Amazon Mechanical Turk (AMT) and Appen Figure-Eight, workers often employ task selection strategies, completing sequences of tasks to maximize their earnings. While previous literature has explored the effects of sequences of same-type tasks with varying complexity, little is known about the consequences of performing multiple task types of similar difficulty. This study examines how sequences of three frequently employed task types, namely image classification, text classification, and surveys, affect workers' engagement, accuracy, and perceived workload. In addition, we analyze the influence of workers' personality traits on their task selection strategies. In our study, which involved 558 AMT participants, we found that performing sequences of distinct task types reduced engagement and accuracy on classification tasks and increased perceived task load and worker frustration. The precise order of the tasks, however, did not significantly affect these outcomes. Moreover, we found a weak association between personality traits and workers' task selection strategies. These findings offer valuable insights for designing efficient and inclusive crowdsourcing platforms.