Abstract

This paper identifies trends within, and relationships between, the amount of participation and the quality of contributions in three crowdsourced surveys. Participants were asked to perform a collective problem-solving task that lacked any explicit incentive: they were instructed not only to respond to survey questions but also to pose new questions that they thought might, if responded to by others, predict an outcome variable of interest to them. Although the three surveys had very different outcome variables, target audiences, methods of advertisement, and lengths of deployment, we found very similar patterns of collective behavior. In particular, we found that the rate at which participants submitted new survey questions followed a heavy-tailed distribution, that the distribution of the types of questions posed was similar across surveys, and that many users posed non-obvious yet predictive questions. By analyzing responses to questions that contained a built-in range of valid responses, we found that less than 0.2% of responses lay outside those ranges, indicating that most participants tend to respond honestly to surveys of this form, even without explicit incentives for honesty. While we did not find a significant relationship between the quantity of participation and the quality of contribution for either response submissions or question submissions, we did find several more nuanced patterns of participant behavior that did correlate with contribution quality in one of the three surveys. We conclude that there is an optimal time for users to pose questions: early in their participation, but only after they have submitted a few responses to other questions. This suggests that future crowdsourced surveys may attract more predictive questions by prompting users to pose new questions at specific times during their participation and by limiting question submission at non-optimal times.
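The two summary statistics mentioned in the abstract (the fraction of out-of-range responses and the per-user question-submission counts, whose distribution is heavy-tailed) are simple to compute. The paper's own analysis code is not included in this summary, so the following Python sketch is only illustrative: the data layout, field names such as question_id and the (valid_min, valid_max) ranges, and the toy data are assumptions.

```python
# Illustrative sketch only: the data schema below is assumed, not taken from the paper.
from collections import Counter

def out_of_range_fraction(responses, valid_ranges):
    """Fraction of responses falling outside a question's built-in valid range.

    responses    -- iterable of (question_id, value) pairs
    valid_ranges -- dict mapping question_id -> (valid_min, valid_max)
    """
    checked = outside = 0
    for qid, value in responses:
        if qid not in valid_ranges:
            continue  # question has no built-in range; skip it
        lo, hi = valid_ranges[qid]
        checked += 1
        if not (lo <= value <= hi):
            outside += 1
    return outside / checked if checked else 0.0

def questions_per_user(submissions):
    """Count how many new survey questions each participant posed.

    submissions -- iterable of (user_id, question_text) pairs
    Returns per-user counts sorted in descending order; a heavy-tailed
    pattern appears as a few very large counts and many counts of one.
    """
    counts = Counter(user for user, _ in submissions)
    return sorted(counts.values(), reverse=True)

# Toy example:
responses = [("q1", 7), ("q1", 120), ("q2", 3)]
ranges = {"q1": (0, 10), "q2": (1, 5)}
print(out_of_range_fraction(responses, ranges))                          # 0.333... on this toy data
print(questions_per_user([("u1", "Q1?"), ("u1", "Q2?"), ("u2", "Q3?")])) # [2, 1]
```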

Highlights

  • Crowdsourcing [1] holds that a large group of non-experts can each contribute a small amount of effort to solve a problem that would otherwise require a large amount of effort from a smaller, expert group to complete

  • The first deployment that we examine in this paper was a survey designed to crowdsource childhood predictors of adult BMI (Childhood BMI, [26])

  • As users were asked to collectively discover predictive questions, we investigate the predictive power of the top questions in each study (one way to rank questions by predictive power is sketched after this list)

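The last highlight mentions investigating the predictive power of the top questions. The ranking method is not specified in this summary, so the sketch below uses a simple univariate correlation between each question's responses and the outcome variable; the array layout and the question labels in the toy example are hypothetical.

```python
# A minimal sketch of one way to rank crowdsourced questions by predictive power.
# The univariate-correlation approach here is an illustrative assumption, not the paper's method.
import numpy as np

def rank_questions_by_correlation(X, y, question_ids):
    """Rank questions by |Pearson r| between their responses and the outcome.

    X            -- (n_participants, n_questions) array of responses
    y            -- (n_participants,) outcome variable (e.g., adult BMI)
    question_ids -- labels for the columns of X
    """
    scores = []
    for j, qid in enumerate(question_ids):
        r = np.corrcoef(X[:, j], y)[0, 1]
        scores.append((qid, abs(r)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy example: 5 participants, 2 hypothetical questions.
X = np.array([[1.0, 3.0], [2.0, 1.0], [3.0, 4.0], [4.0, 1.0], [5.0, 5.0]])
y = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
print(rank_questions_by_correlation(X, y, ["hours_of_tv", "bedtime"]))
```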

Summary

Introduction

Crowdsourcing [1] holds that a large group of non-experts can each contribute a small amount of effort to solve a problem that would otherwise require a large amount of effort from a smaller, expert group. Amazon’s Mechanical Turk has been used to crowdsource data annotation [5], behavioral research [6], assessment of visualization design [7], human language technologies [8], and audio transcription [9]. Other stand-alone websites have been used to crowdsource everything from mapping the aftermath of the 2010 earthquake in Haiti [10] to predicting protein structures [11]. In each of these examples, some participants participate more than others, and some produce higher-quality contributions than others. We use the results of three crowdsourcing studies to examine the relationship between participation rates and quality of contribution.
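The relationship between participation rates and quality of contribution can be quantified in many ways, and the exact statistic used by the authors is not given in this excerpt. As one hedged illustration, the sketch below computes a Spearman rank correlation between a per-user contribution count and a per-user quality score; the inputs, the helper name, and the toy data are assumptions.

```python
# Illustrative only: correlating how much each user contributed with how good it was.
from scipy.stats import spearmanr

def quantity_quality_correlation(per_user_counts, per_user_quality):
    """Spearman rank correlation between participation quantity and quality.

    per_user_counts  -- list of contribution counts, one entry per user
    per_user_quality -- quality scores in the same order, e.g., the mean
                        predictive power of the questions each user posed
    """
    rho, p_value = spearmanr(per_user_counts, per_user_quality)
    return rho, p_value

# Toy example: three users.
print(quantity_quality_correlation([50, 5, 12], [0.10, 0.30, 0.12]))
```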

