Abstract

Creative tasks such as ideation or question proposal are powerful applications of crowdsourcing, yet the number of workers available for addressing practical problems is often insufficient. Enabling scalable crowdsourcing thus requires extracting as much efficiency and information as possible from the workers who are available. One option for text-focused tasks is to allow assistive technology, such as an autocompletion user interface (AUI), to help workers input text responses. But evidence for the efficacy of AUIs is mixed. Here we designed and conducted a randomized experiment in which workers were asked to provide short text responses to given questions. Our experimental goal was to determine whether an AUI helps workers respond more quickly and with improved consistency by mitigating typos and misspellings. Surprisingly, we found that neither occurred: workers assigned to the AUI treatment were slower than those assigned to the non-AUI control, and their responses were more diverse, not less, than those of the control. Both the lexical and semantic diversities of responses were higher, with the latter measured using word2vec. A crowdsourcer interested in worker speed may want to avoid using an AUI, but an AUI's boost to response diversity may be valuable to crowdsourcers who want to receive as much novel information from workers as possible.
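
The abstract reports that semantic diversity was measured with word2vec. As a rough illustration only, the sketch below scores a set of responses by the mean pairwise cosine distance between word2vec-based response embeddings; the pretrained model, the tokenization, and the diversity statistic itself are assumptions here, since the paper's exact procedure is not given on this page.

```python
# Illustrative sketch (not the paper's exact procedure): score the semantic
# diversity of a set of short text responses as the mean pairwise cosine
# distance between word2vec-based response embeddings.
from itertools import combinations
from typing import Optional

import numpy as np
import gensim.downloader as api

# Assumption: a standard pretrained word2vec model; the paper does not
# specify which vectors were used.
model = api.load("word2vec-google-news-300")

def embed(response: str) -> Optional[np.ndarray]:
    """Embed a response as the mean of its in-vocabulary word vectors."""
    vecs = [model[w] for w in response.lower().split() if w in model.key_to_index]
    return np.mean(vecs, axis=0) if vecs else None

def semantic_diversity(responses: list[str]) -> float:
    """Mean pairwise cosine distance among response embeddings."""
    embs = [e for e in map(embed, responses) if e is not None]
    dists = [
        1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for a, b in combinations(embs, 2)
    ]
    return float(np.mean(dists))

# A more semantically diverse set of responses yields a larger score.
print(semantic_diversity(["ban plastic bags", "prohibit plastic bags"]))
print(semantic_diversity(["ban plastic bags", "plant more trees"]))
```

Under this kind of measure, a treatment whose responses spread farther apart in embedding space would register as more semantically diverse, which is the sense in which the AUI group's responses were "more diverse" above.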

Highlights

  • Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling (Welinder and Perona, 2010) all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation (Little et al., 2010)

  • We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient

  • It seems reasonable that crowdsourcers may want to use an AUI when building a crowdsourcing interface for a text-oriented task, especially since the popularity of AUIs makes it likely most crowd workers will understand their use


Introduction

Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling (Welinder and Perona, 2010) all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation (Little et al., 2010). Scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is thus one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers, and how to help workers be as efficient as possible. Efficiency gains can be achieved either collectively, at the level of the entire crowd, or by helping individual workers. Collectively, efficiency can be gained by assigning tasks to workers in the best order (Tran-Thanh et al., 2013), by filtering out poor tasks or workers, or by incentivizing workers effectively (Allahbakhsh et al., 2013). At the individual worker level, efficiency gains can come from assistive technology, such as an autocompletion user interface (AUI), that helps workers input text responses.
