Abstract

Applications that use games to harness human intelligence for various computational tasks are increasing in popularity and may be termed human computation games (HCGs). Most HCGs are collaborative in nature, requiring players to cooperate within a game to score points. Competitive versions, where players work against each other, are a more recent entrant and have been claimed to address shortcomings of collaborative HCGs such as quality of computation. To date, however, little work has examined how different HCG genres influence computational performance and players' perceptions of them. In this paper we study these issues using image tagging HCGs, in which users play games to generate keywords for images. Three versions were created: a collaborative HCG, a competitive HCG, and a control application for manual tagging. The applications were evaluated to uncover the quality of the image tags generated as well as users' perceptions. Results suggest a tension between entertainment and tag quality: while participants reported liking the collaborative and competitive image tagging HCGs over the control application, those using the latter seemed to generate better-quality tags. Implications of the work are discussed.
