Abstract

Crowdsourcing has the potential to address key challenges in multimedia research. Multimedia evaluation, annotation, retrieval, and creation can be carried out at low time and monetary cost by leveraging human computation and the contributions of large crowds. The application frontiers of this potential are still being discovered, yet challenges already arise as to how to exploit it carefully. The crowd, as a community of users (workers), is a complex and dynamic system that is highly sensitive to changes in the form and parametrization of its activities. Issues concerning motivation, reliability, and engagement are increasingly documented and need to be addressed. Since 2012, the International ACM Workshop on Crowdsourcing for Multimedia (CrowdMM) has welcomed new insights on the effective deployment of crowdsourcing to advance multimedia research. In its fourth year, CrowdMM 2015 focuses on contributions addressing the key challenges that still hinder widespread adoption of crowdsourcing paradigms in the multimedia research community: identification of optimal crowd members (e.g., user expertise, worker reliability), provision of effective explanations (i.e., good task design), control of noise and quality in the results, design of incentive structures that do not breed cheating, and handling of privacy issues in data collection.
