Abstract

Due to the internal challenges of evaluating crowdsourced ideas, organizations increasingly turn to crowds to evaluate ideas in their new product development processes. While the wisdom and the madness of crowds are well documented, less is known about their respective antecedents. Contrary to traditional evaluation processes in new product development, crowdsourcing relies on the self-selection of judges. In this study, we investigate how two major individual-level antecedents, i.e., an individual’s knowledge in the domain of the contest and an individual’s distance to ideas, influence the decision to evaluate ideas and the subsequent quality of these evaluations. We analyze 8,740 evaluation decisions and 701 evaluations of new ideas in a two-phase crowdsourcing contest in the field of open data. By accounting for self-selection, which is at the heart of crowdsourcing, our results reveal that crowd judges self-select onto ideas in ways that lead to suboptimal evaluation quality.


