Abstract

Problem definition: This paper studies the role of seekers' problem specification in crowdsourcing contests for design problems.

Academic/practical relevance: Platforms hosting design contests offer detailed guidance for seekers to specify their problems when launching a contest. Yet problem specification in such crowdsourcing contests has been largely overlooked by the theoretical and empirical literature. We aim to fill this gap by offering an empirically validated model that generates insights for the provision of information at contest launch.

Methodology: We develop a game-theoretic model featuring different types of information (categorized as "conceptual objectives" or "execution guidelines") in problem specifications and assess their impact on design processes and submission qualities. Real-world data are used to empirically test hypotheses and policy recommendations generated from the model, and a quasi-natural experiment provides further empirical validation.

Results: We show theoretically and verify empirically that with more conceptual objectives disclosed in the problem specification, the number of participants in a contest eventually decreases; with more execution guidelines in the problem specification, the trial effort provision by each participant increases; and the best solution quality always increases with more execution guidelines but eventually decreases with more conceptual objectives.

Managerial implications: To maximize the best solution quality in crowdsourced design problems, seekers should always provide more execution guidelines and only a moderate number of conceptual objectives.
