Abstract

Crowdsourcing-based software development has emerged as a promising approach and become popular in many domains, both because contests attract a talented pool of developers and because requesters (or customers) can choose the 'winning' solution that best meets their desired quality levels. However, the lack of a central mechanism for team formation, the lack of continuity in a developer's work across consecutive tasks, and the risk of noise in contest submissions leave a gap between requesters and the quality concerns they face when adopting a crowdsourcing-based software development platform. To address these concerns and aid requesters, we describe three measures: Quality of Registrant Developers (QRD), Quality of Contest (QC), and Quality of Support (QS), which compute and predict the quality of a crowdsourcing-based platform from historical information on its completed tasks. We evaluate the capacity of QRD, QC, and QS to act as predictors of platform quality. We then implement a crawler to mine information on completed development tasks from the TopCoder platform and use it to inspect the proposed measures. The promising results suggest that requesters and researchers in other domains, such as pharmaceutical research and development, can use the proposed measures to investigate and predict the quality of crowdsourcing-based software development platforms.
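The abstract does not describe the crawler itself. As a minimal sketch of how completed-task metadata might be collected, the Python snippet below queries a public TopCoder challenges endpoint; the URL, query parameters, and response field names (e.g. numOfRegistrants, numOfSubmissions) are assumptions for illustration, not the authors' actual implementation.

```python
import requests

# Hypothetical endpoint: TopCoder exposes a public challenges API, but the
# exact URL, parameters, and response schema used here are assumptions.
API_URL = "https://api.topcoder.com/v5/challenges"

def fetch_completed_tasks(max_pages=5, per_page=50):
    """Collect metadata of completed challenges, one page at a time."""
    tasks = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            API_URL,
            params={"status": "Completed", "page": page, "perPage": per_page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # no more results
            break
        for ch in batch:
            tasks.append({
                "id": ch.get("id"),
                "name": ch.get("name"),
                # Field names below are assumed; adjust to the real schema.
                "registrants": ch.get("numOfRegistrants"),
                "submissions": ch.get("numOfSubmissions"),
            })
    return tasks

if __name__ == "__main__":
    completed = fetch_completed_tasks(max_pages=1)
    print(f"Fetched {len(completed)} completed tasks")
```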
