Abstract

Crowdsourcing-based software development has emerged as a promising approach and is becoming popular in many domains, as contests draw on a pool of talented developers and allow requesters (or customers) to choose the 'winning' solution that best meets their desired quality levels. However, the lack of a central mechanism for team formation, the lack of continuity in a developer's work across consecutive tasks, and the risk of noise in contest submissions raise quality concerns for requesters considering the adoption of a crowdsourcing-based software development platform. To address these concerns, we propose a measure, Quality of Contest (QoC), that analyzes and predicts the quality of a crowdsourcing-based platform from historical information on its completed tasks, and we evaluate the capacity of QoC to serve as a quality predictor. We then implement a crawler to mine information on completed development tasks from the TopCoder platform of Tech Platform Inc. (TPI) and use it to empirically investigate the proposed measure. The promising results for the QoC measure suggest its applicability to other crowdsourcing-based platforms.
