Abstract

Purpose
As the number of service providers (SPs) joining knowledge-intensive crowdsourcing (KI-C) continues to rise, KI-C platforms and consumers face an information overload problem when trying to identify qualified SPs to complete tasks. To this end, this paper proposes a quality of service (QoS) evaluation framework for SPs in KI-C that characterizes the QoS of SPs effectively and comprehensively, thereby aiding the efficient selection of qualified SPs.

Design/methodology/approach
Based on a literature review and discussion with an expert team, a QoS evaluation indicator system for SPs in KI-C is constructed on the foundation of the SERVQUAL model. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is used to obtain the weights of the evaluation indicators. The SPs are then evaluated and graded by the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and the rank–sum ratio (RSR), respectively.

Findings
A QoS evaluation indicator system for SPs in KI-C incorporating 13 indicators based on SERVQUAL has been constructed, and a hybrid methodology combining DEMATEL, TOPSIS and RSR is applied to quantify and visualize the QoS of SPs.

Originality/value
The proposed QoS evaluation framework quantifies and visualizes the QoS of SPs, which can help crowdsourcing platforms implement differentiated management of SPs and assist SPs in improving their shortcomings in a targeted manner. This is the first paper to evaluate SPs in KI-C from the perspective of QoS.
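To make the evaluation step of the abstract concrete, the following is a minimal illustrative sketch of the TOPSIS ranking stage, assuming indicator weights have already been obtained (for example, via DEMATEL). It is not the authors' implementation; the indicator scores, weights and SP names are hypothetical, and all indicators are treated as benefit-type criteria.

# Illustrative TOPSIS ranking of service providers (hypothetical data).
import numpy as np

# Rows: candidate SPs; columns: QoS indicators (benefit-type).
scores = np.array([
    [4.2, 3.8, 4.5],   # SP1
    [3.9, 4.6, 4.0],   # SP2
    [4.8, 4.1, 3.7],   # SP3
])
weights = np.array([0.45, 0.30, 0.25])  # assumed DEMATEL-derived weights, summing to 1

# 1. Vector-normalize each indicator column.
norm = scores / np.sqrt((scores ** 2).sum(axis=0))

# 2. Apply indicator weights.
weighted = norm * weights

# 3. Positive and negative ideal solutions.
ideal_best = weighted.max(axis=0)
ideal_worst = weighted.min(axis=0)

# 4. Euclidean distances to both ideals.
d_best = np.sqrt(((weighted - ideal_best) ** 2).sum(axis=1))
d_worst = np.sqrt(((weighted - ideal_worst) ** 2).sum(axis=1))

# 5. Relative closeness: higher means closer to the ideal SP.
closeness = d_worst / (d_best + d_worst)

for i, c in sorted(enumerate(closeness, start=1), key=lambda t: -t[1]):
    print(f"SP{i}: closeness = {c:.3f}")

The resulting closeness scores give an ordering of SPs; in the paper's framework, RSR is additionally used to place SPs into grades rather than only ranking them.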

