Abstract

In the era of big data, vast amounts of data are created every day. Crowdsourcing systems have recently gained significance as a practical approach to managing and performing big data operations. Crowdsourcing has facilitated tasks that machines cannot adequately solve, such as image labeling, transcription, data validation, and sentiment analysis. However, quality control remains one of the biggest challenges for crowdsourcing. Current crowdsourcing systems apply the same quality control mechanism to different types of tasks. In this paper, we argue that the appropriate quality mechanism varies by task type. We propose a task ontology-based model to identify the most suitable quality mechanism for a given task. The proposed model is enriched with a reputation system that collects requesters' feedback on quality mechanisms. Accordingly, the reputation of each mechanism can be established and used to map tasks to mechanisms. We present the model's framework, its algorithms, and the interaction of its components.
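
The abstract does not spell out the mapping algorithm, but the core idea can be illustrated with a minimal sketch: a task ontology listing candidate quality mechanisms per task type, and a reputation score aggregated from requester feedback that drives the selection. All names, mechanisms, and the averaging rule below are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch only: ontology entries, mechanism names, and the
# mean-rating reputation rule are assumptions, not the paper's algorithm.
from collections import defaultdict

# Hypothetical task ontology: each task type lists candidate quality mechanisms.
TASK_ONTOLOGY = {
    "image_labeling":     ["gold_standard", "majority_voting"],
    "transcription":      ["expert_review", "majority_voting"],
    "sentiment_analysis": ["majority_voting", "gold_standard"],
}

class MechanismReputation:
    """Tracks requester feedback per (task type, mechanism) pair."""

    def __init__(self):
        # (task_type, mechanism) -> list of ratings in [0, 1]
        self._ratings = defaultdict(list)

    def add_feedback(self, task_type: str, mechanism: str, rating: float) -> None:
        """Record a requester's rating of a mechanism's quality on a task type."""
        self._ratings[(task_type, mechanism)].append(rating)

    def reputation(self, task_type: str, mechanism: str) -> float:
        """Mean rating; neutral 0.5 prior when no feedback exists yet."""
        ratings = self._ratings[(task_type, mechanism)]
        return sum(ratings) / len(ratings) if ratings else 0.5

    def select_mechanism(self, task_type: str) -> str:
        """Map a task to the candidate mechanism with the highest reputation."""
        candidates = TASK_ONTOLOGY[task_type]
        return max(candidates, key=lambda m: self.reputation(task_type, m))

rep = MechanismReputation()
rep.add_feedback("image_labeling", "majority_voting", 0.9)
rep.add_feedback("image_labeling", "gold_standard", 0.6)
print(rep.select_mechanism("image_labeling"))  # -> majority_voting
```

Under these assumptions, the reputation system closes the loop: each new piece of requester feedback shifts the per-task-type reputation of a mechanism, which in turn changes which mechanism the ontology-based mapping selects next.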
