Abstract

Crowdsourcing has found a wide range of applications in Community Question Answering (CQA). However, one of its biggest challenges is ensuring the quality of crowd-contributed answers. Therefore, this work proposed a system that validates answers provided by respondents using respondents' attributes and a crowd ranking technique. Weights were assigned to respondents' answers based on their academic records, experience and understanding of the question in order to obtain valid answers. Thereafter, the valid answers were ranked by the crowd using the Borda Count algorithm. The proposed system was evaluated using usability and user experience (UX) measurements. The results obtained demonstrate the effectiveness of the applied technique.
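
The validation and ranking steps described above can be illustrated with a minimal sketch. The attribute weights, threshold, function names and example data below are assumptions for illustration only, not the paper's actual implementation; they show how respondent attributes might be combined into a validation score and how the Borda Count aggregates crowd rankings of the validated answers.

```python
from collections import defaultdict

# Hypothetical attribute weights; the paper derives weights from the
# respondent's academic records, experience and understanding of the question.
ATTRIBUTE_WEIGHTS = {"academic_record": 0.4, "experience": 0.3, "understanding": 0.3}

def validate_answers(answers, threshold=0.5):
    """Keep answers whose weighted respondent score reaches the threshold.

    `answers` maps an answer ID to the respondent's attribute scores in [0, 1].
    The threshold value is an assumption for illustration.
    """
    valid = []
    for answer_id, attrs in answers.items():
        score = sum(ATTRIBUTE_WEIGHTS[k] * attrs[k] for k in ATTRIBUTE_WEIGHTS)
        if score >= threshold:
            valid.append(answer_id)
    return valid

def borda_rank(ballots, candidates):
    """Rank validated answers with the Borda Count.

    Each ballot lists answer IDs from most to least preferred; with n
    candidates the top choice earns n-1 points, the next n-2, and so on.
    """
    scores = defaultdict(int)
    n = len(candidates)
    for ballot in ballots:
        for position, answer_id in enumerate(ballot):
            scores[answer_id] += (n - 1) - position
    return sorted(candidates, key=lambda a: scores[a], reverse=True)

# Hypothetical example: three candidate answers, two of which pass validation
# and are then ranked by three crowd members.
answers = {
    "A1": {"academic_record": 0.9, "experience": 0.7, "understanding": 0.8},
    "A2": {"academic_record": 0.6, "experience": 0.8, "understanding": 0.7},
    "A3": {"academic_record": 0.2, "experience": 0.3, "understanding": 0.4},
}
valid = validate_answers(answers)              # ['A1', 'A2']
ballots = [["A2", "A1"], ["A1", "A2"], ["A1", "A2"]]
print(borda_rank(ballots, valid))              # ['A1', 'A2']
```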

Highlights

  • The new information era provides readily available access to information, especially with the advent of the internet

  • Several studies have been carried out on how to improve the quality of the answers provided by Question Answering (QA) systems, focusing on textual entailment, question type analysis, answer ranking by crowd workers and domain experts, and the personal and community features of the answerer to determine the quality of the answers (Ríos-Gaona et al., 2012; Su et al., 2007; Ishikawa et al., 2011; Ojokoh & Ayokunle, 2012; Anderson et al., 2012; Schofield & Thielscher, 2019)

  • We leverage the fact that the performance of crowd workers determines the quality of the result of a crowdsourcing task, and the need to develop an effective and reliable question answering system capable of validating and evaluating the answers provided by the crowd, given their varying reliability as established in past works (Hung et al., 2017; Savenkov et al., 2016)

Summary

Introduction

The new information era provides readily available access to information, especially with the advent of the internet. Crowdsourcing, as defined by Howe (2006), is the act of farming out a job ordinarily performed by a designated employee to a large, open-ended group of people, usually in the form of an open call. The performance of these crowd workers largely determines the worth of the result obtained from a task. Several studies have been carried out on how to improve the quality of the answers provided by QA systems, focusing on textual entailment, question type analysis, answer ranking by crowd workers and domain experts, and the personal and community features (past history) of the answerer to determine the quality of the answers (Ríos-Gaona et al., 2012; Su et al., 2007; Ishikawa et al., 2011; Ojokoh & Ayokunle, 2012; Anderson et al., 2012; Schofield & Thielscher, 2019).

Related Works
User Interface
Database
Naïve Bayes Spam Filter
Separate Question from Answer
Criteria for Quality Answers
Weighted Voting System
Crowd Ranking
Experimental Setup
Evaluation
Results and Discussion
How would you rate the attractiveness of the system design?
Conclusion and Future Works