Abstract

Question Answering (QA) is a significant and challenging task in Natural Language Processing. QA aims to extract an exact answer from a relevant text snippet or document. QA research is motivated by the needs of users of state-of-the-art search engines, who expect an exact answer rather than a list of documents that probably contain it. In this paper, we consider a particular issue in QA: gathering and scoring answer evidence collected from relevant documents. Evidence is a text snippet in a large corpus that supports the answer. For Evidence Scoring (ES), several efficient features and relations must be extracted for the machine learning algorithm, including various lexical, syntactic, and semantic features. In addition, new structural features are extracted from the dependency structures of the question and the supporting document. Experimental results show that the structural features perform better, and that accuracy increases when they are combined with the other features. To score the evidence for an existing question-answer pair, the Logical Form Answer Candidate Scorer technique is used. Furthermore, an algorithm is designed for learning answer evidence.
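As a rough illustration of the lexical features mentioned above (the function name and the specific overlap measures here are hypothetical sketches, not the paper's actual feature set), a minimal example of comparing a question against an evidence snippet might look like this:

```python
def lexical_overlap_features(question, evidence):
    """Compute simple lexical features between a question and a
    candidate evidence snippet: token overlap ratio and Jaccard
    similarity over lowercased whitespace tokens."""
    q_tokens = set(question.lower().split())
    e_tokens = set(evidence.lower().split())
    common = q_tokens & e_tokens
    union = q_tokens | e_tokens
    # Fraction of question tokens covered by the evidence snippet
    overlap = len(common) / len(q_tokens) if q_tokens else 0.0
    # Jaccard similarity between the two token sets
    jaccard = len(common) / len(union) if union else 0.0
    return {"overlap": overlap, "jaccard": jaccard}

# Example: feature values for one question/evidence pair
features = lexical_overlap_features(
    "Who wrote Hamlet",
    "Hamlet was written by William Shakespeare",
)
```

Features like these would typically be combined with syntactic and dependency-based structural features before being fed to a machine learning scorer.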
