Abstract

We propose a case-based reasoning (CBR) approach to answer validation, i.e., answer scoring and reranking, in question answering (QA) systems, where annotated answer candidates for known questions provide evidence for validating answer candidates for new questions. CBR promises a continuous increase in answer quality, given user feedback that extends the case base. In this paper, we present the complete approach, emphasizing the CBR techniques involved: a structural case base built from annotated MultiNet graphs and corresponding graph similarity measures. We cover a priori relations between new answer candidates and experienced answer candidates for former questions, describe an adequate structuring of the case base, and develop appropriate similarity measures. Finally, we integrate CBR into an existing framework for answer validation and reranking that also includes logical answer validation and shallow linguistic validation, using a learning-to-rank approach for the final answer ranking based on CBR-related features. In our experiments on QA@CLEF questions, the best learned models make heavy use of CBR features. The advantage already achieved by CBR will increase over time, as the case base improves automatically with new user annotations obtained through relevance feedback.
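To illustrate the core idea, the following Python sketch shows how a similarity-weighted vote over a case base of annotated answer graphs could serve as a CBR feature for reranking. It is a minimal sketch, not the paper's implementation: the triple-set graph encoding, the Jaccard-style `graph_similarity`, and all names (`Case`, `cbr_score`, `k`) are simplifying assumptions, since the paper uses annotated MultiNet graphs with dedicated structural similarity measures.

```python
from dataclasses import dataclass

@dataclass
class Case:
    # Simplified stand-in for an annotated MultiNet graph:
    # a set of (node, relation, node) triples plus a correctness label
    # obtained from user relevance feedback.
    graph: frozenset
    correct: bool

def graph_similarity(g1: frozenset, g2: frozenset) -> float:
    """Toy structural similarity (Jaccard overlap of triples); the paper
    defines dedicated similarity measures on MultiNet graphs instead."""
    if not g1 and not g2:
        return 1.0
    return len(g1 & g2) / len(g1 | g2)

def cbr_score(candidate: frozenset, case_base: list, k: int = 5) -> float:
    """Similarity-weighted vote of the k most similar annotated cases,
    usable as one feature in a learning-to-rank model."""
    neighbours = sorted(case_base,
                        key=lambda c: graph_similarity(candidate, c.graph),
                        reverse=True)[:k]
    weights = [graph_similarity(candidate, c.graph) for c in neighbours]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w for w, c in zip(weights, neighbours) if c.correct) / total

# Hypothetical usage: a high score suggests the new candidate resembles
# answer candidates that users previously annotated as correct.
case_base = [
    Case(frozenset({("capital", "OF", "france"), ("answer", "IS", "paris")}), True),
    Case(frozenset({("capital", "OF", "spain"), ("answer", "IS", "paris")}), False),
]
new_candidate = frozenset({("capital", "OF", "france"), ("answer", "IS", "lyon")})
print(cbr_score(new_candidate, case_base))
```

Because the score is derived purely from the case base, extending that base with new user annotations automatically improves the feature over time, which is the continuous-improvement property the abstract highlights.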
