Abstract

We propose a case-based reasoning (CBR) approach to answer validation, answer scoring, and reranking in question answering (QA) systems, in which annotated answer candidates for known questions provide evidence for validating answer candidates for new questions. The use of CBR promises a continuous increase in answer quality, given user feedback that extends the case base. In this paper, we present the complete approach, emphasizing the CBR techniques involved: a structural case base built from annotated MultiNet graphs and corresponding graph similarity measures. We show how new questions relate a priori to answer candidates experienced for former questions, describe an adequate structuring of the case base, and develop appropriate similarity measures. Finally, we integrate CBR into an existing framework for answer validation and reranking that also includes logical answer validation and shallow linguistic validation, using a learning-to-rank approach for the final answer ranking based on CBR-related features. In our experiments on QA@CLEF questions, the best learned models make heavy use of CBR features. The advantage already achieved by CBR will grow over time, since new user annotations obtained through relevance feedback automatically improve the case base.
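To make the retrieve-and-score idea concrete, the following is a minimal sketch of CBR-based answer scoring, assuming a heavily simplified question representation (sets of node labels) in place of full MultiNet graphs. The names `Case`, `jaccard_similarity`, and `cbr_score` are illustrative and not from the paper, and Jaccard overlap merely stands in for the paper's graph similarity measures.

```python
# Sketch only: annotated cases vote on a new answer candidate,
# weighted by how similar their questions are to the new question.
from dataclasses import dataclass, field

@dataclass
class Case:
    question_nodes: frozenset               # simplified question "graph"
    answer_candidates: dict = field(default_factory=dict)
    # maps candidate answer text -> user annotation (True = correct)

def jaccard_similarity(a: frozenset, b: frozenset) -> float:
    """Stand-in for a structural graph similarity measure."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def cbr_score(question_nodes: frozenset, candidate: str,
              case_base: list, k: int = 5) -> float:
    """Score a candidate by similarity-weighted evidence from the
    k most similar annotated cases (positive minus negative votes)."""
    neighbours = sorted(
        case_base,
        key=lambda c: jaccard_similarity(question_nodes, c.question_nodes),
        reverse=True,
    )[:k]
    score = 0.0
    for case in neighbours:
        sim = jaccard_similarity(question_nodes, case.question_nodes)
        for answer, is_correct in case.answer_candidates.items():
            if answer == candidate:
                score += sim if is_correct else -sim
    return score

# Hypothetical usage: evidence from a similar former question
case_base = [
    Case(frozenset({"capital", "france"}), {"Paris": True, "Lyon": False}),
    Case(frozenset({"capital", "germany"}), {"Berlin": True}),
]
print(cbr_score(frozenset({"capital", "france", "city"}), "Paris", case_base))
```

In the full framework described above, such a CBR score would be one feature among the logical and shallow-linguistic validation scores that the learning-to-rank model combines into the final answer ranking.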
