Abstract

Semantic matching between question and answer sentences involves recognizing whether a candidate answer is relevant to a given input question. Since such matching does not examine a question or an answer in isolation, context information outside the sentence should be considered as important as the within-sentence syntactic context. This motivates the design of a new question-answer matching model, built upon a cross-sentence, context-aware, bi-directional long short-term memory architecture. Interactive attention mechanisms are proposed that automatically select salient positional sentence representations, i.e. those that contribute most to the relevance between a question and an answer. A new quantity, called the context information jump, is introduced to facilitate the formulation of the attention weights; it is computed from the joint states of adjacent words. An interaction-aware sentence representation is constructed by connecting a combination of multiple positional sentence representations to each hidden state. In the experiments, the proposed method is compared with existing models on four public community question-answering datasets, and the evaluations show that it is highly competitive. In particular, it offers a 0.32%-1.8% improvement over the best-performing model on three of the four datasets, while on the remaining one it stays within about 0.2% of the best performer.
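The cross-sentence attention idea described above can be illustrated with a minimal sketch. The snippet below is not the paper's exact formulation (the context-information-jump weighting is not reproduced); it shows a generic dot-product cross-attention between the BiLSTM hidden states of a question and a candidate answer, pooling each sentence by the salience of its positions with respect to the other sentence. All function and variable names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_pool(H_q, H_a):
    """Cross-sentence attention pooling (illustrative sketch).

    H_q: (n, d) hidden states of the question (e.g. BiLSTM outputs)
    H_a: (m, d) hidden states of the answer
    Returns attended question/answer vectors and a cosine relevance score.
    """
    S = H_q @ H_a.T                       # (n, m) pairwise interaction scores
    a_q = softmax(S.max(axis=1))          # salience of each question position
    a_a = softmax(S.max(axis=0))          # salience of each answer position
    v_q = a_q @ H_q                       # (d,) attention-pooled question vector
    v_a = a_a @ H_a                       # (d,) attention-pooled answer vector
    score = float(v_q @ v_a /
                  (np.linalg.norm(v_q) * np.linalg.norm(v_a) + 1e-8))
    return v_q, v_a, score
```

In a full model these hidden states would come from a BiLSTM over word embeddings, and the pooled vectors would feed a classifier that predicts question-answer relevance.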
