Abstract

Answer selection is an important subtask of question answering in natural language processing (NLP) applications. In this task, the attention mechanism is a widely used technique that focuses on contextual information and the interrelationships between words in a sentence to assign different weights and enhance features. However, the intrinsic characteristics of the words themselves are not fully exploited, so performance may be limited to a certain extent. In this paper, we propose a novel Hierarchical Multidimensional Attention (HMDA) model to address this issue. Specifically, HMDA introduces a new kind of attention mechanism, word-attention, a purely individual attention that enhances the implicit meaning of each word itself and extracts more distinctive word-level features. HMDA then uses global co-attention to better exploit word-attention and capture features that the question and answer have in common. To use this attention-based semantic information differently at different granularities, HMDA adopts a multi-layer structure that makes full use of all attention mechanisms by embedding attention features hierarchically. In this way, HMDA captures diverse fine-grained interactions between the question and candidate answers while avoiding information loss. Empirically, we demonstrate that our proposed model consistently outperforms state-of-the-art baselines under different evaluation metrics on the TrecQA, WikiQA, and InsuranceQA datasets.
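To make the two attention ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a word-attention layer that scores each token from its own embedding alone (the "individual" attention over the word itself), followed by a simple co-attention step that soft-aligns question and answer tokens. All module names, dimensions, and the pooling/scoring choices here are illustrative assumptions; the actual HMDA architecture is described in the full paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordAttention(nn.Module):
    """Hypothetical word-level attention: each token is scored from its own
    embedding only, emphasising the intrinsic meaning of the word itself."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                       # x: (batch, seq_len, dim)
        weights = torch.softmax(self.score(x).squeeze(-1), dim=-1)
        return x * weights.unsqueeze(-1)        # re-weighted token features


def co_attention(q, a):
    """Hypothetical global co-attention: soft-align question and answer tokens
    through a similarity matrix, then pool each side to a single vector."""
    sim = torch.bmm(q, a.transpose(1, 2))                             # (batch, len_q, len_a)
    q_ctx = torch.bmm(torch.softmax(sim, dim=2), a)                   # answer-aware question tokens
    a_ctx = torch.bmm(torch.softmax(sim, dim=1).transpose(1, 2), q)   # question-aware answer tokens
    return q_ctx.max(dim=1).values, a_ctx.max(dim=1).values


# Illustrative usage: score one (question, candidate answer) pair.
word_attn = WordAttention(dim=300)
q_emb = torch.randn(1, 12, 300)                 # embedded question tokens (dummy data)
a_emb = torch.randn(1, 40, 300)                 # embedded candidate answer tokens (dummy data)
q_vec, a_vec = co_attention(word_attn(q_emb), word_attn(a_emb))
score = F.cosine_similarity(q_vec, a_vec)       # higher score = more plausible answer
```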
