Abstract

Answer selection, a crucial capability for intelligent medical service robots, has become increasingly important in natural language processing (NLP). However, existing answer selection models still face several critical issues. On the one hand, they lack semantic understanding of long questions because of noisy information in question–answer (QA) pairs. On the other hand, some researchers combine two or more neural network models to improve the quality of answer selection, but these models focus on the similarity between questions and answers without considering background information. To this end, this paper proposes a novel refined answer selection method that uses an attentive bidirectional long short-term memory (Bi-LSTM) network and a self-attention mechanism to address these issues. First, the required knowledge-based text is constructed as background information, and the questions and answers are converted from words to vectors. Then, a self-attention mechanism is adopted to extract global features from these vectors. Finally, an attentive Bi-LSTM network is designed to address long-distance dependency learning and to calculate the similarity between a question and an answer while taking the background knowledge into account. To verify the effectiveness of the proposed method, this paper constructs a knowledge-based QA dataset containing multiple medical QA pairs and conducts a series of experiments on it. The experimental results show that the proposed approach achieves impressive performance on the answer selection task, reaching an accuracy of 71.4% and a MAP of 68.8%, and decreasing the BLEU indicator to 3.10.
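To make the described pipeline concrete, the following is a minimal PyTorch sketch of an architecture of this kind: token embeddings, a self-attention layer for global features, a Bi-LSTM for long-distance dependencies, attention pooling conditioned on a background-knowledge vector, and a cosine-similarity scorer for the QA pair. The class and parameter names (AttentiveBiLSTMSelector, knowledge_vec), the use of nn.MultiheadAttention for self-attention, and the cosine-similarity scoring are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the described answer selection pipeline (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveBiLSTMSelector(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150, num_heads=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Self-attention extracts global features from the token vectors.
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Bi-LSTM captures long-distance dependencies in each sequence.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Projection used to score each time step against the knowledge vector.
        self.attn_proj = nn.Linear(2 * hidden_dim, 2 * hidden_dim)

    def encode(self, tokens, knowledge_vec):
        x = self.embed(tokens)                      # (B, T, E)
        x, _ = self.self_attn(x, x, x)              # global features via self-attention
        h, _ = self.bilstm(x)                       # (B, T, 2H)
        # Attention pooling conditioned on the background-knowledge vector (B, 2H).
        scores = torch.bmm(self.attn_proj(h), knowledge_vec.unsqueeze(2))  # (B, T, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * h).sum(dim=1)             # (B, 2H) sequence representation

    def forward(self, q_tokens, a_tokens, knowledge_vec):
        q = self.encode(q_tokens, knowledge_vec)
        a = self.encode(a_tokens, knowledge_vec)
        return F.cosine_similarity(q, a, dim=-1)    # similarity score per QA pair

# Example usage with random inputs; knowledge_vec has size 2 * hidden_dim = 300.
model = AttentiveBiLSTMSelector(vocab_size=30000)
q = torch.randint(1, 30000, (4, 20))
a = torch.randint(1, 30000, (4, 40))
k = torch.randn(4, 300)
print(model(q, a, k).shape)  # torch.Size([4])
```

In this sketch, candidate answers would be ranked by the returned similarity score; how the knowledge vector is built from the knowledge-based text is left open, since the abstract does not specify it.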
