Abstract

Relation detection plays a crucial role in knowledge base question answering, and it is challenging because of the high variability of relation expressions in real-world questions. Traditional deep-learning-based relation detection models follow an encoding-comparing paradigm, in which the question and the candidate relation are represented as vectors whose semantic similarity is then compared. The max- or average-pooling operation used to compress the sequence of word representations into a fixed-dimensional vector becomes a bottleneck in the information flow. In this paper, we propose an attention-based word-level interaction model (ABWIM) to alleviate the information loss caused by aggregating a sequence into a fixed-dimensional vector before comparison. First, an attention mechanism is adopted to learn soft alignments between the words of the question and those of the relation. Then, fine-grained comparisons are performed on the aligned words. Finally, the comparison results are merged with a simple recurrent layer to estimate the semantic similarity. In addition, a dynamic sample selection strategy is proposed to accelerate training without decreasing performance. Experimental results of relation detection on both the SimpleQuestions and WebQuestions datasets show that ABWIM achieves state-of-the-art accuracy, demonstrating its effectiveness.
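
The abstract describes a three-step pipeline: soft alignment via attention, word-level comparison, and recurrent aggregation. The following is a minimal PyTorch sketch of that alignment-comparison-aggregation flow, assuming dot-product attention, pre-encoded word representations of size d, and a GRU aggregator; the class and layer names (ABWIMSketch, compare, merge, score) and all sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ABWIMSketch(nn.Module):
    """Illustrative sketch of an attention-based word-level interaction
    model; layer names and sizes are assumptions, not the authors' exact design."""

    def __init__(self, d=200):
        super().__init__()
        self.compare = nn.Linear(2 * d, d)            # fine-grained comparison of aligned word pairs
        self.merge = nn.GRU(d, d, batch_first=True)   # simple recurrent layer merging comparison results
        self.score = nn.Linear(d, 1)                  # final semantic-similarity score

    def forward(self, q, r):
        # q: (B, Lq, d) question word representations
        # r: (B, Lr, d) relation word representations
        # Soft alignment: attend each question word over all relation words.
        att = torch.softmax(q @ r.transpose(1, 2), dim=-1)   # (B, Lq, Lr)
        r_aligned = att @ r                                  # (B, Lq, d)
        # Word-level comparison of each question word with its aligned relation vector.
        cmp = torch.relu(self.compare(torch.cat([q, r_aligned], dim=-1)))  # (B, Lq, d)
        # Merge the per-word comparison results and keep the final hidden state.
        _, h = self.merge(cmp)                               # h: (1, B, d)
        return self.score(h.squeeze(0)).squeeze(-1)          # (B,) similarity scores

# Usage with random tensors standing in for encoded words:
model = ABWIMSketch(d=200)
q = torch.randn(2, 10, 200)   # 2 questions, 10 words each
r = torch.randn(2, 4, 200)    # 2 candidate relations, 4 words each
print(model(q, r).shape)      # torch.Size([2])
```

Because the comparison happens per word pair before any pooling, no fixed-dimensional bottleneck sits between the question and the relation; only the comparison results are compressed, which is the information-loss mitigation the abstract refers to.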
