Abstract

Knowledge Base Question Answering (KBQA) is currently an important research topic in the fields of information retrieval (IR) and natural language processing (NLP). The most common questions asked on the Web are simple questions, which can be answered by a single relational fact in a knowledge base (KB). However, answering simple questions automatically remains a challenging task in the IR and NLP research communities. Based on a review of various studies and a detailed analysis, we conclude that these challenges primarily concern: (1) how to effectively access a large-scale KB; and (2) how to effectively reduce the gap between NL questions and the structured semantics in a KB. Most previous studies have treated these as two separate and independent subtasks: subject detection and predicate matching. Here, we propose a deep fused model that combines subject detection and predicate matching under a unified framework. Specifically, we employ a subject detection model to recognize the subject entity in a question, and a multilevel semantic model to learn the semantic representations for questions and predicates. These models share parameters and can be trained jointly. We evaluated the proposed method on both English and Chinese KBQA datasets. The experimental results demonstrate that the proposed approach significantly outperforms state-of-the-art systems on both datasets.
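The abstract does not give the architecture in detail, but the joint framework it describes can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, layer choices (embedding, GRU encoder, linear heads), dimensions, and the IO-tagging formulation of subject detection are all assumptions. The key idea shown is the one the abstract states: both subtasks share parameters (here, the embedding and encoder) and are trained jointly with a summed loss.

```python
# Hypothetical sketch of the joint subject-detection / predicate-matching
# model described in the abstract. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class JointKBQAModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=32, num_tags=2):
        super().__init__()
        # Shared parameters: embedding and encoder serve both subtasks.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Subject-detection head: per-token IO tagging (inside/outside subject span).
        self.tagger = nn.Linear(hidden_dim, num_tags)
        # Predicate-matching head: projects the question for similarity scoring.
        self.pred_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, question_ids, predicate_ids):
        q_states, _ = self.encoder(self.embed(question_ids))    # (B, Tq, H)
        tag_logits = self.tagger(q_states)                      # (B, Tq, num_tags)
        p_states, _ = self.encoder(self.embed(predicate_ids))   # shared encoder
        q_vec = self.pred_proj(q_states.mean(dim=1))            # (B, H)
        p_vec = p_states.mean(dim=1)                            # (B, H)
        match_score = torch.cosine_similarity(q_vec, p_vec, dim=-1)  # (B,)
        return tag_logits, match_score

model = JointKBQAModel()
q = torch.randint(0, 1000, (2, 6))   # toy batch of tokenized questions
p = torch.randint(0, 1000, (2, 3))   # toy batch of tokenized predicate names
tag_logits, score = model(q, p)

# Joint training: sum the tagging loss and the matching loss, backprop once,
# so gradients from both subtasks update the shared parameters.
gold_tags = torch.randint(0, 2, (2, 6))
loss = (nn.functional.cross_entropy(tag_logits.reshape(-1, 2), gold_tags.reshape(-1))
        + (1 - score).mean())
loss.backward()
```

Summing the two losses is one simple way to realize the "trained jointly" claim; the paper itself may weight or alternate the objectives differently.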
