Abstract

In this paper, we study how to improve Question Answering over Knowledge Base (KBQA) by exploiting factoid Question Generation (QG). The QG task is to generate a natural language question corresponding to a given answer, while question answering (QA) is the reverse task of finding a proper answer to a given question. In KBQA, the answer can be regarded as a fact from the knowledge base consisting of a predicate and two entities. Training an effective KBQA system requires a large amount of labeled data, which is hard to acquire, and even a trained KBQA system performs poorly on questions whose predicates were unseen during training. To address these challenges, we propose a unified framework that combines QG and QA with the help of a knowledge base and a text corpus. The QA and QG models are first trained jointly on the gold dataset; the QA model is then fine-tuned on a supplemental dataset constructed by the QG model with the help of textual evidence. We conduct experiments on two datasets, SimpleQuestions and WebQSP, with the Freebase knowledge base. Empirical results show that our framework improves KBQA performance and performs comparably with, or even better than, the state of the art.
