Abstract

Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often hurt by the incompleteness of the KB. In contrast, web text contains millions of facts that are absent from the KB, but in an unstructured form. Universal schema can support reasoning over the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on the SPADES fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. The model also outperforms the current state of the art by 8.5 F1 points.
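To make the attention mechanism concrete, the sketch below shows one memory-network "hop" over a universal-schema memory holding both KB triples and textual facts. This is an illustrative reconstruction, not the authors' code: the dimensions, variable names, and single-hop setup are assumptions, and the embeddings are random placeholders standing in for trained parameters.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of one memory-network hop over a universal-schema
# memory; sizes and names are assumptions, not the paper's actual code.
d = 100            # embedding size (assumed)
num_facts = 5000   # memory holds both KB triples and textual-pattern facts

# Each fact is stored as a key (fact representation) and a value
# (embedding of the entity that would answer the question). KB relations
# and textual surface patterns are embedded into the same space.
memory_keys = torch.randn(num_facts, d)
memory_vals = torch.randn(num_facts, d)

question = torch.randn(d)   # encoded question (e.g., final BiLSTM state)

# Attend over the union of KB and text facts.
scores = memory_keys @ question          # (num_facts,)
attn = F.softmax(scores, dim=0)

# The attention-weighted sum of values is the context used to score answers.
context = attn @ memory_vals             # (d,)
candidate_entities = torch.randn(20, d)  # assumed candidate answer set
answer_scores = candidate_entities @ (question + context)
best = answer_scores.argmax()
```

Because KB and text facts live in one memory, the softmax can shift attention to a textual fact whenever the KB lacks the needed relation, which is the behavior the abstract attributes to combining the two sources.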

Highlights

  • Question Answering (QA) has been a longstanding goal of natural language processing

  • Universal schema has been extensively used for relation extraction; this paper shows its applicability to QA

  • The contributions of the paper are as follows: (a) we show that a universal schema representation is a better knowledge source for QA than either a knowledge base (KB) or text alone; (b) on the SPADES dataset (Bisk et al., 2016), containing real-world fill-in-the-blank questions, we outperform the state-of-the-art semantic parsing baseline by 8.5 F1 points; (c) our analysis shows how each data source compensates for the weaknesses of the other, thereby improving overall performance


Introduction

Knowledge bases (KBs) contain facts expressed in a fixed schema, facilitating compositional reasoning. KB-based question answering has attracted research since the early days of computer science, e.g., BASEBALL (Green Jr et al., 1961). A major drawback of this paradigm is that KBs are highly incomplete (Dong et al., 2014). It is also an open question whether the relational structure of a KB is expressive enough to represent world knowledge (Stanovsky et al., 2014; Gardner and Krishnamurthy, 2017). Using distributed representations allows reasoning over sentences that are similar in meaning but different in surface form; we use this variant of universal schema to encode our textual relations.
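A minimal sketch of what "encoding textual relations with distributed representations" can look like follows, assuming an LSTM encoder over the surface pattern between the two arguments; the vocabulary, patterns, and hidden size are invented for illustration and the weights are untrained.

```python
import torch
import torch.nn as nn

# Sketch (assumed setup, not the paper's code): embed a textual relation
# pattern with an LSTM so paraphrases such as "arg1 was born in arg2" and
# "arg1 is a native of arg2" can land near each other in the shared
# embedding space that universal schema aligns with KB relations.
vocab = {"arg1": 0, "arg2": 1, "was": 2, "born": 3, "in": 4,
         "is": 5, "a": 6, "native": 7, "of": 8}
emb = nn.Embedding(len(vocab), 50)
lstm = nn.LSTM(input_size=50, hidden_size=50, batch_first=True)

def encode(pattern: str) -> torch.Tensor:
    ids = torch.tensor([[vocab[w] for w in pattern.split()]])
    _, (h, _) = lstm(emb(ids))
    return h.squeeze()  # final hidden state = relation embedding

r1 = encode("arg1 was born in arg2")
r2 = encode("arg1 is a native of arg2")
# After training, paraphrases should score a high cosine similarity.
sim = torch.cosine_similarity(r1, r2, dim=0)
```

Unlike the original matrix-factorization formulation, which assigns each distinct surface pattern its own independent embedding, this compositional encoder can generalize to patterns never seen during training.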
