Abstract

We introduce Episodic Memory QA, the task of answering personal user questions grounded in a memory graph (MG), where episodic memories and related entity nodes are connected via relational edges. We create a new benchmark dataset by first generating synthetic memory graphs with simulated attributes, and then composing 100K QA pairs over the generated MGs with bootstrapped scripts. To address the unique challenges of the proposed task, we propose Memory Graph Networks (MGN), a novel extension of memory networks that enables dynamic expansion of memory slots through graph traversals, allowing the model to answer queries that require context from multiple linked episodes and external knowledge. We then propose the Episodic Memory QA Net with multiple module networks to effectively handle various question types. Empirical results show improvement over the QA baselines in top-k answer prediction accuracy on the proposed task. The proposed model also generates a graph walk path and attention vectors for each predicted answer, providing a natural way to explain its QA reasoning.

Highlights

  • The task of question answering (QA) has been extensively studied; many existing applications and datasets focus on fact retrieval from a large-scale knowledge graph (KG) (Bordes et al., 2015) or on machine reading comprehension (MRC) approaches over unstructured text (Rajpurkar et al., 2018)

  • We introduce a new task and dataset for Episodic Memory QA, in which the model answers personal and retrospective questions based on memory graphs (MG), where each episodic memory and its related entities (e.g. knowledge graph (KG) entities, participants, ...) are represented as nodes connected via corresponding edges (Figure 1)

  • Parameters: We tune the parameters of each model with the following search space: graph embedding size: {64, 128, 256, 512}, Bi-LSTM hidden states for the language model: {64, 128, 256, 512}, Memory Graph Networks (MGN) hidden states: {64, 128, 256, 512}, word embedding size: {100, 200, 300}, and max memory slots: {1, 5, 10, 20, 40, 80}
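The search space above can be enumerated as a simple grid. This is a minimal sketch, not the paper's tuning code; the parameter names are illustrative labels for the values quoted above:

```python
from itertools import product

# Hyperparameter search space quoted in the highlights
# (the dictionary keys are illustrative names, not the paper's).
SEARCH_SPACE = {
    "graph_embedding_size": [64, 128, 256, 512],
    "bilstm_hidden_size": [64, 128, 256, 512],
    "mgn_hidden_size": [64, 128, 256, 512],
    "word_embedding_size": [100, 200, 300],
    "max_memory_slots": [1, 5, 10, 20, 40, 80],
}

def grid_configs(space):
    """Yield every configuration in the Cartesian product of the space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(SEARCH_SPACE))
# 4 * 4 * 4 * 3 * 6 = 1152 candidate configurations
```

Even this modest grid yields 1,152 configurations, which is why such searches are typically run with early stopping or random sampling rather than exhaustively.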


Introduction

The task of question answering (QA) has been extensively studied; many existing applications and datasets focus on fact retrieval from a large-scale knowledge graph (KG) (Bordes et al., 2015) or on machine reading comprehension (MRC) approaches over unstructured text (Rajpurkar et al., 2018). Episodic Memory QA instead answers personal, retrospective questions over a memory graph, where each episodic memory and its related entities are represented as nodes connected via corresponding edges (Figure 1). Examples of such queries include "Where did we go after we had brunch with Jon?", "How many times did I go to jazz concerts last year?", etc. These queries pose unique challenges: 1) unlike factoid KG questions with direct entity references (e.g. "Who painted the Mona Lisa?"), they require extensive candidate memory generation; 2) the target memory may be only indirectly linked to the reference memory or entities; and 3) queries are not confined to retrieval tasks, but include various question types such as counting, set comparison, etc., many of which remain unsolved or not considered in many QA tasks.

