Abstract

The problem of generating structured Knowledge Graphs (KGs) is difficult and open, yet relevant to a range of tasks in decision making and information augmentation. A promising approach is to treat KG generation as building a relational representation of inputs (e.g., textual paragraphs or natural images), where nodes represent entities and edges represent relations. This procedure naturally decomposes into two phases: extracting primary relations from the input, and completing the KG through reasoning. In this paper, we propose a hybrid KG builder that combines these two phases in a unified framework and generates KGs from scratch. Specifically, we employ a neural relation extractor that resolves primary relations from the input, together with a differentiable inductive logic programming (ILP) model that iteratively completes the KG. We evaluate the framework in both the textual and visual domains and achieve comparable performance on relation extraction datasets based on Wikidata and the Visual Genome. The framework surpasses neural baselines by a noticeable margin when reasoning over dense KGs, and performs particularly well on rare relations.
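The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names are hypothetical, and hard forward chaining over Horn rules stands in for the differentiable ILP component (which would apply learned rules softly with weights).

```python
def complete_with_rules(triples, rules, num_steps=3):
    """Phase 2: iteratively add triples entailed by chain rules of the form
    body_rel(x, y) & body_rel(y, z) -> head_rel(x, z), until a fixpoint.
    A differentiable ILP model would apply such rules softly with learned
    weights; this hard version only shows the iterative control flow."""
    kg = set(triples)
    for _ in range(num_steps):
        new = set()
        for body_rel, head_rel in rules:
            for (h1, r1, t1) in kg:
                for (h2, r2, t2) in kg:
                    if r1 == r2 == body_rel and t1 == h2:
                        new.add((h1, head_rel, t2))
        if new <= kg:
            break  # fixpoint: no new triples inferred
        kg |= new
    return kg

def build_kg(inputs, extractor, rules):
    """Phase 1: extract primary (head, relation, tail) triples from each
    input, then hand the partial KG to the completion phase."""
    primary = set()
    for x in inputs:
        primary |= extractor(x)
    return complete_with_rules(primary, rules)

# Toy stand-in for a neural extractor: each input is already a triple string.
toy_extractor = lambda s: {tuple(s.split())}
rules = [("located_in", "located_in")]  # located_in is transitive
kg = build_kg(["Paris located_in France", "France located_in Europe"],
              toy_extractor, rules)
# kg now also contains the inferred triple ("Paris", "located_in", "Europe")
```

The point of the sketch is the division of labor: the extractor proposes primary relations per input, while the completion loop densifies the graph with multi-hop inferences that no single input states explicitly.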

Highlights

  • For human infants, it is seemingly easy to learn to reason about the relation between any two objects

  • We present our model L for Knowledge Graph (KG) reasoning, which is a differentiable variant of inductive logic programming (Muggleton, 1991)

  • We evaluate our model on tasks for two modalities: textual and visual relation extraction

Introduction

It is seemingly easy for human infants to learn to reason about the relation between any two objects. Infants show this capability as they learn to understand the world and acquire language by integrating cross-modal information. They learn the referents of words by statistically matching them with occurrences of objects in the environment, and begin to understand the characteristics and affordances of those objects. Since a crucial aspect of current AI systems is learning appropriate representations for designated tasks, it seems important to reflect cross-modal learning in how these representations are learned. In The Little Match Girl's dream, Hans Christian Andersen wrote: "On the table was spread a snow-white tablecloth; upon it was a splendid porcelain service, and the roast goose was steaming famously with its stuffing of apple and dried plums."
