Abstract

Knowledge graph (KG) completion aims to fill in the missing facts of a KG, where a fact is typically represented as a triple of the form (head, relation, tail). Traditional KG completion methods require two elements of a triple (e.g., the head and the relation) to be given in order to predict the remaining one. In this paper, we propose a new method that extends multi-layer recurrent neural networks (RNNs) to model the triples in a KG as sequences. It obtains state-of-the-art performance on the common entity prediction task, i.e., predicting the tail (or head) given the head (or tail) and the relation, on two benchmark data sets. Furthermore, the deep sequential nature of our method enables it to predict relations given only the head (or tail), and even to predict whole triples. Our experiments on these two new KG completion tasks demonstrate that our method achieves superior performance compared with several alternative methods.
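As a rough illustration of this sequential formulation, the sketch below treats a triple (h, r, t) as a three-step sequence and trains a multi-layer RNN to predict the next element at each step. It is a minimal, hypothetical example rather than the authors' implementation; the vocabulary sizes, dimensionality, and the single shared index space over entities and relations are assumptions made only for brevity.

# Minimal sketch (not the authors' code): a triple (h, r, t) as a 3-step sequence
# fed through a multi-layer RNN that predicts the next element at every step.
import torch
import torch.nn as nn

NUM_ENTITIES, NUM_RELATIONS, DIM, LAYERS = 1000, 50, 128, 2
VOCAB = NUM_ENTITIES + NUM_RELATIONS  # assumed shared index space for entities and relations

class SequentialKG(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.RNN(DIM, DIM, num_layers=LAYERS, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, seq):                  # seq: (batch, length) element ids
        hidden_states, _ = self.rnn(self.embed(seq))
        return self.out(hidden_states)       # next-element logits at each step

model = SequentialKG()
# One training triple (h, r, t): the input is (h, r) and the targets are (r, t),
# so the model learns to predict the relation after h and the tail after (h, r).
h, r, t = 3, NUM_ENTITIES + 7, 42
inputs = torch.tensor([[h, r]])
targets = torch.tensor([[r, t]])
logits = model(inputs)
loss = nn.functional.cross_entropy(logits.view(-1, VOCAB), targets.view(-1))
loss.backward()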

Highlights

  • Knowledge graphs (KGs), such as DBpedia [1] and Freebase [2], often use triples, in the form of (h, r, t), to record billions of real-world facts, where h, t denote entities and r denotes a relation between h and t.

  • We describe the procedure of each experiment and report the corresponding results. By carrying out these experiments, we want to answer the following three questions: 1) Can our method achieve state-of-the-art performance on some benchmark data sets? 2) How does our method perform on the new KG completion tasks such as relation prediction and triple prediction? 3) What are the strengths and weaknesses of each integrating strategy in our deep sequential model?

  • Note that the models in this experiment only predicted triples in the forward direction, because: (i) we have shown that the backward relation prediction results on filtered mean rank (FMR) were worse than the forward relation prediction results; and (ii) predicting forward triples of an entity is more natural for KG modeling.


Summary

INTRODUCTION

Knowledge graphs (KGs), such as DBpedia [1] and Freebase [2], often use triples, in the form of (h, r, t), to record billions of real-world facts, where h, t denote entities and r denotes a relation between h and t. Existing methods model the complex structures of KGs with this fixed expression (h, r, t). Modeling KGs as sequences, however, raises two issues: (i) such short sequences may be insufficient to provide adequate context for prediction, while it is time-consuming and difficult to construct valuable long sequences from the huge number of paths in a KG; and (ii) relations and entities are elements of two different types, and they appear in triples in a fixed order. We propose DSKG in this paper, a deep sequential model that extends multi-layer RNNs to model KGs. DSKG can smoothly model complex structures, since each cell does not need to output an explicit prediction result; instead, it forms its own comprehension by adding or removing information from the hidden state and conveys this processed hidden state to the next cell.
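Because the model reads a triple element by element, prediction reduces to step-by-step decoding: given only a head entity, it can first rank candidate relations and then rank candidate tails conditioned on the chosen relation, which is what makes relation prediction and triple prediction possible. The sketch below shows such greedy decoding, reusing the hypothetical SequentialKG model defined after the abstract; it is an assumed illustration, not the paper's exact procedure.

# Illustrative greedy decoding (assumption, not the paper's exact method):
# given only a head entity, predict a relation, then a tail conditioned on it.
import torch

@torch.no_grad()
def predict_triple(model, head_id):
    # model, NUM_ENTITIES, and the shared vocabulary come from the sketch after the abstract.
    seq = torch.tensor([[head_id]])
    rel_logits = model(seq)[0, -1]                                      # scores after reading only the head
    rel_id = NUM_ENTITIES + rel_logits[NUM_ENTITIES:].argmax().item()   # best relation id

    seq = torch.tensor([[head_id, rel_id]])
    tail_logits = model(seq)[0, -1]                       # scores after reading (head, relation)
    tail_id = tail_logits[:NUM_ENTITIES].argmax().item()  # best tail entity id
    return head_id, rel_id, tail_id

print(predict_triple(model, head_id=3))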

RELATED WORK
Translational Models
Non-translational Models
BASIC AND DEEP SEQUENTIAL MODELS
Basic Sequential Model
Deep Sequential Model
Respective
Cascade
Batch Normalization
Dropout
EXPERIMENTS
Data Sets and Experiment Settings
Entity Prediction
Relation Prediction
Triple Prediction
Findings
CONCLUSION AND FUTURE WORK