Abstract

Knowledge graph (KG) entity typing aims at inferring possible missing entity type instances in KGs, a significant but still under-explored subtask of knowledge graph completion (KGC). In this paper, we propose a novel approach to KG entity typing that is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge in KGs. Specifically, we present two distinct, effective knowledge-driven mechanisms for entity type inference and build two novel embedding models to realize them. Afterward, a joint model that connects the two is used to infer missing entity type instances, favoring inferences that agree with both entity type instances and triple knowledge in KGs. Experimental results on two real-world datasets (Freebase and YAGO) demonstrate the effectiveness of our proposed mechanisms and models for improving KG entity typing. The source code and data of this paper can be obtained from https://github.com/Adam1679/ConnectE.
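To make the joint idea concrete, the sketch below shows, under assumed embeddings, how a candidate type for an entity could be scored in two complementary ways and then combined: a local score that maps the entity embedding into the type-embedding space, and a global score that checks compatibility with the types of entities reachable through triples. The dimensions, the mapping matrix M, the mixing weight alpha, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal, hypothetical sketch of scoring candidate types for an entity by
# combining local typing knowledge with global triple knowledge.  All names,
# dimensions, and the mixing weight are illustrative assumptions, not the
# paper's actual model.
import numpy as np

rng = np.random.default_rng(0)
d_ent, d_type = 100, 50                      # assumed entity / type embedding sizes

M = rng.normal(size=(d_type, d_ent))         # assumed mapping from entity to type space

def local_score(e_emb, t_emb):
    """Local mechanism: a type fits an entity if M @ e lands close to t."""
    return -np.linalg.norm(M @ e_emb - t_emb)

def global_score(neighbor_type_embs, t_emb):
    """Global mechanism: a type fits an entity if it is compatible with the
    types of entities connected to it by triples (here simply mean distance)."""
    if not neighbor_type_embs:
        return 0.0
    return -float(np.mean([np.linalg.norm(t_emb - nt) for nt in neighbor_type_embs]))

def joint_score(e_emb, t_emb, neighbor_type_embs, alpha=0.5):
    """Joint inference: favor types that agree with both knowledge sources."""
    return (alpha * local_score(e_emb, t_emb)
            + (1 - alpha) * global_score(neighbor_type_embs, t_emb))

# Toy usage: pick the highest-scoring candidate type for one entity.
e = rng.normal(size=d_ent)
candidates = {"person": rng.normal(size=d_type), "city": rng.normal(size=d_type)}
neighbors = [rng.normal(size=d_type)]        # type embeddings of triple neighbors
print(max(candidates, key=lambda t: joint_score(e, candidates[t], neighbors)))
```

In the actual models, such scores would be learned jointly rather than computed from random vectors; the point here is only the combination of the two sources of evidence.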

Highlights

  • The past decade has witnessed a great boom in building web-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), and the Google Knowledge Graph (Dong et al., 2014), which usually consist of a huge number of triples in the form of (head entity, relation, tail entity) (denoted (e, r, e))

  • This paper concentrates on KG entity typing, i.e., inferring missing entity type instances in KGs, which is an important sub-problem of KGC

  • We propose a novel framework for inferring missing entity type instances in KGs by connecting local entity type instances with global triple information, and correspondingly present two effective inference mechanisms

Summary

Introduction

The past decade has witnessed a great boom in building web-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), and the Google Knowledge Graph (Dong et al., 2014), which usually consist of a huge number of triples in the form of (head entity, relation, tail entity) (denoted (e, r, e)). This paper concentrates on KG entity typing, i.e., inferring missing entity type instances in KGs, which is an important sub-problem of KGC. Entity type instances, each of which is in the form of (entity, entity type) (denoted (e, t)), are essential entries of KGs and are widely used in many NLP tasks such as relation extraction (Zhang et al., 2018; Jain et al., 2018), coreference resolution (Hajishirzi et al., 2013), and entity linking (Gupta et al., 2017). Most previous works on KGC focus on inferring missing entities and relationships (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Dettmers et al., 2017; Ding et al., 2018; Nathani et al., 2019), paying less attention to entity type prediction. However, KGs usually suffer from entity type incompleteness, which renders some type-involved algorithms in KG-driven tasks grossly inefficient or even unusable.
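As a concrete picture of the two kinds of entries the introduction refers to, the toy example below lays out a handful of relational triples (e, r, e) and entity type assertions (e, t), with one type assertion deliberately missing. The entities, relations, and type labels are made up for illustration and are not drawn from the Freebase or YAGO datasets used in the paper.

```python
# Toy KG with relational triples and an incomplete set of type assertions.
# Entities, relations, and type labels are invented for illustration only.
triples = [
    ("Barack_Obama", "born_in", "Honolulu"),
    ("Barack_Obama", "president_of", "USA"),
    ("Honolulu", "located_in", "USA"),
]

type_assertions = {
    ("Honolulu", "/location/city"),
    ("USA", "/location/country"),
    # ("Barack_Obama", "/people/person") is absent: inferring such missing
    # (entity, entity type) pairs is exactly the KG entity typing task.
}

def known_types(entity):
    """Return the type assertions already recorded for a given entity."""
    return {t for (e, t) in type_assertions if e == entity}

print(known_types("Honolulu"))       # {'/location/city'}
print(known_types("Barack_Obama"))   # set() -- the gap entity typing aims to fill
```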
