Abstract

As research on utilizing human knowledge in natural language processing has attracted considerable attention in recent years, knowledge graph (KG) completion has come into the spotlight. Recently, a new knowledge graph completion method using a pre-trained language model, KG-BERT, was presented and showed high performance. However, its scores on ranking metrics such as Hits@k still lag behind state-of-the-art models. We claim that there are two main reasons: 1) failure to sufficiently learn relational information in knowledge graphs, and 2) difficulty in picking out the correct answer from lexically similar candidates. In this paper, we propose an effective multi-task learning method to overcome the limitations of previous works. By combining relation prediction and relevance ranking tasks with our target link prediction task, the proposed model can learn more relational properties of KGs and perform properly even when lexical similarity occurs. Experimental results show that we not only substantially improve the ranking performance compared to KG-BERT but also achieve state-of-the-art performance in Mean Rank and Hits@10 on the WN18RR dataset.
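The ranking metrics named in the abstract can be computed from the 1-based rank that a model assigns to the correct entity for each test triple. The sketch below is a minimal, illustrative implementation; the rank values in the usage line are made up and are not results from the paper.

```python
def ranking_metrics(ranks, k=10):
    """Compute Mean Rank (MR), MRR, and Hits@k from 1-based ranks of correct answers."""
    n = len(ranks)
    mr = sum(ranks) / n                          # Mean Rank: lower is better
    mrr = sum(1.0 / r for r in ranks) / n        # Mean Reciprocal Rank: higher is better
    hits = sum(1 for r in ranks if r <= k) / n   # fraction of answers ranked in the top k
    return mr, mrr, hits

# Illustrative ranks for five hypothetical test triples
mr, mrr, hits10 = ranking_metrics([1, 3, 12, 2, 58], k=10)  # MR=15.2, Hits@10=0.6
```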

Highlights

  • A Knowledge Graph (KG) is a graph-structured knowledge base in which real-world knowledge is represented as triples (h, r, t), meaning that h and t are connected by the relationship r

  • We evaluate the proposed method on two popular datasets, WN18RR and FB15k-237, and experimental results show that our method improves ranking performance by a large margin compared to KG-BERT

  • The results show that multi-task learning with the two task combinations (LP + Relation Prediction (RP)) and (LP + Relevance Ranking (RR)) improves over the baseline by a large margin while maintaining low Mean Rank (MR) scores
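The multi-task setup in the highlights combines the target link prediction (LP) loss with the auxiliary relation prediction (RP) and relevance ranking (RR) losses. A minimal sketch of such a combined objective is shown below; the task weights and loss values are illustrative assumptions, not the paper's actual hyperparameters.

```python
def multi_task_loss(loss_lp, loss_rp, loss_rr, w_rp=1.0, w_rr=1.0):
    """Combine the target LP loss with the two auxiliary task losses.

    Hypothetical weighted sum; the auxiliary weights w_rp and w_rr
    are assumptions for illustration.
    """
    return loss_lp + w_rp * loss_rp + w_rr * loss_rr

# Illustrative per-batch loss values
total = multi_task_loss(0.5, 0.2, 0.3)  # 1.0 with unit weights
```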



Introduction

A Knowledge Graph (KG) is a graph-structured knowledge base in which real-world knowledge is represented as triples (h, r, t): (head entity, relation, tail entity), meaning that h and t are connected by the relationship r. Several studies on knowledge graph completion have been conducted (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2019; Dettmers et al., 2018). They presented methods to model the connectivity patterns between entities in a KG, along with score functions to define the validity of a triple. However, these methods consider only the graph structure and the relational information present in the existing KG, so they cannot predict well on triples that contain less frequent entities. Even though KG-BERT significantly improved mean ranks by using preliminary linguistic information from BERT (Devlin et al., 2018), its results on other ranking metrics such as MRR and Hits@k are still behind the state-of-the-art models.
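The KG-BERT approach mentioned above scores a triple by packing its textual form into a single BERT-style input sequence. The sketch below shows one plausible serialization; the entity and relation strings are illustrative, and the real model feeds entity names or descriptions through a trained tokenizer rather than raw text.

```python
def serialize_triple(head, relation, tail):
    """Pack a triple (h, r, t) into a [CLS] h [SEP] r [SEP] t [SEP] sequence.

    Hypothetical plain-text sketch of KG-BERT-style input construction;
    a real tokenizer would insert these special tokens itself.
    """
    return f"[CLS] {head} [SEP] {relation} [SEP] {tail} [SEP]"

# Illustrative triple, not from WN18RR or FB15k-237
seq = serialize_triple("Barack Obama", "born in", "Hawaii")
```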
