Abstract

A knowledge graph (KG), typically represented as a set of (head, relation, tail) triplets, has proven useful for many downstream applications but suffers from incomplete connections. Knowledge graph completion (KGC) aims to predict the missing connections among entities. Driven by recent advances in using BERT to extract meaningful textual representations, several KGC methods have been proposed that leverage the texts associated with triplets to assist completion. Under this BERT-based framework, we argue that the more meaningful the representations learned from the texts, the better the performance that can be expected. Following this framework, and inspired by recent successes of contrastive learning, we adapt it to the KGC task by introducing a novel dual-encoder architecture that produces semantically similar positive pairs, which have been widely shown to be pivotal for inducing semantically rich representations in contrastive learning. To further improve performance, we develop a hard negative sampling strategy to train the model. Extensive experiments on three public datasets show that the proposed techniques effectively improve the performance of existing KGC methods.
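
To make the dual-encoder contrastive setup concrete, below is a minimal sketch of an InfoNCE-style objective combining in-batch negatives with mined hard negatives. It assumes two BERT-style encoders have already produced embeddings for (head, relation) text and for candidate tail-entity text; all names, shapes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: dual-encoder contrastive loss for text-based KGC,
# with in-batch negatives plus explicit hard negatives. Assumes embeddings
# come from two BERT-style encoders (one per side of the triplet text).
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, pos_emb, hard_neg_emb, temperature=0.05):
    """InfoNCE loss over a batch of dual-encoder embeddings.

    query_emb:    (B, D) encoder-A embeddings, e.g. (head, relation) text
    pos_emb:      (B, D) encoder-B embeddings of the matching tail text
    hard_neg_emb: (B, K, D) embeddings of K mined hard-negative tails per query
    """
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    n = F.normalize(hard_neg_emb, dim=-1)

    # In-batch scores: the diagonal entry of each row is the positive pair,
    # and the other rows' positives serve as easy in-batch negatives.
    in_batch = q @ p.t()                                 # (B, B)
    # Scores against this query's own mined hard negatives.
    hard = torch.einsum("bd,bkd->bk", q, n)              # (B, K)

    logits = torch.cat([in_batch, hard], dim=1) / temperature
    labels = torch.arange(q.size(0), device=q.device)    # positive = diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings:
B, K, D = 8, 4, 32
loss = info_nce_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, K, D))
```

In this formulation, hard negatives simply widen each query's logit row, so they sharpen the contrastive signal without changing the standard cross-entropy training loop.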
