Abstract

Translation-based knowledge graph embeddings learn vector representations of entities and relations by treating each relation as a translation operator over entities in an embedding space. Because the translation is expressed through a score function, translation-based embeddings are generally trained by minimizing a margin-based ranking loss, which assigns a low score to positive triples and a high score to negative triples. However, such embeddings suffer from slow convergence and poor local optima because the loss uses only a single pair of one positive and one negative triple per parameter update. This paper therefore proposes the N-pair translation loss, which considers multiple negative triples in one update. The N-pair translation loss takes one positive triple together with multiple negative triples and compares the positive triple against all of the negatives at each parameter update, making it possible to obtain good vector representations rapidly. Experimental results on link prediction show that the proposed loss converges quickly toward good optima at an early stage of training.
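To make the contrast concrete, here is a minimal sketch of the two losses, assuming the TransE distance ||h + r - t|| as the score function. The log-sum-exp aggregation over the K negatives in n_pair_translation_loss is an assumption modeled on the original N-pair loss, since this page does not give the paper's exact formulation, and all function names are illustrative.

```python
import torch

def transe_score(h, r, t):
    # TransE-style translation score: lower means more plausible.
    return torch.norm(h + r - t, p=1, dim=-1)

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # Margin-based ranking loss: each positive is compared against
    # exactly one negative per parameter update.
    return torch.clamp(margin + pos_score - neg_score, min=0.0).mean()

def n_pair_translation_loss(pos_score, neg_scores):
    # pos_score: (B,) scores of positive triples.
    # neg_scores: (B, K) scores of K negatives per positive.
    # Assumed log-sum-exp aggregation (modeled on Sohn's N-pair loss;
    # the paper's exact formulation is not given on this page).
    return torch.log1p(
        torch.exp(pos_score.unsqueeze(1) - neg_scores).sum(dim=1)
    ).mean()
```

Each call to n_pair_translation_loss ranks every positive against all K of its negatives in a single update, which is the property the abstract credits for faster convergence.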

Highlights

  • Knowledge graph embedding aims at learning the representation of a knowledge graph by embedding it into a low-dimensional vector space [1]

  • For a given knowledge graph expressed as a set of knowledge triples, in which each triple is composed of a relation (r) and two entities (h and t), knowledge graph embedding finds vector representations of h, t, and r by considering the structure of the knowledge graph

  • Even though translation-based knowledge graph embeddings yield promising results, they suffer from slow convergence and poor performance compared to other knowledge graph embeddings [16]


Summary

Introduction

Knowledge graph embedding aims at learning the representation of a knowledge graph by embedding it into a low-dimensional vector space [1]. Several variants have been proposed that modify the score function to find better vector representations [10,11,12,13,14]. These translation-based embeddings are usually trained by minimizing the margin-based ranking loss over the knowledge graph. If the minibatch size is one while training a translation-based knowledge graph embedding with the margin-based ranking loss, the positive triple in the minibatch is compared with only one negative triple at each parameter update. This paper proposes a simple but effective learning method based on a new loss function that incorporates multiple negative triples when training translation-based embeddings.
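As an aside on where the negative triples come from: the snippet below sketches the corruption-based sampling commonly used for translation-based embeddings, in which the head or tail of a positive triple is replaced by a random entity. corrupt_triple is a hypothetical helper, and the paper's actual sampling scheme is not described on this page.

```python
import random

def corrupt_triple(h, r, t, entities):
    # Corruption-based negative sampling: replace the head or the
    # tail of a positive triple with a randomly chosen entity.
    if random.random() < 0.5:
        return (random.choice(entities), r, t)  # corrupt the head
    return (h, r, random.choice(entities))      # corrupt the tail

# With the margin-based ranking loss, each positive triple is paired
# with a single corrupted triple; the N-pair translation loss would
# instead draw K such corruptions per positive.
negatives = [
    corrupt_triple("Paris", "capitalOf", "France",
                   ["Paris", "France", "Berlin", "Germany"])
    for _ in range(4)
]
```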

Related Work
Translation-Based Knowledge Graph Embeddings
Considering Multiple Negative Triples through N-Pair Translation Loss
Dataset
Evaluation Task and Protocol
Implementation
Experimental Results
Conclusions