Abstract

Many well-performing knowledge graph embedding models employ a negative sampling framework for representation learning, in which the loss function is the critical component for distinguishing positive from negative triplets. One of the most recently proposed loss functions is the double-limited scoring loss, which sets a fixed upper bound on positive triplet scores and a fixed lower bound on negative triplet scores. We find that fixed bounds are not appropriate for all positive and negative triplets, since triplets that are difficult to distinguish typically call for changing bounds. In this paper, we propose a self-adaptive double-limited loss (ADL) that dynamically adjusts the upper limit on positive triplet scores and the lower limit on negative triplet scores by evaluating the ratio between positive and negative triplet scores. Furthermore, based on ADL, we build several knowledge graph embedding models, including TransE-ADL, TransH-ADL, TransD-ADL, TorusE-ADL, and ComplEx-ADL, whose parameters are trained with gradient descent. The dynamically adjusted bounds lead to a more reasonable partition of positive and negative triplets in the embedding space, significantly improving prediction accuracy. Experimental results on link prediction confirm this improvement over state-of-the-art baselines.
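To make the idea concrete, below is a minimal PyTorch sketch of a double-limited loss whose bounds adapt to the positive/negative score ratio. The specific adaptation rule, the function name `adl_loss`, and the hyperparameters `gamma_pos`, `gamma_neg`, `alpha`, and `lam` are illustrative assumptions, not the paper's exact formulation; the sketch assumes distance-style scores (lower means more plausible), as in TransE.

```python
import torch

def adl_loss(pos_scores, neg_scores,
             gamma_pos=1.0, gamma_neg=3.0, alpha=0.1, lam=1.0):
    """Illustrative adaptive double-limited loss (sketch, not the paper's form).

    pos_scores / neg_scores: distance-style scores, e.g. ||h + r - t||
    from TransE, where lower means more plausible. A fixed-bound variant
    would penalize positive scores above gamma_pos and negative scores
    below gamma_neg; here both bounds shift with the batch's score ratio.
    """
    # Ratio of mean positive to mean negative score: the closer to 1,
    # the harder the two sets are to separate in the current embedding space.
    ratio = (pos_scores.mean() / (neg_scores.mean() + 1e-8)).detach()
    # Hypothetical adaptation rule: when separation is poor (large ratio),
    # tighten the upper bound on positives and raise the lower bound on
    # negatives, pushing the two score distributions further apart.
    upper = torch.clamp(gamma_pos * (1.0 - alpha * ratio), min=0.0)
    lower = gamma_neg * (1.0 + alpha * ratio)
    # Penalize positives scoring above the upper bound and
    # negatives scoring below the lower bound.
    loss_pos = torch.relu(pos_scores - upper).mean()
    loss_neg = torch.relu(lower - neg_scores).mean()
    return loss_pos + lam * loss_neg

# Example with TransE-style scores for a small batch.
pos = torch.tensor([0.8, 1.2, 0.5])
neg = torch.tensor([2.5, 3.1, 2.0])
print(adl_loss(pos, neg))
```

Detaching the ratio keeps the bound adaptation out of the gradient computation, so gradients flow only through the score terms themselves; whether the original ADL does this is an implementation choice not specified by the abstract.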
