Abstract

Knowledge graph (KG) embedding has been widely studied to obtain low-dimensional representations of entities and relations. It serves as the basis for downstream tasks such as KG completion and relation extraction. Traditional KG embedding techniques usually represent entities and relations as vectors or tensors, mapping them into different semantic spaces and ignoring their uncertainty. When entities and relations are not embedded in the same latent space, the affinities between them become ambiguous. In this paper, we incorporate a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space. To address the neglect of uncertainty in KG components, we propose a variational auto-encoder that represents KG components as Gaussian distributions. In addition, compared with previous methods, our method offers higher quality and better interpretability. Experimental results on several benchmark datasets demonstrate our model’s superiority over state-of-the-art baselines.
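The following is a minimal sketch of the co-embedding idea described above, assuming a diagonal Gaussian parameterization with the standard reparameterization trick and a simple translation-style decoder; the class name `GaussianCoEmbedding`, the PyTorch framing, and the scoring function are illustrative assumptions rather than the paper's actual implementation:

```python
import torch
import torch.nn as nn

class GaussianCoEmbedding(nn.Module):
    """Illustrative sketch: entities and relations co-embedded in one latent
    space, each represented as a diagonal Gaussian (mean and log-variance)."""

    def __init__(self, num_entities, num_relations, dim):
        super().__init__()
        self.ent_mu = nn.Embedding(num_entities, dim)
        self.ent_logvar = nn.Embedding(num_entities, dim)
        self.rel_mu = nn.Embedding(num_relations, dim)
        self.rel_logvar = nn.Embedding(num_relations, dim)

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, heads, relations, tails):
        h = self.reparameterize(self.ent_mu(heads), self.ent_logvar(heads))
        r = self.reparameterize(self.rel_mu(relations), self.rel_logvar(relations))
        t = self.reparameterize(self.ent_mu(tails), self.ent_logvar(tails))
        # Illustrative translation-style score; higher means more plausible.
        return -torch.norm(h + r - t, p=2, dim=-1)
```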

Highlights

  • Knowledge graph (KG) embeddings are low-dimensional representations of entities and relations

  • Qualitative analysis: before evaluating performance on specific tasks against other methods, we first discuss the ability of our model to represent uncertainty in KG embedding

  • We discuss the relations in FB15k-237 with ‘/education’ as the domain, reporting the determinant and trace of their covariances in Table 2, from which we observe that our method is able to measure the uncertainty in KG embedding (a minimal sketch of these measures follows this list)
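A hedged illustration of how such uncertainty measures can be computed, assuming diagonal Gaussian embeddings whose covariance is the diagonal matrix of per-dimension variances; the variance values and the NumPy-based computation below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical per-dimension variances of one relation's Gaussian embedding.
variances = np.array([0.12, 0.08, 0.25, 0.05])
cov = np.diag(variances)

# For a diagonal covariance, the trace is the sum of the variances and the
# determinant is their product; larger values indicate higher uncertainty.
trace = np.trace(cov)               # == variances.sum()
determinant = np.linalg.det(cov)    # == variances.prod()
print(f"trace={trace:.4f}, det={determinant:.6f}")
```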

Summary

Introduction

Knowledge graph (KG) embeddings are low-dimensional representations of entities and relations. Research on KG embedding proceeds mainly along three lines. The first line comprises translation-based studies; in the TransR model [8], an entity is a complex of multiple attributes, and different relations focus on different attributes of the entity. The second line comprises studies based on semantic matching. The third line comprises studies based on graph convolutional neural networks. Unlike previous works, which focused on shallow, fast models that can scale to large knowledge graphs, ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model KGs. Subsequently, the ConvKB [13] model explores the global relationships among same-dimensional entries of the entity and relation embeddings. R-GCN [14] is another convolutional network designed for KBs, generalized from GCN [15] for uni-relational data.
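As a concrete illustration of the translation-based line of research mentioned above, here is a hedged sketch of a TransR-style score, in which each relation has its own projection matrix that maps entities from the entity space into a relation-specific space; the function name, tensor shapes, and toy values are assumptions for illustration only:

```python
import torch

def transr_style_score(h, t, r, M_r):
    """TransR-style plausibility score for a triple (h, r, t).
    h, t: entity embeddings (d_e,); r: relation embedding (d_r,);
    M_r: relation-specific projection matrix (d_r, d_e)."""
    h_r = M_r @ h  # project head entity into the relation-specific space
    t_r = M_r @ t  # project tail entity into the relation-specific space
    # Smaller translation distance => more plausible triple.
    return -torch.norm(h_r + r - t_r, p=2)

# Toy usage with random tensors (dimensions chosen arbitrarily).
d_e, d_r = 8, 6
h, t, r = torch.randn(d_e), torch.randn(d_e), torch.randn(d_r)
M_r = torch.randn(d_r, d_e)
print(transr_style_score(h, t, r, M_r))
```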
