Abstract

Knowledge graphs are structured representations of real-world facts. However, they typically contain only a small subset of all possible facts. Link prediction is the task of inferring missing facts from existing ones. Knowledge graph embedding, which represents the entities and relations of a knowledge graph as low-dimensional vectors, has made significant progress in link prediction. Tensor decomposition models are an embedding family with strong performance on link prediction. However, previous tensor decomposition models do not consider the problem of attribute separation; they mainly explore particular regularizations to improve performance. No matter how sophisticated the design of a tensor decomposition model is, its performance is theoretically bounded by that of the basic tensor decomposition model. Moreover, the attribute separation task, left unaddressed in these traditional models, is simply handed over to training; the number of parameters involved in this task is tremendous, and the model is prone to overfitting. In this paper, we investigate designs that approach the theoretical performance of tensor decomposition models. It is well known that measuring the plausibility of a specific triple amounts to comparing the matching degree of the specific attributes associated with its relation. Therefore, comparing actual triples requires first separating the relevant attribute dimensions, which existing models ignore. Inspired by this observation, we design a novel tensor decomposition model based on Separating Attribute space for knowledge graph completion (SeAttE). The major novelty of this paper is that SeAttE is the first model in the tensor decomposition family to consider the attribute space separation task. Furthermore, SeAttE transforms the learning of an excessive number of parameters for attribute space separation into a structural design choice. This allows the model to focus on learning the semantic equivalence between relations, so its performance approaches the theoretical limit. We also prove that RESCAL, DistMult and ComplEx are special cases of SeAttE. Furthermore, we classify existing tensor decomposition models for subsequent researchers. Experiments on benchmark datasets show that SeAttE achieves state-of-the-art performance among tensor decomposition models.
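To make the idea of separating attribute subspaces concrete, the following is a minimal, illustrative sketch (not the authors' released code) of a bilinear tensor-decomposition scorer in which each relation matrix is constrained to a block-diagonal form, so that every block acts only on its own attribute subspace. The class name, parameters, and the block-diagonal mechanism itself are our assumptions for illustration; the abstract only states that SeAttE separates attribute spaces structurally and that RESCAL, DistMult and ComplEx are special cases.

```python
# Hypothetical sketch of a block-diagonal bilinear scorer, score(h, r, t) = h^T W_r t,
# where W_r is block-diagonal with blocks of size `block_size`.
# All names here are ours, not from the paper.
import torch
import torch.nn as nn


class BlockBilinearScorer(nn.Module):
    def __init__(self, n_entities, n_relations, dim, block_size):
        super().__init__()
        assert dim % block_size == 0, "dim must be divisible by block_size"
        self.k = block_size
        self.n_blocks = dim // block_size
        self.entity = nn.Embedding(n_entities, dim)
        # One (k x k) block per attribute subspace and per relation.
        self.rel_blocks = nn.Parameter(
            torch.randn(n_relations, self.n_blocks, block_size, block_size) * 0.1
        )

    def forward(self, heads, rels, tails):
        # Split entity vectors into attribute subspaces of size k.
        h = self.entity(heads).view(-1, self.n_blocks, self.k)  # (B, n_blocks, k)
        t = self.entity(tails).view(-1, self.n_blocks, self.k)  # (B, n_blocks, k)
        W = self.rel_blocks[rels]                                # (B, n_blocks, k, k)
        # Per-block bilinear form h_b^T W_b t_b, summed over all blocks.
        Wt = torch.einsum("bnij,bnj->bni", W, t)
        return (h * Wt).sum(dim=(1, 2))                          # (B,)


# Usage sketch: block_size equal to dim recovers a RESCAL-style full bilinear map,
# block_size = 1 a DistMult-style diagonal map, and a suitable 2x2 block structure
# corresponds to the ComplEx case, consistent with the special cases named above.
scorer = BlockBilinearScorer(n_entities=1000, n_relations=20, dim=200, block_size=2)
scores = scorer(torch.tensor([0, 5]), torch.tensor([3, 1]), torch.tensor([7, 42]))
```

Under this reading, the size and placement of the blocks are fixed by the model's structure rather than learned, which is one way the learning of attribute separation could be turned into a design choice as the abstract describes.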
