Abstract

Knowledge graphs have been constructed to represent real-world knowledge. However, downstream tasks usually suffer from the incompleteness of these graphs. To predict the missing links, various models have been proposed that embed entities and relations into low-dimensional spaces. Existing approaches usually ignore the fact that there are far fewer relations than entities and thus allocate redundant parameters to relations. In this paper, we present MatricEs, a novel approach for link prediction, and propose variants of it that reduce the dimensionality of the relation space. In particular, MatricEs uses matrix embeddings and models each relation as a linear transformation from the head entity matrix to the tail entity matrix. MatricEs is universal in that it subsumes many link prediction models. Recently, relation patterns have drawn much attention as a means of building models with better expressiveness and interpretability. We formally define the relation patterns that MatricEs satisfies, including symmetry, antisymmetry, inversion, commutative and non-commutative composition, absorption, and transitivity. Theoretical analysis shows that MatricEs is a valid, simple, and universal model. Experiments show that MatricEs is effective, outperforming most existing methods on link prediction datasets.
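
To make the modeling idea concrete, here is a minimal sketch, not the paper's implementation: assuming square d-by-d matrix embeddings and a Frobenius-norm, distance-based score, a triple (h, r, t) is plausible when multiplying the head matrix by the relation matrix approximately yields the tail matrix. The dimension d, the scoring function, and the function name `score` are illustrative assumptions.

```python
import numpy as np

def score(H, R, T):
    """Hypothetical MatricEs-style score: the relation matrix R acts as a
    linear map carrying the head matrix H toward the tail matrix T.
    Values closer to 0 mean a more plausible triple. The Frobenius-norm
    distance is an assumption, not necessarily the paper's exact score."""
    return -np.linalg.norm(H @ R - T)

rng = np.random.default_rng(0)
d = 4                                   # illustrative embedding size
H = rng.standard_normal((d, d))         # head entity matrix
R = rng.standard_normal((d, d))         # relation matrix
T_true = H @ R                          # tail consistent with (H, R)
T_rand = rng.standard_normal((d, d))    # unrelated tail

print(score(H, R, T_true))   # ~0.0: triple fits the linear-map model
print(score(H, R, T_rand))   # strongly negative: triple implausible
```

Under such a formulation, composing two relations corresponds to multiplying their matrices, and matrix multiplication is non-commutative in general; this is consistent with the abstract's claim that the model can express both commutative and non-commutative compositions.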
