Abstract

Knowledge graph embedding predicts missing links in knowledge graphs by learning the interactions between embedded entities and relations in a continuous low-dimensional space. Current research on convolution-based models tends to increase the number of interactions available for extracting latent knowledge. However, sufficient interactions are not necessarily reasonable ones. Our studies find that reasonable interactions can further strengthen knowledge extraction capability. Reasonable interactions require that the elements participating in them are disordered and present in an appropriate number. Because such interactions cannot be quantified precisely, we propose IntME, a concise and effective model, to address this challenge. Specifically, we use checked feature reshaping and disordered matrix multiplication to form two different types of feature maps, which ensures the disorder of the interacting elements, and we control the number of interacting elements before feature fusion through the shapes of the feature maps produced by channel scaling reshaping. In feature fusion, we employ large convolution filters and pointwise filters for deep and shallow linear fusion of the feature interactions, respectively, which accounts for both explicit and implicit knowledge extraction. Evaluations on four benchmark datasets show that IntME achieves strong performance among convolution-based models at a lower training cost, and demonstrate that our approaches based on reasonable interactions effectively improve knowledge discovery capability.
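The fusion step described in the abstract can be illustrated with a minimal sketch. The module name, channel counts, kernel size, and input shapes below are hypothetical choices for illustration, not the authors' implementation; the sketch only shows the general idea of combining a large convolution filter (deep fusion over a wide receptive field) with a pointwise 1x1 filter (shallow per-position linear fusion).

import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Hypothetical sketch of IntME-style feature fusion: a large
    convolution filter for deep fusion plus a pointwise (1x1) filter
    for shallow linear fusion. All hyperparameters are illustrative."""
    def __init__(self, in_channels=2, out_channels=32, large_kernel=9):
        super().__init__()
        # Large filter: wide receptive field -> deep fusion of interactions
        self.large_conv = nn.Conv2d(in_channels, out_channels,
                                    kernel_size=large_kernel,
                                    padding=large_kernel // 2)
        # Pointwise filter: per-position linear mix -> shallow fusion
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, feature_maps):
        # feature_maps: (batch, in_channels, H, W), e.g. a stack of the
        # two feature-map types formed from entity/relation interactions
        return self.large_conv(feature_maps) + self.pointwise(feature_maps)

# Example: two 16x16 feature maps per sample
x = torch.randn(4, 2, 16, 16)
fused = FusionSketch()(x)   # shape (4, 32, 16, 16)

Summing the two branches is one plausible way to let explicit (large-kernel) and implicit (pointwise) extraction contribute jointly; the paper's actual fusion may differ.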
