Knowledge graphs (KGs) have been widely applied in question answering, drug discovery, information retrieval, and other tasks. Recently, more and more multi-view knowledge graphs carrying various important attributes, such as temporal information, geolocation, commonsense knowledge, and multilingual and multi-modal data, are being constructed and applied. Knowledge graph representation learning (KGRL) is emerging as the most effective approach to tasks such as knowledge graph completion, entity alignment, knowledge reasoning, and knowledge querying. However, most previous work on KGRL is designed for traditional, single-view KGs, and existing KGRL methods for multi-view KGs face three challenges: the lack of a unified framework, insufficient interaction information, and underutilization of important attributes. To bridge this gap, in this paper we propose MvTuckER, a representation learning method for multi-view KGs (e.g., temporal KGs) based on the tensor Tucker model. MvTuckER models a multi-view KG as an nth-order binary tensor of tuples and adopts the tensor n-mode product to capture the complete interactions between and within different views. We also introduce low-rank and sparse approximations of the core tensor to balance the expressivity and complexity of MvTuckER. Moreover, we theoretically show that our model is fully expressive for modeling multi-view KGs, discuss the working mechanism of MvTuckER from the perspective of logical operations, and explain how to transfer and extend our method to other types of views. We conducted extensive experiments on three multi-view knowledge graphs and obtained improvements of 4.7%, 2.7%, and 5.3% in Hit@1, respectively, demonstrating that the proposed MvTuckER achieves state-of-the-art performance.
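To make the core modeling idea concrete, the minimal sketch below (our illustration, not the authors' code) shows how a Tucker-style score for a temporal quadruple can be computed by contracting an nth-order core tensor with one embedding per view via tensor n-mode products, the operation the abstract builds on. The embedding dimensions, the four-view (subject, relation, object, timestamp) layout, and the random parameters are all illustrative assumptions.

```python
import numpy as np

# Hypothetical embedding dimensions for entities, relations, and timestamps.
d_e, d_r, d_t = 20, 10, 5
rng = np.random.default_rng(0)

# 4th-order core tensor W: one mode per view of a temporal quadruple
# (subject, relation, object, timestamp). Low-rank/sparse variants of W
# would trade expressivity for complexity, as the abstract describes.
W = rng.standard_normal((d_e, d_r, d_e, d_t))

def mode_n_product(T, M, n):
    """n-mode product T x_n M: contracts mode n of tensor T with the
    rows of matrix M, where M has shape (k, T.shape[n])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def score(s, r, o, t):
    """Tucker-style score of one quadruple: contract the core tensor
    with the embedding of each view along that view's own mode."""
    x = mode_n_product(W, s[None, :], 0)   # subject mode
    x = mode_n_product(x, r[None, :], 1)   # relation mode
    x = mode_n_product(x, o[None, :], 2)   # object mode
    x = mode_n_product(x, t[None, :], 3)   # time mode
    return float(x.squeeze())

# Example: random embeddings standing in for one learned temporal fact.
s, o = rng.standard_normal(d_e), rng.standard_normal(d_e)
r, t = rng.standard_normal(d_r), rng.standard_normal(d_t)
print(score(s, r, o, t))
```

Because every view is contracted against the same core tensor, each mode interacts with all others, which is the intuition behind the "complete interaction between and within different views" claimed above.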