Abstract

Knowledge graph (KG) embedding aims to project a KG into a low-dimensional vector space, so as to facilitate KG completion and the application of KGs in other AI fields. Most existing models preserve certain proximity properties of the KG in the embedding space, such as first/second-order proximity and sequence-aware higher-order proximity. However, the ubiquitous similarity relationships among different sequences have rarely been discussed. In this paper, we propose a unified framework that preserves subgraph-aware proximity in the embedding space, based on the observation that the sequences within a subgraph generally share similar patterns. To analyze how the composition of sequences affects subgraph-aware proximity, we classify subgraphs into relation subgraphs and complete subgraphs according to the composition of their sequences. Accordingly, we provide three methods for the KG sequence embedding module: (1) simply adding the embeddings of the relations involved in a sequence of a relation subgraph; (2) a recurrent neural network (RNN) for the sequences of a complete subgraph; (3) a dilated RNN that matches the special structure of KG sequences in a complete subgraph. Empirically, we evaluate the proposed framework on the KG completion tasks of link prediction and entity classification. The results show that our framework outperforms the baselines by preserving subgraph-aware proximity; in particular, exploiting the special structure of KG sequences further improves performance.
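The three sequence-embedding options can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the embedding dimension, the toy relation/entity vectors, the weight matrices, and the choice of dilation 2 (motivated by the entity-relation-entity period of KG paths) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative choice)

# Hypothetical toy embeddings for the relations and entities of a sequence.
relations = [rng.standard_normal(d) for _ in range(4)]
entities = [rng.standard_normal(d) for _ in range(5)]

# (1) Relation subgraph: embed a sequence by simply summing its relation embeddings.
def embed_relation_sequence(rels):
    return np.sum(rels, axis=0)

# (2) Complete subgraph: a plain RNN over the interleaved entity/relation sequence.
def rnn_embed(seq, W_h, W_x):
    h = np.zeros(d)
    for x in seq:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

# (3) Dilated RNN: the recurrent connection skips back `dilation` steps, so a
#     state can attend to the element of the same type (entity or relation)
#     earlier in the sequence (dilation=2 assumed here for illustration).
def dilated_rnn_embed(seq, W_h, W_x, dilation=2):
    hs = []
    for t, x in enumerate(seq):
        h_prev = hs[t - dilation] if t >= dilation else np.zeros(d)
        hs.append(np.tanh(W_h @ h_prev + W_x @ x))
    return hs[-1]

# Small random weights shared by the two recurrent variants.
W_h = rng.standard_normal((d, d)) * 0.1
W_x = rng.standard_normal((d, d)) * 0.1

# A complete-subgraph sequence alternates entities and relations.
seq = [entities[0], relations[0], entities[1], relations[1], entities[2]]
```

Each method maps a variable-length sequence to a single d-dimensional vector, which is what the subgraph-aware proximity objective would then compare across sequences of the same subgraph.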

