Derived from knowledge bases, knowledge graphs represent knowledge in graph form, using nodes and edges to denote entities and relations. A knowledge graph can be described as a set of textual triples, each consisting of a head entity, a tail entity, and the relation between them. To represent the elements of knowledge graphs, knowledge graph embedding techniques map entities and relations into continuous vector spaces as numeric vectors for computational efficiency. Convolution-based knowledge graph embedding models show promising performance in knowledge graph representation learning. However, the input to these neural network-based models is frequently hand-crafted, which may reduce the efficiency of their feature extraction. In this paper, a convolutional autoencoder is proposed for knowledge graph representation learning; it takes entity pairs as input and aims to obtain the corresponding hidden relation representation. In addition, a bi-directional relation encoding network is used to represent the semantics of entities under different directional relation patterns, serving as an encoder whose output initializes the convolutional autoencoder. Experiments are conducted as a link prediction task on the standard datasets WN18RR, Kinship, NELL-995 and FB15k-237. Moreover, input embedding matrices composed of different ingredients are designed to evaluate the performance of the convolutional autoencoder. The results demonstrate that our model is effective in learning representations from entity feature interactions.
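The entity-pair input described above can be pictured as stacking the head and tail embeddings into a 2 x d matrix and convolving across it to capture feature interactions. The following is a minimal sketch of that idea only; the filter size, dimensions, and function names are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch (not the paper's exact model): stack a head/tail
# entity pair into a 2 x d matrix and apply one 2 x 1 convolution
# filter, yielding one interaction feature per embedding dimension.

def conv_entity_pair(head, tail, filt):
    """Slide a hypothetical 2x1 filter over the stacked [head; tail]
    matrix, combining the two rows column by column."""
    assert len(head) == len(tail)
    w_h, w_t = filt  # filter weights applied to the head and tail rows
    return [w_h * h + w_t * t for h, t in zip(head, tail)]

# Toy 4-dimensional embeddings for a (head, tail) entity pair.
head = [1.0, 0.0, 2.0, -1.0]
tail = [0.5, 1.0, -1.0, 0.0]

features = conv_entity_pair(head, tail, filt=(1.0, 1.0))
print(features)  # [1.5, 1.0, 1.0, -1.0]
```

In the full model, many such filters would be learned, and the resulting feature maps would pass through the autoencoder's encoding and decoding stages to recover a hidden relation representation.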