Abstract

Knowledge graph embedding models learn low‐dimensional representations of the entities and relations in a knowledge graph. In this paper, we propose Multi‐RAttE, an attention‐based learning method for multi‐relational knowledge graph embedding. Multi‐RAttE divides information transfer in the knowledge graph into cross‐relational transfer and relation‐specific transfer, and divides each entity's embedding into a structural embedding and a multi‐relational embedding that are learned jointly. To objectively assess the performance of Multi‐RAttE, we evaluate it on two representative datasets against a range of representative baseline models across several tasks, including link prediction, multi‐relation prediction and node classification. The experimental results show that Multi‐RAttE improves on the state‐of‐the‐art model Composition‐based Multi‐Relational Graph Convolutional Networks (CompGCN) by 8% in Hits@1 on the link prediction task on the FB15k‐237 dataset; on the multi‐relation prediction task, accuracy improves by 1.8% in AUC and 3.7% in F1. These results demonstrate that Multi‐RAttE can effectively represent multiple relations.
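To illustrate the core idea of fusing a structural embedding with per-relation embeddings via attention, here is a minimal NumPy sketch. It is not the paper's implementation: the function names, the single attention vector, and the channel-stacking scheme are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_embeddings(struct_emb, rel_embs, attn_vec):
    """Attention-weighted fusion of one entity's structural embedding
    with its per-relation embeddings (illustrative sketch only).

    struct_emb: (d,)   structural embedding
    rel_embs:   (R, d) one embedding per relation type
    attn_vec:   (d,)   hypothetical learned attention vector
    """
    channels = np.vstack([struct_emb, rel_embs])  # (1+R, d)
    scores = channels @ attn_vec                  # (1+R,) one score per channel
    weights = softmax(scores)                     # attention weights, sum to 1
    return weights @ channels                     # (d,) fused embedding

rng = np.random.default_rng(0)
d, R = 8, 3
fused = fuse_embeddings(rng.normal(size=d),
                        rng.normal(size=(R, d)),
                        rng.normal(size=d))
print(fused.shape)  # fused vector has the same dimension d as the inputs
```

In the actual model the attention parameters would be trained jointly with the embeddings; this sketch only shows the fusion step in isolation.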
