Abstract

Traditional knowledge graph (KG) representation learning focuses on the link information between entities, and the effectiveness of learning is influenced by the complexity of KGs. In a multi-modal knowledge graph (MKG), the introduction of considerable additional modal information (such as images and text) further increases the complexity of the KG, which degrades the effectiveness of representation learning. To solve this problem, this study proposes the multi-modal knowledge graph representation learning via multi-head self-attention (MKGRL-MS) model, which improves the effectiveness of link prediction by adding rich multi-modal information to each entity. We first generate a single-modal feature vector for each modality of an entity. Then, we use multi-head self-attention to obtain the attention weight of each modal feature during semantic synthesis, and in this manner learn the multi-modal feature representation of the entity. The new knowledge representation is the sum of the traditional knowledge representation and the entity's multi-modal feature representation. Finally, we train our model on top of two existing models and on two different datasets, and verify its versatility and effectiveness on the link prediction task.
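The fusion step described above can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the class and tensor names (`MultiModalFusion`, `modal_feats`, `struct_emb`), the mean-pooling of attended modal features, and the head count are all hypothetical choices; only the overall shape of the method (multi-head self-attention over per-modality feature vectors, then summing the result with the structural embedding) follows the abstract.

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Sketch of fusing per-modality entity features via multi-head self-attention."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Multi-head self-attention lets each modality attend to the others,
        # producing the attention weights used during semantic synthesis.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, modal_feats: torch.Tensor, struct_emb: torch.Tensor) -> torch.Tensor:
        # modal_feats: (batch, num_modalities, dim) -- one feature vector per
        # modality (e.g., image, text) for each entity.
        # struct_emb:  (batch, dim) -- the traditional (structural) embedding.
        attended, _ = self.attn(modal_feats, modal_feats, modal_feats)
        # Pool the attended modal features into one multi-modal representation
        # (mean pooling is an assumption; the paper may aggregate differently).
        multi_modal = attended.mean(dim=1)
        # New knowledge representation = traditional representation
        # + multi-modal feature representation.
        return struct_emb + multi_modal

# Example: 2 entities, 2 modalities (image + text), 128-dim features.
fusion = MultiModalFusion(dim=128)
modal_feats = torch.randn(2, 2, 128)
struct_emb = torch.randn(2, 128)
new_repr = fusion(modal_feats, struct_emb)
print(new_repr.shape)  # torch.Size([2, 128])
```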
