Abstract

Knowledge graph embedding aims to learn representation vectors for entities and relations. Most existing approaches learn representations from the structural information in triples, neglecting the content related to the entities and relations. Although some approaches exploit related multimodal content, such as the text descriptions and images associated with entities, to improve knowledge graph embedding, they do not effectively address the heterogeneity of the different content types and the cross-modal correlation constraint between them and the network structure. In this paper, we propose a multi-modal content fusion model (MMCF) for knowledge graph embedding. To effectively fuse heterogeneous data for knowledge graph embedding, such as text descriptions, related images, and structural information, we propose a cross-modal correlation learning component. It first learns intra-modal and inter-modal correlations to fuse the multimodal content of each entity; the result is then fused with the structure features by a gating network. Meanwhile, to enhance the relation features, the features of the associated head and tail entities are fused to learn the relation embedding. To evaluate the proposed model, we compare it with baselines on three datasets, i.e., FB-IMG, WN18RR, and FB15k-237. Link prediction results demonstrate that our model significantly outperforms the state of the art on most metrics, implying the superiority of the proposed method.
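The gating-network fusion mentioned above can be illustrated with a minimal sketch. The variable names (`m` for the fused multimodal content feature, `s` for the structure feature) and the specific gate formulation are assumptions for illustration, not the paper's exact architecture; a per-dimension sigmoid gate interpolating between the two feature vectors is one common realization of such a fusion step.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Hypothetical inputs: m is the multimodal content feature of an entity
# (text + image, already combined by cross-modal correlation learning);
# s is the structure feature learned from the triples.
m = rng.standard_normal(d)
s = rng.standard_normal(d)

# Gating network: a learned linear map over the concatenated features,
# followed by a sigmoid, yields a per-dimension gate in (0, 1).
W = rng.standard_normal((d, 2 * d))
b = np.zeros(d)
g = sigmoid(W @ np.concatenate([m, s]) + b)

# Final entity embedding interpolates between content and structure
# features, dimension by dimension.
e = g * m + (1.0 - g) * s
```

In practice `W` and `b` would be trained jointly with the rest of the model so the gate learns how much each dimension of the entity embedding should rely on content versus structure.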
