Abstract

Large-scale knowledge graphs are usually incomplete. Knowledge graph embedding has achieved encouraging performance in alleviating this incompleteness. Several approaches leverage multi-modal content, such as text descriptions and images, to improve the performance of knowledge graph embedding. However, due to the heterogeneity across modalities, existing methods struggle to effectively fuse multi-modal content with network structure information when learning embeddings. In this work, a dual-track model, DuMF, is proposed for enhancing knowledge graph embedding. The model comprises two tracks that fuse multi-modal content and network structure, respectively. In each track, a bilinear method improves the expressiveness of the joint features, while a deliberate attention mechanism learns task-specific important features. Finally, a gating network generates the fused features. To evaluate the model extensively, two challenging datasets are enriched with additional multi-modal data. Experimental results show that DuMF outperforms the baselines on link prediction. The flexibility of the model is promising for further improvements in performance.
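To make the dual-track design concrete, the following is a minimal, hypothetical sketch of the fusion scheme the abstract describes: each track combines two inputs with a bilinear interaction and a feature-wise attention, and a gating network then blends the two track outputs. The module names, dimensions, and exact operations are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of dual-track fusion with bilinear interaction,
# attention, and a gating network (assumed details, not the DuMF code).
import torch
import torch.nn as nn


class TrackFusion(nn.Module):
    """One track: bilinear interaction of two inputs, then a simple
    feature-wise attention that re-weights the joint representation."""

    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.bilinear = nn.Bilinear(dim_a, dim_b, dim_out)  # joint (bilinear) features
        self.attention = nn.Sequential(                      # task-specific feature weights
            nn.Linear(dim_out, dim_out),
            nn.Sigmoid(),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        joint = torch.tanh(self.bilinear(a, b))
        return joint * self.attention(joint)                 # attended joint features


class DualTrackGatedFusion(nn.Module):
    """Two tracks (e.g. multi-modal content and network structure) fused by a
    gating network that decides, per feature, how much each track contributes."""

    def __init__(self, dim: int):
        super().__init__()
        self.content_track = TrackFusion(dim, dim, dim)      # e.g. text + image features
        self.structure_track = TrackFusion(dim, dim, dim)    # e.g. entity + relation structure
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text, image, entity, relation):
        c = self.content_track(text, image)
        s = self.structure_track(entity, relation)
        g = self.gate(torch.cat([c, s], dim=-1))             # gating weights in [0, 1]
        return g * c + (1.0 - g) * s                         # fused embedding


if __name__ == "__main__":
    fusion = DualTrackGatedFusion(dim=64)
    x = [torch.randn(8, 64) for _ in range(4)]               # batch of 8 toy feature vectors
    print(fusion(*x).shape)                                  # torch.Size([8, 64])
```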
