Abstract

Entity Alignment (EA), a crucial task in knowledge fusion, aims to link entities that refer to the same real-world identity across different Knowledge Graphs (KGs). Existing methods achieve satisfactory performance, but they mainly focus on single-modal KGs and are difficult to apply effectively to multi-modal scenarios. In this paper, we propose a Multi-modal Joint entity Alignment Framework (MultiJAF), which effectively exploits knowledge from multiple modalities. Concretely, we first learn embeddings for the different modalities, i.e., the structure, attribute, and image modalities. Next, we adopt an attention-based multi-modal fusion network to integrate these embeddings and use the resulting joint embeddings to compute a joint embedding-based similarity matrix S_J. Moreover, we design a Numerical Process Module (NPM) that infers a similarity matrix S_N from the numerical information of entities. Finally, we use a simple late-fusion method to ensemble the two similarity matrices for the final alignment. In addition, to reduce the cost of labeling data, we propose a novel NPM-based unsupervised multi-modal EA method. Experimental results on two real-world datasets demonstrate the effectiveness of the proposed MultiJAF.
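As a rough illustration of the pipeline the abstract describes, the sketch below fuses three modality embeddings with softmax attention weights, computes a joint cosine-similarity matrix, and late-fuses it with a numerical-feature similarity matrix. All tensor shapes, the scalar-weight attention scheme, the stand-in S_N, and the hyper-parameter alpha are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n, d = 5, 16  # 5 toy entities, 16-dim embedding per modality

# Hypothetical per-modality embeddings: structure, attribute, image.
modalities = [torch.randn(n, d) for _ in range(3)]

# Attention-based fusion: one learnable logit per modality, combined
# with softmax weights (a minimal stand-in for the fusion network).
logits = torch.nn.Parameter(torch.zeros(3))
weights = F.softmax(logits, dim=0)
joint = sum(w * F.normalize(e, dim=1) for w, e in zip(weights, modalities))

# Joint embedding-based similarity matrix S_J; here the same toy set
# plays both the source and target KG roles.
joint = F.normalize(joint, dim=1)
S_J = joint @ joint.t()

# Stand-in for the NPM output: a numerical-information similarity S_N.
S_N = torch.rand(n, n)

# Late fusion: weighted sum of the two similarity matrices; alpha is a
# hypothetical hyper-parameter, not a value taken from the paper.
alpha = 0.7
S = alpha * S_J + (1 - alpha) * S_N

# Predicted alignment: each source entity maps to its most similar target.
print(S.argmax(dim=1))
```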
