Abstract

Fusing data from different sources to improve decision making in smart cities has received increasing attention. Data collected through sensors usually exist in multi-modal forms, such as numeric values, images, and texts, so designing models that handle multi-modal data plays an important role in this field. Meanwhile, security and privacy issues cannot be ignored, as leaked big data may provide opportunities for criminals. To address these challenges, we study multi-modal entity alignment for knowledge graphs and propose the Multi-Modal Interaction Entity Alignment model (MMIEA). The model fuses data from different modalities while maintaining privacy: it is privacy-preserving because it never transmits the raw data of each modality, only the vector representations. Specifically, we introduce and improve the BERT-INT model for the entity alignment task on multi-modal knowledge graphs. Experimental results on two commonly used multi-modal datasets show that our method outperforms 17 algorithms, including nine multi-modal entity alignment methods.
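To make the privacy claim concrete, the sketch below illustrates the general idea of aligning entities by exchanging only embeddings: each party encodes its modalities locally, and only fixed-size vectors cross the boundary. This is a minimal illustration, not the authors' MMIEA or BERT-INT implementation; the hash-seeded stub encoders, the embedding dimension, and the averaging fusion are all assumptions made for a self-contained example.

```python
import hashlib
import numpy as np

DIM = 32  # embedding size; illustrative, not the paper's setting


def _seed(s: str) -> int:
    # Deterministic seed so identical content yields identical stub vectors.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:4], "little")


def encode(modality: str, content: str) -> np.ndarray:
    """Stand-in for a per-modality encoder (e.g. a BERT-style model for text).
    It runs locally, so the raw content never leaves its owner."""
    rng = np.random.default_rng(_seed(modality + ":" + content))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)


def entity_embedding(attrs: dict) -> np.ndarray:
    """Fuse the modality embeddings of one entity by averaging
    (one simple fusion scheme; the real model interacts modalities)."""
    vecs = [encode(m, c) for m, c in attrs.items()]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)


# Each knowledge-graph owner computes entity embeddings locally ...
kg_a = {"Berlin": {"text": "Capital of Germany", "value": "3.7e6"}}
kg_b = {"Berlin_(city)": {"text": "Capital of Germany", "value": "3.7e6"}}
emb_a = {e: entity_embedding(a) for e, a in kg_a.items()}
emb_b = {e: entity_embedding(a) for e, a in kg_b.items()}

# ... and only the vectors are exchanged; alignment is scored by
# cosine similarity (dot product of unit vectors) on embeddings alone.
for ea, va in emb_a.items():
    best = max(emb_b, key=lambda eb: float(va @ emb_b[eb]))
    print(ea, "->", best, f"(score={float(va @ emb_b[best]):.3f})")
```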
