Abstract

The objective of the cross-lingual entity alignment (EA) task is to identify equivalent entities in knowledge graphs (KGs) across different languages. Most existing works rely on relation triples to learn entity representations. In addition, some methods introduce attribute information to improve the performance of EA, but they model relation triples and attribute triples separately in different networks, which prevents the alignment information in different triples from spreading across the networks and limits their mutual interaction and enhancement. In this paper, we propose integrating entities and attributes into a unified entity-attribute graph, where the neighbors of an entity include not only other entities but also its attributes. To model the entity-attribute graph, we further design a novel Bi-Neighborhood Graph Neural Network (BNGNN). Specifically, BNGNN applies attribute-aware self-attention to neighborhood aggregation and generates two types of neighborhood features in each layer, i.e., entity neighborhood features and attribute neighborhood features. Then, the two features are combined through a highway gate to generate more representative entity embeddings. Finally, the outputs of multiple layers are integrated to obtain global-aware entity representations with high-level structural information. Furthermore, a two-stage training strategy is adopted to jointly perform entity alignment and attribute alignment in a semi-supervised framework. Extensive experiments conducted on two real-world datasets (regular and sparse) demonstrate that our BNGNN model consistently outperforms existing EA methods.
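The core combination step described above, where entity neighborhood features and attribute neighborhood features are fused through a highway gate, can be sketched as follows. This is a minimal illustration under assumed shapes and module names (`BiNeighborhoodLayer`, `h_ent`, `h_attr` are hypothetical), not the authors' implementation; the attribute-aware self-attention that produces the two feature sets is omitted.

```python
import torch
import torch.nn as nn

class BiNeighborhoodLayer(nn.Module):
    """Illustrative sketch of one BNGNN layer's fusion step: a highway
    gate that mixes entity-neighborhood and attribute-neighborhood
    features into a single entity embedding. Names and shapes are
    assumptions for demonstration only."""

    def __init__(self, dim):
        super().__init__()
        # The gate learns, per dimension, how much of the entity
        # neighborhood feature to carry through versus the attribute
        # neighborhood feature.
        self.gate = nn.Linear(dim, dim)

    def forward(self, h_ent, h_attr):
        # h_ent:  entity neighborhood features,    shape (num_entities, dim)
        # h_attr: attribute neighborhood features, shape (num_entities, dim)
        t = torch.sigmoid(self.gate(h_ent))     # transform gate in (0, 1)
        return t * h_ent + (1.0 - t) * h_attr   # gated convex combination

# Usage with random stand-in features for 4 entities of dimension 8:
layer = BiNeighborhoodLayer(dim=8)
h = layer(torch.randn(4, 8), torch.randn(4, 8))
```

In a full model, the outputs of several such layers would then be concatenated or otherwise integrated to obtain the global-aware entity representations the abstract mentions.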
