Abstract

Entity alignment discovers and links entities from different knowledge graphs (KGs) that refer to the same real-world entity, making it a critical technique for KG fusion. Most entity alignment methods are based on knowledge representation learning, which uses a mapping function to project entities from different KGs into a unified vector space and aligns them according to their similarities. However, this process requires a sufficient number of pre-aligned entity pairs. To address this problem, this study proposes an entity alignment method based on joint learning of entity and attribute representations. Structural embeddings are learned by modeling relation triples with TransE and PTransE, exploiting the semantic information of both direct relations and multi-step relation paths. Simultaneously, attribute character embeddings are learned by encoding each attribute value as a character sequence with an N-gram-based compositional function and modeling the attribute triples with TransE. Learning the structural and attribute character embeddings jointly transfers the structural embeddings of entities from different KGs into a unified vector space. Finally, entity alignment is performed by computing similarities between the structural embeddings of entities from the two KGs. Experimental results show that the proposed method performs well on the DBP15K and DWK100K datasets, outperforming existing entity alignment methods by 16.8%, 27.5%, and 24.0% in precision, recall, and F1-measure, respectively.
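To make the two scoring components concrete, the following is a minimal sketch of a TransE-style translation score for relation triples, an N-gram-based composition of attribute values, and a cosine similarity used for alignment. All dimensions, the hashing-based n-gram lookup, and the variable names are illustrative assumptions, not the paper's actual implementation or training procedure.

```python
import numpy as np

# Hypothetical sizes; the paper's actual hyperparameters are not given here.
DIM = 50            # embedding dimension
N_ENTITIES = 1000   # entities across both KGs
N_RELATIONS = 100   # relation and attribute predicates
N_NGRAMS = 500      # size of the character n-gram vocabulary (hashed here)

rng = np.random.default_rng(0)
entity_emb = rng.normal(scale=0.1, size=(N_ENTITIES, DIM))    # structural embeddings
relation_emb = rng.normal(scale=0.1, size=(N_RELATIONS, DIM)) # relation/attribute embeddings
ngram_emb = rng.normal(scale=0.1, size=(N_NGRAMS, DIM))       # character n-gram embeddings


def transe_score(h, r, t):
    """TransE plausibility score: a lower ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(h + r - t, ord=1))


def ngram_compose(value, n=2):
    """Compose an attribute-value embedding by averaging its character n-gram embeddings."""
    ngrams = [value[i:i + n] for i in range(len(value) - n + 1)] or [value]
    idx = [hash(g) % N_NGRAMS for g in ngrams]  # hashing trick; a learned n-gram vocabulary is assumed in practice
    return ngram_emb[idx].mean(axis=0)


def cosine(u, v):
    """Cosine similarity used to align entities across KGs."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


# Relation triple (h, r, t): both head and tail use structural entity embeddings.
structural_loss = transe_score(entity_emb[0], relation_emb[3], entity_emb[7])

# Attribute triple (h, a, v): the tail is the composed character embedding of the value,
# so entities with similar attribute literals are pulled toward nearby regions.
attribute_loss = transe_score(entity_emb[0], relation_emb[5], ngram_compose("Jakarta"))

# Alignment step: compare structural embeddings of candidate entities from two KGs.
similarity = cosine(entity_emb[0], entity_emb[500])
print(structural_loss, attribute_loss, similarity)
```

In this sketch the attribute term acts only as a shared signal that nudges the structural embeddings of both KGs toward a common space; the actual joint objective and training loop of the paper are not reproduced here.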
