Abstract

Representation learning for knowledge graphs encodes both entities and relations into a continuous low-dimensional space. Most existing methods learn representations only from structured fact triples that indicate relations between entities, ignoring rich additional entity information such as entity attributes and associated multimodal content descriptions. In this paper, we propose a new model that learns knowledge representations with entity attributes and multimedia descriptions (KR-AMD). Specifically, we construct three triple encoders to obtain a structure-based entity representation, an attribute-based entity representation, and a multimedia-content-based entity representation, and we combine them to generate the final knowledge representations in KR-AMD. The experimental results show that, by explicitly modeling entity attributes and text-image descriptions, KR-AMD significantly outperforms state-of-the-art KR models in predicting entities, attributes, and relations, which validates the effectiveness of KR-AMD.
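The multi-view idea above can be illustrated with a minimal sketch. This is an assumption for illustration, not the paper's exact formulation: a TransE-style triple energy ||h + r - t|| is computed separately for each of three hypothetical entity views (structure-, attribute-, and multimedia-based embeddings) and summed, so a triple is judged plausible only if it fits under all views. All names and dimensions below are invented for the example.

```python
import numpy as np

def transe_energy(h, r, t):
    """L1 energy of a triple under the translation assumption h + r ≈ t.

    Lower energy means the triple (head, relation, tail) is more plausible.
    """
    return float(np.linalg.norm(h + r - t, ord=1))

rng = np.random.default_rng(0)
dim = 8  # embedding dimension (arbitrary for this sketch)

# Hypothetical per-view embeddings for one head/tail entity pair;
# the relation vector is shared across the three views.
views = ["structure", "attribute", "multimedia"]
head = {v: rng.standard_normal(dim) for v in views}
tail = {v: rng.standard_normal(dim) for v in views}
rel = rng.standard_normal(dim)

# Joint energy: sum of the per-view energies. In training, this joint
# energy would be minimized for observed triples and kept high for
# corrupted (negative) triples via a margin-based ranking loss.
total_energy = sum(transe_energy(head[v], rel, tail[v]) for v in views)
print(f"joint triple energy: {total_energy:.3f}")
```

A real model would learn the three view-specific encoders jointly (e.g., a text/image encoder producing the multimedia view) rather than drawing embeddings at random, but the scoring structure is the same.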
