Abstract
Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. There exist two kinds of entity representation methods for knowledge graphs (KGs): structure-based representation and description-based representation. Most methods represent entities using only the fact triples of KGs through translation-based embedding models, and so cannot integrate the rich information in entity descriptions with the structural information of triples. In this paper, we propose a novel RL method named Representation Learning with Complete semantic Description of Knowledge Graphs (RLCD), which exploits all of the semantic information in both the entity descriptions and the fact triples of KGs to enrich the semantic representations of KGs. More specifically, we use a Doc2Vec encoder to encode the full semantic content of entity descriptions without losing contextual relevance, and then learn knowledge representations jointly from triples and entity descriptions. Experimental results show that RLCD outperforms the state-of-the-art method DKRL in terms of mean rank and HITS. Moreover, RLCD is much faster than DKRL.
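The abstract does not give RLCD's exact objective, but the general idea it describes, scoring a triple with a translation model over both structure-based and description-based entity vectors, can be sketched as follows. This is a minimal illustration under assumed conventions (TransE-style L2 energy, Doc2Vec outputs already projected to the embedding dimension); the variable names and the joint energy over head/tail combinations follow the DKRL family of models, not a confirmed RLCD implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # assumed embedding dimension

# Structure-based embeddings, one vector per entity/relation,
# as in translation models such as TransE.
h_s = rng.normal(size=dim)  # head entity (structure-based)
t_s = rng.normal(size=dim)  # tail entity (structure-based)
r = rng.normal(size=dim)    # relation

# Description-based embeddings: in RLCD these would come from a
# Doc2Vec encoder over each entity's textual description (here
# stand-in random vectors, assumed already projected to `dim`).
h_d = rng.normal(size=dim)
t_d = rng.normal(size=dim)

def translate_energy(h, r, t):
    """TransE-style L2 energy ||h + r - t||; lower = more plausible."""
    return np.linalg.norm(h + r - t)

# Joint energy over all head/tail combinations, so that the
# structure-based and description-based representations are
# trained into the same vector space.
energy = (translate_energy(h_s, r, t_s)
          + translate_energy(h_s, r, t_d)
          + translate_energy(h_d, r, t_s)
          + translate_energy(h_d, r, t_d))
print(float(energy))
```

Minimizing this joint energy over true triples (and maximizing it over corrupted ones, via a margin-based ranking loss) is what ties the textual and structural views of an entity together.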