Abstract

Conventional knowledge graph representation learning methods learn representations of entities and relations by projecting the triples of a knowledge graph into a continuous vector space. These vector representations improve the accuracy of link prediction and the efficiency of downstream tasks. However, such methods cannot handle previously unseen entities that appear as the knowledge graph evolves; that is, a model trained on a source knowledge graph cannot be applied to a target knowledge graph containing new, unseen entities. Recently, a few subgraph-based link prediction models have achieved inductive ability, but they all neglect semantic information. In this work, we propose TGraiL, an inductive representation learning model that considers not only topological structure but also semantic information. First, distances within the extracted subgraph are used to encode each node's topological structure. Second, a projection matrix is used to encode entity type information. Finally, the two kinds of information are fused during training to obtain the final vector representations of entities. Experimental results show that the model significantly outperforms existing baseline models, demonstrating the effectiveness and superiority of the method.
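
The abstract describes combining a distance-based structural encoding with a type-projection-based semantic encoding and then fusing the two. The sketch below illustrates that general idea in PyTorch; it is not the authors' implementation, and all names (TGraiLNodeEncoder, dist_emb, type_proj, fuse) and dimensions are illustrative assumptions.

```python
# Minimal sketch of the structure/semantics fusion idea, assuming a PyTorch setting.
import torch
import torch.nn as nn

class TGraiLNodeEncoder(nn.Module):
    """Fuses a distance-based structural encoding with type-projected node features."""
    def __init__(self, max_dist: int, num_types: int, dim: int):
        super().__init__()
        # Structural part: embed each node's distance to the target head/tail
        # entities within the extracted subgraph.
        self.dist_emb = nn.Embedding(max_dist + 1, dim)
        # Semantic part: one projection matrix per entity type (hypothetical form).
        self.type_proj = nn.Parameter(torch.randn(num_types, dim, dim) * 0.01)
        # Fusion of the two encodings into the final node representation.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, dist_to_head, dist_to_tail, type_ids, feats):
        # dist_to_head, dist_to_tail: (N,) integer distances inside the subgraph
        # type_ids: (N,) entity-type indices; feats: (N, dim) initial node features
        structural = self.dist_emb(dist_to_head) + self.dist_emb(dist_to_tail)
        semantic = torch.einsum('nij,nj->ni', self.type_proj[type_ids], feats)
        return self.fuse(torch.cat([structural, semantic], dim=-1))

# Toy usage with random inputs
enc = TGraiLNodeEncoder(max_dist=5, num_types=4, dim=16)
n = 8
out = enc(torch.randint(0, 6, (n,)), torch.randint(0, 6, (n,)),
          torch.randint(0, 4, (n,)), torch.randn(n, 16))
print(out.shape)  # torch.Size([8, 16])
```

Because the encoder only relies on within-subgraph distances and entity types rather than learned per-entity embeddings, it can, in principle, produce representations for entities never seen during training, which is the inductive setting the abstract targets.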
