Abstract

Knowledge graphs (KGs) express relationships between entity pairs, and many real-life problems can be formulated as knowledge graph reasoning (KGR). Conventional approaches to KGR achieve promising performance but still have two drawbacks. On the one hand, most KGR methods focus on only one phase of the KG lifecycle, such as KG completion or refinement, and ignore reasoning in other stages, such as KG extraction. On the other hand, traditional KGR methods, broadly categorized as symbolic and neural, are unable to balance scalability and interpretability. To resolve these two problems, we take a more comprehensive view of KGR that covers the whole KG lifecycle, including KG extraction, completion, and refinement, which correspond to three subtasks: knowledge extraction, relational reasoning, and inconsistency checking. In addition, we propose to implement KGR with a novel neural-symbolic framework that addresses both scalability and interpretability. Experimental results demonstrate that our proposed methods outperform traditional neural-symbolic models.
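
For concreteness, the sketch below illustrates the kind of objects these subtasks operate on: a KG represented as (head, relation, tail) triples, a symbolic rule that infers new triples (relational reasoning, as in KG completion), and a constraint check that flags contradictory facts (inconsistency checking, as in KG refinement). This is a toy illustration under assumed example data and rules, not the paper's neural-symbolic framework; the knowledge-extraction subtask (building triples from text) is omitted for brevity.

```python
# Hypothetical toy example of two KGR subtasks over a triple-based KG.
# The triples, rule, and constraint are illustrative assumptions only.

# A knowledge graph as a set of (head, relation, tail) triples.
kg = {
    ("Paris", "located_in", "France"),
    ("Louvre", "located_in", "Paris"),
    ("Louvre", "located_in", "Berlin"),  # deliberately inconsistent fact
}

def relational_reasoning(triples):
    """Apply the symbolic rule
    located_in(x, y) & located_in(y, z) -> located_in(x, z)
    and return the newly inferred triples (a stand-in for KG completion)."""
    inferred = set()
    for (h1, r1, t1) in triples:
        for (h2, r2, t2) in triples:
            if r1 == r2 == "located_in" and t1 == h2:
                inferred.add((h1, "located_in", t2))
    return inferred - triples

def inconsistency_checking(triples):
    """Flag violations of a toy functional constraint:
    an entity is located_in at most one place."""
    seen = {}
    conflicts = []
    for (h, r, t) in triples:
        if r == "located_in":
            if h in seen and seen[h] != t:
                conflicts.append((h, seen[h], t))
            seen.setdefault(h, t)
    return conflicts

print("Inferred triples:", relational_reasoning(kg))
print("Inconsistencies:", inconsistency_checking(kg))
```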
