Abstract

Path-based models have achieved remarkable success on the knowledge graph (KG) multi-hop reasoning task: they exploit all available resources to accomplish complex path reasoning tasks and continuously explore new graph paths. However, existing multi-hop reasoning methods rely heavily on the reward that is fed back to the model when the agent reaches the target. Moreover, most previous methods focus on efficiently querying the correct answer while disregarding the logic and validity of the entire reasoning chain. This contradicts the intent of complex reasoning tasks in real-world scenarios: unreasonable paths deviate from normal cognition and yield invalid path information, so specific tasks should be completed via reliable paths. To address these issues, we propose a Reinforcement Learning-based Knowledge reasoning model with Logical Embedding (RKLE) to enhance the interpretability of the reasoning chain. RKLE assembles the logical structure (query structure) with the additional nodes at the current step and introduces logical reward shaping to help the agent select more reasonable paths. Experimental results on several benchmarks demonstrate that our approach finds correct answers more efficiently than existing path-based methods and that the corresponding reasoning chains are interpretable.
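The reward-shaping idea sketched in the abstract, augmenting the sparse answer-hit reward with a bonus for the logical consistency of the traversed path, can be illustrated minimally as follows. The function and parameter names (`shaped_reward`, `logic_score`, `beta`) are illustrative assumptions, not RKLE's actual implementation.

```python
# Hedged sketch of logical reward shaping for path-based KG reasoning.
# All names here are hypothetical, not taken from the paper.

def shaped_reward(path, target, logic_score, beta=0.5):
    """Combine the sparse answer reward with a logic-consistency bonus."""
    # Sparse base reward: 1 only when the final entity is the target answer.
    hit = 1.0 if path and path[-1] == target else 0.0
    # Shaping term: agreement (in [0, 1]) between the traversed relations
    # and the query's logical structure, supplied by some scoring model.
    return hit + beta * logic_score

# A correct path whose relations also match the query structure well:
print(shaped_reward(["Paris", "France"], "France", logic_score=0.8))  # 1.4
```

Under this kind of shaping, two paths that both reach the correct answer receive different returns, so the agent is pushed toward the one whose intermediate hops are logically coherent.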
