Abstract

Recently, reinforcement learning (RL)-based methods have achieved remarkable progress in both effectiveness and interpretability for complex question answering over knowledge bases (KBQA). However, existing RL-based methods share a common limitation: the agent is easily misled by aimless exploration and by sparse, delayed rewards, which produce a large number of spurious relation paths. To address this issue, a new adaptive reinforcement learning (ARL) framework is proposed to learn a more effective and interpretable model for complex KBQA. First, instead of a random-walk agent, an adaptive path generator is developed with three atomic operations that sequentially generate relation paths until the agent reaches the target entity. Second, a semantic policy network is presented that exploits both character-level and sentence-level information to better guide the agent. Finally, a new reward function is introduced that considers both the relation paths and the target entity to alleviate sparse and delayed rewards. Empirical results on five benchmark datasets show that our model is more effective than state-of-the-art approaches; compared with the strong baseline SRN, it achieves a performance improvement of 23.7% on MetaQA-3 in terms of Hits@1.
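The reward design described above lends itself to a brief illustration. Below is a minimal sketch in Python, assuming a simple linear blend of an intermediate relation-path quality score with a terminal target-entity indicator; the function name composite_reward and the weight alpha are hypothetical conveniences, not the paper's exact formulation.

# Hypothetical sketch of a composite reward: an intermediate
# relation-path term blended with a terminal target-entity term,
# densifying an otherwise sparse and delayed signal. The linear
# blend and `alpha` are illustrative assumptions only.
def composite_reward(path_score: float, reached_target: bool,
                     alpha: float = 0.5) -> float:
    """Blend a relation-path quality score (e.g., semantic similarity
    between the generated path and the question) with a terminal
    hit/miss reward for reaching the target entity."""
    target_reward = 1.0 if reached_target else 0.0
    return alpha * path_score + (1.0 - alpha) * target_reward

# Example: a plausible path that stops at the wrong entity still
# earns partial credit, mitigating reward sparsity.
print(composite_reward(path_score=0.8, reached_target=False))  # 0.4

Under such a scheme, the agent receives informative feedback at intermediate steps of the path rather than only at termination, which is one way to counteract the sparse and delayed rewards the abstract identifies.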
