Abstract

Knowledge-based question answering (Q&A) is widely used. However, most existing semantic parsing methods for Q&A adopt a cascaded pipeline, which can accumulate errors across stages. In addition, relying on a single institution's Q&A data limits performance, while data privacy concerns prevent data sharing between institutions. This article proposes a knowledge graph-based reinforcement federated learning (KGRFL) approach to Q&A that addresses these challenges. We design an end-to-end multitask semantic parsing model, MSP-BART (built on bidirectional and auto-regressive transformers), which identifies question categories while converting questions into SPARQL statements, thereby improving semantic parsing. Meanwhile, a reinforcement learning (RL)-based model fusion strategy is proposed to improve the effectiveness of federated learning, enabling multi-institution joint modeling with data privacy protection through cross-domain knowledge; in particular, it reduces the negative impact of low-quality clients on the global model. Furthermore, a prompt learning-based entity disambiguation method is proposed to address the semantic ambiguity introduced by joint modeling. Experiments show that the proposed method performs well on different datasets, and its Q&A results outperform those obtained using only a single institution's data. Experiments also demonstrate that the proposed approach is resilient to security attacks, which is required for real applications.
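The abstract's RL-based fusion strategy weights clients so that low-quality participants contribute less to the global model. The following is a minimal illustrative sketch, not the paper's actual algorithm: it assumes each client is assigned a scalar quality reward (which the paper's RL agent would learn) and fuses client parameter vectors with a softmax over those rewards, so low-reward clients are down-weighted.

```python
import numpy as np

def reward_weighted_fuse(client_params, rewards):
    """Fuse client model parameters with reward-derived weights.

    Hypothetical stand-in for the paper's RL fusion: a softmax over
    scalar quality rewards, so low-reward clients contribute less.
    """
    rewards = np.asarray(rewards, dtype=float)
    # Softmax (shifted by the max for numerical stability).
    weights = np.exp(rewards - rewards.max())
    weights /= weights.sum()
    # Weighted average of the clients' parameter vectors.
    fused = sum(w * p for w, p in zip(weights, client_params))
    return fused, weights

# Three clients' parameter vectors; the third is low quality.
params = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([5.0, -3.0])]
fused, weights = reward_weighted_fuse(params, rewards=[2.0, 2.0, -1.0])
```

Plain federated averaging would weight the noisy third client equally; here its softmax weight is small, so the fused parameters stay close to the two high-reward clients.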
