Abstract

Current encoder–decoder models for Knowledge Graph Question Answering (KGQA) commonly rely on teacher forcing during training to accelerate convergence. However, teacher forcing exposes the model only to ground-truth prefixes, resulting in exposure bias that hampers generalization during autoregressive inference. To alleviate this issue, we propose a contrastive framework that exposes the model to a variety of positive and negative examples, thereby enhancing generalization. First, we introduce a sampling-augmentation strategy to construct contrastive samples, ensuring explicit semantic consistency within positive pairs and inconsistency within negative pairs. Second, we augment the training process by incorporating "hard" negatives into the contrastive objective and augmented positives into the generation objective. Finally, we sample multiple logical forms for each question during inference to mitigate exposure bias and train a contrastive ranking model to select the target logical form. We achieve improvements of 1.95% and 1% over the previous state-of-the-art methods on the KQA Pro and OVERNIGHT benchmarks, respectively. Furthermore, our approach obtains competitive results on the WebQSP dataset. These findings validate the efficacy of our contrastive framework for advancing KGQA performance.
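The abstract does not specify the exact form of the contrastive objective. As a rough illustration only, the sketch below shows one common way such an objective with hard negatives can be written: an InfoNCE-style loss over question and logical-form embeddings, where each question is paired with one augmented positive and several hard-negative logical forms. The function name `contrastive_loss`, the temperature `tau`, and the tensor shapes are assumptions for illustration, not the paper's actual formulation.

```python
# Minimal sketch (assumed, not the paper's exact objective): an InfoNCE-style
# contrastive loss where each question embedding is pulled toward one
# augmented positive logical form and pushed away from K "hard" negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, hard_negatives, tau=0.07):
    """
    anchor:         (B, D)    question encodings
    positive:       (B, D)    encodings of augmented positive logical forms
    hard_negatives: (B, K, D) encodings of K hard-negative logical forms
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    hard_negatives = F.normalize(hard_negatives, dim=-1)

    pos_sim = (anchor * positive).sum(-1, keepdim=True)           # (B, 1)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, hard_negatives)  # (B, K)

    # Positive sits at index 0 of the logits; cross-entropy then maximizes
    # its similarity relative to the hard negatives.
    logits = torch.cat([pos_sim, neg_sim], dim=1) / tau           # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings.
B, K, D = 4, 8, 256
loss = contrastive_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, K, D))
print(loss.item())
```

At inference, a ranking model trained with a similar objective could score the sampled logical forms for a question and return the highest-scoring one; the details of that ranker are left to the full paper.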
