Abstract

Claim verification against factual evidence becomes substantially harder when the proof spans several sentences, making it challenging for natural language processing (NLP) models to capture long-range dependencies. Inspired by the success of prompt learning in numerous NLP applications, in this paper we introduce prompt learning for the multi-hop claim verification task. In extensive experiments, our proposed prompt-based method, which leverages manually constructed prompts, achieves promising results: by fine-tuning language models with prompts, we reach an accuracy of 83.9% while also improving cross-domain generalization. Moreover, to further demonstrate the effectiveness of prompt learning for claim verification, we conduct experiments in few-shot and zero-shot settings and obtain significantly better performance with prompt-based methods than with traditional supervised techniques based on the standard fine-tuning paradigm.
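
To make the idea concrete, the sketch below shows one common way manual prompts are used for claim verification with a masked language model: a hand-written cloze template wraps the (possibly multi-sentence) evidence and the claim, and a verbalizer maps answer words at the mask position to verdict labels. The backbone (roberta-base), template wording, verbalizer words, and label set are illustrative assumptions; the abstract does not specify the paper's actual choices.

# Hypothetical sketch: cloze-style manual prompt for claim verification.
# Backbone, template, and verbalizer are illustrative, not the paper's own.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "roberta-base"  # assumed backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Verbalizer: answer words at the mask position mapped to verdict labels.
VERBALIZER = {"true": "SUPPORTED", "false": "REFUTED"}

def verify(claim: str, evidence: str) -> str:
    # Manually constructed template wrapping the evidence and the claim.
    text = f"Evidence: {evidence} Claim: {claim} The claim is {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # vocabulary scores at the mask
    scores = {}
    for word, label in VERBALIZER.items():
        word_id = tokenizer.encode(" " + word, add_special_tokens=False)[0]
        scores[label] = logits[0, word_id].item()
    return max(scores, key=scores.get)

print(verify("The Eiffel Tower stands in Berlin.",
             "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris."))

In the fine-tuned setting described in the abstract, the same prompted inputs would additionally be used to update the language model's weights, whereas the zero-shot setting would query the pretrained model as-is; the few-shot setting sits in between, fine-tuning on only a handful of labeled examples.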
