Abstract

Hybrid-evidence fact verification aims to verify claims against retrieved textual and tabular evidence. Existing methods focus primarily on evidence-format conversion and unification strategies that reformat the input to align with the corpora of pre-trained textual or tabular models. However, these methods fail to extract task-specific knowledge from the language model (LM) for learning the correlation between input claim-evidence pairs and output verdicts. We therefore propose an input-oriented demonstration learning approach (IODL) for hybrid-evidence fact verification. IODL activates the LM's comprehension capability by incorporating a few examples (task demonstrations) into the input context. Our approach comprises a novel retrieval strategy that selects examples whose reasoning patterns are similar to those of the input, and a demonstration fusion network that mitigates the LM's input-length limitation, allowing more examples to be leveraged. The quality and quantity of task demonstrations are thus enhanced simultaneously, adapting the demonstration learning paradigm to our supervised fact verification task. Extensive experiments on the large-scale FEVEROUS benchmark demonstrate the superiority of our method over state-of-the-art baselines; IODL consistently outperforms conventional demonstration-based methods in low-resource and complex-reasoning challenge scenarios.
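The core idea of demonstration retrieval described above can be sketched in miniature: rank a labeled example pool by similarity to the input claim and prepend the top-k examples as in-context demonstrations. The word-overlap (Jaccard) similarity, the toy example pool, and the prompt format below are illustrative stand-ins, not the paper's actual retriever, which selects by reasoning-pattern similarity.

```python
import re

def tokens(text):
    """Lowercased word tokens of a claim (toy tokenizer)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a, b):
    """Word-overlap similarity between two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def retrieve_demonstrations(claim, pool, k=2):
    """Return the k pool examples most similar to the input claim."""
    q = tokens(claim)
    ranked = sorted(pool, key=lambda ex: jaccard(q, tokens(ex["claim"])),
                    reverse=True)
    return ranked[:k]

# Hypothetical labeled pool of claim-verdict pairs.
pool = [
    {"claim": "The table lists the 2010 census population of Austin.",
     "verdict": "SUPPORTS"},
    {"claim": "The article states the bridge opened in 1932.",
     "verdict": "REFUTES"},
    {"claim": "The census table shows the population of Dallas in 2010.",
     "verdict": "SUPPORTS"},
]

demos = retrieve_demonstrations(
    "The table gives the 2010 census population of Houston.", pool)
# The retrieved demonstrations are concatenated into the input context.
prompt = "\n".join(f"Claim: {d['claim']} Verdict: {d['verdict']}"
                   for d in demos)
```

In the paper's setting, a fusion network would combine many such demonstrations instead of concatenating them into a single (length-limited) prompt.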
