Abstract

Fact verification aims to verify the authenticity of a given claim based on evidence retrieved from Wikipedia articles. Existing works mainly focus on enhancing the semantic representation of evidence, e.g., by introducing a graph structure to model the relations among evidence. However, previous methods cannot effectively distinguish semantically similar claims and evidence that carry distinct authenticity labels. In addition, the performance of graph-based models is limited by the over-smoothing problem of graph neural networks. To this end, we propose a graph-based contrastive learning method for fact verification, abbreviated as CosG, which introduces a label-supervised contrastive task to help the encoder learn discriminative representations for claim-evidence pairs with different labels, as well as an unsupervised graph-contrast task to alleviate the loss of unique node features during graph propagation. We conduct experiments on FEVER, a large benchmark dataset for fact verification. Experimental results show the superiority of our proposal over comparable baselines, especially for claims that require multiple pieces of evidence to verify. In addition, CosG shows better robustness in low-resource scenarios.
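To make the label-supervised contrastive task concrete, the following is a minimal sketch of a supervised contrastive loss over claim-evidence pair embeddings. It is an illustration under our own assumptions (the function name, temperature value, and in-batch negative sampling are hypothetical choices), not the authors' released implementation.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    # embeddings: (batch, dim) claim-evidence pair representations
    # labels: (batch,) authenticity labels, e.g. SUPPORT / REFUTE / NOT ENOUGH INFO
    z = F.normalize(embeddings, dim=1)               # unit-norm vectors -> cosine similarity
    sim = torch.matmul(z, z.T) / temperature         # (batch, batch) pairwise similarities
    eye = torch.eye(labels.size(0), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye   # same-label pairs
    sim = sim.masked_fill(eye, float('-inf'))        # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0                            # anchors with at least one positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_count[valid]).mean()

With such an objective, same-label claim-evidence pairs are pulled together in the representation space while different-label pairs are pushed apart, which is the discriminative effect the abstract describes.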

Highlights

  • The information explosion leaves people trapped in fake news and misleading claims

  • Improvements of 1.17–1.54% in label accuracy and 1.79–1.80% in FEVER Score on the development set indicate that the graph mechanism can help capture the relations among evidence and generate better evidence representations

  • We propose a graph-based contrastive learning model (CosG) for the task of fact verification, which leverages contrastive learning tasks to learn discriminative representations for semantically similar cases with different labels, and alleviates the over-smoothing problem of graph-based methods (a minimal sketch of the graph-contrast idea follows this list)
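As referenced above, here is a minimal, assumption-laden sketch of an unsupervised graph-contrast objective: each node's representation after GNN propagation is contrasted against its own pre-propagation feature (positive) and against the other nodes' features (negatives), which discourages nodes from collapsing to indistinguishable vectors under repeated propagation. The function name and the exact pairing of views are our assumptions, not the paper's specification.

import torch
import torch.nn.functional as F

def graph_contrast_loss(h_before, h_after, temperature=0.5):
    # h_before: (num_nodes, dim) node features before graph propagation
    # h_after:  (num_nodes, dim) node features after GNN propagation
    a = F.normalize(h_after, dim=1)
    b = F.normalize(h_before, dim=1)
    logits = torch.matmul(a, b.T) / temperature          # cross-view similarity of every node pair
    targets = torch.arange(a.size(0), device=a.device)   # positive pair: the same node's two views
    return F.cross_entropy(logits, targets)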


Summary

Introduction

The information explosion leaves people trapped in fake news and misleading claims. News and claim authentication, especially in an automatic fashion, has been a hotly discussed topic in information retrieval. To this end, the fact verification task [1,2,3] was proposed: it retrieves and reasons over a trustworthy corpus, e.g., Wikipedia, to verify the authenticity of a given claim. The authenticity is expressed by one of three labels: “SUPPORT”, “REFUTE”, or “NOT ENOUGH INFO”.
