Abstract

Complex fact verification (FV) requires fusing scattered evidential sequences and performing multi-hop reasoning over the composed sequences. Recent FV models extract knowledge from the given context with pretrained language models (e.g., BERT, XLNet) to support the reasoning process, and they outperform previous state-of-the-art FV models. In practice, however, the limited training data cannot provide enough background knowledge for FV tasks, and once the background knowledge changes, the pretrained models’ parameters cannot be updated accordingly. Additionally, noise that contradicts common sense cannot be accurately filtered out without the necessary knowledge, which may harm the reasoning process. Furthermore, existing models often mislabel claims as ‘not enough information’ because the conceptual relationships between pieces of evidence are missing. In the present study, a Dynamic Knowledge Auxiliary Graph Reasoning (DKAR) approach is proposed that incorporates external background knowledge into the current FV model: it explicitly identifies and fills the knowledge gaps between the provided sources and the given claims to enhance the reasoning ability of graph neural networks. Experiments show that DKAR, combined with specific and discriminative knowledge, guides the FV system to overcome the knowledge-gap challenge and improves performance on FV tasks. Furthermore, DKAR is applied to FV on the FakeNewsNet dataset, showing clear advantages on small-sample and heterogeneous web text sources.
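
To make the idea concrete, below is a minimal, self-contained sketch of the general mechanism the abstract describes: entities retrieved from an external knowledge source are inserted as bridge nodes into an entity graph built over the claim and evidence, and a few rounds of graph propagation let claim entities exchange information with evidence entities. The node names, edges, random features, and the plain mean-aggregation update are toy placeholders, not the DKAR implementation.

```python
import numpy as np

# Toy entity graph: entities extracted from the claim and the retrieved
# evidence, plus one "bridge" node pulled from an external knowledge source
# to fill the gap between them. All names are illustrative placeholders.
nodes = ["claim:Rome", "evid:Italy", "evid:Tiber", "kb:capital_of(Rome, Italy)"]

# Undirected edges: co-occurring entities, plus links from the bridge node
# to the claim entity and the evidence entity it connects.
edges = [(0, 3), (3, 1), (0, 2), (2, 1)]
A = np.zeros((len(nodes), len(nodes)))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(len(nodes))              # self-loops
A /= A.sum(axis=1, keepdims=True)    # row-normalise: mean aggregation

# Node features would normally come from a pretrained encoder such as BERT;
# random vectors stand in here.
rng = np.random.default_rng(0)
H = rng.standard_normal((len(nodes), 8))
W1 = rng.standard_normal((8, 8))
W2 = rng.standard_normal((8, 8))

# Two rounds of message passing: each node mixes in its neighbours' states,
# so the claim entity can reach evidence entities through the bridge node.
H = np.maximum(A @ H @ W1, 0.0)
H = np.maximum(A @ H @ W2, 0.0)

# A full verifier would pool the node states and classify the claim as
# supported / refuted / not enough information; here we only pool.
claim_representation = H.mean(axis=0)
print(claim_representation.shape)    # (8,)
```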

Highlights

  • Fact verification (FV) often requires retrieving a significant number of scattered evidential sequences, reasoning over the fused sequences, and labelling the given claim as ‘supported’, ‘refuted’, or ‘not enough information’

  • FV tasks often require an FV system to understand the given claim and evidence at a deeper level and from multiple dimensions, which needs not only deep learning methods to learn the connections but also complex knowledge to understand the illocutionary meaning

  • LA (label accuracy) measures claim classification accuracy, while the FEVER score additionally measures whether the FV system provides at least one complete set of golden evidence (see the sketch after this list)
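
The snippet below shows how these two metrics are typically computed for the FEVER benchmark. It is a minimal illustration, not the paper’s evaluation code; representing evidence as (page, sentence-id) pairs and using the label string "NOT ENOUGH INFO" are assumptions based on the public FEVER task format.

```python
from typing import List, Set, Tuple

Evidence = Set[Tuple[str, int]]   # e.g. {("Rome", 0), ("Italy", 3)}: (page, sentence id)

def label_accuracy(pred_labels: List[str], gold_labels: List[str]) -> float:
    """LA: fraction of claims whose predicted label matches the gold label."""
    return sum(p == g for p, g in zip(pred_labels, gold_labels)) / len(gold_labels)

def fever_score(pred_labels: List[str],
                gold_labels: List[str],
                pred_evidence: List[Evidence],
                gold_evidence_sets: List[List[Evidence]]) -> float:
    """FEVER score: the label must be correct and, for verifiable claims,
    the predicted evidence must fully cover at least one golden evidence set."""
    hits = 0
    for p_label, g_label, p_ev, g_sets in zip(pred_labels, gold_labels,
                                              pred_evidence, gold_evidence_sets):
        if p_label != g_label:
            continue
        if g_label == "NOT ENOUGH INFO" or any(g_set <= p_ev for g_set in g_sets):
            hits += 1
    return hits / len(gold_labels)
```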

Summary

Introduction

Fact verification (FV) often requires retrieving a significant number of scattered evidential sequences (documents, paragraphs, or sentences), reasoning over the fused sequences, and labelling the given claim as ‘supported’, ‘refuted’, or ‘not enough information’. Humans automatically fill in the knowledge gaps when labelling claims, which purely data-driven systems cannot do. Most of the datasets used in prior research provide systems with all the knowledge necessary to finish the task. Approaches that purposefully find the knowledge gap to assist reasoning, such as [15,16,17,18], embed syntactic or semantic knowledge into the given context to enrich its embeddings. This paper is the first attempt to introduce a joint knowledge-driven and data-driven mechanism into fact verification and to verify the effectiveness of the approach on a small-sample and heterogeneous web text source, which will provide an important reference for further research
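
As a rough illustration of the gap-finding step (not the paper’s method), the snippet below treats claim entities that never appear in the retrieved evidence as knowledge gaps, which would then be bridged with facts from an external knowledge source. The capitalised-token “entity linker” and the example sentences are hypothetical stand-ins.

```python
import re

def entities(text: str) -> set[str]:
    """Crude stand-in for an entity linker: capitalised tokens."""
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

def knowledge_gaps(claim: str, evidence: list[str]) -> set[str]:
    """Claim entities that never appear in the evidence are treated as gaps,
    to be filled with facts retrieved from an external knowledge source."""
    evidence_entities = set().union(*(entities(s) for s in evidence))
    return entities(claim) - evidence_entities

claim = "Rome lies on the Tiber and is the capital of Italy."
evidence = ["The Tiber flows through Rome.",
            "Rome has been a major settlement for millennia."]
print(knowledge_gaps(claim, evidence))   # {'Italy'} -> query the external KB for it
```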

Pre-Training Language Processing and Background Knowledge for FV
Graph Neural Network for FV
Methodology
Document
External
Identifying the Knowledge Gaps
Constructing the Entity Graph
Claim Verification with the GNN Reasoning Method
Dataset
Baselines
Evaluation Metrics
Data Processing
Performance
Further Experiments of FV System on Diverse Web Information
Models
Limitation of the Study
Conclusions