Abstract
Social media has transformed the landscape of news dissemination: content spreads rapidly, extensively, and in diverse forms, while its authenticity remains difficult to verify. The proliferation of multimodal news on these platforms poses new challenges for fake news detection. Existing approaches typically focus on a single modality, such as text or images, or combine text with image content or with propagation network data. However, more robust fake news detection can be achieved by considering all three modalities simultaneously. In addition, current detection methods rely heavily on labeled data, which is time-consuming and costly to obtain. To address these challenges, we propose a novel approach, Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning (MFCL). This method integrates intrinsic features from text, images, and propagation networks, capturing the essential intermodal relationships needed for accurate fake news detection. Contrastive learning is employed to learn intrinsic features while mitigating the issue of limited labeled data. Furthermore, we introduce image-text matching (ITM) data augmentation to ensure consistent image-text representations and employ adaptive propagation (AP) network data augmentation for high-order feature learning. We use contextual transformers to strengthen fake news detection and to uncover crucial intermodal connections. Experimental results on real-world datasets demonstrate that MFCL outperforms existing methods, maintaining high accuracy and robustness even with limited labeled data and mismatched pairs. Our code is available at https://anonymous.4open.science/r/KBS-MFCL.
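To make the cross-modal contrastive objective mentioned above concrete, the sketch below shows a symmetric InfoNCE-style loss between text and image embeddings, where matched image-text pairs in a batch are pulled together and mismatched pairs are pushed apart. This is an illustrative assumption about the general form of such losses, not the exact MFCL formulation; the function name, temperature value, and tensor shapes are hypothetical.

import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_emb: torch.Tensor,
                                 image_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """text_emb, image_emb: (batch, dim) embeddings of matched image-text pairs."""
    # L2-normalize so dot products become cosine similarities.
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)

    # Pairwise similarity matrix; the diagonal holds the matched pairs,
    # all off-diagonal entries act as in-batch negatives (mismatched pairs).
    logits = text_emb @ image_emb.t() / temperature
    targets = torch.arange(text_emb.size(0), device=text_emb.device)

    # Symmetric objective: text-to-image and image-to-text directions.
    loss_t2i = F.cross_entropy(logits, targets)
    loss_i2t = F.cross_entropy(logits.t(), targets)
    return (loss_t2i + loss_i2t) / 2

A loss of this general form can be trained on unlabeled image-text pairs, which is how contrastive objectives help mitigate the scarcity of labeled fake news data noted in the abstract.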