Abstract

In today’s digital era, rumors spreading on social media, especially multimodal rumors, threaten societal stability and individuals’ daily lives. Hence, there is an urgent need for effective multimodal rumor detection methods. However, existing approaches often overlook the insufficient diversity of multimodal samples in the feature space and the hidden similarities and differences among multimodal samples. To address these challenges, we propose MVACLNet, a Multimodal Virtual Augmentation Contrastive Learning Network. In MVACLNet, we first design a Hierarchical Textual Feature Extraction (HTFE) module to extract comprehensive textual features from multiple perspectives. We then fuse the textual and visual features using a modified cross-attention mechanism, which operates from different perspectives at the feature-value level, to obtain authentic multimodal feature representations. Following this, we devise a Virtual Augmentation Contrastive Learning (VACL) module as an auxiliary training module. It leverages ground-truth labels and extra-generated virtual multimodal feature representations to enhance contrastive learning, thus helping capture more crucial similarities and differences among multimodal samples. Meanwhile, it imposes a Kullback–Leibler (KL) divergence constraint between the predicted probability distributions of the virtual multimodal feature representations and their corresponding virtual labels to help extract more content-invariant multimodal features. Finally, the authentic multimodal feature representations are fed into a rumor classifier for detection. Experiments on two real-world datasets demonstrate the effectiveness and superiority of MVACLNet in multimodal rumor detection.
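To make the auxiliary objectives described above more concrete, the sketch below shows one way the VACL-style losses could be assembled in PyTorch. It is a minimal sketch under stated assumptions, not the paper's implementation: the virtual multimodal representations are assumed to be generated by mixup-style interpolation of authentic fused representations, the contrastive term is assumed to be a label-aware (supervised) InfoNCE, and all names, dimensions, and hyperparameters (make_virtual, alpha, temperature, the 256-dimensional features) are illustrative placeholders.

```python
import torch
import torch.nn.functional as F


def make_virtual(features, labels, alpha=0.4, num_classes=2):
    """Create virtual multimodal representations and soft virtual labels by
    interpolating shuffled pairs of authentic representations (assumed
    mixup-style generation; the paper's exact scheme may differ)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(features.size(0))
    one_hot = F.one_hot(labels, num_classes=num_classes).float()
    virtual_feats = lam * features + (1.0 - lam) * features[perm]
    virtual_labels = lam * one_hot + (1.0 - lam) * one_hot[perm]
    return virtual_feats, virtual_labels


def label_aware_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: representations sharing a ground-truth
    label are treated as positives, all others as negatives."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = log_prob.masked_fill(~positives, 0.0)
    return -(log_prob.sum(1) / positives.sum(1).clamp(min=1)).mean()


def kl_constraint(virtual_logits, virtual_labels):
    """KL divergence between the classifier's predicted distribution on
    virtual representations and their soft virtual labels."""
    return F.kl_div(F.log_softmax(virtual_logits, dim=1),
                    virtual_labels, reduction='batchmean')


# Illustrative usage with random stand-ins for the fused representations.
fused = torch.randn(8, 256)             # authentic multimodal representations
labels = torch.randint(0, 2, (8,))      # rumor / non-rumor ground truth
classifier = torch.nn.Linear(256, 2)    # placeholder rumor classifier head
v_feats, v_labels = make_virtual(fused, labels)
aux_loss = (label_aware_contrastive_loss(fused, labels)
            + kl_constraint(classifier(v_feats), v_labels))
```

In this sketch the auxiliary loss would be added to the standard classification loss during training only; at inference, detection relies solely on the authentic multimodal representations passed to the rumor classifier, consistent with VACL being described as an auxiliary training module.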
