Abstract

The increasing popularity of social media facilitates the propagation of fake news, posing a major threat to government and journalism and making the detection of fake news on social media an urgent task. In general, multimodal methods achieve better performance because different modalities complement one another. However, most of them simply concatenate features from different modalities and fail to preserve the mutual information in common features. To address this issue, a novel framework named the semantic-enhanced multimodal fusion network is proposed for fake news detection; it better captures mutual features among events and thus benefits detection. The model consists of three subnetworks: a multimodal fusion network, an event domain adaptation network, and a fake news detector. Specifically, the multimodal fusion network extracts deep features from texts and images and fuses them into a common semantic feature known as a snapshot. The fake news detector then learns the representation of posts from this snapshot. Finally, the event domain adaptation network singles out and removes the features peculiar to each event while retaining the features shared among events. Experimental results show that the proposed model outperforms several state-of-the-art approaches on two real-world multimedia data sets.
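To make the three-subnetwork layout concrete, here is a minimal PyTorch sketch of the structure the abstract describes. This is not the authors' code: all layer sizes, the choice of simple linear encoders, and the use of a gradient-reversal layer for the event domain adaptation network are assumptions for illustration; the paper's actual encoders and fusion operator may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward pass,
    so the fusion network is pushed toward event-invariant (shared) features
    while the event classifier tries to identify the event."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SemanticFusionSketch(nn.Module):
    def __init__(self, text_dim=300, image_dim=2048, hidden=256, n_events=10):
        super().__init__()
        # Multimodal fusion network: encode each modality, then fuse
        # both into a common semantic feature (the "snapshot").
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.fuse = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        # Fake news detector: binary classifier over the snapshot.
        self.detector = nn.Linear(hidden, 2)
        # Event domain adaptation network: event classifier behind a
        # gradient-reversal layer, removing event-specific features.
        self.event_clf = nn.Linear(hidden, n_events)

    def forward(self, text_feat, image_feat, lambd=1.0):
        snapshot = self.fuse(torch.cat(
            [self.text_enc(text_feat), self.image_enc(image_feat)], dim=-1))
        news_logits = self.detector(snapshot)
        event_logits = self.event_clf(GradReverse.apply(snapshot, lambd))
        return news_logits, event_logits
```

Training such a sketch would sum a cross-entropy loss on `news_logits` with a cross-entropy loss on `event_logits`; because of the gradient reversal, minimizing the event loss drives the snapshot toward features shared across events, matching the role the abstract assigns to the event domain adaptation network.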
