Abstract
The increasing popularity of social media facilitates the propagation of fake news, posing a major threat to governments and journalism and making the detection of fake news on social media an urgent task. In general, multimodal methods achieve better performance because different modalities complement one another. However, most of them simply concatenate features from different modalities and therefore fail to preserve the mutual information carried by common features. To address this issue, a novel framework named the semantic-enhanced multimodal fusion network is proposed for fake news detection; it better captures features shared across events and thus benefits detection. The model consists of three subnetworks: a multimodal fusion network, an event domain adaptation network, and a fake news detector. Specifically, the multimodal fusion network extracts deep features from texts and images and fuses them into a common semantic feature known as a snapshot. The fake news detector then learns the representation of posts from these snapshots. Finally, the event domain adaptation network singles out and removes the features peculiar to each event while keeping the features shared among events. Experimental results show that the proposed model outperforms several state-of-the-art approaches on two real-world multimedia data sets.
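The fusion step described above can be sketched in a few lines of plain Python. This is only a toy illustration under assumed shapes: the projection matrices, the element-wise averaging used as the fusion operation, and the function names are assumptions for illustration, not the paper's actual architecture.

```python
import random

def project(vec, weights):
    """Linearly project a feature vector into the shared semantic space."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def fuse_snapshot(text_feat, image_feat, w_text, w_image):
    """Fuse projected text and image features into a common 'snapshot'.

    Element-wise averaging is used here as a simple stand-in for the
    paper's fusion mechanism; both modalities are first mapped into the
    same shared dimensionality so they can be combined directly.
    """
    t = project(text_feat, w_text)
    v = project(image_feat, w_image)
    return [(a + b) / 2 for a, b in zip(t, v)]

random.seed(0)
dim_text, dim_image, dim_shared = 8, 6, 4  # assumed toy dimensions
w_text = [[random.uniform(-1, 1) for _ in range(dim_text)]
          for _ in range(dim_shared)]
w_image = [[random.uniform(-1, 1) for _ in range(dim_image)]
           for _ in range(dim_shared)]

text_feat = [random.uniform(-1, 1) for _ in range(dim_text)]
image_feat = [random.uniform(-1, 1) for _ in range(dim_image)]

snapshot = fuse_snapshot(text_feat, image_feat, w_text, w_image)
print(len(snapshot))  # snapshot lives in the shared space: 4
```

In the full model, the snapshot would feed both the fake news detector and the event domain adaptation network; the latter is typically trained adversarially so that event-specific information is removed from the shared representation.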