Abstract
In recent years, major emergencies have occurred frequently all over the world. When a major global public health emergency such as COVID-19 breaks out, increasing amounts of fake news on social media networks are exposed to the public. Automatically detecting the veracity of a news article helps ensure that people receive truthful information, which benefits epidemic prevention and control. However, most existing fake news detection methods focus on inferring clues from text-only content, ignoring the semantic correlations across modalities. In this work, we propose a novel approach for Fake News Detection by comprehensively mining the Semantic Correlations between Text content and attached Images (FND-SCTI). First, we learn image representations via a pretrained VGG model and use them to enhance the learning of text representations through a hierarchical attention mechanism. Second, a multimodal variational autoencoder is exploited to learn a fused representation of the textual and visual content. Third, the image-enhanced text representation and the multimodal fusion eigenvector are combined to train the fake news detector. Experimental results on two real-world fake news datasets, Twitter and Weibo, demonstrate that our model outperforms seven competitive approaches and is able to capture the semantic correlations among multimodal contents.
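The abstract only outlines the three-stage pipeline, so the following is a minimal PyTorch sketch of how such an architecture could be wired together. All layer sizes, the choice of VGG-19, the GRU-based text encoder standing in for the full hierarchical attention mechanism, and the class/variable names are illustrative assumptions, not the authors' exact configuration; the VAE reconstruction and KL terms of the training loss are also omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models

class FNDSCTISketch(nn.Module):
    """Illustrative sketch of the FND-SCTI pipeline described in the abstract.
    Dimensions and module choices are assumptions for demonstration only."""

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128, latent_dim=64):
        super().__init__()
        # (1) Pretrained VGG as the visual feature extractor (final classifier layer removed).
        vgg = models.vgg19(weights=None)  # load pretrained weights in practice
        self.vgg_features = nn.Sequential(
            vgg.features, vgg.avgpool, nn.Flatten(),
            *list(vgg.classifier.children())[:-1]  # -> 4096-d image feature
        )
        self.img_proj = nn.Linear(4096, hidden_dim)

        # (2) Text encoder with image-guided attention
        #     (a single-level stand-in for the hierarchical attention mechanism).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim + hidden_dim, 1)

        # (3) Multimodal VAE encoder: map [text; image] to (mu, logvar) and sample
        #     the fused eigenvector z via the reparameterization trick.
        fused_in = 2 * hidden_dim + hidden_dim
        self.enc_mu = nn.Linear(fused_in, latent_dim)
        self.enc_logvar = nn.Linear(fused_in, latent_dim)

        # (4) Detector over the image-enhanced text representation plus the fused z.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # real vs. fake
        )

    def forward(self, token_ids, images):
        img = self.img_proj(self.vgg_features(images))          # (B, H)
        words, _ = self.text_rnn(self.embed(token_ids))          # (B, T, 2H)

        # Image-guided attention over word states -> image-enhanced text vector.
        img_exp = img.unsqueeze(1).expand(-1, words.size(1), -1)
        scores = self.attn(torch.cat([words, img_exp], dim=-1))  # (B, T, 1)
        alpha = torch.softmax(scores, dim=1)
        text_vec = (alpha * words).sum(dim=1)                    # (B, 2H)

        # Multimodal VAE encoding of the concatenated text and image features.
        fused = torch.cat([text_vec, img], dim=-1)
        mu, logvar = self.enc_mu(fused), self.enc_logvar(fused)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

        # Final detector input: image-enhanced text vector + multimodal fusion vector.
        logits = self.classifier(torch.cat([text_vec, z], dim=-1))
        return logits, mu, logvar
```

In this reading, the attention stage lets visual features reweight the textual evidence, while the VAE latent code supplies a compact joint representation; the classifier then sees both views, which matches the abstract's claim that the two representations are combined to train the detector.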