Abstract

The widespread dissemination of fake news on social media can cause serious social harm, making its detection an urgent problem. Scholars have proposed a range of methods for detecting fake news, from traditional manual feature engineering to deep learning algorithms. However, these methods still face two difficult problems: (1) how to learn informative news feature representations while losing as little information as possible, and (2) how to effectively fuse multi-modal information to obtain high-order complementary information about news and thereby improve fake news detection. To overcome these difficulties, this article proposes a multi-modal fusion model for fake news detection. The model first uses BERT and VGG-19 to extract text and image feature representations of the news content, respectively, and then fuses the two modalities through a multi-modal attention module to capture high-order complementary information between them, yielding informative news representations for fake news detection. Experimental results on two real-world public datasets demonstrate the effectiveness of our model compared with mainstream detection methods.
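The fusion step described above — attending across modalities to extract complementary information — can be sketched as a scaled dot-product cross-attention in which each modality's features query the other's. The shapes, projection sizes, and random weights below are illustrative assumptions only; the paper's BERT (text) and VGG-19 (image) encoders are stubbed out with random feature matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value, d_k=64):
    # Illustrative single-head cross-attention: queries from one modality
    # attend over the other modality's features. The projection matrices
    # are random stand-ins for learned parameters.
    d_q, d_v = query.shape[-1], key_value.shape[-1]
    W_q = rng.standard_normal((d_q, d_k)) / np.sqrt(d_q)
    W_k = rng.standard_normal((d_v, d_k)) / np.sqrt(d_v)
    W_v = rng.standard_normal((d_v, d_k)) / np.sqrt(d_v)
    Q, K, V = query @ W_q, key_value @ W_k, key_value @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # rows sum to 1
    return attn @ V

# Stand-ins for encoder outputs on one news post: 32 BERT token vectors
# (768-d) and 49 VGG-19 spatial-region vectors (512-d).
text_feats = rng.standard_normal((32, 768))
image_feats = rng.standard_normal((49, 512))

# Each modality attends to the other; pooling and concatenating gives a
# fused representation that a fake/real classifier head could consume.
text_to_image = cross_attention(text_feats, image_feats)   # (32, 64)
image_to_text = cross_attention(image_feats, text_feats)   # (49, 64)
fused = np.concatenate([text_to_image.mean(axis=0),
                        image_to_text.mean(axis=0)])       # (128,)
print(fused.shape)
```

In a trained model, the projection matrices would be learned jointly with the classifier, and the attention weights indicate which image regions support which text tokens, which is one way high-order complementary information between the two modalities can surface.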
