Abstract

Traditional rumor detection methods that rely solely on text content have achieved reasonable results. With the rapid growth of social platforms, however, posts combining text and images now account for a large share of content, and text-only methods cannot exploit the visual information for rumor detection. To address this scenario, a rumor detection model that integrates multimodal features is proposed. First, text features and visual features, together with their hidden states, are extracted with pre-trained deep learning models, and a preliminary fusion feature is obtained by combining the text and image hidden states through an attention mechanism. Next, two final fusion features are formed by concatenation: the text features with the preliminary fusion feature and the social features, and the image features with the preliminary fusion feature and the social features. The two fusion features are then fed into separate fully connected layers to produce their respective predictions, and the final detection result is obtained by combining the two predictions. Experimental results show that the proposed model is effective in detecting multimodal rumor data.
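The fusion pipeline described in the abstract can be sketched with plain array operations. The following is a minimal NumPy illustration, not the paper's implementation: the hidden sizes, the scaled-dot-product form of the attention, the mean pooling, and the simple averaging of the two branch predictions are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(text_hidden, img_hidden):
    """Preliminary fusion: text hidden states attend over image hidden states
    (scaled dot-product attention is an assumption; the paper only says
    'attention mechanism')."""
    scores = text_hidden @ img_hidden.T / np.sqrt(text_hidden.shape[-1])
    weights = softmax(scores, axis=-1)          # one distribution per text token
    attended = weights @ img_hidden             # image info gathered per token
    return attended.mean(axis=0)                # pool tokens into one vector

d = 16                                          # hidden size (assumed)
text_hidden = rng.normal(size=(20, d))          # e.g. 20 text-token states
img_hidden  = rng.normal(size=(49, d))          # e.g. 49 image-region states
text_feat   = text_hidden.mean(axis=0)          # pooled text feature
img_feat    = img_hidden.mean(axis=0)           # pooled image feature
social_feat = rng.normal(size=8)                # social features (assumed size)

fused = attention_fuse(text_hidden, img_hidden) # preliminary fusion feature

# Two final fusion features by concatenation, as in the abstract.
branch_t = np.concatenate([text_feat, fused, social_feat])
branch_v = np.concatenate([img_feat, fused, social_feat])

# Separate fully connected (here single-layer) classifiers, randomly initialized.
Wt = rng.normal(size=(branch_t.size, 2)); bt = np.zeros(2)
Wv = rng.normal(size=(branch_v.size, 2)); bv = np.zeros(2)

p_t = softmax(branch_t @ Wt + bt)               # text-branch prediction
p_v = softmax(branch_v @ Wv + bv)               # image-branch prediction
p_final = (p_t + p_v) / 2                       # combine the two predictions
```

In a trained model the pooled features would come from pre-trained encoders (e.g. a text transformer and an image CNN) and the classifier weights would be learned; here random values only demonstrate the shapes and data flow.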
