Abstract

With the rapid development of social media platforms and the ever-growing scale of social media data, rumor detection has become vitally important because the authenticity of posts cannot be guaranteed. To date, many approaches have been proposed to facilitate rumor detection through multi-task learning, which aims to improve rumor detection performance by leveraging useful information from the stance detection task. However, most existing approaches suffer from three limitations: (1) they focus only on textual content and ignore the multi-modal information that is a key component of social media data; (2) they ignore the difference between the feature spaces of the stance detection and rumor detection tasks, resulting in unsatisfactory use of stance information; and (3) they largely neglect the semantic information hidden in fine-grained stance labels. Therefore, in this paper, we design a Multi-modal Meta Multi-Task Learning (MM-MTL) framework for social media rumor detection. To make use of multiple modalities, we design a multi-modal post embedding layer that considers both textual and visual content. To overcome the feature-sharing problem between the stance detection and rumor detection tasks, we propose a meta knowledge-sharing scheme that shares the higher meta network layers and captures the meta knowledge behind the multi-modal post. To better exploit the semantic information hidden in fine-grained stance labels, we employ an attention mechanism to estimate the weight of each reply. Extensive experiments on two Twitter benchmark datasets demonstrate that our proposed method achieves state-of-the-art performance.
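The reply-weighting idea mentioned above can be illustrated with a minimal sketch. The snippet below is a hedged, hypothetical PyTorch-style example (the module name ReplyAttention, the query/key projections, and the dimensions are assumptions, not the paper's actual architecture): it attends from the source-post representation over the reply representations and returns a weighted stance summary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplyAttention(nn.Module):
    """Hypothetical sketch: attention over reply representations,
    estimating a weight for each reply before its stance signal is
    fused with the source post for rumor classification."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # projects the source post into a query
        self.key = nn.Linear(dim, dim)    # projects each reply into a key

    def forward(self, post_vec, reply_vecs):
        # post_vec:   (dim,)           representation of the source post
        # reply_vecs: (n_replies, dim) representations of the replies
        q = self.query(post_vec)                  # (dim,)
        k = self.key(reply_vecs)                  # (n_replies, dim)
        scores = k @ q / (q.shape[-1] ** 0.5)     # (n_replies,) scaled dot-product scores
        weights = F.softmax(scores, dim=0)        # attention weight per reply
        return weights @ reply_vecs               # weighted stance summary, (dim,)

# Usage: the weighted reply summary would then be combined with the
# multi-modal post representation for the final rumor prediction.
post = torch.randn(128)
replies = torch.randn(5, 128)
summary = ReplyAttention(128)(post, replies)
```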
