Abstract

The rapid spread of rumors on social media and their potential impact have motivated the development of automatic rumor detection solutions. However, existing solutions are mostly limited to detecting rumors in English, neglecting the bulk of social media content written in other, often low-resource, languages. This paper addresses this research gap by proposing the Multilingual Source Co-Attention Transformer (MUSCAT), which builds on a multilingual pre-trained language model to perform multilingual rumor detection. Specifically, MUSCAT pivots the source claims in multilingual conversation threads with co-attention transformers to improve detection performance in multilingual settings. We additionally construct multilingual rumor datasets to support our experimental evaluations. Our experimental results show that MUSCAT outperforms state-of-the-art methods in monolingual, cross-lingual, and multilingual rumor detection settings. We also conduct an empirical analysis and outline the challenges of performing rumor detection in multilingual and cross-lingual settings.
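The abstract does not detail MUSCAT's internals, but the idea of co-attending a source claim and its conversation thread can be illustrated with a minimal sketch. The module below cross-attends claim and thread token states produced by a shared multilingual encoder (e.g., XLM-R); the class name `SourceCoAttention`, the residual/LayerNorm wiring, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SourceCoAttention(nn.Module):
    """Hypothetical co-attention block between a source claim and its thread.

    A sketch of the general technique named in the abstract, not MUSCAT's
    actual architecture: claim tokens attend over thread tokens and vice
    versa, so each representation is conditioned on the other.
    """
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        # Two cross-attention directions: claim -> thread and thread -> claim.
        self.claim_to_thread = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.thread_to_claim = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_claim = nn.LayerNorm(dim)
        self.norm_thread = nn.LayerNorm(dim)

    def forward(self, claim: torch.Tensor, thread: torch.Tensor):
        # claim:  (batch, claim_len, dim)  -- encoder states of the source post
        # thread: (batch, thread_len, dim) -- encoder states of the reply thread
        c, _ = self.claim_to_thread(claim, thread, thread)
        t, _ = self.thread_to_claim(thread, claim, claim)
        # Residual connections plus layer norm (an assumed design choice).
        return self.norm_claim(claim + c), self.norm_thread(thread + t)

# Toy usage with random tensors standing in for multilingual encoder outputs:
claim = torch.randn(2, 32, 768)
thread = torch.randn(2, 128, 768)
c_out, t_out = SourceCoAttention()(claim, thread)
print(c_out.shape, t_out.shape)  # (2, 32, 768) and (2, 128, 768)
```

In a full pipeline, the co-attended claim representation would typically be pooled and fed to a classification head for the rumor/non-rumor decision; that step is omitted here since the abstract does not specify it.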
