Abstract

Rumors can have devastating consequences for individuals and society. Analysis shows that the widespread circulation of rumors typically results from the deliberate promotion of information of unknown veracity, aimed at shaping collective public opinion on the news event concerned. In this paper, we attempt to counter this chaotic phenomenon by mirroring how the chaos is created, so as to make automatic rumor detection more robust and effective. Our idea is inspired by the adversarial learning paradigm originating from Generative Adversarial Networks (GANs). We propose a GAN-style approach in which a generator is designed to produce uncertain or conflicting voices that further polarize the original conversation threads, with the intention of pressuring the discriminator to learn stronger rumor-indicative features from the augmented, more challenging examples. We show that the effectiveness of feature learning is highly dependent on the quality of the generated parody, i.e., how hard it is to distinguish from real posts. Given the strong natural language generation performance of transformers, we propose a transformer-based method to improve the generated posts, so that they appear closely responsive to the source post and retain the authentic propagation structure and context of the information. Unlike traditional data-driven approaches to rumor detection, our method can capture low-frequency but more salient, non-trivial discriminative patterns via adversarial training. Extensive experiments on three benchmark datasets demonstrate that our rumor detection methods, including the transformer-based model, achieve much better results than state-of-the-art methods.
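
The following is a minimal sketch of the GAN-style training loop outlined above, written in PyTorch. All module architectures, dimensions, the label-flipping adversarial objective, and hyperparameters are illustrative assumptions for exposition; they are not the paper's actual generator, discriminator, or loss formulation.

```python
# Illustrative sketch: a transformer-based generator appends synthetic reply
# embeddings to a conversation thread, and a discriminator classifies the
# augmented thread as rumor vs. non-rumor. Names and shapes are hypothetical.
import torch
import torch.nn as nn

class ReplyGenerator(nn.Module):
    """Hypothetical transformer generator producing embeddings of synthetic
    replies conditioned on the source post and thread context."""
    def __init__(self, d_model=256, n_new_replies=4):
        super().__init__()
        self.n_new_replies = n_new_replies
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(d_model, d_model * n_new_replies)

    def forward(self, thread):                      # thread: (B, T, d_model)
        ctx = self.encoder(thread).mean(dim=1)      # pooled thread context
        fake = self.proj(ctx).view(thread.size(0), self.n_new_replies, -1)
        return fake                                 # synthetic reply embeddings

class RumorDiscriminator(nn.Module):
    """Hypothetical classifier over the (augmented) thread."""
    def __init__(self, d_model=256, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, thread):
        return self.head(self.encoder(thread).mean(dim=1))

def adversarial_step(gen, disc, thread, label, opt_g, opt_d,
                     ce=nn.CrossEntropyLoss()):
    # 1) Generator update: produce replies that push the discriminator toward
    #    the wrong class (label flipping is one simple adversarial target).
    augmented = torch.cat([thread, gen(thread)], dim=1)
    opt_g.zero_grad()
    ce(disc(augmented), 1 - label).backward()
    opt_g.step()

    # 2) Discriminator update: learn rumor-indicative features from the
    #    harder, augmented thread (generator outputs detached).
    augmented = torch.cat([thread, gen(thread).detach()], dim=1)
    opt_d.zero_grad()
    loss_d = ce(disc(augmented), label)
    loss_d.backward()
    opt_d.step()
    return loss_d.item()
```

The label-flipping loss in step 1 is only one simple way to realize the "pressuring" objective described in the abstract; the point of the sketch is the two-player structure, in which the discriminator is trained on threads deliberately polarized by the generator.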
