Abstract

The dissemination of disinformation on social media has evolved from a purely textual form to multiple modalities combining text and images, which further amplifies its misleading and deceptive nature. Overcoming this misleading and confusing noise to achieve accurate disinformation detection remains a significant challenge. To address it, we propose Multimodal Fusion and Alignment for Entity-level Disinformation Detection (MFAE). MFAE first uses an improved dynamic routing algorithm to extract more semantically comprehensive visual entity features. A graph matching network then captures the correspondences between entities within modalities. Experiments show that MFAE captures textual and visual semantic information more comprehensively. On the TWITTER and WEIBO datasets, MFAE improves accuracy by approximately 2.0% and 7.5% over state-of-the-art methods, reaching accuracies of 89.5% and 96.7%, respectively.
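
As a rough illustration of the routing step mentioned above, below is a minimal sketch of standard routing-by-agreement (Sabour et al., 2017), which the improved dynamic routing in MFAE presumably builds on. The tensor shapes, iteration count, and the treatment of regional visual features as lower-level capsules voting for entity capsules are assumptions made for illustration only, not the authors' implementation.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash non-linearity: scales each vector's length into (0, 1) while keeping its direction.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: prediction ("vote") vectors of shape (num_in, num_out, dim_out),
    # i.e. each lower-level capsule's vote for every higher-level entity capsule.
    # Returns the higher-level entity capsule outputs, shape (num_out, dim_out).
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                            # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients (softmax over outputs)
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum of votes
        v = squash(s)                                          # entity capsule outputs
        b = b + np.einsum('iod,od->io', u_hat, v)              # agreement update
    return v

# Toy usage (hypothetical shapes): 6 regional visual features voting for 3 entity capsules of dim 8.
votes = np.random.randn(6, 3, 8)
entities = dynamic_routing(votes)
print(entities.shape)  # (3, 8)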
