Abstract
Rumors can have a negative impact on social life, and compared with purely textual rumors, online rumors that combine multiple modalities are more likely to mislead users and to spread, so multimodal rumor detection cannot be ignored. Current multimodal rumor detection methods do not focus on fusing textual features with image-region object features, so we propose TDEDA (dual attention based on textual double embedding), a multimodal fusion neural network for rumor detection that performs high-level information interaction at the text–image object level and uses an attention mechanism to capture the visual features associated with keywords. In this way, we explore how feature representations can be enhanced with assistance from different modalities and how the correlations arising from dense interaction between images and text can be captured. We conducted comparative experiments on two multimodal rumor detection datasets. The results show that TDEDA handles multimodal information reasonably and improves the accuracy of rumor detection compared with current multimodal rumor detection methods.
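To make the text–image object interaction concrete, the following is a minimal sketch of the kind of cross-modal attention the abstract describes, in which text token embeddings attend over detected image-region object features so that each keyword can focus on the visual objects most related to it. The module name, feature dimensions, and the fusion-by-concatenation step are illustrative assumptions, not the authors' actual TDEDA implementation.

```python
# Minimal sketch (not the authors' TDEDA code) of text-to-image-region
# cross-attention: text tokens act as queries over object-region features.
# Dimensions and the concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn


class TextImageObjectAttention(nn.Module):
    def __init__(self, text_dim=768, obj_dim=2048, hidden_dim=512, num_heads=8):
        super().__init__()
        # Project both modalities into a shared space before attention.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.obj_proj = nn.Linear(obj_dim, hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # rumor / non-rumor

    def forward(self, text_emb, obj_feats):
        # text_emb:  (batch, num_tokens, text_dim), e.g. BERT token embeddings
        # obj_feats: (batch, num_objects, obj_dim), e.g. detector region features
        q = self.text_proj(text_emb)
        kv = self.obj_proj(obj_feats)
        # Each text token attends over the image-region object features.
        attended, attn_weights = self.cross_attn(q, kv, kv)
        # Pool over tokens and fuse the text with its attended visual context.
        fused = torch.cat([q.mean(dim=1), attended.mean(dim=1)], dim=-1)
        return self.classifier(fused), attn_weights


# Usage with random tensors standing in for encoder outputs.
model = TextImageObjectAttention()
logits, weights = model(torch.randn(4, 32, 768), torch.randn(4, 36, 2048))
print(logits.shape, weights.shape)  # torch.Size([4, 2]) torch.Size([4, 32, 36])
```

The attention weights make the keyword-to-object correspondence explicit, which is one way the dense interaction between the two modalities described above can be inspected.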