Abstract

Sarcasm is a linguistic phenomenon marked by a pronounced incongruity between the literal meaning of words and the speaker's intended attitude. With the proliferation of image–text content on social media, the task of multi-modal sarcasm detection (MSD) has recently attracted considerable attention. Tremendous progress has been made in developing better MSD models, most of which rely on a straightforward extract-then-fuse paradigm. However, this setting faces two challenges. First, the separately pre-trained unimodal models used to extract visual and textual features often lack the alignment capability required for effective multi-modal integration. Second, the modality gap between vision and language makes it difficult to integrate multi-modal information comprehensively through cross-modal fusion techniques alone. These issues hinder the capture of cross-modal incongruity and thus limit the effectiveness of MSD. In this paper, we propose a Multi-modal Mutual Learning (MuMu) network to tackle these issues. Specifically, we initialize the MuMu network with the image and text encoders of the large-scale Contrastive Language-Image Pre-training (CLIP) model to strengthen the underlying image–text correspondence. Moreover, to better capture cross-modal inconsistency during fusion, we design an align-fuse-collaborate mechanism that aligns the two modalities before fusion and enhances their collaborative modeling after fusion through mutual learning. The proposed MuMu achieves new state-of-the-art results on a public dataset, with improvements of approximately 3% to 9% in accuracy, micro-F1, and macro-F1 scores.
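To make the align-fuse-collaborate idea concrete, the PyTorch sketch below illustrates one plausible realization under our own assumptions, not the authors' released implementation: linear projections stand in for the CLIP image and text encoders, alignment is an InfoNCE-style contrastive loss, fusion is cross-attention, and collaboration is mutual (KL) distillation between two modality-centric classification heads. The class name, layer choices, and hyper-parameters are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MuMuSketch(nn.Module):
    """Minimal align-fuse-collaborate sketch (hypothetical names and sizes).

    Image/text inputs are assumed to be feature vectors from CLIP encoders;
    simple linear projections stand in for fine-tuned encoder heads here.
    """

    def __init__(self, feat_dim=512, hidden_dim=512, num_classes=2):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, hidden_dim)   # placeholder for CLIP image branch
        self.txt_proj = nn.Linear(feat_dim, hidden_dim)   # placeholder for CLIP text branch
        # Cross-modal fusion via multi-head cross-attention (one plausible choice).
        self.fuse_img2txt = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.fuse_txt2img = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        # Two collaborating classification heads, one per modality-centric view.
        self.head_img = nn.Linear(hidden_dim, num_classes)
        self.head_txt = nn.Linear(hidden_dim, num_classes)

    def forward(self, img_feats, txt_feats, labels=None, temp=0.07):
        v = self.img_proj(img_feats)   # (B, hidden_dim)
        t = self.txt_proj(txt_feats)   # (B, hidden_dim)

        # Align: InfoNCE-style contrastive loss pulls paired image/text features together.
        v_n, t_n = F.normalize(v, dim=-1), F.normalize(t, dim=-1)
        logits = v_n @ t_n.t() / temp
        targets = torch.arange(v.size(0), device=v.device)
        align_loss = (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets)) / 2

        # Fuse: each modality attends to the other (sequence length 1 in this sketch).
        v_seq, t_seq = v.unsqueeze(1), t.unsqueeze(1)
        v_fused, _ = self.fuse_img2txt(v_seq, t_seq, t_seq)
        t_fused, _ = self.fuse_txt2img(t_seq, v_seq, v_seq)
        logit_v = self.head_img(v_fused.squeeze(1))
        logit_t = self.head_txt(t_fused.squeeze(1))

        # Collaborate: mutual learning distills each head's prediction into the other.
        mutual_loss = (F.kl_div(F.log_softmax(logit_v, -1),
                                F.softmax(logit_t, -1).detach(), reduction="batchmean")
                       + F.kl_div(F.log_softmax(logit_t, -1),
                                  F.softmax(logit_v, -1).detach(), reduction="batchmean")) / 2

        loss = None
        if labels is not None:
            ce = F.cross_entropy(logit_v, labels) + F.cross_entropy(logit_t, labels)
            loss = ce + align_loss + mutual_loss
        return (logit_v + logit_t) / 2, loss
```

A forward pass on random tensors (e.g., `MuMuSketch()(torch.randn(4, 512), torch.randn(4, 512), torch.tensor([0, 1, 0, 1]))`) returns averaged class logits and the combined loss; how the three loss terms are actually weighted in MuMu is not specified here.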
