Abstract

Warning: This paper contains examples of language and images that may be offensive.

Misogyny is a form of hate against women that has been spreading exponentially through the Web, especially on social media platforms. Hateful content towards women can be conveyed not only through text but also through visual and/or audio sources, or a combination of them, highlighting the need to address it from a multimodal perspective. One of the predominant forms of multimodal content against women is the meme: an image characterized by pictorial content with an overlaid text introduced a posteriori. Since a meme's original purpose is typically to be funny and/or ironic, recognizing misogyny in memes is even more challenging. In this paper, we investigate 4 unimodal and 3 multimodal approaches to determine which source of information contributes most to the detection of misogynous memes. Moreover, we propose a bias estimation technique to identify specific elements composing a meme that could lead to unfair models, together with a bias mitigation strategy based on Bayesian Optimization. The proposed method is able to push the prediction probabilities towards the correct class in up to 61.43% of cases. Finally, we identify the most challenging archetypes of memes that are still far from being properly recognized, highlighting the most relevant open research directions.
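The abstract describes the Bayesian Optimization-based mitigation only at a high level. As an illustration of the general idea, the sketch below tunes a single decision threshold with a minimal Gaussian-process Bayesian Optimization loop. Everything in it is an assumption introduced for illustration, not the authors' actual method: the `validation_score` objective is a synthetic stand-in for validation accuracy, and the RBF kernel settings and Upper Confidence Bound acquisition are arbitrary choices.

```python
import numpy as np

def validation_score(threshold):
    # Hypothetical stand-in for validation accuracy of a meme
    # classifier at a given decision threshold (peaks at 0.62).
    return 1.0 - (threshold - 0.62) ** 2

def rbf(a, b, length=0.15):
    # Squared-exponential kernel over 1-D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    # Standard zero-mean Gaussian-process regression equations.
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_query)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    cov = rbf(x_query, x_query) - Ks.T @ np.linalg.solve(K, Ks)
    sigma = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mu, sigma

def optimize_threshold(n_iter=15, kappa=2.0):
    # Start from three coarse observations, then repeatedly evaluate
    # the point maximizing the Upper Confidence Bound acquisition.
    x_obs = np.array([0.1, 0.5, 0.9])
    y_obs = np.array([validation_score(x) for x in x_obs])
    grid = np.linspace(0.0, 1.0, 200)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x_obs, y_obs, grid)
        x_next = grid[np.argmax(mu + kappa * sigma)]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, validation_score(x_next))
    # Return the best observed threshold.
    return x_obs[np.argmax(y_obs)]

best_threshold = optimize_threshold()
print(f"best threshold found: {best_threshold:.3f}")
```

In the paper's setting, the scalar threshold would be replaced by whatever mitigation parameters the method optimizes, and the objective by a fairness-aware validation metric; the loop structure stays the same.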
