Abstract

“Misogynoir” is a term that refers to the anti-Black forms of misogyny that Black women experience. To explore how current automated hate speech detection approaches perform in detecting this type of hate, we evaluated two state-of-the-art detection tools, HateSonar and Google’s Perspective API, on two datasets: a balanced dataset of 300 tweets, half of which are examples of misogynoir and half examples of support for Black women, and an imbalanced dataset of 3138 tweets, of which 162 are examples of misogynoir and 2976 are examples of allyship. We aim to determine whether these tools flag such messages under any of their categories of hateful speech (e.g., “hate speech”, “offensive language”, “toxicity”). Close analysis of the classifications and errors shows that current hate speech detection tools are ineffective at detecting misogynoir: they lack sensitivity to context, an essential component of misogynoir detection. Tweets most likely to be classified as hate speech explicitly reference racism or sexism or use profane or aggressive words; subtler tweets without such references are more challenging to classify. We find that this lack of sensitivity to context may make such tools not only ineffective but potentially harmful to Black women.
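
For concreteness, the sketch below shows how such an evaluation can be wired up in Python: each tweet is scored by HateSonar's three-way classifier (hate speech / offensive language / neither) and by Perspective's TOXICITY attribute. This is an illustration of the two tools' public interfaces, not the paper's own code; the API key and the example tweet are placeholders.

    # Illustrative sketch only: score one tweet with both tools.
    # PERSPECTIVE_API_KEY and the example tweet are placeholders; label and
    # attribute names follow each tool's public documentation.

    from hatesonar import Sonar              # pip install hatesonar
    from googleapiclient import discovery    # pip install google-api-python-client

    PERSPECTIVE_API_KEY = "YOUR_API_KEY"     # hypothetical placeholder

    sonar = Sonar()  # loads HateSonar's pretrained classifier once

    def hatesonar_scores(text):
        # HateSonar returns a top class plus confidences for
        # "hate_speech", "offensive_language", and "neither".
        result = sonar.ping(text=text)
        return {c["class_name"]: c["confidence"] for c in result["classes"]}

    def perspective_toxicity(text):
        # Perspective's commentanalyzer endpoint returns a 0-1 summary
        # score for each requested attribute, here TOXICITY.
        client = discovery.build(
            "commentanalyzer",
            "v1alpha1",
            developerKey=PERSPECTIVE_API_KEY,
            discoveryServiceUrl=(
                "https://commentanalyzer.googleapis.com/"
                "$discovery/rest?version=v1alpha1"
            ),
            static_discovery=False,
        )
        body = {"comment": {"text": text},
                "requestedAttributes": {"TOXICITY": {}}}
        response = client.comments().analyze(body=body).execute()
        return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    tweet = "example tweet text"             # placeholder input
    print(hatesonar_scores(tweet))           # e.g. {'hate_speech': 0.03, ...}
    print(perspective_toxicity(tweet))       # e.g. 0.12

Scoring every tweet this way and comparing the returned labels against the misogynoir/allyship annotations yields the kind of error analysis the abstract describes.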
