Abstract
Hate speech expresses prejudice and discrimination based on actual or perceived innate characteristics such as gender, race, religion, ethnicity, colour, national origin, disability or sexual orientation. Research has shown that the amount of hateful messages on online social media increases steadily. Although hate propagators constitute a tiny minority (less than 1% of participants), they create a disproportionately high amount of hate-motivated content. Thus, if not countered properly, hate speech can propagate through the whole society. In this paper we apply agent-based modelling to reproduce how the hate speech phenomenon spreads within social networks. We reuse insights from the research literature to construct and validate a baseline model for the propagation of hate speech. From this, three countermeasures are modelled and simulated to investigate their effectiveness in containing the spread of hatred: education, deferring hateful content, and cyber activism. Our simulations suggest that: (1) Education constitutes a very successful countermeasure, but it is long term and still cannot eliminate hatred completely; (2) Deferring hateful content has a similar, although smaller, positive effect to education, and it has the advantage of being a short-term countermeasure; (3) In our simulations, extreme cyber activism against hatred shows the poorest performance as a countermeasure, since it seems to increase the likelihood of resulting in highly polarised societies.
Highlights
In recent years, many concerns have arisen related to hate speech and hate dissemination on the Internet (a.k.a. cyberhate)
Because counter-activists react against existing hate-spreading groups, we implement them as a mid-term measure that starts during the opinion-diffusion phase: activists are sampled from the group of non-hateful persons with a probability pconvince, rather than from persons who are just joining the network
Once we have been able to successfully replicate the behaviour of hateful users in a social network, we can proceed to evaluate the influence of the three countermeasures we have implemented
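The activist-recruitment step in the highlights can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the agent representation, the `recruit_activists` helper, and the value of `p_convince` are all assumptions made here for clarity.

```python
import random

P_CONVINCE = 0.05  # assumed recruitment probability (the paper's pconvince)

def recruit_activists(agents, p_convince=P_CONVINCE, rng=random):
    """During the opinion-diffusion phase, each non-hateful ("normal")
    agent becomes a counter-activist with probability p_convince.
    Hateful agents and agents joining later are not sampled."""
    activists = []
    for agent in agents:
        if agent["state"] == "normal" and rng.random() < p_convince:
            agent["state"] = "activist"
            activists.append(agent)
    return activists
```

Sampling from existing non-hateful members, rather than from newly joining users, is what makes this a mid-term measure: it can only act on the population already present when the countermeasure starts.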
Summary
Many concerns have arisen related to hate speech (which can take several forms and is known by different names such as derogatory language [1], bigotry [2], misogyny [3], bullying [4], or incivility [5]) and hate dissemination on the Internet (a.k.a. cyberhate). Gab.com is an American social networking service, launched publicly in May 2017, that is known for its far-right userbase. It is criticised for using free speech as a shield for users and groups who have been banned from other social media. Some authors have noted that the spread of their messages seems to be inadvertently supported by the algorithms of the social networks [8]. To counter this problem, researchers and politicians have proposed several measures with different temporal horizons. The short-term measure of automatic message filtering (or blocking of hateful users) is criticised because, in some cases, it could infringe on the human right to freedom of expression. This countermeasure also bears the hidden risk that hateful users are merely displaced to other platforms rather than really eliminated [12]. The last section concludes the paper and discusses future work.
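The "deferring hateful content" countermeasure discussed above can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's model: the `hate_score` classifier, the `DELAY` value, and the priority-queue scheduling are all hypothetical choices made here to show the idea of delaying, rather than deleting, likely-hateful messages.

```python
import heapq

DELAY = 24  # assumed deferral time, in simulation ticks

def schedule_messages(messages, now, hate_score, threshold=0.8):
    """Return a publication queue of (publish_time, text) pairs.
    Messages whose hate_score reaches the threshold are not removed;
    they are simply held back for DELAY ticks before publication."""
    queue = []
    for text in messages:
        delay = DELAY if hate_score(text) >= threshold else 0
        heapq.heappush(queue, (now + delay, text))
    return queue
```

Unlike outright filtering, deferral keeps the content on the platform, which is one way to soften the freedom-of-expression objection while still slowing the spread of hateful messages.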