Abstract

Anecdotal evidence suggests that social media are used by individuals and groups wanting to incite hatred and violence, yet the empirical evidence we present in this report suggests that these extreme forms of speech are in fact marginal. Building on a collaboration between the University of Oxford and Addis Ababa University, we examined thousands of comments made by Ethiopians on Facebook over four months around the time of the country’s general election. Hate speech – defined as statements intended to incite others to discriminate or act against individuals or groups on grounds of their ethnicity, nationality, religion or gender – was found in just 0.7% of statements in the representative sample. These findings may have wide implications for the many countries trying to address growing concerns about the role played by social media in promoting radicalisation or violence. Ethiopia represented an exceptional case study because of its distinct languages, which allowed us to gain a realistic sample of the overall online debates focused on one country.

We analysed Facebook statements made by Ethiopians, both in their homeland and abroad, in the run-up to and just after the general election on 24 May 2015. We found that fans or followers, rather than people with any real influence online, were mainly responsible for the violent or aggressive speech that appeared on the Facebook pages in the sample. These individuals appear to use Facebook to vent their anger against more powerful sections of society. Around 18% of the total comments in the sample were written by fans or followers, compared with 11% of comments made by highly influential speakers (the owners of the pages). Around one fifth (21.8%) of conversations grounded in political differences contained hostile comments, only slightly higher than the overall average of 21.4% of all conversations containing hostile comments. Religion and ethnicity provoked fewer hostile comments (10% and 14% of overall comments in the sample respectively).
The findings are based on the analysis of more than 13,000 statements posted on 1,055 Facebook pages between February and June 2015. We mapped Facebook profiles, pages, and groups that had 100 or more followers, likes, or members, respectively. All content in the sample had to include an Ethiopian language and raise discussion topics about Ethiopia. We focused on popular spaces on Facebook, analysing such pages daily to map ongoing trends, but also included comments on some randomly selected pages and on pages capturing particular events, such as a protest or a publicised speech. Posts, status updates and comments were tracked over time.
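The headline percentages above can be reproduced with simple arithmetic. The sketch below assumes the rounded figures reported here (13,000 statements as a lower bound; 0.4% hate speech and 0.3% dangerous speech, which together appear to account for the 0.7% cited in the abstract). The exact per-category counts are not published in this summary, so the absolute numbers are only illustrative.

```python
# Rough prevalence arithmetic for the figures reported above.
# Counts are derived from rounded percentages and a lower-bound
# sample size, so they are illustrative, not the report's own tallies.

total_statements = 13_000   # "more than 13,000" statements in the sample
hate_rate = 0.004           # 0.4% classified as hate speech
dangerous_rate = 0.003      # 0.3% classified as dangerous speech

hate_count = total_statements * hate_rate
dangerous_count = total_statements * dangerous_rate
combined_rate = hate_rate + dangerous_rate  # matches the abstract's 0.7%

print(f"hate speech:      ~{hate_count:.0f} statements")
print(f"dangerous speech: ~{dangerous_count:.0f} statements")
print(f"combined:         {combined_rate:.1%} of the sample")
```

Even on these rough numbers, fewer than a hundred statements in the whole sample fall into the two extreme categories, which is what grounds the claim that such speech is marginal.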

Highlights

  • RESEARCH STRATEGY - Mechachal’s research strategy has developed both in response to concerns about uses of social media that can incite hatred and violence, and to the unique conditions that have shaped the relationship between media, politics, and power in Ethiopia.

  • RESEARCHING HATE SPEECH ONLINE: A CAUTIONARY TALE - Mechachal began with an interest in identifying and analyzing hate speech, especially messages with the highest likelihood of leading to violence.

  • FINDING 1 - Hate and dangerous speech are marginal forms of speech on social media. Only 0.4% of statements in our sample were classified as hate speech and 0.3% as dangerous speech.

  • No statements in our general sample were found to carry a high risk that the speakers, or the groups they appeal to, could carry out violence. These findings are limited to the case of Ethiopia, but there are broader implications for researchers studying online hate speech and for policy makers seeking to promote targeted responses to it.


Introduction

Mechachal began with an interest in identifying and analyzing hate speech, especially messages with the highest likelihood of leading to violence. This spirit informed the project’s first pilot study, which was conducted between October 2013 and February 2014. The risk is severe in countries where civil and political liberties are already under threat. This challenge led us to develop a sampling strategy that allows us not just to detect the most extreme forms of speech (as has been the case so far for most projects focusing on hate speech online), but also to measure how prevalent they are among conversations on social media.
