The variable hate speech describes communication that expresses and/or promotes hatred towards others (Erjavec & Kovačič, 2012; Rosenfeld, 2012; Ziegele, Koehler, & Weber, 2018). A second element is that hate speech is directed against others on the basis of their ethnic or national origin, religion, gender, disability, sexual orientation or political conviction (Erjavec & Kovačič, 2012; Rosenfeld, 2012; Waseem & Hovy, 2016) and typically uses terms to denigrate, degrade and threaten others (Döring & Mohseni, 2020; Gagliardone, Gal, Alves, & Martínez, 2015). Hate speech and incivility are often used synonymously, as hateful speech is considered part of incivility (Ziegele et al., 2018).

Field of application/theoretical foundation:
Hate speech (see also incivility) has become an issue of growing concern both in public and academic discourses on user-generated online communication.

References/combination with other methods of data collection:
Hate speech is examined through content analysis and can be combined with comparative or experimental designs (Muddiman, 2017; Oz, Zheng, & Chen, 2017; Rowe, 2015). In addition, content analyses can be accompanied by interviews or surveys, for example to validate the results of the content analysis (Erjavec & Kovačič, 2012).

Example studies:

Research question/research interest:
Previous studies have been interested in the extent of hate speech in online communication (e.g. in one specific online discussion, in discussions on a specific topic, or in discussions on a specific platform or on different platforms in comparison) (Döring & Mohseni, 2020; Poole, Giraud, & Quincey, 2020; Waseem & Hovy, 2016).

Object of analysis:
Previous studies have investigated hate speech in user comments, for example on news websites, social media platforms (e.g. Twitter) and social live streaming services (e.g. YouTube, YouNow).

Level of analysis:
Most manual content analysis studies measure hate speech at the level of a message, for example at the level of user comments. At a higher level of analysis, the level of hate speech for a whole discussion thread or online platform could be measured or estimated (a minimal aggregation sketch appears at the end of this entry). At a lower level of analysis, hate speech can be measured at the level of utterances, sentences or words, which are the preferred levels of analysis in automated content analyses.

Table 1. Previous manual and automated content analysis studies and measures of hate speech

Waseem & Hovy (2016) (automated content analysis)
Construct: hate speech
Dimensions/variables (no explanations/examples or per-dimension reliabilities reported):
- sexist or racial slur
- attack of a minority
- silencing of a minority
- criticizing of a minority without argument or straw man argument
- promotion of hate speech or violent crime
- misrepresentation of truth or seeking to distort views on a minority
- problematic hashtags, e.g. “#BanIslam”, “#whoriental”, “#whitegenocide”
- negative stereotypes of a minority
- defending xenophobia or sexism
- user name that is offensive, as per the previous criteria
Reliability (hate speech overall): κ = .84

Döring & Mohseni (2020) (manual content analysis)
Construct: hate speech
Dimensions/variables:
- explicitly or aggressively sexual hate, e.g. “are you single, and can I lick you?” (κ = .74; PA = .99)
- racist or sexist hate, e.g. “this is why ignorant whores like you belong in the fucking kitchen”, “oh my god that accent sounds like crappy American” (κ = .66; PA = .99)
Reliability (hate speech overall): κ = .70
Note: Previous studies used different inter-coder reliability statistics; κ = Cohen’s Kappa; PA = percentage agreement. More coded variables with the definitions used in the study by Döring and Mohseni (2020) are available at: https://osf.io/da8tw/
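Since the note above only names the two reliability statistics, the following minimal Python sketch shows how they are computed for two coders; the binary codes are invented for illustration and do not stem from the cited studies.

```python
# Minimal sketch: percentage agreement (PA) and Cohen's Kappa for two coders.
# The binary codes are invented for illustration (1 = hate speech, 0 = none).

coder_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
coder_b = [1, 0, 0, 0, 0, 0, 0, 1, 0, 1]
n = len(coder_a)

# PA: share of units both coders coded identically.
pa = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Cohen's Kappa: observed agreement corrected for the agreement expected
# by chance, given each coder's marginal distribution of codes.
categories = set(coder_a) | set(coder_b)
p_expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                 for c in categories)
kappa = (pa - p_expected) / (1 - p_expected)

print(f"PA = {pa:.2f}, kappa = {kappa:.2f}")  # PA = 0.80, kappa = 0.52
```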
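As flagged in the “Level of analysis” paragraph above, the following sketch illustrates how message-level codes (the typical unit in manual content analysis) can be aggregated to a thread-level estimate; the thread identifiers and codes are hypothetical.

```python
# Minimal sketch: aggregating hypothetical message-level hate speech codes
# to a thread-level estimate (share of comments coded as hate speech).
from collections import defaultdict

# (thread_id, code) pairs; 1 = comment coded as hate speech, 0 = not.
coded_comments = [
    ("thread_1", 0), ("thread_1", 1), ("thread_1", 0),
    ("thread_2", 1), ("thread_2", 1), ("thread_2", 0), ("thread_2", 0),
]

codes_by_thread = defaultdict(list)
for thread_id, code in coded_comments:
    codes_by_thread[thread_id].append(code)

for thread_id, codes in sorted(codes_by_thread.items()):
    share = sum(codes) / len(codes)
    print(f"{thread_id}: {share:.0%} of comments coded as hate speech")
```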