Abstract

Social media platforms have been hailed as "politically disruptive communication technologies" (Hong & Nadler, 2012). Individuals express opinions and engage with politicians, the press, and each other on social media, sometimes using offensive language (Rossini et al., 2020). Many social media platforms have adopted content moderation to screen and evaluate offensive speech. In the present study, we trained offensive speech classifiers on three integrated archival datasets of offensive speech examples. We then used the trained classifier to examine a large body of comments on YouTube videos posted during the 2018 midterm election cycle. This provided information on the prevalence of various kinds of offensive comments and the pattern of content moderation used by YouTube. We also examined comment negativity using offensive speech lexicons. Our results showed systematic variance in the prevalence of offensive speech topics depending upon the political orientation of the content. Language use differed significantly between left- and right-leaning videos for comments related to sexism.
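The abstract does not specify the model, features, or datasets used, but the general pipeline it describes (merging several labeled offensive-speech corpora, training a supervised classifier, applying it to new comments, and scoring negativity against a lexicon) might look roughly like the sketch below. The file names, column names, lexicon terms, and the TF-IDF plus logistic regression baseline are all illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: model choice, dataset paths, and column names are
# hypothetical placeholders, not the method reported in the paper.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline

# Combine three archival offensive-speech datasets into one training corpus.
frames = [pd.read_csv(path) for path in ("dataset_a.csv", "dataset_b.csv", "dataset_c.csv")]
corpus = pd.concat(frames, ignore_index=True)  # expects 'text' and 'label' columns

X_train, X_test, y_train, y_test = train_test_split(
    corpus["text"], corpus["label"],
    test_size=0.2, random_state=42, stratify=corpus["label"],
)

# A simple TF-IDF + logistic regression baseline offensive-speech classifier.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=5)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Apply the trained classifier to YouTube comments (hypothetical file/columns).
comments = pd.read_csv("youtube_comments_2018.csv")
comments["offensive_pred"] = clf.predict(comments["comment_text"])

# Simple lexicon-based negativity score (lexicon terms are placeholders).
negative_lexicon = {"idiot", "stupid", "hate"}

def negativity_score(text: str) -> float:
    tokens = text.lower().split()
    return sum(t in negative_lexicon for t in tokens) / max(len(tokens), 1)

comments["negativity"] = comments["comment_text"].map(negativity_score)
```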
