Abstract

Social media has reportedly been (ab)used by Russian troll farms to promote political agendas. Specifically, state-affiliated actors disguise themselves as native citizens of the United States to sow discord and advance their political motives. Developing methods to automatically detect Russian trolls can therefore help ensure fair elections and possibly reduce political extremism by stopping the trolls that produce discord. While ground-truth data exists for some troll organizations (e.g., the Internet Research Agency), it is challenging to collect labeled accounts for new troll farms in a timely fashion. In this paper, we study the impact the number of labeled troll accounts has on detection performance, analyzing the use of self-supervision with fewer than 100 troll accounts as training data. Self-supervision improves classification performance by nearly 4% F1. In combination with self-supervision, we also explore novel features for troll detection grounded in stylometry. Intuitively, we assume that writing style is consistent across troll accounts because a single troll organization employee may control multiple user accounts. Overall, we improve on models based on word features by roughly 9% F1.
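The abstract does not spell out the self-supervision procedure, so the following is only a minimal self-training (pseudo-labeling) sketch of the general idea: train on the small labeled set, promote high-confidence predictions on unlabeled accounts to pseudo-labels, and retrain. The function name, thresholds, and the TF-IDF/logistic-regression pipeline are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical self-training loop; the paper's exact self-supervision
# method may differ. Assumes per-account text and binary troll labels.
import numpy as np
from scipy import sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labeled_texts, labels, unlabeled_texts,
               confidence=0.9, rounds=5):
    """Iteratively pseudo-label high-confidence unlabeled accounts."""
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    vec.fit(list(labeled_texts) + list(unlabeled_texts))
    X_lab = vec.transform(labeled_texts)
    y = np.asarray(labels)
    X_unlab = vec.transform(unlabeled_texts)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_lab, y)
        if X_unlab.shape[0] == 0:
            break
        proba = clf.predict_proba(X_unlab)
        conf_mask = proba.max(axis=1) >= confidence
        if not conf_mask.any():
            break
        # Promote confident predictions to pseudo-labels and retrain.
        X_lab = sp.vstack([X_lab, X_unlab[conf_mask]])
        y = np.concatenate([y, clf.classes_[proba[conf_mask].argmax(axis=1)]])
        X_unlab = X_unlab[~conf_mask]
    return clf, vec
```

With fewer than 100 labeled troll accounts, the confidence threshold matters: a high value (e.g., 0.9) trades coverage for precision so that noisy pseudo-labels do not swamp the small seed set.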

Highlights

  • Social media platforms, such as Twitter, can be helpful in monitoring events, particularly ongoing emergency events (Yin et al., 2015)

  • Based on the hypothesis that a single troll organization employee can control multiple social media accounts, we introduce state-of-the-art stylometric and behavioral features, in combination with standard n-grams, to develop a novel troll detection method (an illustrative feature sketch follows this list)

  • We find that the CBS+Self model outperforms the other two baselines, improving by nearly 2% F1 over CBS and 9% F1 over C
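The highlights do not enumerate the stylometric features behind the "S" in CBS, so the sketch below only illustrates the kind of topic-independent writing-style signals stylometry typically relies on (word length, punctuation and capitalization rates, function-word profiles). The function name, feature choices, and function-word list are assumptions for illustration, not the paper's feature set.

```python
# Illustrative stylometric features aggregated over one account's tweets;
# the paper's actual stylometric/behavioral feature set may differ.
import re
import string
from collections import Counter

# Small sample of English function words; a real list would be longer.
FUNCTION_WORDS = {"the", "of", "and", "a", "to", "in", "that", "it",
                  "is", "was", "i", "for", "on", "you", "he"}

def stylometric_features(tweets):
    """Compute simple writing-style signals for a single account."""
    text = " ".join(tweets)
    tokens = re.findall(r"\w+", text.lower())
    n_chars = max(len(text), 1)
    n_tokens = max(len(tokens), 1)
    counts = Counter(tokens)
    return {
        "avg_word_len": sum(len(t) for t in tokens) / n_tokens,
        "punct_ratio": sum(c in string.punctuation for c in text) / n_chars,
        "upper_ratio": sum(c.isupper() for c in text) / n_chars,
        "digit_ratio": sum(c.isdigit() for c in text) / n_chars,
        # Function-word rates are largely topic-independent, so they can
        # stay consistent across accounts run by the same employee.
        **{f"fw_{w}": counts[w] / n_tokens for w in FUNCTION_WORDS},
    }
```

The appeal of such features for this task is that an operator writing under several personas can change topics and hashtags easily, but habits like punctuation density and function-word usage are harder to vary account by account.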


Introduction

Social media platforms, such as Twitter, can be helpful in monitoring events, particularly ongoing emergency events, i.e., time-critical situations (Yin et al., 2015). At the same time, Twitter has become the subject of public scrutiny regarding unwanted actors who exploit the platform to steer public opinion for their political gain.1 Like other social networking services, Twitter has both positive and negative sides. When it is used unfairly, malicious actors can manipulate Twitter to influence a potentially large audience by using fake accounts, or worse, by hiring troll farms (Zhang et al., 2016), organizations that employ people to provoke conflict via the use of inflammatory or provocative comments. In this paper, we study models for classifying users as being part of a troll farm.

1https://nyti.ms/2Uwr36y
