Abstract

A recent strand of research considers how algorithmic systems are gamed in everyday encounters. We add to this literature with a study that uses the game metaphor to examine a project where different organizations came together to create and deploy a machine learning model to detect hate speech in political candidates’ social media messages during the 2017 Finnish municipal elections. Using interviews and forum discussions as our primary research material, we illustrate how the unfolding game is played out on different levels in a multi-stakeholder situation, what roles different participants take in the game, and how strategies for gaming the model revolve around controlling the information available to it. We discuss strategies that different stakeholders planned or used to resist the model, and show how the game is played not only against the model itself, but also with those who created it and those who oppose it. Our findings illustrate that while “gaming the system” is an important part of gaming with algorithms, these games have other levels on which humans play against each other rather than against technology. We also draw attention to how deploying a hate-speech detection algorithm can be understood as an effort not only to detect but also to preempt unwanted behavior.
