Individuals interact with algorithms in various ways, and users even game and circumvent algorithms to achieve favorable outcomes. This study aims to understand how various stakeholders interact with each other in tricking algorithms, with a focus on online review communities. We employed a mixed-methods approach to explore how and why users write machine non-translatable reviews, as well as how recipients perceive those encrypted messages. We found that users devise tactics to trick the algorithms in order to avoid censorship, mitigate interpersonal burden, protect privacy, and provide authentic information that enables the formation of informative review communities. To do so, they apply several linguistic and social strategies. Furthermore, users perceive encrypted messages as more trustworthy and more authentic. Based on these findings, we discuss implications for online review communities and content moderation algorithms.