Abstract
This study employs semistructured interviews and algorithmic ethnography to explore how algorithmic shadowbans have been used to moderate content related to Chinese gay men and to achieve targeted algorithmic governance. Through a multimethod approach combining thematic analysis and discourse analysis, this study argues that algorithms impose seemingly tolerant but actually restrictive shadowbans on Chinese gay men, thematized here as “(im)permissible searching” and “(un)smooth posting.” This study conceptualizes such algorithmic shadowbans as “algorithmic camouflage,” a concept that emphasizes, from an interactive perspective, the opacity of the roles, behaviors, and purposes of algorithms toward specific users and highlights the “hypocrisy” of algorithms. Under these hypocritical algorithmic shadowbans, the study suggests, a highly camouflaged “de-gaying” discourse, composed of dehumanization, de-emotionalization, and dramatization, is being shaped by algorithms on Chinese digital platforms.