Abstract

Regulating how digital platforms use algorithms to determine and control the content displayed to their users is both a controversial topic and an important societal challenge. Existing research acknowledges the institutional tussles surrounding such regulation; however, we lack research showing how the development of regulation unfolds. We build on data from a longitudinal discourse analysis of 410 media articles and 483 policy and industry documents to study two cases of algorithmic control regulation in Australia: the first involves algorithmic control for content display, the second for content moderation. We develop a process model of institutional work towards the regulation of algorithmic control. It captures the institutional tussles among governments, digital platforms, and third parties as each expresses its perspective on legitimate forms of algorithmic control and shapes the process and outcome of regulation. Building on our model, we discuss the dynamics of regulation development in light of the constellation of actors and their power positions in the process. We further consider the regulatory outcome and highlight future research questions that build on our findings.
