Abstract

Although algorithms are imbued with a sense of objectivity and reliability, numerous high-profile incidents have demonstrated their fallibility. In response, many have called for algorithmic governance that mitigates their potential harms. Further, these incidents have inspired studies that consider algorithms as part of wider sociotechnical systems. In this article, we build on such work and focus on how the specific forms of algorithms may facilitate or constrain the ways in which they become embedded within these systems. More specifically, we suggest that (a) algorithms should be understood as models, with (b) divergent forms, and (c) associated representational qualities. We showcase this approach in three critical case studies of algorithmic models used in government: the SAFFIER II model that underpins the Netherlands government’s spending, the Ofqual DCP A-Level grading algorithm that was used (and later abandoned) in lieu of actual secondary school exams in the United Kingdom, and the Risk Classification Model used by the Dutch Tax and Customs Administration to identify social benefit fraud. With the three case studies, we show how the divergent forms of algorithms have implications for their responsiveness and ultimately their solidification in – or dissolution from – sociotechnical systems.

