Abstract
The algorithms underpinning many everyday communication processes are now complex enough that rendering them explainable has become a key governance objective. This article examines the question of 'who should be required to explain what, to whom, in platform environments'. By working with algorithm designers and using design methods to extrapolate from existing capacities to explain algorithmic functioning, the article discusses the power relationships underpinning the explanation of algorithmic function. Reviewing how the key concepts of transparency and accountability connect with explainability, the paper argues that reliance on explainability as a governance mechanism can generate a dangerous paradox: it legitimates increased reliance on programmable infrastructure, as expert stakeholders are reassured by their ability to perform or receive explanations, while displacing responsibility for understandings of social context and definitions of the public interest.