Abstract

In this comment, we address the apparent exclusivity and paternalism of goal and standard setting for explainable AI and its implications for the public governance of AI. We argue that the widening use of AI decision-making, including the development of autonomous systems, not only poses widely discussed risks to human autonomy in itself, but is also the subject of a standard-setting process that is remarkably closed to effective public contestation. The implications of this turn in governance for democratic decision-making in Britain have also yet to be fully appreciated. As the governance of AI gathers pace, one of the major tasks will be to ensure not only that AI systems are technically ‘explainable’ but that, in a fuller sense, the relevant standards and rules are contestable and that governing institutions and processes are open to democratic contestation.
