Abstract

Responsible innovation in artificial intelligence (AI) calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of actors from the AI industry within a deliberative system. We develop a new framework of responsibilities for AI innovation as well as a deliberative governance approach for enacting these responsibilities. In elucidating this approach, we show how actors from the AI industry can most effectively engage with experts and nonexperts in different social venues to facilitate well-informed judgments on opaque AI systems and thus effectuate their democratic governance.

Highlights

  • A framework of responsibilities for artificial intelligence (AI) innovation: For all the numerous guidelines that have been published on “ethical AI” by governments, private corporations, and nongovernmental organizations, especially over the past five years (Schiff et al., 2021), the lack of consensus still surrounding key areas threatens to delay the development of a clear model of governance to ensure the responsible design, development, and deployment of AI (Jobin, Ienca, & Vayena, 2019).

  • The literature on responsible artificial intelligence (AI) has identified, and continues to discuss, the unique role of epistemic challenges ensuing from the poor “traceability” (Mittelstadt et al., 2016) and “explicability” (Floridi et al., 2018) of “opaque” (Burrell, 2016) AI systems.

  • While most principles and translational tools currently being developed envisage active and collaborative involvement on the part of the AI industry, and of those organizations that develop and employ semiautonomous systems, with actors from the public, private, and civil society sectors as a means of overcoming the limitations of government regulation (Buhmann & Fieseler, 2021b; Buhmann, Paßmann, & Fieseler, 2020; Morley et al., 2020; Rahwan, 2018; Veale & Binns, 2017), the matter of which specific actors to involve in solutions, and how precisely to involve them, is rarely elaborated in detail.

Summary

A Framework of Responsibilities for AI Innovation

For all the numerous guidelines that have been published on “ethical AI” by governments, private corporations, and nongovernmental organizations, especially over the past five years (Schiff et al., 2021), the lack of consensus still surrounding key areas threatens to delay the development of a clear model of governance to ensure the responsible design, development, and deployment of AI (Jobin, Ienca, & Vayena, 2019). While some scholars have suggested cooperative and procedural audits of algorithms to address this issue (Mittelstadt et al., 2016; Sandvig et al., 2014), the focus of such scholarship has so far been mostly on expert settings. Such approaches fall short of envisaging ways to increase comprehension across different expert and citizen fora and venues to enable the kind of broader deliberative process needed to facilitate a socially situated traceability and explicability of AI systems.
