Abstract

With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy, and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet what should the standards for transparency be? What information is needed to show a wide public that a certain system can be used legitimately and responsibly? I argue that process-based approaches fail to satisfy this demand, as knowledge about the development process is insufficient to predict the properties of the resulting system. Current outcome-based approaches [Mitchell et al., 2019; Loi et al., 2021] are also criticized for a lack of attention to the broader socio-technical system and for failing to account for empirical results showing that people care about more than just the outcomes of a process [as reported by Meyerson et al., Procedural Justice and Relational Theory: Empirical, Philosophical, and Legal Perspectives, Taylor & Francis, 2021]. Instead, I propose value-based transparency, on which the information we need to provide is which values have been considered in the design and how successfully these values have been realized in the final system. This approach handles the objections to the other frameworks, matches current best practices on the design of responsible AI, and provides the public with information on the crucial aspects of a system's design.
