Abstract

Public discussion about transparency for AI-enabled decisions tends to focus on the challenge of AI explainability. However, there are additional real-world factors that can hamper individuals seeking to understand or challenge decisions impacting them, even when the AI or algorithm is entirely explainable. Although AI-enabled decision tools are relatively new, algorithmic and statistical decision tools are not. This paper examines past efforts by individuals to access algorithms, statistical models, and data used in making decisions which impacted them. The results of those attempts are considered in light of public expectations for transparency of AI-enabled decision tools, as well as current and developing guidance. Legal changes will be needed if governments wish to meet citizen expectations for real-world transparency of AI-enabled decision systems. In the meantime, there are opportunities for AI experts and others to protect the potential for greater transparency through open data, open source licensing, and engagement in policy development.