Abstract
With the growing prevalence of AI-based systems and the development of specific regulations and standardizations in response, accountability for consequences resulting from the development or use of these technologies becomes increasingly important. However, concrete strategies and approaches for solving related challenges seem not to have been suitably developed for, or communicated to, AI practitioners. By studying how risk governance methods can be (re)used to administer AI accountability, we aim to contribute to closing this gap. We chose an exploratory workshop-based methodology to investigate current challenges for accountability and risk management approaches raised by AI practitioners from academia and industry. Our interactive study design revealed various insights into which aspects do or do not work for handling risks of AI in practice. From the gathered perspectives, we derived five required characteristics for AI risk management methodologies (balance, extendability, representation, transparency, and long-term orientation) and determined demands for clarification and action (e.g., for the definition of risk and accountabilities, or the standardization of risk governance and management) in the effort to move AI accountability from a conceptual stage to industry practice.