Abstract
Financial institutions are increasingly leveraging advanced technologies, facilitated by the availability of Machine Learning methods that are being integrated into several applications, such as credit scoring, anomaly detection, internal controls and regulatory compliance. Despite their high predictive accuracy, Machine Learning models may not provide sufficient explainability, robustness and/or fairness; therefore, they may not be trustworthy for the involved stakeholders, such as business users, auditors, regulators and end-customers. To measure the trustworthiness of AI applications, we propose the first Key AI Risk Indicators (KAIRI) framework for AI systems, considering financial services as a reference industry. To this aim, we map the regulatory requirements recently proposed in the Artificial Intelligence Act into a set of four measurable principles (Sustainability, Accuracy, Fairness, Explainability) and, for each of them, we propose a set of interrelated statistical metrics that can be employed to measure, manage and mitigate the risks that arise from artificial intelligence. We apply the proposed framework to a collection of case studies that were indicated as highly relevant by the European financial institutions we interviewed during our research activities. The results of the data analysis indicate that the proposed framework can be employed to effectively measure AI risks, thereby promoting safe and trustworthy AI in finance.