Abstract
Modern data-driven Artificial Intelligence (AI), powered by advanced Machine Learning (ML) models, is transforming financial technologies by enhancing financial inclusion and transparency and by reducing transaction costs. However, the opaque nature of some complex ML models requires new statistical approaches to manage risks and ensure trustworthiness. In this paper, we present a novel method to evaluate the key principles of trustworthy AI: Sustainability (Robustness), Accuracy, Fairness, and Explainability (SAFE). While Babaei et al. [A Rank Graduation Box for SAFE AI. Expert Syst Appl. 259 (2025) 125239] introduced the Rank Graduation Box as a streamlined approach for assessing the principles of trustworthy AI, we extend this work by employing the Wasserstein distance. Our method offers a more nuanced and geometrically oriented comparison of ML models, particularly in contexts where shifts in economic or environmental conditions alter the prediction distributions. We apply this method to compare popular ML models, including Support Vector Machines, Ensemble Trees, K-Nearest Neighbours, and Linear and Logistic Regression. The proposal is validated using both simulated and real-world data in the context of financial risk assessment. Our findings demonstrate that the Wasserstein distance offers nuanced and interpretable insights into model behaviour across the SAFE dimensions, making it a valuable tool for model selection and regulatory compliance in AI applications.
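The core idea of comparing prediction distributions under shifted conditions can be illustrated with a minimal sketch. This is not the paper's implementation; the scenario distributions below are hypothetical placeholders, and only the use of the 1-Wasserstein (earth mover's) distance to quantify the shift reflects the abstract.

```python
# Minimal sketch (not the authors' code): measuring how much a model's
# prediction distribution moves when conditions shift, via the
# 1-Wasserstein distance from scipy.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical predicted default probabilities from one model under a
# baseline and a shifted economic scenario (assumed Beta shapes for
# illustration only).
preds_baseline = rng.beta(2, 8, size=1000)  # scores concentrated at low risk
preds_shifted = rng.beta(3, 6, size=1000)   # score mass shifted toward higher risk

# A small distance suggests the score distribution is stable (robust)
# under the shift; a large one flags sensitivity.
d = wasserstein_distance(preds_baseline, preds_shifted)
print(f"1-Wasserstein distance between scenarios: {d:.4f}")
```

Because both samples lie in [0, 1], the distance is also bounded by 1, which makes it directly interpretable as a fraction of the score range.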