Abstract

This study compares deep learning explainability (DLE) with causal artificial intelligence (Causal AI) for fraud detection, emphasizing their distinct methodologies and their potential to address critical challenges, particularly in finance. An empirical evaluation was conducted using the Bank Account Fraud datasets from NeurIPS 2022. DLE models, comprising deep learning architectures enhanced with interpretability techniques, were compared against Causal AI models that elucidate causal relationships in the data. The DLE models demonstrated high accuracy (95% for Model A and 96% for Model B), precision (97% for Model A and 95% for Model B), and recall (98% for Model A and 97% for Model B), though their opaque decision-making processes limit interpretability. By contrast, the Causal AI models showed balanced but lower performance, with accuracy, precision, and recall all at 60%. These findings underscore the need for transparent and reliable fraud detection systems and highlight the trade-offs between model performance and interpretability. This study addresses a significant research gap by providing a comparative analysis of DLE and Causal AI in the context of fraud detection. The insights gained offer practical recommendations for enhancing model interpretability and reliability, contributing to advancements in AI-driven fraud detection systems in the financial sector.
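The evaluation above is framed in terms of accuracy, precision, and recall. As a reminder of how these metrics derive from confusion-matrix counts, here is a minimal sketch; the counts used are hypothetical illustrations, not values from the study or the Bank Account Fraud datasets:

```python
# Standard classification metrics from confusion-matrix counts:
# tp = true positives, tn = true negatives,
# fp = false positives, fn = false negatives.

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp: int, fp: int) -> float:
    """Of the cases flagged as fraud, the fraction that truly were fraud."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Of the true fraud cases, the fraction the model caught."""
    return tp / (tp + fn)

# Hypothetical counts for illustration only.
tp, tn, fp, fn = 90, 95, 3, 2
print(round(accuracy(tp, tn, fp, fn), 3))  # ≈ 0.974
print(round(precision(tp, fp), 3))         # ≈ 0.968
print(round(recall(tp, fn), 3))            # ≈ 0.978
```

In a highly imbalanced setting such as fraud detection, accuracy alone can be misleading, which is why the study reports precision and recall alongside it.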
