Abstract

Artificial Intelligence (AI) and Machine Learning (ML) are gaining increasing attention regarding their potential applications in auditing. One major challenge to their adoption in auditing is the lack of explainability of their results. As AI/ML matures, so do techniques that can enhance the interpretability of AI, known collectively as Explainable Artificial Intelligence (XAI). This paper introduces XAI techniques to auditing practitioners and researchers. We discuss how different XAI techniques can be used to meet the requirements of audit documentation and audit evidence standards. Furthermore, we demonstrate popular XAI techniques, especially Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), using an auditing task of assessing the risk of material misstatement. This paper contributes to accounting information systems research and practice by introducing XAI techniques to enhance the transparency and interpretability of AI applications in auditing tasks.
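To make the abstract's reference to LIME and SHAP concrete, the sketch below shows, under stated assumptions, how both techniques might be applied to a classifier that scores the risk of material misstatement. The feature names, synthetic data, and random-forest model are hypothetical placeholders for illustration only and are not the paper's actual dataset or model.

```python
# Hypothetical sketch: SHAP and LIME explanations for a misstatement-risk classifier.
# Features, data, and model below are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Assumed engagement-level risk indicators (hypothetical feature names).
feature_names = ["revenue_growth", "accruals_ratio", "leverage", "audit_fee_change"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label: 1 = elevated risk of material misstatement.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP: additive feature attributions for a single engagement.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])  # output layout varies by SHAP version
print("SHAP attributions:", shap_values)

# LIME: local surrogate explanation for the same engagement.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["low risk", "elevated risk"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print("LIME weights:", lime_exp.as_list())
```

In this kind of workflow, the SHAP attributions and LIME weights for an individual engagement could be retained as part of the audit documentation supporting the risk assessment, which is the use case the paper discusses.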
