Abstract

Artificial Intelligence (AI) and Machine Learning (ML) are gaining increasing attention regarding their potential applications in auditing. One major challenge to their adoption in auditing is the lack of explainability of their results. As AI/ML matures, so do techniques that enhance the interpretability of AI models, collectively known as Explainable Artificial Intelligence (XAI). This paper introduces XAI techniques to auditing practitioners and researchers. We discuss how different XAI techniques can be used to meet the requirements of audit documentation and audit evidence standards. Furthermore, we demonstrate popular XAI techniques, especially Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), using an auditing task of assessing the risk of material misstatement. This paper contributes to accounting information systems research and practice by introducing XAI techniques that enhance the transparency and interpretability of AI applied to auditing tasks.
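The abstract names LIME, which explains an individual black-box prediction by fitting a locally weighted linear surrogate model around the instance being explained. The following is a minimal sketch of that idea on a hypothetical misstatement-risk model; the feature names (leverage, ROA, accruals, growth), the `risk_model` function, and all numbers are illustrative assumptions, not data or models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box risk model (stand-in for a trained ML classifier):
# maps four financial ratios (leverage, ROA, accruals, growth) to a
# probability of material misstatement via a nonlinear score.
def risk_model(Z):
    score = Z[:, 0] + 0.5 * Z[:, 2] + 0.3 * Z[:, 0] * Z[:, 2]
    return 1.0 / (1.0 + np.exp(-score))  # sigmoid

def lime_explain(predict_fn, x, n_samples=2000, kernel_width=0.75):
    """LIME sketch: fit a locally weighted linear surrogate around x."""
    # 1. Perturb the instance of interest with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed points.
    p = predict_fn(Z)
    # 3. Weight each sample by its proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares with an intercept: the fitted coefficients
    #    describe the model's local behaviour around x.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    beta, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], p * np.sqrt(w), rcond=None)
    return beta[1:]  # drop the intercept

x = np.array([1.0, 0.0, -0.5, 0.2])  # one hypothetical audit client
coefs = lime_explain(risk_model, x)
for name, c in zip(["leverage", "roa", "accruals", "growth"], coefs):
    print(f"{name}: {c:+.3f}")
```

In an audit setting, the coefficients give the practitioner a per-client rationale (e.g., leverage pushing risk up) that can be recorded as part of the audit documentation; the production LIME library additionally discretizes features and selects a sparse subset, which this sketch omits.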
