Abstract

Technology has advanced rapidly in recent years, and new solutions that rely on Machine Learning (ML) and Artificial Intelligence (AI) are introduced every day. With such fast-paced advancement, inspecting and fully comprehending how a given model makes its decisions is becoming problematic. The complex decision-making process of these models has become a black box, making it challenging to unravel how they work; eXplainable Artificial Intelligence (XAI) methods are therefore crucial for further development. This paper discusses how state-of-the-art techniques determine classifications and why they need to be revised before the prediction-generating process can be fully understood. It compares these existing solutions with a new method, Principal Image Sections Mapping (PRISM), which relies on Principal Component Analysis (PCA) and visualises the most significant features recognised by a given Convolutional Neural Network (CNN). PRISM is implemented in a software package called TorchPRISM, which can generate and present a clustering based on the method's output. The result can reveal ambiguous discrimination between classes, so the possibility of automating the analysis of the output is also discussed. The paper's main objective is to examine how PRISM enhances the current understanding of the decision-making process and to introduce a tool that facilitates analysing its output. The PRISM implementation (TorchPRISM) is available in a public GitHub repository: https://github.com/szandala/TorchPRISM
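
The sketch below is not the TorchPRISM API (see the linked repository for that); it is only a minimal illustration, under stated assumptions, of the idea the abstract describes: applying PCA to a CNN's convolutional feature maps and rendering the top three principal components as RGB. The choice of VGG-16, the hooked layer index, the random placeholder batch, and the use of scikit-learn's PCA are all illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.decomposition import PCA

# Pretrained CNN whose features we want to inspect (illustrative choice).
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# Capture the output of the last convolutional layer via a forward hook.
features = {}
model.features[28].register_forward_hook(
    lambda module, inp, out: features.update(maps=out.detach())
)

# Placeholder batch; in practice use real, normalised input images.
images = torch.rand(4, 3, 224, 224)
with torch.no_grad():
    model(images)

maps = features["maps"]                                # (B, C, H, W)
b, c, h, w = maps.shape

# Treat every spatial position of every image as one C-dimensional sample,
# so the principal components are shared across the whole batch.
flat = maps.permute(0, 2, 3, 1).reshape(-1, c).numpy()
components = PCA(n_components=3).fit_transform(flat)   # (B*H*W, 3)

# Map the top three components to RGB and upsample to the input resolution.
rgb = torch.from_numpy(components).reshape(b, h, w, 3).permute(0, 3, 1, 2)
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())
heatmaps = F.interpolate(rgb, size=images.shape[-2:], mode="bilinear",
                         align_corners=False)

Fitting a single PCA over the whole batch, rather than per image, is what makes the rendered colours comparable between images: regions sharing a colour respond to the same dominant feature directions, which in turn is what allows visually similar classes to cluster in the output.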
