Abstract
Biometric systems are largely based on Machine Learning (ML) algorithms, which are often considered black boxes. There is a need to provide them with explanations to make their decisions understandable. In this paper, we conduct a Systematic Literature Review investigating the current adoption of explainable Artificial Intelligence (XAI) techniques in biometric systems. We examined the biometric tasks performed in the selected papers (e.g., face detection or face spoofing), the datasets adopted by the different approaches, the ML models considered, the XAI techniques, and their evaluation methods. We started from 496 papers and, after a careful analysis, selected 47 of them. Results revealed that XAI is mainly adopted in biometric systems related to face biometric cues. The explanations provided were all evaluated with model-centric metrics and did not consider how end-users perceive the explanations, leaving wide room for biometric researchers to apply XAI models and extend explanation evaluation to an HCI perspective.