Interpreting the Fraudulence Level of Different Finger Photo Presentation Attack Instruments
Finger photo verification has emerged as a viable alternative to traditional biometric authentication methods on smartphones, offering improved hygiene and user experience while using standard RGB cameras to capture images of human fingers. However, the vulnerability of finger photo technology to Presentation Attacks (PAs) necessitates the integration of a robust detection mechanism. This study evaluates the effectiveness of deep-learning-based finger photo Presentation Attack Detection (PAD) against various types of Presentation Attack Instruments (PAIs), comparing and interpreting the performance of fine-tuned Convolutional Neural Networks (CNNs) and transformer models. Experiments were conducted on three datasets: MFPAD-i-22 with 112 subjects, MFPAD-g-23 with 100 subjects, and IIIT-D with 64 subjects, encompassing 19 attack scenarios involving devices such as iPhones, iPads, and HP printers. To interpret the PAD algorithms' decisions against a range of PAs, eXplainable Artificial Intelligence (XAI) methodologies were employed to gain insight into the importance of the features used by PAD. The results indicate that the Swin transformer outperforms CNNs in detecting various types of PAs. Furthermore, quantifying the interpretability results using the signal-to-noise ratio revealed which features the PAD models rely on to detect PAs.
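The abstract mentions quantifying XAI attribution maps with a signal-to-noise ratio. As a minimal illustrative sketch (not the paper's actual formulation, which is not specified here), one plausible definition treats the mean absolute attribution inside a region of interest, such as the finger area, as signal and the spread of attributions outside it as noise; the function name `saliency_snr` and the toy data below are assumptions for illustration:

```python
import numpy as np

def saliency_snr(attr_map: np.ndarray, mask: np.ndarray) -> float:
    """Signal-to-noise ratio of an XAI attribution map.

    Signal: mean absolute attribution inside the region of interest
    (e.g. the finger area). Noise: standard deviation of the absolute
    attributions outside that region.
    """
    signal = np.abs(attr_map[mask]).mean()
    noise = np.abs(attr_map[~mask]).std()
    return float(signal / (noise + 1e-8))  # epsilon guards against division by zero

# Toy example: strong attributions concentrated inside the mask,
# weak random attributions outside it.
rng = np.random.default_rng(0)
attr = rng.normal(0.0, 0.05, size=(8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
attr[mask] += 1.0

print(saliency_snr(attr, mask))  # large ratio: attributions are focused on the ROI
```

A high ratio indicates the detector's evidence is concentrated on the relevant region rather than scattered over the background, which is one way to make qualitative saliency maps comparable across PAD models and PAI types.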