Abstract

Face recognition aims at identifying or confirming an individual's identity in a still image or video. Towards this end, machine learning and deep learning techniques have been successfully employed. However, the response of a face recognition system often remains opaque to the end user. This paper aims to fill this gap by letting the end user know which features of the face the model has relied upon in recognizing a subject. In this context, we evaluate the interpretability of several face recognizers built on deep neural networks, namely LeNet-5, AlexNet, Inception-V3, and VGG16. For this purpose, a recently proposed explainable AI tool, Local Interpretable Model-Agnostic Explanations (LIME), is used. Benchmark datasets, namely Yale, AT&T, and Labeled Faces in the Wild (LFW), are utilized. We demonstrate that LIME indeed marks the features of the face that are visually significant for recognition.
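Although the abstract gives no code, the core procedure can be sketched in a few lines with the lime Python package. The sketch below is illustrative rather than the authors' exact pipeline: it assumes a trained Keras recognizer saved as face_recognizer.h5 and a probe image probe_face.png, both hypothetical names, and the choice of Keras is likewise an assumption, since the paper covers LeNet-5, AlexNet, Inception-V3, and VGG16 without fixing a framework.

    # A minimal sketch, assuming a trained Keras face recognizer; the
    # file names "face_recognizer.h5" and "probe_face.png" are
    # hypothetical placeholders.
    import numpy as np
    from skimage.io import imread
    from skimage.segmentation import mark_boundaries
    from lime import lime_image
    from tensorflow.keras.models import load_model

    model = load_model("face_recognizer.h5")   # hypothetical trained CNN
    face_image = imread("probe_face.png")      # hypothetical probe face (HxWx3)

    def predict_fn(images):
        # LIME calls this with a batch of perturbed copies of the face
        # and expects one probability vector (over identities) per image.
        return model.predict(np.asarray(images))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        face_image,
        predict_fn,
        top_labels=1,      # explain only the top predicted identity
        hide_color=0,      # blank out superpixels when perturbing
        num_samples=1000,  # number of perturbed samples LIME draws
    )

    # Recover the superpixels that most support the predicted identity
    # and draw their boundaries over the face for visual inspection.
    label = explanation.top_labels[0]
    image, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False
    )
    highlighted = mark_boundaries(image / 255.0, mask)

Setting positive_only=True restricts the overlay to regions that support the predicted identity, which is what lets one check whether the highlighted superpixels coincide with visually meaningful facial features such as the eyes, nose, and mouth.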
