Abstract

The analysis of human facial expressions from thermal images captured by Infrared Thermal Imaging (IRTI) cameras has recently gained importance compared to images captured by standard cameras using light in the visible spectrum. This is because infrared cameras work well in low-light conditions, and the infrared spectrum captures the thermal distribution of the face, which is useful for building systems such as robot-interaction systems, quantifying cognitive responses from facial expressions, disease control, etc. In this paper, a deep learning model called IRFacExNet (InfraRed Facial Expression Network) is proposed for facial expression recognition (FER) from infrared images. It uses two building blocks, a Residual unit and a Transformation unit, which extract the dominant, expression-specific features from the input images. The extracted features help to detect the emotion of the subjects under consideration accurately. The Snapshot ensemble technique is adopted with a cosine annealing learning rate scheduler to improve the overall performance. The performance of the proposed model has been evaluated on a publicly available dataset, namely IRDatabase, developed by RWTH Aachen University. The facial expressions present in the dataset are Fear, Anger, Contempt, Disgust, Happy, Neutral, Sad, and Surprise. The proposed model achieves 88.43% recognition accuracy, better than some state-of-the-art methods considered here for comparison. Our model provides a robust framework for accurate expression detection in the absence of visible light.
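The snapshot ensemble mentioned above trains a single network with a cyclic, cosine-annealed learning rate and saves one model "snapshot" at the end of each cycle, when the rate reaches its minimum; the saved models are then averaged at inference time. The paper does not give its exact schedule parameters, so the function below is only a minimal sketch of the standard cyclic cosine-annealing formula (the names `lr_max`, `lr_min`, and `num_cycles` are illustrative assumptions):

```python
import math

def cosine_annealing_lr(iteration, total_iterations, num_cycles, lr_max, lr_min=0.0):
    """Cyclic cosine-annealed learning rate, as used in snapshot ensembling.

    The schedule restarts every total_iterations / num_cycles steps; a model
    snapshot is typically saved at the end of each cycle, where the learning
    rate is lowest and the network sits in a local minimum.
    """
    cycle_length = total_iterations / num_cycles
    t = iteration % cycle_length  # position within the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle_length))
```

For example, with 100 iterations split into 5 cycles and `lr_max = 0.1`, the rate starts at 0.1, decays smoothly to 0 by iteration 19, then resets to 0.1 at iteration 20 for the next cycle.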

Highlights

  • The analysis of human facial expressions from the thermal images captured by the Infrared Thermal Imaging (IRTI) cameras has recently gained importance compared to images captured by the standard cameras using light having a wavelength in the visible spectrum

  • The Facial Action Coding System (FACS), introduced by Ekman and Friesen [2] and Ekman et al. [3] in the field of psychology, refers to a set of muscle movements known as Action Units (AUs) that correspond to specific emotions

  • Researchers are showing their interest in developing systems for facial expression recognition (FER) using machine learning (ML) and deep learning (DL) approaches, which is paving the way for the development of robust FER systems and discovering new parameters used for FER


Summary

Literature survey

Research on building FER systems from images captured by visible-light cameras has been popular due to the easy availability and low cost of such cameras. In thermal images, however, the thermal distribution in the facial muscles is detected directly. This allows better facial expression classification and leaves less room for ambiguity, as it does not depend on external factors such as viewing with the naked eye or inconvenient lighting conditions. Prabhakaran et al. in [28] proposed a model for emotion detection by predicting facial expressions using ResNet-152 on the NVIE dataset. In this experiment, the authors showed that residual networks are easier to optimize and produce good recognition accuracy as the depth increases. The authors in [29] showed that facial expressions can be widely used in biometric and security applications; they utilized the efficiency of deep learning methods for disguise-invariant face recognition by incorporating a noise-based data augmentation method. As a pre-processing step, we have converted the input images into grayscale images and reshaped them into a uniform size of 200 × 200 pixels before feeding them to the network.
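The pre-processing step described above (grayscale conversion followed by resizing to 200 × 200) can be sketched as follows. The paper does not specify the conversion weights or interpolation method, so this assumes the standard ITU-R BT.601 luma weights and simple nearest-neighbour resampling; the function name `preprocess` is illustrative:

```python
import numpy as np

def preprocess(image, size=200):
    """Convert an RGB (H, W, 3) array to grayscale and resize to (size, size).

    Grayscale uses the standard ITU-R BT.601 luma weights; resizing uses
    nearest-neighbour sampling for simplicity. This is a sketch of the
    described pipeline, not the authors' exact implementation.
    """
    gray = image @ np.array([0.299, 0.587, 0.114])  # (H, W) luminance
    h, w = gray.shape
    rows = np.arange(size) * h // size  # nearest source row for each output row
    cols = np.arange(size) * w // size  # nearest source column for each output column
    return gray[rows][:, cols]          # (size, size)
```

In practice a library routine (e.g. OpenCV's `cvtColor` and `resize`) would replace the hand-rolled indexing, but the shape contract is the same: any input resolution is mapped to a uniform 200 × 200 single-channel image.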

Proposed methodology
Results and discussion
Conclusion

