Abstract

Facial expression recognition is a vital research topic in many fields, ranging from artificial intelligence and gaming to human-computer interaction (HCI) and psychology. This paper proposes a hybrid model for facial expression recognition, which comprises a deep convolutional neural network (DCNN) and a Haar Cascade face-detection architecture. The objective is to classify real-time and digital facial images into one of the seven facial emotion categories considered. The DCNN employed in this research has additional convolutional layers, ReLU activation functions, and multiple kernels to enhance filtering depth and facial feature extraction. In addition, a Haar Cascade model is used in tandem to detect facial features in real-time images and video frames. Grayscale images from the Kaggle repository (FER2013) were used, and graphics processing unit (GPU) computation was exploited to expedite the training and validation process. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. The experimental results show significantly improved classification performance compared to state-of-the-art (SoTA) experiments and research. Also, compared to other conventional models, this paper validates that the proposed architecture is superior in classification performance, with an improvement of up to 6%, reaching up to 70% accuracy, and with a shorter execution time of 2,098.8 s.
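
To make the described pipeline concrete, the Python sketch below shows one way a Haar Cascade face detector (OpenCV) could feed 48x48 grayscale face crops into a small Keras DCNN with seven output classes, together with simple data augmentation for FER2013-style training images. This is a minimal illustrative sketch: the layer counts, kernel sizes, augmentation settings, and helper names (build_dcnn, classify_frame) are assumptions for demonstration, not the exact configuration reported in the paper.

# Minimal sketch: Haar Cascade face detection feeding a small DCNN
# classifier for the seven FER2013 emotion classes. Layer sizes and
# augmentation settings are illustrative, not the paper's exact values.
import cv2
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def build_dcnn(input_shape=(48, 48, 1), num_classes=7):
    """Stacked Conv/ReLU blocks with pooling, then dense classification."""
    model = models.Sequential([
        layers.Conv2D(64, (3, 3), activation="relu", padding="same",
                      input_shape=input_shape),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Data augmentation applied to the grayscale FER2013 training images.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

# Haar Cascade detector shipped with OpenCV for real-time face localisation.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_frame(frame_bgr, model):
    """Detect faces in a video frame and predict an emotion for each."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results

In a setup like this, the Haar Cascade handles face localisation on live video frames, while the DCNN performs the seven-class emotion prediction on each detected face crop, matching the division of labour described in the abstract.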
