Abstract

Human emotion can be divided into discrete categories, which makes automated emotion recognition possible. A common approach applies convolutional neural networks to classify emotions from facial expression images, but performance degrades when the input images are distorted. This paper introduces a hybrid neural network architecture that makes automated emotion recognition robust to distorted input images and achieves performance comparable to prediction on clean images. The hybrid network combines a Denoising Autoencoder (DAE) with a Visual Geometry Group (VGG) network. Multiple experiments on standalone VGG and hybrid networks were conducted using a controlled-variables method. The FER-2013 data set from Kaggle was used as the experimental data set, and distorted input images were generated by adding random noise to clean images. The result is a validated hybrid network architecture: it improved emotion classification accuracy on the distorted data set from 16.70% to 57.73%, which is comparable to the classification accuracy on the clean data set.
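
Below is a minimal sketch of the hybrid idea described in the abstract, assuming a small convolutional DAE that denoises the input before a VGG-style classifier, with Gaussian noise as the input distortion. The 48x48 grayscale input shape, seven emotion classes, layer sizes, and noise level are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch (assumptions noted above): DAE reconstruction feeds a VGG-style classifier.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Small convolutional encoder/decoder that reconstructs a clean image from a noisy one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # 24 -> 12
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),   # 12 -> 24
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(), # 24 -> 48
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class VGGStyleClassifier(nn.Module):
    """VGG-like blocks of stacked 3x3 convolutions followed by max pooling."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # 48 -> 24
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

class HybridDAEVGG(nn.Module):
    """Hybrid network: denoise the input with the DAE, then classify with the VGG-style network."""
    def __init__(self):
        super().__init__()
        self.dae = DenoisingAutoencoder()
        self.vgg = VGGStyleClassifier()

    def forward(self, x):
        return self.vgg(self.dae(x))

def add_random_noise(images, std=0.2):
    """Simulate distorted inputs by adding Gaussian noise to clean images in [0, 1]."""
    return (images + std * torch.randn_like(images)).clamp(0.0, 1.0)

if __name__ == "__main__":
    model = HybridDAEVGG()
    clean = torch.rand(8, 1, 48, 48)   # batch of clean 48x48 grayscale images
    noisy = add_random_noise(clean)    # distorted counterparts
    logits = model(noisy)              # emotion class scores
    print(logits.shape)                # torch.Size([8, 7])
```

In practice the DAE would be trained to reconstruct clean images from their noisy versions, and the classifier trained (or fine-tuned) on the reconstructed outputs, so that accuracy on distorted inputs approaches accuracy on clean ones.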
