Abstract

The practical implications of human facial emotion recognition (FER) have sparked interest in the research community. The primary focus of FER is to map diverse facial expressions to their corresponding emotional states. Traditional FER pipelines are split into two stages: feature extraction and emotion recognition. The rapid evolution of Artificial Intelligence has made a significant contribution to technology, in part because traditional algorithms fail to satisfy real-time demands. Machine learning and deep learning algorithms have had considerable success in diverse applications, including classification systems, recommendation systems, and pattern recognition. Human emotion is critical to determining a person's thoughts, behaviours, and feelings. Because deep networks learn features directly from images, FER makes effective use of deep neural networks, notably Convolutional Neural Networks (CNNs). Several CNN-based works with only a few layers have been proposed to address FER. Standard shallow CNNs with simple learning algorithms, however, have limited feature-extraction capability when it comes to extracting emotion information from high-resolution images. Most existing approaches also consider only frontal images (ignoring side views for convenience), even though views from all angles are necessary for a realistic facial emotion recognition system. Deep learning can be used to build an emotion detection system, enabling applications such as feedback analysis and face unlocking to be carried out with high accuracy. The goal of this research is to develop a Deep Convolutional Neural Network (DCNN) model that can distinguish between five different human facial expressions.
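
As a rough illustration of the kind of DCNN the abstract describes, the sketch below defines a small convolutional classifier for five expression classes. The input size (48x48 grayscale face crops), layer widths, and the EmotionDCNN name are illustrative assumptions, not the architecture reported in this work.

```python
# Minimal sketch of a deep CNN for five-class facial expression recognition.
# Input shape, channel counts, and dropout rate are assumptions for illustration.
import torch
import torch.nn as nn

class EmotionDCNN(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),   # 24x24 -> 12x12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),   # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),  # logits for the five expression classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of 48x48 grayscale face crops.
model = EmotionDCNN()
logits = model(torch.randn(8, 1, 48, 48))
print(logits.shape)  # torch.Size([8, 5])
```

In a typical setup such a model would be trained with a cross-entropy loss over the five emotion labels; deeper variants with more convolutional blocks aim to overcome the limited feature-extraction capacity of shallow CNNs noted above.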
