Abstract

Facial emotion recognition (FER) is a branch of affective computing in which computers are trained to recognize human emotion from facial expressions. FER is essential for bridging the communication gap between humans and computers, since facial expressions are estimated to convey about 55% of a person's emotional and mental state in face-to-face communication. Breakthroughs in this field also enable computer systems, including robots, to better serve and interact with humans. Research in this area has advanced considerably, with deep learning at its heart. This paper systematically reviews state-of-the-art deep learning architectures and algorithms for facial emotion detection and recognition. It reveals the dominance of CNN architectures over other approaches such as RNNs and SVMs, and highlights the contributions, model performance, and limitations of the reviewed work. It further identifies open issues and opportunities worth considering in future FER research, and examines how limited computational power and the scarcity of large facial emotion datasets have constrained the pace of progress.
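
To make the CNN family referenced above concrete, the following is a minimal illustrative sketch (not a model from the reviewed literature) of the kind of convolutional classifier commonly applied to FER: a small network over 48x48 grayscale face crops with seven emotion classes, as in FER2013-style datasets. The layer sizes and class count are assumptions chosen for illustration only.

```python
# Hypothetical sketch of a small CNN emotion classifier; layer sizes are
# illustrative assumptions, not taken from any specific reviewed paper.
import torch
import torch.nn as nn

class SimpleFERCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 1x48x48 -> 32x48x48
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 32x24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 64x12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # per-emotion logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage example: a batch of 48x48 grayscale face crops -> emotion scores.
if __name__ == "__main__":
    model = SimpleFERCNN()
    faces = torch.randn(8, 1, 48, 48)
    print(model(faces).shape)  # torch.Size([8, 7])
```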
