Abstract

Humans find it easy to recognize emotions from facial expressions, but doing so with a computer algorithm is far more challenging. Recent advances in computer vision and artificial intelligence have made it feasible to discern emotions from photographs. Existing facial expression detection systems suffer in practice because shallow feature extraction frameworks discard a great deal of effective feature information and offer limited detection performance. To address this, the paper proposes a novel technique termed "Improved Deep CNN-based Two-Stream Super-Resolution and Hybrid Deep Model-based Facial Emotion Detection", which consists of three working phases: super-resolution, facial emotion recognition, and classification. In the super-resolution phase, an improved deep CNN operates on two streams, a structure stream and a texture stream, delivering high pixel density in the images with minimal cross-entropy loss. The facial emotion recognition phase involves face detection using the Viola–Jones algorithm and feature extraction using three traditional descriptors, Texton, Bag of Words (BoW), and GLCM, together with improved LGXP features. Classification is then performed with RNN and Bi-GRU neural networks, and a score-level fusion voting mechanism combines their outputs to obtain precise classification results. The results show that the proposed deep CNN-based facial emotion recognition system achieves 95% accuracy, outperforming conventional approaches. Positive and negative performance metrics on several databases have also been evaluated and compared against the proposed methodology, confirming that the novel strategy surpasses conventional approaches.
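As a rough illustration of the face detection, GLCM feature extraction, and score-level fusion steps mentioned in the abstract, the sketch below uses standard OpenCV and scikit-image tooling. It is a minimal sketch under stated assumptions: the `prob_rnn`/`prob_bigru` probability vectors, the fusion weights, and the choice of GLCM properties are illustrative placeholders, and the paper's improved deep CNN super-resolution and improved LGXP features are not reproduced here.

```python
# Hypothetical sketch of the Viola-Jones detection, GLCM features, and score-level
# fusion steps. Assumes OpenCV (opencv-python), scikit-image, and NumPy; the trained
# RNN / Bi-GRU classifiers themselves are not shown, only how their class-probability
# outputs would be fused.

import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops


def detect_face(gray_image):
    """Return the first face region found by the Viola-Jones (Haar cascade) detector."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray_image[y:y + h, x:x + w]


def glcm_features(face_gray):
    """Extract a small GLCM descriptor (contrast, homogeneity, energy, correlation)."""
    glcm = graycomatrix(face_gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])


def fuse_scores(prob_rnn, prob_bigru, weights=(0.5, 0.5)):
    """Score-level fusion: weighted average of the two classifiers' class probabilities,
    followed by an argmax to pick the final emotion label."""
    fused = weights[0] * np.asarray(prob_rnn) + weights[1] * np.asarray(prob_bigru)
    return int(np.argmax(fused)), fused
```

As a usage note, `detect_face` and `glcm_features` would be applied to each (super-resolved) input image to build the feature vector, while `fuse_scores` would be called on the per-class probabilities produced by the two trained classifiers; equal weights are an assumption, since the abstract does not specify the fusion weighting.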
