Abstract

In recent times, the growth of human-computer interaction and Artificial Intelligence has attracted increasing interest, especially in facial expression detection. Although the field has made tremendous progress, many issues persist because facial expressions are complex. The present study therefore aims to improve the efficiency of classifying human emotions from images in the FER-2013 dataset and strives to attain optimal accuracy. Most existing studies have avoided multi-modal inputs such as video for emotion detection. Moreover, they have suffered from low detection rates due to ineffective feature extraction, which degraded classification performance. To avoid these pitfalls, this study proposes suitable data-mining-based algorithms with the main aim of attaining high accuracy. Specifically, it proposes the Deep Location Attention Forest method for emotion detection. To accomplish this, DLFAT (Deep Location Feed Attention Transformers) is used to extract features from various sub-space representations at different positions of the image. CK-PCA (Canonical Kernel-Principal Component Analysis) performs feature fusion to improve the performance of the classifier. MRF (Modified Random Forest) then performs classification and predicts emotions from the facial expressions. The core idea of the modified random forest is to learn weights for all data subsets that reduce the loss between the predicted value and the true label, thereby improving the assignment of each input to its corresponding class. The proposed system is validated against standard performance metrics, achieving an accuracy of 0.96197, an F1-score of 0.96, a recall of 0.97, and a precision of 0.96, which demonstrates its efficacy in emotion detection.
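To make the described pipeline concrete, the following is a minimal Python sketch of the three stages (position-wise feature extraction, kernel-PCA-based fusion, and random-forest classification). It is not the authors' implementation: the abstract does not specify the DLFAT, CK-PCA, or MRF internals, so simple patch statistics, scikit-learn's standard KernelPCA, and a plain RandomForestClassifier are used as stand-ins, and the data is synthetic with FER-2013's 48x48 grayscale shape and seven emotion classes.

```python
# Illustrative sketch only: stand-ins for DLFAT / CK-PCA / MRF, not the paper's method.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def extract_patch_features(images, patch=12):
    """Hypothetical stand-in for DLFAT: per-patch mean/std statistics from each
    48x48 grayscale face image, concatenated into one feature vector per image."""
    feats = []
    for img in images:
        stats = []
        for r in range(0, img.shape[0], patch):
            for c in range(0, img.shape[1], patch):
                block = img[r:r + patch, c:c + patch]
                stats.extend([block.mean(), block.std()])
        feats.append(stats)
    return np.asarray(feats)


# Synthetic data shaped like FER-2013 (48x48 grayscale, 7 emotion classes);
# replace with the real dataset in practice.
rng = np.random.default_rng(0)
X_img = rng.random((500, 48, 48))
y = rng.integers(0, 7, size=500)

X = extract_patch_features(X_img)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Kernel-PCA "fusion" step (standard RBF kernel PCA as a stand-in for CK-PCA).
kpca = KernelPCA(n_components=20, kernel="rbf")
X_train_f = kpca.fit_transform(X_train)
X_test_f = kpca.transform(X_test)

# Plain random forest as a stand-in for the Modified Random Forest classifier.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train_f, y_train)
pred = clf.predict(X_test_f)
print("accuracy:", accuracy_score(y_test, pred))
```

In a real setup, the patch-statistics function would be replaced by the attention-transformer feature extractor, and the forest's subset weighting scheme described in the paper would replace the default random forest.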
