Emotion recognition through facial expressions is vital for enhancing human-computer interaction, making systems more intuitive and responsive to user needs. This study introduces an approach to emotion detection that leverages Principal Component Analysis (PCA) and Recursive Feature Elimination (RFE) for feature extraction and optimization. The methodology focuses on refining facial feature representations to improve classification accuracy, which is critical for reliably detecting emotions such as happiness, sadness, fear, and surprise. The approach was applied to two widely recognized facial emotion datasets: the Cohn-Kanade (CK) and the Japanese Female Facial Expression (JAFFE) datasets. By integrating PCA and RFE, the model efficiently selects the most relevant features, enhancing the overall performance of emotion recognition. A comparative analysis with existing deep learning models highlights the advantages of the proposed method. The effectiveness of this approach is further supported by the results, where the model achieves an accuracy of 98.49% on the CK dataset and 96.03% on the JAFFE dataset. These results demonstrate a significant improvement over recent methods, indicating the model's potential for real-world applications.
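
The abstract describes a PCA plus RFE feature-selection stage feeding an emotion classifier. The following is a minimal sketch of such a pipeline, assuming a scikit-learn implementation with a linear SVM as both the RFE estimator and the final classifier; the synthetic data, component counts, and classifier choice are illustrative assumptions, not the authors' actual configuration or code.

```python
# Sketch of a PCA + RFE feature-selection pipeline (assumed scikit-learn setup;
# not the authors' implementation). Synthetic data stands in for flattened
# face images from datasets such as CK or JAFFE.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))   # placeholder for flattened 32x32 face crops
y = rng.integers(0, 7, size=200)   # placeholder labels for 7 emotion classes

pipeline = Pipeline([
    ("pca", PCA(n_components=50)),                                # project onto principal components
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=20)),  # recursively prune weak components
    ("clf", SVC(kernel="linear")),                                # final emotion classifier
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

A linear SVM is used inside RFE here because recursive elimination needs an estimator that exposes per-feature weights (coef_); any such estimator could be substituted depending on the actual study design.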