Abstract

Human behaviour analysis has long been an active area of research in computer vision and artificial intelligence. Recognising emotions in images remains difficult, as emotions are subjective and can be expressed in many different ways. Multi-Embedded Learning has been proposed as a promising approach to this problem, combining the outputs of multiple models. In this study, we aimed to enhance human behaviour analysis through Multi-Embedded Learning for emotion recognition in images, and to demonstrate the benefits of stacking, a specific ensemble learning algorithm, in improving recognition performance. Multiple base models were trained using different architectures and techniques, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and traditional machine learning algorithms such as Support Vector Machines (SVMs) and k-Nearest Neighbors (k-NNs). The outputs of these base models were used as inputs to a meta-model, itself a Convolutional Neural Network. The dataset utilised in this study was the AffectNet dataset, a large collection of over 400,000 images of faces, each labelled with one of seven emotion categories. To facilitate model training and evaluation, the dataset was partitioned into separate subsets for training, validation, and testing. The results showed that stacking improved the performance of emotion recognition in images compared to the individual base models: the stacked model reached an accuracy of 85%, higher than that of any base model alone.
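As a rough illustration of the stacking scheme described above, the sketch below trains two of the base learners (SVM and k-NN), builds the meta-level inputs from their out-of-fold class probabilities, and fits a meta-model on top. Everything here is an assumption for illustration only: synthetic data replaces AffectNet image features, the CNN and RNN base learners are omitted, and a small MLP stands in for the paper's CNN meta-model.

```python
# Hedged stacking sketch: synthetic features stand in for image
# embeddings, and an MLP stands in for the paper's CNN meta-model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic 7-class problem standing in for AffectNet's seven categories.
X, y = make_classification(n_samples=2000, n_features=64, n_informative=32,
                           n_classes=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Two of the paper's base models; the CNN/RNN base learners are omitted.
base_models = [SVC(probability=True, random_state=0),
               KNeighborsClassifier(n_neighbors=5)]

# Meta-level training features: out-of-fold class probabilities, so the
# meta-model never sees predictions on data a base model was fitted to.
meta_X_tr = np.hstack([cross_val_predict(m, X_tr, y_tr, cv=5,
                                         method="predict_proba")
                       for m in base_models])

# Refit each base model on the full training split for test-time use.
for m in base_models:
    m.fit(X_tr, y_tr)
meta_X_te = np.hstack([m.predict_proba(X_te) for m in base_models])

# Meta-model (the paper uses a CNN; an MLP keeps this sketch lightweight).
meta = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
meta.fit(meta_X_tr, y_tr)
print(f"Stacked accuracy: {accuracy_score(y_te, meta.predict(meta_X_te)):.3f}")
```

Using out-of-fold probabilities for the meta-model's training set is the standard guard against leakage in stacking: the meta-learner only ever sees base-model predictions on data those models were not fitted to.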
