Abstract

Human emotion recognition, crucial for interpersonal relations and human-building interaction, identifies emotions from various behavioral signals to improve user interactions. To enhance the performance of emotion recognition, this study proposed a novel model that fuses physiological, environmental, and personal data. A unique dataset was created via experiments conducted in an environmental chamber, and an emotion recognition model was subsequently developed using a multimodal fusion approach. The model transforms physiological data into 2D images to capture time-series and spatial features and uniquely incorporates metadata, including environmental and personal data. The model’s generalizability was validated using a leave-one-sample-out approach. The results showed a 31.6% reduction in the error of the predicted area when physiological, environmental, and personal data were fused in the emotion recognition model, suggesting that incorporating various contextual factors beyond physiological changes, such as the surrounding environment and inherent or acquired individual traits, can significantly enhance the model’s understanding of emotions. Furthermore, the model was found to be robust to individual differences, offering consistent emotion recognition across different subjects. These findings suggest that the proposed model can serve as a potent tool for emotion recognition in built environment applications.
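The abstract does not include code, but the fusion architecture it describes can be illustrated with a minimal sketch: a convolutional branch over physiological signals rendered as 2D images, concatenated with a metadata branch for environmental and personal features before a shared prediction head. All class names, layer sizes, and input dimensions below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultimodalEmotionNet(nn.Module):
    """Illustrative fusion network (hypothetical): CNN branch for 2D-imaged
    physiological signals + MLP branch for environmental/personal metadata."""
    def __init__(self, img_channels=1, meta_dim=8, n_emotions=4):
        super().__init__()
        # CNN branch: extracts spatial/temporal patterns from the 2D image
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # -> 32 * 4 * 4 = 512 features
        )
        # Metadata branch: environmental and personal features
        self.meta = nn.Sequential(
            nn.Linear(meta_dim, 32),
            nn.ReLU(),
        )
        # Fusion head: concatenated features -> emotion prediction
        self.head = nn.Sequential(
            nn.Linear(512 + 32, 64),
            nn.ReLU(),
            nn.Linear(64, n_emotions),
        )

    def forward(self, img, meta):
        fused = torch.cat([self.cnn(img), self.meta(meta)], dim=1)
        return self.head(fused)

# Example forward pass with dummy inputs (batch of 2)
model = MultimodalEmotionNet()
img = torch.randn(2, 1, 64, 64)    # physiological signals as 2D images
meta = torch.randn(2, 8)           # environmental + personal metadata
print(model(img, meta).shape)      # torch.Size([2, 4])
```

Under this assumed design, withholding the metadata branch would reduce the model to physiology-only input, which is the kind of comparison behind the reported 31.6% error reduction when all three data sources are fused.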
