Abstract

Negative emotions can induce dangerous driving behaviors that lead to serious traffic accidents. It is therefore necessary to build a system that automatically recognizes driver emotions so that preventive actions can be taken. Existing studies on driver emotion recognition have relied mainly on facial and physiological data; few have exploited multimodal data that capture the contextual characteristics of driving. Moreover, fully fusing multimodal data in the feature fusion layer to improve recognition performance remains a challenge. To this end, we propose a novel multimodal fusion framework, based on a convolutional long short-term memory network (ConvLSTM) and a hybrid attention mechanism, that fuses non-invasive eye, vehicle, and environment data to recognize driver emotion. To verify the effectiveness of the proposed method, we conduct extensive experiments on a dataset collected with an advanced driving simulator; the results demonstrate its effectiveness. Finally, we perform a preliminary exploration of the correlation between driver emotion and stress.
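To make the architecture concrete, the sketch below shows one way such a three-stream fusion model could be organized. It is a minimal PyTorch illustration, not the authors' implementation: the feature dimensions, hidden size, and number of emotion classes are placeholder assumptions, a per-modality Conv1d-plus-LSTM encoder stands in for the paper's ConvLSTM branches, and a simple softmax-weighted sum over modalities stands in for the hybrid attention fusion layer.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Per-modality encoder: 1D convolution over time followed by an LSTM.

    Stands in for a ConvLSTM branch; layer sizes are illustrative
    assumptions, not the paper's configuration.
    """
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, hidden_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x):                      # x: (batch, time, in_dim)
        h = self.conv(x.transpose(1, 2))       # (batch, hidden, time)
        h, _ = self.lstm(h.transpose(1, 2))    # (batch, time, hidden)
        return h[:, -1]                        # last-step summary: (batch, hidden)

class AttentionFusion(nn.Module):
    """Learns a softmax weight per modality and returns the weighted sum --
    a simple proxy for the paper's hybrid attention fusion layer."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, feats):                  # feats: (batch, n_modalities, hidden)
        weights = torch.softmax(self.score(feats), dim=1)
        return (weights * feats).sum(dim=1)    # (batch, hidden)

class DriverEmotionNet(nn.Module):
    """Fuses eye, vehicle, and environment streams for emotion classification.

    Input dimensions and the number of emotion classes are hypothetical.
    """
    def __init__(self, eye_dim=4, vehicle_dim=6, env_dim=3,
                 hidden_dim=64, n_emotions=5):
        super().__init__()
        self.encoders = nn.ModuleList(
            ModalityEncoder(d, hidden_dim) for d in (eye_dim, vehicle_dim, env_dim))
        self.fusion = AttentionFusion(hidden_dim)
        self.classifier = nn.Linear(hidden_dim, n_emotions)

    def forward(self, eye, vehicle, env):
        feats = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, (eye, vehicle, env))], dim=1)
        return self.classifier(self.fusion(feats))

# Toy usage: a batch of 8 windows with 100 time steps per modality.
model = DriverEmotionNet()
logits = model(torch.randn(8, 100, 4), torch.randn(8, 100, 6), torch.randn(8, 100, 3))
print(logits.shape)  # torch.Size([8, 5])
```

Fusing at the feature level, as above, lets the attention weights modulate each modality's contribution per sample, which is the property the abstract highlights over simple concatenation.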
