Abstract

With the development of intelligent automotive human-machine systems, driver emotion detection and recognition has become an emerging research topic. Facial expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data; however, such data cannot represent real driving conditions. To address this, this paper proposes FERDERnet, a facial expression-based on-road driver emotion recognition network. The method divides the on-road driver facial expression recognition task into three modules: a face detection module that detects the driver's face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network pre-trained on the FER and CK+ datasets and then fine-tuned as a backbone for driver emotion recognition. Five different backbone networks are evaluated, as well as an ensemble method. Furthermore, to evaluate the proposed method, this paper collected an on-road driver facial expression dataset containing various road scenarios and the corresponding drivers' facial expressions during the driving task. Experiments on this dataset show that, in terms of both efficiency and accuracy, FERDERnet with an Xception backbone effectively identifies on-road driver facial expressions and achieves superior performance compared to the baseline networks and several state-of-the-art networks.
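The augmentation-based resampling module described above balances a long-tailed emotion distribution by synthesizing extra minority-class samples. A minimal sketch of that idea, assuming random oversampling with a horizontal-flip augmentation (the function and class names here are illustrative, not the paper's actual implementation):

```python
import random

def resample_with_augmentation(samples_by_class, target_count, augment, seed=0):
    """Oversample each class up to target_count, creating new samples by
    applying an augmentation function to randomly chosen originals."""
    rng = random.Random(seed)
    balanced = {}
    for label, samples in samples_by_class.items():
        out = list(samples)
        while len(out) < target_count:
            out.append(augment(rng.choice(samples)))
        balanced[label] = out[:target_count]
    return balanced

def hflip(img):
    # Toy augmentation: horizontal flip of a 2-D grayscale "image" (list of rows).
    return [row[::-1] for row in img]

data = {
    "happy": [[[1, 2], [3, 4]]] * 5,  # majority class: 5 images
    "angry": [[[5, 6], [7, 8]]] * 2,  # minority class: 2 images
}
balanced = resample_with_augmentation(data, target_count=5, augment=hflip)
print({k: len(v) for k, v in balanced.items()})  # → {'happy': 5, 'angry': 5}
```

In practice the augmentation would draw from a richer pool (rotation, brightness jitter, cropping), but the balancing logic is the same: grow every class to a common size before training.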

Highlights

  • Emotion-related human-machine systems are essential for the intelligent automobile. The driver's emotion affects driving performance and is closely related to traffic accidents; the number of road traffic deaths continues to rise steadily, having reached 1.35 million [1]

  • This paper proposes a novel deep learning-based framework for on-road driver facial expression recognition in an end-to-end manner

  • The first stage extracts faces from the input video frames recorded during on-road driving; the second stage applies an image resampling algorithm to the grayscale dataset extracted in the first stage to handle the long-tailed class distribution; the third stage performs emotion recognition on the resampled dataset, using several state-of-the-art deep neural networks as backbones with a transfer-learning training strategy
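The three stages above compose into a single end-to-end pipeline. A hypothetical skeleton under stated assumptions: the class names (`FaceDetector`, `Resampler`, `EmotionClassifier`) are illustrative, the detector is a stand-in for a real face detector (e.g. a Haar cascade or MTCNN), and the classifier is a trivial placeholder for the fine-tuned backbone:

```python
class FaceDetector:
    """Stage 1: crop the driver's face from a video frame.
    Placeholder: crops the central region instead of running a real detector."""
    def detect(self, frame):
        h, w = len(frame), len(frame[0])
        return [row[w // 4: 3 * w // 4] for row in frame[h // 4: 3 * h // 4]]

class Resampler:
    """Stage 2: balance the long-tailed label distribution by
    augmenting (here: horizontally flipping) minority-class faces."""
    def balance(self, faces_by_label, target):
        out = {}
        for label, faces in faces_by_label.items():
            grown = list(faces)
            while len(grown) < target:
                grown.append([row[::-1] for row in grown[len(grown) % len(faces)]])
            out[label] = grown[:target]
        return out

class EmotionClassifier:
    """Stage 3: stand-in for the pre-trained, fine-tuned backbone.
    Scores mean pixel intensity instead of running a CNN."""
    def predict(self, face):
        mean = sum(sum(row) for row in face) / (len(face) * len(face[0]))
        return "positive" if mean > 0.5 else "negative"

def ferdernet_pipeline(frame):
    # End-to-end inference path: detect face, then classify emotion.
    # (Resampling is a training-time step, so it is not on this path.)
    detector, classifier = FaceDetector(), EmotionClassifier()
    return classifier.predict(detector.detect(frame))
```

Note the resampler sits on the training path only; at inference time the pipeline is just detection followed by classification, which is what makes the framework end-to-end.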


Introduction

Emotion-related human-machine systems are essential for the intelligent automobile. The driver's emotion affects driving performance and is closely related to traffic accidents. The number of road traffic deaths continues to rise steadily, having reached 1.35 million [1]. Among these incidents, the inability to control emotions has been regarded as one of the critical factors degrading driving safety [2]. Driver emotion detection and recognition are emerging topics for intelligent automotive human-machine systems [3]. Emotion can be divided into internal responses, such as electroencephalograph (EEG)


