Abstract

Effective driver distraction detection (DDD) can significantly improve driving safety. Inspired by the definition of driver distraction, this work aims to detect driver distraction based on the true driver’s focus of attention (TDFoA). This is accomplished in two stages: prediction of the TDFoA, and DDD based on the comparison between the driver’s focus of attention (DFoA) and the TDFoA. However, this process is challenging due to the complex and dynamic traffic environment, momentary glimpses unrelated to the TDFoA during normal driving, the interference of noise (information unrelated to the TDFoA), the neglect of multi-scale information in driving scenarios, and the variability of the thresholds used to distinguish driver distraction from non-distraction. To solve these problems, we design a deep 3D residual network with attention mechanism and encoder-decoder (D3DRN-AMED) that operates on successive frames with a convolutional LSTM to predict the TDFoA, where the use of successive frames eliminates the impact of momentary glimpses. The convolutional LSTM propagates features across successive frames, taking into account the historical variation of driving scenarios. An attention mechanism based on soft thresholding is inserted into the D3DRN-AMED as a nonlinear transformation layer to suppress noise-related features, and an encoder-decoder module is introduced to extract multi-scale features. A neural-network-based method is then proposed to detect driver distraction according to the difference or similarity between the DFoA and the TDFoA, which enables accurate detection without manually determining a threshold. Experimental results show that the proposed method can effectively detect driver distraction.
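The abstract does not give implementation details for the soft-thresholding attention layer. The following PyTorch sketch shows one common way such a layer is realized (in the style of deep residual shrinkage networks), shrinking small, noise-related activations toward zero with a learned per-channel threshold; the module name, the reduction ratio, and the 3D pooling over feature maps from a 3D residual backbone are our assumptions, not the authors' code.

import torch
import torch.nn as nn

class SoftThresholdAttention(nn.Module):
    """Hypothetical channel-wise attention with a learned soft threshold,
    illustrating the noise-suppression idea described in the abstract."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # global pooling over (D, H, W)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # scaling factor in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, D, H, W) feature maps from a 3D residual backbone (assumed shape)
        n, c = x.shape[:2]
        abs_mean = self.pool(x.abs()).view(n, c)     # per-channel average magnitude
        tau = abs_mean * self.fc(abs_mean)           # learned per-channel threshold
        tau = tau.view(n, c, 1, 1, 1)
        # soft thresholding: shrink activations smaller than tau toward zero
        return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

In this formulation the threshold is bounded by the channel's average activation magnitude, so the layer can attenuate noise-related features without zeroing out an entire channel.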
