Driver distraction is one of the primary causes of fatal car accidents. Modern cars with advanced infotainment systems often draw cognitive attention away from the road, increasing distraction. Driver behavior analysis can be used to address this problem. Three important features of intelligence and cognition are perception, attention, and sensory memory. In this work, we use a stacked LSTM network with attention to detect driver distraction from driving data, and we compare this model with stacked LSTM and MLP baselines to show the positive effect of the attention mechanism on performance. We conducted an experiment with eight driving scenarios and collected a large dataset of driving data. First, we built an MLP to detect driver distraction. Next, we increased the intelligence level of the system by using a stacked LSTM network. Third, we added an attention mechanism on top of the stacked LSTM to further enhance performance. We show that each of these increments increases intelligence by reducing train and test error. The minimum train and test errors of the stacked LSTM were 0.57 and 0.9, each 0.4 lower than the corresponding MLP errors. With attention, the stacked LSTM reached a train error of 0.69 and a test error of 0.75, narrowing the gap between them. Results also show that adding attention mitigates overfitting and reduces computational expense.
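The abstract does not give the network configuration, so the following is only a minimal PyTorch sketch of the kind of architecture described: a stacked LSTM whose per-timestep hidden states are pooled by a simple additive attention layer before a binary distraction classifier. The hidden size, layer count, attention form, and the six-signal input window are illustrative assumptions, not the authors' settings.

    import torch
    import torch.nn as nn

    class StackedLSTMWithAttention(nn.Module):
        """Stacked LSTM + attention pooling + binary distraction head.
        All hyperparameters here are hypothetical placeholders."""
        def __init__(self, n_features, hidden_size=64, num_layers=2):
            super().__init__()
            # Two stacked LSTM layers over the driving-signal window.
            self.lstm = nn.LSTM(n_features, hidden_size,
                                num_layers=num_layers, batch_first=True)
            # Additive attention: one scalar score per timestep.
            self.attn_score = nn.Linear(hidden_size, 1)
            self.classifier = nn.Linear(hidden_size, 1)

        def forward(self, x):                  # x: (batch, time, n_features)
            h, _ = self.lstm(x)                # h: (batch, time, hidden)
            # Softmax over the time axis yields attention weights.
            weights = torch.softmax(self.attn_score(h), dim=1)
            context = (weights * h).sum(dim=1) # weighted sum over timesteps
            return self.classifier(context)    # distraction logit

    # Example: a batch of 4 windows, 100 timesteps, 6 driving signals.
    model = StackedLSTMWithAttention(n_features=6)
    logits = model(torch.randn(4, 100, 6))

Because attention collapses the sequence into a single weighted context vector, the classifier head needs fewer parameters than one fed every hidden state, which is consistent with the reported reduction in computational expense.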